
Media (3)
-
Valkaama DVD Cover Outside
4 October 2011
Updated: October 2011
Language: English
Type: Image
-
Valkaama DVD Label
4 October 2011
Updated: February 2013
Language: English
Type: Image
-
Valkaama DVD Cover Inside
4 October 2011
Updated: October 2011
Language: English
Type: Image
Other articles (82)
-
MediaSPIP version 0.1 Beta
16 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources in the standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you wish to use this archive for an installation in "farm" mode, you will also need to make other modifications (...)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to make other manual (...)
-
Customising by adding a logo, a banner or a background image
5 September 2013. Some themes take three customisation elements into account: adding a logo; adding a banner; adding a background image.
On other sites (11836)
-
How to obtain time markers for video splitting using python/OpenCV
30 March 2016, by Bleddyn Raw-Rees
Hi, I'm new to the world of programming and computer vision, so please bear with me.
I'm working on my MSc project, which is researching automated deletion of low-value content in digital file stores. I'm specifically looking at the sort of long shots that often occur in natural history filming, where a static camera is left rolling in order to capture the rare snow leopard or whatever. These shots may only have some 60 seconds of useful content, with perhaps several hours of worthless content either side.
As a first step I have a simple motion detection program from Adrian Rosebrock's tutorial [http://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/#comment-393376]. Next I intend to use FFMPEG to split the video.
What I would like help with is how to get in and out points based on the first and last points at which motion is detected in the video.
Here is the code, should you wish to see it...
# import the necessary packages
import argparse
import datetime
import imutils
import time
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the video file")
ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
args = vars(ap.parse_args())

# if the video argument is None, then we are reading from webcam
if args.get("video", None) is None:
    camera = cv2.VideoCapture(0)
    time.sleep(0.25)

# otherwise, we are reading from a video file
else:
    camera = cv2.VideoCapture(args["video"])

# initialize the first frame in the video stream
firstFrame = None

# loop over the frames of the video
while True:
    # grab the current frame and initialize the occupied/unoccupied
    # text
    (grabbed, frame) = camera.read()
    text = "Unoccupied"

    # if the frame could not be grabbed, then we have reached the end
    # of the video
    if not grabbed:
        break

    # resize the frame, convert it to grayscale, and blur it
    frame = imutils.resize(frame, width=500)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)

    # if the first frame is None, initialize it
    if firstFrame is None:
        firstFrame = gray
        continue

    # compute the absolute difference between the current frame and
    # first frame
    frameDelta = cv2.absdiff(firstFrame, gray)
    thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]

    # dilate the thresholded image to fill in holes, then find contours
    # on thresholded image
    thresh = cv2.dilate(thresh, None, iterations=2)
    (_, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # loop over the contours
    for c in cnts:
        # if the contour is too small, ignore it
        if cv2.contourArea(c) < args["min_area"]:
            continue

        # compute the bounding box for the contour, draw it on the frame,
        # and update the text
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        text = "Occupied"

    # draw the text and timestamp on the frame
    cv2.putText(frame, "Room Status: {}".format(text), (10, 20),
        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
    cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
        (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

    # show the frame and record if the user presses a key
    cv2.imshow("Security Feed", frame)
    cv2.imshow("Thresh", thresh)
    cv2.imshow("Frame Delta", frameDelta)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key is pressed, break from the loop
    if key == ord("q"):
        break

# cleanup the camera and close any open windows
camera.release()
cv2.destroyAllWindows()

Thanks!
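
One possible way to turn this detector into in/out points, sketched below under explicit assumptions: query each frame's timestamp with cv2.CAP_PROP_POS_MSEC (OpenCV 3.x naming), remember the first and last timestamps at which a sufficiently large contour appears, then pass that span to ffmpeg as a seek point and duration. The helper names detect_motion_span and split_with_ffmpeg, the file names, and the stream-copy options are illustrative, not from the original post.

# Sketch: record first/last motion timestamps, then cut the span with ffmpeg.
# Helper names, file names and ffmpeg options are illustrative assumptions.
import subprocess

import cv2
import imutils


def detect_motion_span(video_path, min_area=500):
    """Return (first_ms, last_ms) timestamps of detected motion, or None."""
    camera = cv2.VideoCapture(video_path)
    first_frame = None
    first_ms = last_ms = None

    while True:
        grabbed, frame = camera.read()
        if not grabbed:
            break

        # Timestamp of the current frame in milliseconds (OpenCV 3.x property name).
        pos_ms = camera.get(cv2.CAP_PROP_POS_MSEC)

        # Same preprocessing as the detector above.
        frame = imutils.resize(frame, width=500)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (21, 21), 0)

        if first_frame is None:
            first_frame = gray
            continue

        # Frame differencing, threshold, dilate, find contours.
        delta = cv2.absdiff(first_frame, gray)
        thresh = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
        thresh = cv2.dilate(thresh, None, iterations=2)
        cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)
        cnts = cnts[1] if len(cnts) == 3 else cnts[0]  # OpenCV 3 vs 2/4 return formats

        # Any sufficiently large contour counts as motion on this frame.
        if any(cv2.contourArea(c) >= min_area for c in cnts):
            if first_ms is None:
                first_ms = pos_ms
            last_ms = pos_ms

    camera.release()
    return None if first_ms is None else (first_ms, last_ms)


def split_with_ffmpeg(video_path, out_path, start_ms, end_ms, padding_s=2.0):
    """Cut the motion span (plus some padding) out of the source with ffmpeg."""
    start = max(start_ms / 1000.0 - padding_s, 0.0)
    duration = (end_ms - start_ms) / 1000.0 + 2 * padding_s
    subprocess.check_call([
        "ffmpeg", "-y",
        "-ss", "{:.3f}".format(start),   # seek before the input: fast, keyframe-based
        "-i", video_path,
        "-t", "{:.3f}".format(duration),
        "-c", "copy",                    # stream copy, no re-encode
        out_path,
    ])


if __name__ == "__main__":
    span = detect_motion_span("input.mp4")   # hypothetical file name
    if span is not None:
        split_with_ffmpeg("input.mp4", "trimmed.mp4", *span)

With -c copy the cut snaps to the nearest keyframes; re-encoding instead (for example -c:v libx264) gives frame-accurate cuts at the cost of processing time.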
-
Revision 96419: The library in version 1.9.2 v1.9.12 Add support for Direct Stream ...
3 April 2016, by kent1@…
The library in version 1.9.2
v1.9.12
Add support for Direct Stream Digital (DSD) / DSD Storage Facility (DSF) file format
Add detection (not parsing) of WebP image format
bugfix #1910 : Quicktime embedded images
v1.9.11
#64 - update constructor syntax for PHP 7
#62 - infinite loop in large PNG files
#61 - ID3v2 remove BOM from frame descriptions
#60 - missing "break" in module.audio-video.quicktime.php
#59 - .gitignore comments
#58 - inconsistency in relation to module.tag.id3v2.php
#57 - comparing instead of assign
#56 - unsupported MIME type "audio/x-wave"
#55 - readme.md variable reference
#54 - QuickTime ? false 1000fps
#53 - Quicktime / ID3v2 multiple genres
#52 - sys_get_temp_dir in GetDataImageSize ?
demo.joinmp3.php enhancements
m4b (audiobook) chapters not parsed correctly
sqlite3 caching not working
bugfix #1903 - Quicktime meta atom not parsed
-
ffmpeg - filter_complex list too long
2 March 2016, by Baumi
Let's say I want to overlay a clock on a video that is approximately 30 minutes long, using a special font, color, etc. I end up with this command:
ffmpeg -y -i in.mp4 -filter_complex "
[0:v]drawtext=fontfile=/var/www/sites/manage/elements/digital-7.ttf:text='00\:00':fontcolor=white@1.0:fontsize=26:x=100:y=65:enable='between(t,0,7)'[tmp];
[tmp]drawtext=fontfile=/var/www/sites/manage/elements/digital-7.ttf:text='00\:01':fontcolor=white@1.0:fontsize=26:x=100:y=65:enable='between(t,7,8)'[tmp];
[tmp]drawtext=fontfile=/var/www/sites/manage/elements/digital-7.ttf:text='00\:02':fontcolor=white@1.0:fontsize=26:x=100:y=65:enable='between(t,8,9)'[tmp];
[tmp]drawtext=fontfile=/var/www/sites/manage/elements/digital-7.ttf:text='00\:03':fontcolor=white@1.0:fontsize=26:x=100:y=65:enable='between(t,9,10)'[tmp];
[tmp]drawtext=fontfile=/var/www/sites/manage/elements/digital-7.ttf:text='00\:04':fontcolor=white@1.0:fontsize=26:x=100:y=65:enable='between(t,10,11)'[tmp];
......."
-map "[tmp]" -map 0:a -acodec copy -c:v h264 out.mp4This clock is not the only overlay I have so finally I have end up with command 216kB long but this I cannot even run in bash because of argument list being too long.
I wanted to re-encode the video only once. Is there any other way I can do that?
Thanks!
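
The limit here comes from the shell's argument list, not from ffmpeg itself, so one workaround (a sketch under stated assumptions, not a verified drop-in) is to write the filtergraph to a file and pass it with -filter_complex_script, which reads the graph from a file instead of the command line. The generator below, the clock_filters.txt file name and the 30-minute duration are illustrative; it also uses unique link labels rather than reusing [tmp]. A single drawtext filter with runtime text expansion (the %{pts} or %{eif} functions) may be an even cleaner way to render a running clock without thousands of chained filters, and either way the video is re-encoded only once.

# Sketch: generate the huge filtergraph into a script file so the shell
# argument-length limit never applies. File names, font path and the
# per-second MM:SS labels are illustrative assumptions.
import subprocess

FONT = "/var/www/sites/manage/elements/digital-7.ttf"
DURATION = 30 * 60  # roughly 30 minutes of video, one clock label per second

chains = []
src = "0:v"
for i in range(DURATION):
    label = "v{}".format(i + 1)
    text = "{:02d}\\:{:02d}".format(i // 60, i % 60)  # MM\:SS, ':' escaped for drawtext
    chains.append(
        "[{src}]drawtext=fontfile={font}:text='{text}':fontcolor=white@1.0:"
        "fontsize=26:x=100:y=65:enable='between(t,{start},{end})'[{label}]".format(
            src=src, font=FONT, text=text, start=i, end=i + 1, label=label
        )
    )
    src = label

# The filtergraph lives in a file, so its size no longer matters to the shell.
with open("clock_filters.txt", "w") as f:
    f.write(";\n".join(chains))

subprocess.check_call([
    "ffmpeg", "-y", "-i", "in.mp4",
    "-filter_complex_script", "clock_filters.txt",
    "-map", "[{}]".format(src), "-map", "0:a",
    "-acodec", "copy", "-c:v", "h264",
    "out.mp4",
])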