
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (92)
-
Sites built with MediaSPIP
2 May 2011, by
This page presents some of the sites running MediaSPIP. You can of course add your own using the form at the bottom of the page.
-
Final creation of the channel
12 March 2010, by
Once your request has been validated, you can proceed with the actual creation of the channel. Each channel is a fully fledged site placed under your responsibility. The platform administrators have no access to it.
Upon validation, you receive an email inviting you to create your channel.
To do so, simply go to its address, in our example "http://votre_sous_domaine.mediaspip.net".
At that point you are asked for a password; you simply need to (...)
-
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows media playback on major mobile platforms with the above (...)
On other sites (5682)
-
ffmpeg intensity histogram adjustment
30 September 2016, by jlarsch
I am using ffmpeg for background correction of a video, and I would like to improve the intensity scaling of the output.
My grayscale videos have dark moving objects on a light background. In 8-bit pixel intensities, the light background has values around 240 and the dark objects around 120.
Outside of ffmpeg, I generate a background image by taking the median frame over some number of frames.
Then I use ffmpeg to blend/divide each frame by the background image. (I want to use division, not subtraction, of the background.)
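For concreteness, a minimal sketch of the median-background step described above, using OpenCV and NumPy; the sample count and file names are assumptions, not from the post:

import cv2
import numpy as np

def median_background(path, num_frames=50):
    """Estimate a static background as the per-pixel median of sampled frames."""
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    # sample num_frames frames evenly spaced across the video
    for idx in np.linspace(0, total - 1, num_frames).astype(int):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        grabbed, frame = cap.read()
        if grabbed:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    cap.release()
    # per-pixel median over the sampled stack, back to 8-bit
    return np.median(np.stack(frames), axis=0).astype(np.uint8)

cv2.imwrite("bgMed.tif", median_background("inputVideo.avi"))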
(There is also some cropping in my ffmpeg command, but it is irrelevant to my question.)
'ffmpeg.exe', '-i', u'inputVideo.avi', '-i', u'bgMed.tif', '-y', '-r', '160', '-filter_complex', "[1:0] setsar=sar=1 [1sared]; [0:0][1sared] blend=all_mode='divide':repeatlast=1,format=gray,split=1 [int1];[int1]crop=1097:1097:12:11:[out1]", '-map', '[out1]', '-c:v', 'libxvid', '-q:v', '5', '-g', '10', u'outputVideo'
This procedure is basically working, but the resulting video frames look too washed out. This is probably expected? I am guessing ffmpeg does the division, produces an internal float result, and then maps it back to an 8-bit output. I would like to stretch the histogram of the division result, preferably before the mapping to 8 bit, for a finer dynamic range.
In my example, I am assuming that the background division produces a result frame with values close to 1 for the background and close to 0.5 for the dark objects. ffmpeg then seems to map the full 0-1 range onto the 8-bit range 0-255. I would like it to map only the 0.5-1 part of the division result onto the 8-bit output range. Is this possible somehow? Or how else can I achieve a similar result?
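One possible direction (an untested sketch, not from the original post): since the blended frame comes back as 8-bit gray, a division result of 0.5 lands near 128 and 1.0 near 255, so appending a lutyuv filter after the blend can stretch 128-255 to the full range; the lut filters clip their output to the valid range automatically. Shown here as the modified '-filter_complex' argument of the command above:

'-filter_complex', "[1:0] setsar=sar=1 [1sared]; [0:0][1sared] blend=all_mode='divide':repeatlast=1,format=gray,lutyuv=y='(val-128)*2',split=1 [int1];[int1]crop=1097:1097:12:11:[out1]"

Note that this stretch happens after the blend result has already been quantized to 8 bits, so it widens the contrast but cannot recover precision lost in that quantization.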
-
How to obtain time markers for video splitting using python/OpenCV
30 March 2016, by Bleddyn Raw-Rees
Hi, I'm new to the world of programming and computer vision, so please bear with me.
I'm working on my MSc project, which is researching automated deletion of low-value content in digital file stores. I'm specifically looking at the sort of long shots that often occur in natural history filming, whereby a static camera is left rolling in order to capture the rare snow leopard or whatever. These shots may have only some 60 seconds of useful content, with perhaps several hours of worthless content either side.
As a first step I have a simple motion-detection program from Adrian Rosebrock's tutorial [http://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/#comment-393376]. Next I intend to use FFmpeg to split the video.
What I would like help with is how to get in and out points based on the first and last points at which motion is detected in the video.
Here is the code should you wish to see it...
# import the necessary packages
import argparse
import datetime
import imutils
import time
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the video file")
ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
args = vars(ap.parse_args())

# if the video argument is None, then we are reading from the webcam
if args.get("video", None) is None:
    camera = cv2.VideoCapture(0)
    time.sleep(0.25)
# otherwise, we are reading from a video file
else:
    camera = cv2.VideoCapture(args["video"])

# initialize the first frame in the video stream
firstFrame = None

# loop over the frames of the video
while True:
    # grab the current frame and initialize the occupied/unoccupied text
    (grabbed, frame) = camera.read()
    text = "Unoccupied"

    # if the frame could not be grabbed, then we have reached the end of the video
    if not grabbed:
        break

    # resize the frame, convert it to grayscale, and blur it
    frame = imutils.resize(frame, width=500)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)

    # if the first frame is None, initialize it
    if firstFrame is None:
        firstFrame = gray
        continue

    # compute the absolute difference between the current frame and the first frame
    frameDelta = cv2.absdiff(firstFrame, gray)
    thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]

    # dilate the thresholded image to fill in holes, then find contours on it
    thresh = cv2.dilate(thresh, None, iterations=2)
    (_, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # loop over the contours
    for c in cnts:
        # if the contour is too small, ignore it
        if cv2.contourArea(c) < args["min_area"]:
            continue

        # compute the bounding box for the contour, draw it on the frame,
        # and update the text
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        text = "Occupied"

    # draw the text and timestamp on the frame
    cv2.putText(frame, "Room Status: {}".format(text), (10, 20),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
    cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
                (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

    # show the frame and record if the user presses a key
    cv2.imshow("Security Feed", frame)
    cv2.imshow("Thresh", thresh)
    cv2.imshow("Frame Delta", frameDelta)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key is pressed, break from the loop
    if key == ord("q"):
        break

# cleanup the camera and close any open windows
camera.release()
cv2.destroyAllWindows()

Thanks!
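One workable approach to the actual question (a hedged sketch, not part of the original post; variable names and file paths are illustrative): read the current position with camera.get(cv2.CAP_PROP_POS_MSEC) whenever a large-enough contour is found, keep the first and last such timestamps, and hand them to FFmpeg as in/out points:

import subprocess

# track when motion starts and stops (milliseconds into the video)
first_motion_ms = None
last_motion_ms = None

# inside the frame loop above, after a contour passes the min-area check:
#     t = camera.get(cv2.CAP_PROP_POS_MSEC)
#     if first_motion_ms is None:
#         first_motion_ms = t   # first frame with motion
#     last_motion_ms = t        # keeps updating until motion stops

# after the loop, trim the video to the detected span without re-encoding
if first_motion_ms is not None:
    subprocess.call([
        "ffmpeg", "-y",
        "-i", "inputVideo.avi",                 # hypothetical input path
        "-ss", str(first_motion_ms / 1000.0),   # in point, seconds
        "-to", str(last_motion_ms / 1000.0),    # out point, seconds
        "-c", "copy",                           # stream copy, no re-encode
        "trimmed.avi",
    ])

With -c copy the cuts snap to keyframes; dropping it and re-encoding gives frame-accurate cuts at the cost of speed.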
-
Merge commit 'a9f3f5fadb57bae3f3ff0be69e56b2c6014f2513'
21 July 2014, by Michael Niedermayer
Merge commit 'a9f3f5fadb57bae3f3ff0be69e56b2c6014f2513'
* commit 'a9f3f5fadb57bae3f3ff0be69e56b2c6014f2513':
Revert "tiff: support reading gray+alpha at 8 bits"
Not merged, the pix fmt is not unknown.
Merged-by: Michael Niedermayer <michaelni@gmx.at>