Media (91)

Other articles (88)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • MediaSPIP Player: potential problems

    22 February 2011

    The player does not work on Internet Explorer
    On Internet Explorer (at least versions 7 and 8), the plugin uses the Flowplayer Flash player to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache’s mod_deflate module.
    If the configuration of that Apache module contains a line that looks like the one below, try removing or commenting it out to see whether the player then works correctly: (...)

  • Updating from version 0.1 to 0.2

    24 June 2013

    Explanation of the notable changes made when moving from MediaSPIP version 0.1 to version 0.3. What’s new?
    Regarding software dependencies: the latest versions of FFmpeg (>= v1.2.1) are used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

On other sites (11715)

  • FFMPEG API - Recording video and audio - Performance issues

    24 July 2015, by Solidus

    I’m developing an app which is able to record video from a webcam and audio from a microphone. I also need to process user input at the same time, as users can, for example, draw on top of the video in real time. I’ve been using Qt, but unfortunately its camera module does not work on Windows, which led me to use FFmpeg to record the video/audio.

    My camera module is now working well, apart from a slight problem with syncing (which I asked about in another thread). The problem is that when I start recording, performance drops (around 50-60% CPU usage). I’m not sure where, or whether, I can improve on this. I figure the problem comes from the running threads. I followed the dranger tutorial to help me develop my code, and I use 3 threads in total: one for capturing the video and audio packets and adding them to a queue, one to process the video packets, and one to process the audio packets. While processing the video packets I also have to convert the frame (which comes raw from the camera feed) to an RGB format so that Qt is able to show it. The frame also has to be converted to YUV420P to be saved to the MP4 file. This could also be hindering performance.
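
    For reference, here is a minimal sketch of the two conversions mentioned above (camera frame to RGB for the Qt preview, and to YUV420P for the MP4 encoder), assuming libswscale is used; the FrameConverters/initConverters/convertFrame names are only illustrative and not part of the actual project:

    extern "C" {
    #include <libavutil/frame.h>
    #include <libavutil/pixfmt.h>
    #include <libswscale/swscale.h>
    }

    //Hypothetical helpers: convert one decoded camera frame to RGB (for the Qt
    //preview) and to YUV420P (for the encoder). The two SwsContexts are created
    //once and reused; rebuilding them for every frame wastes a lot of CPU.
    struct FrameConverters {
       SwsContext *toRgb = nullptr;
       SwsContext *toYuv = nullptr;
    };

    void initConverters(FrameConverters &c, int w, int h, AVPixelFormat srcFmt) {
       c.toRgb = sws_getContext(w, h, srcFmt, w, h, AV_PIX_FMT_RGB24,
                                SWS_FAST_BILINEAR, nullptr, nullptr, nullptr);
       c.toYuv = sws_getContext(w, h, srcFmt, w, h, AV_PIX_FMT_YUV420P,
                                SWS_FAST_BILINEAR, nullptr, nullptr, nullptr);
    }

    //rgbOut and yuvOut are assumed to be AVFrames whose buffers were already
    //allocated (e.g. with av_frame_get_buffer)
    void convertFrame(const FrameConverters &c, const AVFrame *src,
                      AVFrame *rgbOut, AVFrame *yuvOut) {
       //One pass for the Qt preview image, one pass for the encoder input
       sws_scale(c.toRgb, src->data, src->linesize, 0, src->height,
                 rgbOut->data, rgbOut->linesize);
       sws_scale(c.toYuv, src->data, src->linesize, 0, src->height,
                 yuvOut->data, yuvOut->linesize);
    }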

    Every time I try to get a packet from the queue I check whether the queue is empty and, if it is, I tell the thread to sleep until more data is available, as this helps save CPU. The problem is that sometimes the threads don’t wake up in time, the queue starts filling up with packets, and a cumulative delay builds up that never goes away.

    Below is part of the code I am using for the 3 threads.

    One thread captures both audio and video packets and adds them to a queue:

    void FFCapture::grabFrames(int videoStream, int audioStream) {
       grabActive = true;
       videoQueue.clear();
       audioQueue.clear();

       AVPacket pkt;
       while (av_read_frame(iFormatContext, &pkt) >= 0 && active) {
           if(pkt.stream_index == videoStream) {
               enqueueVideoPacket(pkt);
           }
           else if(pkt.stream_index == audioStream) {
               enqueueAudioPacket(pkt);
           }
           else {
               av_free_packet(&pkt);
           }
           //QThread::msleep(20);
       }
       //Wake up threads that might be sleeping
       videoWait.wakeOne();
       audioWait.wakeOne();
       grabActive = false;
       cleanupAll();
    }

    And then I have one thread for each stream (video and audio) :

    void FFCapture::captureVideoFrames() {
       videoActive = true;
       outStream.videoPTS = 2;

       int min = 0;
       if(cacheMs > 0) {
           QThread::msleep(cacheMs);
           min = getVideoQueueSize();
       }

       while(active || (!active && (getVideoQueueSize() > min))) {
           qDebug() << "Video:" << videoQueue.size() << min;
           AVPacket pkt;
           if(dequeueVideoPacket(pkt, min) >= 0) {
               if(processVideoPacket(&pkt) < 0) {
                   av_free_packet(&pkt);
                   break;
               }
               av_free_packet(&pkt);
           }
       }
       videoActive = false;
       cleanupAll();
    }

    void FFCapture::captureAudioFrames() {
       audioActive = true;
       outStream.audioPTS = 0;

       int min = 0;
       if(cacheMs > 0) {
           QThread::msleep(cacheMs);
           min = getAudioQueueSize();
       }

       while(active || (!active && (getAudioQueueSize() > min))) {
           qDebug() << "Audio:" << audioQueue.size() << min;
           AVPacket pkt;

           if(dequeueAudioPacket(pkt, min) >= 0) {
               if(recording) {
                   if(processAudioPacket(&pkt) < 0) break;
               }
               else av_free_packet(&pkt);
           }
       }
       audioActive = false;
       cleanupAll();
    }

    When I remove a packet from the queue I check whether the queue is empty and, if it is, I tell the thread to wait for more data. The code is as follows:

    void FFCapture::enqueueVideoPacket(const AVPacket &pkt) {
       QMutexLocker locker(&videoQueueMutex);
       videoQueue.enqueue(pkt);
       videoWait.wakeOne();
    }

    int FFCapture::dequeueVideoPacket(AVPacket &pkt, int sizeConstraint) {
       QMutexLocker locker(&videoQueueMutex);
       while(1) {
           if(videoQueue.size() > sizeConstraint) {
               pkt = videoQueue.dequeue();
               return 0;
           }
           else if(!active) {
               return -1;
           }
           else {
               videoWait.wait(&videoQueueMutex);
           }
       }
       return -2; //Should never happen. Just to avoid compile error.
    }

  • How to obtain time markers for video splitting using python/OpenCV

    30 March 2016, by Bleddyn Raw-Rees

    Hi, I’m new to the world of programming and computer vision, so please bear with me.

    I’m working on my MSc project which is researching automated deletion of low value content in digital file stores. I’m specifically looking at the sort of long shots that often occur in natural history filming whereby a static camera is left rolling in order to capture the rare snow leopard or whatever. These shots may only have some 60s of useful content with perhaps several hours of worthless content either side.

    As a first step I have a simple motion detection program from Adrian Rosebrock’s tutorial [http://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/#comment-393376]. Next I intend to use FFMPEG to split the video.

    What I would like help with is how to get in and out points based on the first and last points that motion is detected in the video.
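
    A rough sketch of one possible approach, assuming OpenCV 3 (so cv2.CAP_PROP_FPS is available) and reusing the "Occupied" text flag set inside the loop of the code below; frameIdx, firstMotionFrame, lastMotionFrame and the ffmpeg command are only illustrative:

    # illustrative sketch, not part of the original script
    import subprocess
    import cv2

    fps = camera.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if the file reports 0
    frameIdx = -1
    firstMotionFrame = None
    lastMotionFrame = None

    # inside the existing while-loop, after `text` has been set for the frame:
    #    frameIdx += 1
    #    if text == "Occupied":
    #        if firstMotionFrame is None:
    #            firstMotionFrame = frameIdx
    #        lastMotionFrame = frameIdx

    # after the loop: turn frame indices into seconds and cut with ffmpeg
    if firstMotionFrame is not None:
        start = firstMotionFrame / fps
        end = (lastMotionFrame + 1) / fps
        subprocess.call(["ffmpeg", "-i", args["video"], "-ss", str(start),
                         "-to", str(end), "-c", "copy", "trimmed.mp4"])

    With -c copy the cut snaps to the nearest keyframes; dropping that flag re-encodes the clip but makes the cut frame-accurate.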

    Here is the code should you wish to see it...

    # import the necessary packages
    import argparse
    import datetime
    import imutils
    import time
    import cv2

    # construct the argument parser and parse the arguments
    ap = argparse.ArgumentParser()
    ap.add_argument("-v", "--video", help="path to the video file")
    ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
    args = vars(ap.parse_args())

    # if the video argument is None, then we are reading from webcam
    if args.get("video", None) is None:
       camera = cv2.VideoCapture(0)
       time.sleep(0.25)

    # otherwise, we are reading from a video file
    else:
       camera = cv2.VideoCapture(args["video"])

    # initialize the first frame in the video stream
    firstFrame = None

    # loop over the frames of the video
    while True:
       # grab the current frame and initialize the occupied/unoccupied
       # text
       (grabbed, frame) = camera.read()
       text = "Unoccupied"

       # if the frame could not be grabbed, then we have reached the end
       # of the video
       if not grabbed:
           break

       # resize the frame, convert it to grayscale, and blur it
       frame = imutils.resize(frame, width=500)
       gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
       gray = cv2.GaussianBlur(gray, (21, 21), 0)

       # if the first frame is None, initialize it
       if firstFrame is None:
           firstFrame = gray
           continue

       # compute the absolute difference between the current frame and
       # first frame
       frameDelta = cv2.absdiff(firstFrame, gray)
       thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]

       # dilate the thresholded image to fill in holes, then find contours
       # on thresholded image
       thresh = cv2.dilate(thresh, None, iterations=2)
       (_, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

       # loop over the contours
       for c in cnts:
           # if the contour is too small, ignore it
           if cv2.contourArea(c) < args["min_area"]:
               continue

           # compute the bounding box for the contour, draw it on the frame,
           # and update the text
           (x, y, w, h) = cv2.boundingRect(c)
           cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
           text = "Occupied"

       # draw the text and timestamp on the frame
       cv2.putText(frame, "Room Status: {}".format(text), (10, 20),
           cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
       cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
           (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

       # show the frame and record if the user presses a key
       cv2.imshow("Security Feed", frame)
       cv2.imshow("Thresh", thresh)
       cv2.imshow("Frame Delta", frameDelta)
       key = cv2.waitKey(1) & 0xFF

       # if the `q` key is pressed, break from the loop
       if key == ord("q"):
           break

    # cleanup the camera and close any open windows
    camera.release()
    cv2.destroyAllWindows()

    Thanks !
