Other articles (62)

  • Accepted formats

    28 January 2010

    The following commands give information about the formats and codecs supported by the local installation of ffmpeg:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    As a first step we (...)
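    To check whether a particular codec is available, the output of these commands can be filtered; for example (a simple illustration, not from the article itself):
    ffmpeg -codecs | grep -i h264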

  • Adding notes and captions to images

    7 February 2011

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Requesting the creation of a channel

    12 March 2010

    Depending on the platform's configuration, the user may have two different ways of requesting the creation of a channel. The first is at the time of registration; the second, after registration, by filling in a request form.
    Both methods ask for the same information and work in much the same way: the prospective user must fill in a series of form fields which first of all give the administrators information about (...)

On other sites (9435)

  • Compiling FFMPEG for an Android app

    7 March 2016, by Sanat Pandey

    I have to extract the audio from a video file and join multiple images into a single video with audio, so I have to compile the FFMPEG sources for our Android app. But when I run my build_android.sh it returns an unexpected error. Please suggest a solution.

    build_android.sh:

    #!/bin/bash

    NDK=/Users/sanatpandey/Desktop/android-sdk-macosx/android-ndk-r10e
    PLATFORM=$NDK/platforms/android-9/arch-arm/
    PREBUILT=$NDK/toolchains/arm-linux-androideabi-4.6/prebuilt/darwin-x86_64

    function build_one
    {
       ./configure --target-os=linux \
           --prefix=$PREFIX \
           --enable-cross-compile \
           --extra-libs="-lgcc" \
           --arch=arm \
           --cc=$PREBUILT/bin/arm-linux-androideabi-gcc \
           --cross-prefix=$PREBUILT/bin/arm-linux-androideabi- \
           --nm=$PREBUILT/bin/arm-linux-androideabi-nm \
           --sysroot=$PLATFORM \
           --extra-cflags=" -O3 -fpic -DANDROID -DHAVE_SYS_UIO_H=1 -Dipv6mr_interface=ipv6mr_ifindex \
           -fasm -Wno-psabi -fno-short-enums  -fno-strict-aliasing -finline-limit=300 $OPTIMIZE_CFLAGS " \
           --disable-shared \
           --enable-static \
           --extra-ldflags="-Wl,-rpath-link=$PLATFORM/usr/lib -L$PLATFORM/usr/lib  -nostdlib -lc -lm -ldl -llog" \
           --enable-parsers \
           --enable-encoders  \
           --enable-decoders \
           --disable-muxers \
           --enable-demuxers \
           --enable-swscale  \
           --disable-ffmpeg \
           --disable-ffplay \
           --disable-ffprobe \
           --disable-ffserver \
           --enable-network \
           --enable-indevs \
           --disable-bsfs \
           --enable-filters \
           --enable-protocols  \
           --enable-asm \
           $ADDITIONAL_CONFIGURE_FLAG

           #make clean
           make  -j4 install
           $PREBUILT/bin/arm-linux-androideabi-ar d libavcodec/libavcodec.a inverse.o

           $PREBUILT/bin/arm-linux-androideabi-ld -rpath-link=$PLATFORM/usr/lib \
               -L$PLATFORM/usr/lib  -soname libffmpeg.so -shared -nostdlib \
               -Bsymbolic --whole-archive --no-undefined -o $PREFIX/libffmpeg.so \
               libavcodec/libavcodec.a libavdevice/libavdevice.a libavfilter/libavfilter.a \
               libavformat/libavformat.a libavutil/libavutil.a libswscale/libswscale.a \
               libswresample/libswresample.a -lc -lm -lz -ldl -llog \
               --dynamic-linker=/system/bin/linker \
               $PREBUILT/lib/gcc/arm-linux-androideabi/4.6/libgcc.a
    }

     # compile the arm v6 version
    CPU=armv6
    OPTIMIZE_CFLAGS="-marm -march=$CPU"
    PREFIX=./android/$CPU
    ADDITIONAL_CONFIGURE_FLAG=
    #build_one

     # compile the arm v7vfpv3 version
    CPU=armv7-a
    OPTIMIZE_CFLAGS="-mfloat-abi=softfp -mfpu=vfpv3-d16 -marm -march=$CPU "
    PREFIX=./android/$CPU
    ADDITIONAL_CONFIGURE_FLAG=
    #build_one

     # compile the arm v7vfp version
    CPU=armv7-a
    OPTIMIZE_CFLAGS="-mfloat-abi=softfp -mfpu=vfp -marm -march=$CPU "
    PREFIX=./android/$CPU-vfp
    ADDITIONAL_CONFIGURE_FLAG=
    build_one

     # compile the arm v7n version
    CPU=armv7-a
    OPTIMIZE_CFLAGS="-mfloat-abi=softfp -mfpu=neon -marm -march=$CPU -mtune=cortex-a8"
    PREFIX=./android/$CPU
    ADDITIONAL_CONFIGURE_FLAG=--enable-neon
    #build_one

     # compile the arm v6+vfp version
    CPU=armv6
    OPTIMIZE_CFLAGS="-DCMP_HAVE_VFP -mfloat-abi=softfp -mfpu=vfp -marm -march=$CPU"
    PREFIX=./android/${CPU}_vfp
    ADDITIONAL_CONFIGURE_FLAG=
    #build_one

    Error:

     :android-sdk-macosx sanatpandey$ ./build_android.sh
     ./build_android.sh: line 1: rtf1ansiansicpg1252cocoartf1347cocoasubrtf570: command not found
     ./build_android.sh: line 2: syntax error near unexpected token `}'
     ./build_android.sh: line 2: `\fonttbl\f0\fmodern\fcharset0 Courier;'
     :android-sdk-macosx sanatpandey$

    Thanks in advance
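
    The tokens rtf1ansi... and \fonttbl in that output are the header of an RTF document, which suggests build_android.sh was saved as rich text (TextEdit's default on OS X) rather than as plain text, so bash is parsing RTF markup instead of shell code. One likely fix is to re-save the script as plain text (Format > Make Plain Text in TextEdit) or convert it from the command line:

     textutil -convert txt build_android.sh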

  • How to obtain time markers for video splitting using Python/OpenCV

    30 March 2016, by Bleddyn Raw-Rees

    Hi, I'm new to the world of programming and computer vision, so please bear with me.

    I'm working on my MSc project, which is researching automated deletion of low-value content in digital file stores. I'm specifically looking at the sort of long shots that often occur in natural history filming, whereby a static camera is left rolling in order to capture the rare snow leopard or whatever. These shots may have only some 60 seconds of useful content, with perhaps several hours of worthless content on either side.

    As a first step I have a simple motion detection program from Adrian Rosebrock’s tutorial [http://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/#comment-393376]. Next I intend to use FFMPEG to split the video.

    What I would like help with is how to get in and out points based on the first and last points that motion is detected in the video.

    Here is the code, should you wish to see it...

    # import the necessary packages
    import argparse
    import datetime
    import imutils
    import time
    import cv2

    # construct the argument parser and parse the arguments
    ap = argparse.ArgumentParser()
    ap.add_argument("-v", "--video", help="path to the video file")
    ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
    args = vars(ap.parse_args())

    # if the video argument is None, then we are reading from webcam
     if args.get("video", None) is None:
        camera = cv2.VideoCapture(0)
        time.sleep(0.25)

    # otherwise, we are reading from a video file
    else:
       camera = cv2.VideoCapture(args["video"])

    # initialize the first frame in the video stream
    firstFrame = None

    # loop over the frames of the video
    while True:
       # grab the current frame and initialize the occupied/unoccupied
       # text
       (grabbed, frame) = camera.read()
       text = "Unoccupied"

       # if the frame could not be grabbed, then we have reached the end
       # of the video
       if not grabbed:
           break

       # resize the frame, convert it to grayscale, and blur it
       frame = imutils.resize(frame, width=500)
       gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
       gray = cv2.GaussianBlur(gray, (21, 21), 0)

       # if the first frame is None, initialize it
       if firstFrame is None:
           firstFrame = gray
           continue

       # compute the absolute difference between the current frame and
       # first frame
       frameDelta = cv2.absdiff(firstFrame, gray)
       thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]

       # dilate the thresholded image to fill in holes, then find contours
       # on thresholded image
       thresh = cv2.dilate(thresh, None, iterations=2)
       (_, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

       # loop over the contours
       for c in cnts:
           # if the contour is too small, ignore it
           if cv2.contourArea(c) < args["min_area"]:
               continue

           # compute the bounding box for the contour, draw it on the frame,
           # and update the text
           (x, y, w, h) = cv2.boundingRect(c)
           cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
           text = "Occupied"

       # draw the text and timestamp on the frame
       cv2.putText(frame, "Room Status: {}".format(text), (10, 20),
           cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
       cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
           (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

       # show the frame and record if the user presses a key
       cv2.imshow("Security Feed", frame)
       cv2.imshow("Thresh", thresh)
       cv2.imshow("Frame Delta", frameDelta)
       key = cv2.waitKey(1) & 0xFF

        # if the `q` key is pressed, break from the loop
       if key == ord("q"):
           break

    # cleanup the camera and close any open windows
    camera.release()
    cv2.destroyAllWindows()
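
     One possible way to turn these detections into in and out points (a minimal sketch, not from the original post: it assumes OpenCV 3's cv2.CAP_PROP_POS_MSEC for the current position, and input.mp4 / output.mp4 are placeholder file names):

     # track the first and last timestamps (in ms) at which motion was seen
     first_motion_ms = None
     last_motion_ms = None

     # inside the frame loop, after the contour check has set `text`:
     #     if text == "Occupied":
     #         pos_ms = camera.get(cv2.CAP_PROP_POS_MSEC)
     #         if first_motion_ms is None:
     #             first_motion_ms = pos_ms   # in point
     #         last_motion_ms = pos_ms        # out point

     # after the loop, cut the clip between the two markers without re-encoding
     import subprocess
     if first_motion_ms is not None:
         start = first_motion_ms / 1000.0
         duration = (last_motion_ms - first_motion_ms) / 1000.0
         subprocess.call(["ffmpeg", "-ss", str(start), "-i", "input.mp4",
                          "-t", str(duration), "-c", "copy", "output.mp4"])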

    Thanks!

  • How do I get FFMPEG to build a video using the same timing as my input?

    15 April 2016, by Forest J. Handford

    I'm trying to create a video of the screen actions a user takes, by piping screenshots to FFMPEG from a C# console application. I'm sending 10 frames per second. The final video has exactly as many frames as I sent (i.e. a 10-second video has 100 frames). The duration of the video, however, does not match. With the code below I get 7m 47s of video from 490751 ms of input. I've found that PTS gets me a little closer, but it feels like I'm doing something wrong.

       private const int VID_FRAME_FPS = 10;
       private const double PTS = 2.4444;

       /// <summary>
       /// Generates the Videos by gathering frames and processing via FFMPEG.
       /// Deletes the generated Frame images after successfully compiling the video.
       /// </summary>
       public static void RecordScreen(string pathToOutput)
       {
           Logger.log.Info("Launching FFMPEG ....");
           String arg = "-f image2pipe -i pipe:.bmp -filter:v \"setpts = " + PTS + " * PTS\" -r " + VID_FRAME_FPS + " -pix_fmt yuv420p -qscale:v 5 -vcodec libvpx -bufsize 30000k -y \"" + pathToOutput + "\\VidOut.webm\"";
           //String arg = "-f image2pipe -i pipe:.bmp -filter:v \"setpts = " + PTS + " * PTS\" -r " + VID_FRAME_FPS + " -pix_fmt yuv420p -qscale:v 5 -vcodec libx264 -bufsize 30000k -y \"" + pathToOutput + "\\VidOut.mp4\"";
           Process launchingFFMPEG = new Process
           {
               StartInfo = new ProcessStartInfo
               {
                   FileName = "ffmpeg",
                   Arguments = arg,
                   UseShellExecute = false,
                   CreateNoWindow = true,
                   RedirectStandardInput = true
               }
           };
           launchingFFMPEG.Start();

           System.Drawing.Image img;
        Stopwatch stopWatch = Stopwatch.StartNew(); // creates and starts the Stopwatch instance
           int sleep;

           Stopwatch vidTime = Stopwatch.StartNew();

           do
           {
               img = Capture.GetScreen();
               img.Save(launchingFFMPEG.StandardInput.BaseStream, System.Drawing.Imaging.ImageFormat.Bmp);
               img.Dispose();

               sleep = 10 * VID_FRAME_FPS - (int)stopWatch.ElapsedMilliseconds;
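        // note: 10 * VID_FRAME_FPS (100) equals the per-frame interval 1000 / VID_FRAME_FPS only because VID_FRAME_FPS is 10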
               if (sleep > 0)
               {
                   Logger.log.Info("Captured frame, sleeping " + sleep + " milliseconds.");
                   Thread.Sleep(sleep);
               }
               stopWatch.Restart();
           } while (workerThread.IsAlive);
           Logger.log.Debug("Video Time: " + vidTime.ElapsedMilliseconds);
           launchingFFMPEG.StandardInput.Flush();
           launchingFFMPEG.StandardInput.Close();
           launchingFFMPEG.Close();
       }

    Is there a way to do this without PTS? If I need PTS, what is the correct value? It seems that a PTS of 2.565656 is close to correct.

    All the related documentation points to just using -r (the frame rate option), but that doesn't work as I'm using it.

    Note: I'm only using H.264 for debugging with ffprobe; I plan to switch back to WebM when this is resolved. I'm trying to avoid the H.265 and MP4 patents.
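
    One commonly suggested alternative (an assumption to verify, not something from the original post): rather than rescaling timestamps with setpts after the fact, declare the real input frame rate before -i so that ffmpeg stamps the piped frames at the capture rate from the start, e.g.:

     ffmpeg -f image2pipe -framerate 10 -i pipe:.bmp -pix_fmt yuv420p -qscale:v 5 -vcodec libvpx -bufsize 30000k -y VidOut.webm

    With the input rate declared this way, the setpts filter and the output -r should not be needed for the duration to match the capture time, provided frames really are written at 10 per second.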