Other articles (40)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that led to the problem; and a link to the site/page in question.
    If you think you have fixed the bug, open a ticket and attach a corrective patch to it.
    You may also (...)

  • Accepted formats

    28 January 2010, by

    The following commands give information about the formats and codecs handled by the local ffmpeg installation (a short sketch of running them from Python appears after this article list):
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    First of all, we (...)

  • MediaSPIP initialisation (preconfiguration)

    20 February 2010, by

    When MediaSPIP is installed, it is preconfigured for the most common uses.
    This preconfiguration is carried out by a plugin that is enabled by default and cannot be disabled, called MediaSPIP Init.
    This plugin correctly preconfigures each MediaSPIP instance. It must therefore be placed in the plugins-dist/ directory of the site or of the farm, so that it is installed by default, before the site can be used.
    First of all, it enables or disables SPIP options that (...)
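
    Following up on the "Accepted formats" teaser above, a minimal sketch of querying the locally installed ffmpeg for its supported codecs and formats from Python (the h264 filter below is only an illustrative example, not part of the original article):

    import subprocess

    # Mirror `ffmpeg -codecs` and `ffmpeg -formats` from the article above,
    # keeping only the lines that mention h264 as an example.
    for flag in ("-codecs", "-formats"):
        result = subprocess.run(["ffmpeg", "-hide_banner", flag],
                                capture_output=True, text=True, check=True)
        for line in result.stdout.splitlines():
            if "h264" in line:
                print(flag, "->", line)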

On other sites (9254)

  • How to fetch both live video frame and its timestamp from ffmpeg on Windows

    22 February 2017, by vijiboy

    Searching for an alternative, since OpenCV does not provide timestamps for a live camera stream (on Windows), which my computer vision algorithm requires, I found ffmpeg and this excellent article: https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
    The solution uses ffmpeg, accessing its standard output (stdout) stream. I extended it to read the standard error (stderr) stream as well.

    Working up the Python code on Windows, I do receive the video frames from ffmpeg's stdout, but stderr freezes after delivering the showinfo video filter details (the timestamp) for the first frame.

    I recall seeing somewhere on the ffmpeg forum that video filters like showinfo are bypassed when the output is redirected. Is this why the following code does not work as expected?

    Expected: it should write the video frames to disk and print the timestamp details.
    Actual: it writes the video frames but does not get the timestamp (showinfo) details.

    Here's the code I tried:

    import subprocess as sp
    import numpy
    import cv2

    command = [ 'ffmpeg',
               '-i', 'e:\sample.wmv',
               '-pix_fmt', 'rgb24',
               '-vcodec', 'rawvideo',
               '-vf', 'showinfo', # video filter - showinfo will provide frame timestamps
               '-an','-sn', #-an, -sn disables audio and sub-title processing respectively
               '-f', 'image2pipe', '-'] # we need to output to a pipe

    pipe = sp.Popen(command, stdout = sp.PIPE, stderr = sp.PIPE) # TODO someone on ffmpeg forum said video filters (e.g. showinfo) are bypassed when stdout is redirected to pipes???

    for i in range(10):
       raw_image = pipe.stdout.read(1280*720*3)
       img_info = pipe.stderr.read(244) # 244 characters is the current output of showinfo video filter
       print "showinfo output", img_info
       image1 =  numpy.fromstring(raw_image, dtype='uint8')
       image2 = image1.reshape((720,1280,3))  

       # write video frame to file just to verify
       videoFrameName = 'Video_Frame{0}.png'.format(i)
       cv2.imwrite(videoFrameName,image2)

       # throw away the data in the pipe's buffer.
       pipe.stdout.flush()
       pipe.stderr.flush()

    So how can I still get the frame timestamps from ffmpeg into the Python code so that they can be used in my computer vision algorithm?
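
    A plausible explanation, not stated in the post, is a pipe deadlock rather than the filter being bypassed: reading fixed-size chunks from stdout and stderr in turn can block once either OS pipe buffer fills. A minimal sketch of one workaround is to drain stderr on a background thread and parse the showinfo timestamps there; the pts_time regex, frame size and file path are assumptions for illustration:

    import re
    import subprocess as sp
    import threading
    import numpy

    W, H = 1280, 720   # assumed frame size, as in the original command

    command = ['ffmpeg',
               '-i', 'e:/sample.wmv',
               '-pix_fmt', 'rgb24',
               '-vcodec', 'rawvideo',
               '-vf', 'showinfo',   # showinfo logs per-frame details on stderr
               '-an', '-sn',
               '-f', 'image2pipe', '-']

    pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE)

    timestamps = []

    def drain_stderr():
        # Read showinfo lines as they arrive so the stderr buffer never fills up.
        for line in pipe.stderr:
            match = re.search(rb'pts_time:([0-9.]+)', line)
            if match:
                timestamps.append(float(match.group(1)))

    threading.Thread(target=drain_stderr, daemon=True).start()

    for i in range(10):
        raw_image = pipe.stdout.read(W * H * 3)
        if len(raw_image) < W * H * 3:
            break
        frame = numpy.frombuffer(raw_image, dtype='uint8').reshape((H, W, 3))
        # hand `frame` and the matching entry of `timestamps` to the vision code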

  • Raw Audio from NAudio

    20 February 2017, by Ken

    I want to record raw audio from WASAPI loopback with NAudio and pipe it to FFmpeg for streaming via a memory stream. According to this document, FFmpeg can take raw input. However, the resulting audio plays back at roughly 8-10x speed!
    Here is my code:

    waveInput = new WasapiLoopbackCapture();
    waveInput.DataAvailable += new EventHandler<WaveInEventArgs>((object sender, WaveInEventArgs e) =>
    {
       lock (e.Buffer)
       {
           if (waveInput == null)
               return;
           try
           {
               using (System.IO.MemoryStream memoryStream = new System.IO.MemoryStream())
               {
                   memoryStream.Write(e.Buffer, 0, e.Buffer.Length);
                   memoryStream.WriteTo(ffmpeg.StandardInput.BaseStream);
               }
           }
           catch (Exception)
           {
               throw;
           }
       }
    });
    waveInput.StartRecording();

    FFmpeg arguments:

    ffmpegProcess.StartInfo.Arguments = String.Format("-f s16le -i pipe:0 -y output.wav");

    1. Can someone please explain this situation and suggest a solution?
    2. Should I add a WAV header to the memory stream and then pipe it to FFmpeg as WAV data?

    The Working Solution

    waveInput = new WasapiLoopbackCapture();
    waveInput.DataAvailable += new EventHandler<WaveInEventArgs>((object sender, WaveInEventArgs e) =>
    {
       lock (e.Buffer)
       {
           if (waveInput == null)
               return;
           try
           {
               using (System.IO.MemoryStream memoryStream = new System.IO.MemoryStream())
               {
                   memoryStream.Write(e.Buffer, 0, e.BytesRecorded);
                   memoryStream.WriteTo(ffmpeg.StandardInput.BaseStream);
               }
           }
           catch (Exception)
           {
               throw;
           }
       }
    });
    waveInput.StartRecording();

    FFmpeg arguments:

    ffmpegProcess.StartInfo.Arguments = string.Format("-f f32le -ac 2 -ar 44.1k -i pipe:0 -c:a copy -y output.wav");
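
    The fix presumably works because WASAPI loopback capture typically delivers 32-bit IEEE-float stereo samples at the device mix rate, and because only e.BytesRecorded bytes of e.Buffer are valid in a given callback. For reference, a minimal Python sketch of the same piping pattern, feeding raw 32-bit float stereo PCM into ffmpeg's stdin with the arguments from the working solution (the sine-wave generator merely stands in for the loopback capture; file name and duration are made up):

    import math
    import struct
    import subprocess

    # Mirror the working arguments: raw 32-bit float, stereo, 44.1 kHz on stdin.
    ffmpeg = subprocess.Popen(
        ["ffmpeg", "-f", "f32le", "-ac", "2", "-ar", "44100",
         "-i", "pipe:0", "-c:a", "copy", "-y", "output.wav"],
        stdin=subprocess.PIPE)

    rate = 44100
    pcm = bytearray()
    for n in range(rate * 2):                         # two seconds of a 440 Hz tone
        s = 0.2 * math.sin(2 * math.pi * 440 * n / rate)
        pcm += struct.pack("<ff", s, s)               # interleaved left/right samples

    ffmpeg.stdin.write(pcm)
    ffmpeg.stdin.close()
    ffmpeg.wait()
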
  • How to fetch live video frame and its timestamp from ffmpeg to python

    15 February 2017, by vijiboy

    Searching for an alternative, since OpenCV does not provide timestamps for a live camera stream, which my computer vision algorithm requires, I found this excellent article: https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/

    Working up the code on Windows, I still couldn't get the frame timestamps.
    I recall seeing somewhere on the ffmpeg forum that video filters like showinfo are bypassed when the output is redirected. Is this why the following code does not work as expected?

    Expected: it should write the video frames to disk and print the timestamp details.
    Actual: it writes the video frames but does not get the timestamp (showinfo) details.

    Here's the code I tried:

    import subprocess as sp
    import numpy
    import cv2

    command = [ 'ffmpeg',
               '-i', 'e:\sample.wmv',
               '-pix_fmt', 'rgb24',
               '-vcodec', 'rawvideo',
               '-vf', 'showinfo', # video filter - showinfo will provide frame timestamps
               '-an','-sn', #-an, -sn disables audio and sub-title processing respectively
               '-f', 'image2pipe', '-'] # we need to output to a pipe

    pipe = sp.Popen(command, stdout = sp.PIPE, stderr = sp.STDOUT) # TODO someone on ffmpeg forum said video filters (e.g. showinfo) are bypassed when stdout is redirected to pipes???

    for i in range(10):
       raw_image = pipe.stdout.read(1280*720*3)
       img_info = pipe.stdout.read(244) # 244 characters is the current output of showinfo video filter
       print "showinfo output", img_info
       image1 =  numpy.fromstring(raw_image, dtype='uint8')
       image2 = image1.reshape((720,1280,3))  

       # write video frame to file just to verify
       videoFrameName = 'Video_Frame{0}.png'.format(i)
       cv2.imwrite(videoFrameName,image2)

       # throw away the data in the pipe's buffer.
       pipe.stdout.flush()

    So how can I still get the frame timestamps from ffmpeg into the Python code so that they can be used in my computer vision algorithm?
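
    A note on this earlier variant: with stderr = sp.STDOUT the showinfo text is interleaved with the raw frame bytes on the very same stream, so fixed-size reads cannot stay aligned with the frames. Assuming stderr is instead kept on its own pipe (as in the later post above), a small sketch of pulling the frame number and timestamp out of a single showinfo line; the sample line is illustrative only and its exact layout can vary between ffmpeg versions:

    import re

    # An illustrative showinfo log line as emitted on stderr (format may vary).
    line = ("[Parsed_showinfo_0 @ 000001c2f3a4b5c0] n:   3 pts:   3003 "
            "pts_time:0.1001 pos:  1234567 fmt:rgb24 sar:1/1 s:1280x720 i:P iskey:0 type:P")

    match = re.search(r'n:\s*(\d+).*?pts_time:([0-9.]+)', line)
    if match:
        frame_index = int(match.group(1))
        timestamp = float(match.group(2))
        print(frame_index, timestamp)   # -> 3 0.1001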