
Other articles (112)

  • Customizing by adding your own logo, banner or background image

    5 September 2013, by

    Some themes take three customization elements into account: adding a logo; adding a banner; adding a background image;

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all types.
    It creates "médias", namely: a "média" is an article in the SPIP sense, created automatically when a document is uploaded, whether it is audio, video, image or text; only a single document can be linked to a so-called "média" article;

  • Submitting improvements and additional plugins

    10 April 2011

    If you have developed a new extension that adds one or more features useful to MediaSPIP, let us know and its integration into the official distribution will be considered.
    You can use the development mailing list to announce it or to ask for help with writing the plugin. Since MediaSPIP is based on SPIP, you can also use SPIP's SPIP-zone mailing list to (...)

On other sites (11163)

  • How to fetch both live video frame and its timestamp from ffmpeg on Windows

    22 February 2017, by vijiboy

    Looking for an alternative, since OpenCV does not provide timestamps for a live camera stream (on Windows) and my computer vision algorithm requires them, I found ffmpeg and this excellent article: https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
    The solution there uses ffmpeg and reads its standard output (stdout) stream. I extended it to read the standard error (stderr) stream as well.

    Working through the Python code on Windows, I received the video frames from ffmpeg's stdout, but stderr froze after delivering the showinfo video filter details (timestamps) for the first frame.

    I recall seeing somewhere on the ffmpeg forum that video filters like showinfo are bypassed when the output is redirected. Is this why the following code does not work as expected?

    Expected: It should write video frames to disk as well as print the timestamp details.
    Actual: It writes the video frames but does not get the timestamp (showinfo) details.

    Here's the code I tried:

    import subprocess as sp
    import numpy
    import cv2

    command = [ 'ffmpeg',
               '-i', r'e:\sample.wmv',
               '-pix_fmt', 'rgb24',
               '-vcodec', 'rawvideo',
               '-vf', 'showinfo', # video filter - showinfo will provide frame timestamps
               '-an','-sn', #-an, -sn disables audio and sub-title processing respectively
               '-f', 'image2pipe', '-'] # we need to output to a pipe

    pipe = sp.Popen(command, stdout = sp.PIPE, stderr = sp.PIPE) # TODO someone on ffmpeg forum said video filters (e.g. showinfo) are bypassed when stdout is redirected to pipes???

    for i in range(10):
       raw_image = pipe.stdout.read(1280*720*3)
       img_info = pipe.stderr.read(244) # 244 characters is the current output of showinfo video filter
       print "showinfo output", img_info
       image1 =  numpy.fromstring(raw_image, dtype='uint8')
       image2 = image1.reshape((720,1280,3))  

       # write video frame to file just to verify
       videoFrameName = 'Video_Frame{0}.png'.format(i)
       cv2.imwrite(videoFrameName,image2)

       # throw away the data in the pipe's buffer.
       pipe.stdout.flush()
       pipe.stderr.flush()

    So how can I still get the frame timestamps from ffmpeg into the Python code so they can be used in my computer vision algorithm?
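
    For what it's worth, a common workaround for this kind of freeze is to drain stderr on a separate thread while the main loop reads raw frames from stdout, so that neither OS pipe buffer fills up and blocks ffmpeg. Here is a minimal sketch of that idea (Python 3 syntax); the input file, the 1280x720 frame size and the pts_time filtering are carried over from the question as assumptions, not verified values.

    import subprocess as sp
    import threading

    def drain_stderr(stderr, info_lines):
        # read showinfo output line by line so the stderr pipe buffer never fills up
        for line in iter(stderr.readline, b''):
            if b'pts_time' in line:
                info_lines.append(line.decode('utf-8', 'replace').strip())

    command = ['ffmpeg', '-i', r'e:\sample.wmv',
               '-pix_fmt', 'rgb24', '-vcodec', 'rawvideo',
               '-vf', 'showinfo',   # showinfo logs per-frame timestamps to stderr
               '-an', '-sn',
               '-f', 'image2pipe', '-']

    pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE, bufsize=10**8)

    info_lines = []
    reader = threading.Thread(target=drain_stderr, args=(pipe.stderr, info_lines))
    reader.daemon = True
    reader.start()

    frame_size = 1280 * 720 * 3   # one rgb24 frame
    for i in range(10):
        raw_image = pipe.stdout.read(frame_size)
        if len(raw_image) < frame_size:
            break
        # info_lines is filled asynchronously by the stderr reader thread
        print("latest showinfo line:", info_lines[-1] if info_lines else "none yet")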

  • Raw Audio from NAudio

    20 February 2017, by Ken

    I want to record raw audio from the WASAPI loopback with NAudio and pipe it to FFmpeg for streaming via a memory stream. According to the documentation, FFmpeg can take raw audio as input. However, the result plays back at roughly 8-10x speed!
    Here is my code:

    waveInput = new WasapiLoopbackCapture();
    waveInput.DataAvailable += new EventHandler<WaveInEventArgs>((object sender, WaveInEventArgs e) =>
    {
       lock (e.Buffer)
       {
           if (waveInput == null)
               return;
           try
           {
               using (System.IO.MemoryStream memoryStream = new System.IO.MemoryStream())
               {
                   memoryStream.Write(e.Buffer, 0, e.Buffer.Length);
                   memoryStream.WriteTo(ffmpeg.StandardInput.BaseStream);
               }
           }
           catch (Exception)
           {
               throw;
           }
       }
    });
    waveInput.StartRecording();

    FFmpeg Arguments:

    ffmpegProcess.StartInfo.Arguments = String.Format("-f s16le -i pipe:0 -y output.wav");

    1. Can someone please explain this situation and give me a solution?
    2. Should I add a WAV header to the memory stream and then pipe it to FFmpeg as WAV format?

    The Working Solution

    waveInput = new WasapiLoopbackCapture();
    waveInput.DataAvailable += new EventHandler<WaveInEventArgs>((object sender, WaveInEventArgs e) =>
    {
       lock (e.Buffer)
       {
           if (waveInput == null)
               return;
           try
           {
               using (System.IO.MemoryStream memoryStream = new System.IO.MemoryStream())
               {
                   memoryStream.Write(e.Buffer, 0, e.BytesRecorded);
                   memoryStream.WriteTo(ffmpeg.StandardInput.BaseStream);
               }
           }
           catch (Exception)
           {
               throw;
           }
       }
    });
    waveInput.StartRecording();

    FFmpeg Arguments:

    ffmpegProcess.StartInfo.Arguments = string.Format("-f f32le -ac 2 -ar 44.1k -i pipe:0 -c:a copy -y output.wav");
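
    For what it's worth, the odd playback speed in the first attempt is consistent with a raw-format mismatch: WasapiLoopbackCapture normally delivers 32-bit IEEE float stereo at the shared-mode mix rate (often 44.1 kHz), whereas "-f s16le" with no "-ac"/"-ar" makes FFmpeg fall back to its raw PCM defaults (16-bit, mono, 44.1 kHz). A rough back-of-the-envelope check under those assumptions, purely illustrative:

    # assumed actual capture format: 32-bit float, 2 channels, 44100 Hz
    actual_bytes_per_second = 44100 * 2 * 4    # 352800
    # assumed FFmpeg interpretation of "-f s16le" with no -ac/-ar: 16-bit, 1 channel, 44100 Hz
    declared_bytes_per_second = 44100 * 1 * 2  # 88200
    print(actual_bytes_per_second / declared_bytes_per_second)  # 4.0

    That factor of 4, plus the padding bytes written by using e.Buffer.Length instead of e.BytesRecorded, lands in the ballpark of the 8-10x distortion reported; the working solution above corrects both points.
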
  • How to fetch live video frame and its timestamp from ffmpeg to python

    15 February 2017, by vijiboy

    Looking for an alternative, since OpenCV does not provide timestamps for a live camera stream and my computer vision algorithm requires them, I found this excellent article: https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/

    Working through the code on Windows, I still couldn't get the frame timestamps.
    I recall seeing somewhere on the ffmpeg forum that video filters like showinfo are bypassed when the output is redirected. Is this why the following code does not work as expected?

    Expected: It should write video frames to disk as well as print the timestamp details.
    Actual: It writes the video frames but does not get the timestamp (showinfo) details.

    Here's the code I tried:

    import subprocess as sp
    import numpy
    import cv2

    command = [ 'ffmpeg',
               '-i', r'e:\sample.wmv',
               '-pix_fmt', 'rgb24',
               '-vcodec', 'rawvideo',
               '-vf', 'showinfo', # video filter - showinfo will provide frame timestamps
               '-an','-sn', #-an, -sn disables audio and sub-title processing respectively
               '-f', 'image2pipe', '-'] # we need to output to a pipe

    pipe = sp.Popen(command, stdout = sp.PIPE, stderr = sp.STDOUT) # TODO someone on ffmpeg forum said video filters (e.g. showinfo) are bypassed when stdout is redirected to pipes???

    for i in range(10):
       raw_image = pipe.stdout.read(1280*720*3)
       img_info = pipe.stdout.read(244) # 244 characters is the current output of showinfo video filter
       print "showinfo output", img_info
       image1 =  numpy.fromstring(raw_image, dtype='uint8')
       image2 = image1.reshape((720,1280,3))  

       # write video frame to file just to verify
       videoFrameName = 'Video_Frame{0}.png'.format(i)
       cv2.imwrite(videoFrameName,image2)

       # throw away the data in the pipe's buffer.
       pipe.stdout.flush()

    So how can I still get the frame timestamps from ffmpeg into the Python code so they can be used in my computer vision algorithm?
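
    For what it's worth, once the showinfo text is captured separately from the raw frame bytes (for example on its own stderr pipe with a reader thread, as sketched under the earlier question, rather than merged into stdout where it corrupts the fixed-size frame reads), the timestamp is easier to pull out of each log line than to grab with a fixed 244-byte read, because the line length varies. A minimal parsing sketch (Python 3; the sample line is abbreviated and only illustrative):

    import re

    # showinfo lines on stderr typically contain an "n:" frame index and a "pts_time:" field
    line = "[Parsed_showinfo_0 @ 000001] n:   0 pts:      0 pts_time:0 ..."

    match = re.search(r'pts_time:\s*([0-9.]+)', line)
    if match:
        pts_time = float(match.group(1))   # presentation timestamp in seconds
        print(pts_time)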