
Other articles (58)

  • Taking part in its translation

    10 April 2011

    You can help us improve the wording used in the software or translate it into any new language, allowing it to reach new linguistic communities.
    To do so, we use the SPIP translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list and ask for more information.
    At the moment, MediaSPIP is only available in French and (...)

  • Support for all types of media

    10 April 2011

    Unlike many other modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and other formats (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • MediaSPIP Player: potential problems

    22 February 2011

    The player does not work in Internet Explorer
    On Internet Explorer (at least versions 8 and 7), the plugin uses the flowplayer Flash player to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
    If the configuration of that Apache module contains a line that looks like the following one, try removing it or commenting it out to see whether the player then works correctly: (...)

On other sites (14889)

  • How to fetch both live video frame and its timestamp from ffmpeg on Windows

    22 February 2017, by vijiboy

    Since OpenCV does not provide timestamps for a live camera stream on Windows, and my computer vision algorithm needs them, I searched for an alternative and found ffmpeg and this excellent article: https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
    The solution there uses ffmpeg and reads its standard output (stdout) stream. I extended it to read the standard error (stderr) stream as well.

    Running the Python code on Windows, I do receive the video frames from ffmpeg's stdout, but the read from stderr freezes after delivering the showinfo video-filter details (the timestamp) for the first frame.

    I recall reading somewhere on an ffmpeg forum that video filters such as showinfo are bypassed when output is redirected. Is this why the following code does not work as expected?

    Expected: it should write the video frames to disk and also print the timestamp details.
    Actual: it writes the video frames but does not get the timestamp (showinfo) details.

    Here's the code I tried:

    import subprocess as sp
    import numpy
    import cv2

    command = [ 'ffmpeg',
               '-i', 'e:\sample.wmv',
               '-pix_fmt', 'rgb24',
               '-vcodec', 'rawvideo',
               '-vf', 'showinfo', # video filter - showinfo will provide frame timestamps
               '-an','-sn', #-an, -sn disables audio and sub-title processing respectively
               '-f', 'image2pipe', '-'] # we need to output to a pipe

    pipe = sp.Popen(command, stdout = sp.PIPE, stderr = sp.PIPE) # TODO someone on ffmpeg forum said video filters (e.g. showinfo) are bypassed when stdout is redirected to pipes???

    for i in range(10):
       raw_image = pipe.stdout.read(1280*720*3)
       img_info = pipe.stderr.read(244) # 244 characters is the current output of showinfo video filter
       print "showinfo output", img_info
       image1 =  numpy.fromstring(raw_image, dtype='uint8')
       image2 = image1.reshape((720,1280,3))  

       # write video frame to file just to verify
       videoFrameName = 'Video_Frame{0}.png'.format(i)
       cv2.imwrite(videoFrameName,image2)

       # throw away the data in the pipe's buffer.
       pipe.stdout.flush()
       pipe.stderr.flush()

    So how can I still get the frame timestamps from ffmpeg into the Python code so they can be used in my computer vision algorithm? (One possible workaround is sketched below.)
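
    The freeze is more likely a pipe-buffering issue than the filter being bypassed: reading a fixed 244 bytes from stderr can block while both pipes are serviced from the same loop. A common workaround, shown here only as a minimal Python 3 sketch (not from the original post), is to drain stderr line by line on a separate thread while the main loop keeps reading frames from stdout; the input path and the 1280x720 rgb24 frame size are taken over from the question.

    import subprocess as sp
    import threading

    command = ['ffmpeg',
               '-i', r'e:\sample.wmv',   # same input file as in the question
               '-pix_fmt', 'rgb24',
               '-vcodec', 'rawvideo',
               '-vf', 'showinfo',        # showinfo prints per-frame details on stderr
               '-an', '-sn',
               '-f', 'image2pipe', '-']

    pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE)

    def drain_stderr(stream):
        # Read stderr line by line in its own thread so it can never block
        # the frame reads happening on stdout.
        for line in iter(stream.readline, b''):
            text = line.decode('utf-8', errors='replace')
            if 'pts_time:' in text:      # showinfo lines carry the timestamps
                print('showinfo:', text.strip())

    threading.Thread(target=drain_stderr, args=(pipe.stderr,), daemon=True).start()

    FRAME_SIZE = 1280 * 720 * 3          # rgb24 frame size used in the question
    for i in range(10):
        raw_image = pipe.stdout.read(FRAME_SIZE)
        if len(raw_image) < FRAME_SIZE:
            break                        # end of stream
        # reshape with numpy / write with cv2 exactly as in the question

    pipe.stdout.close()
    pipe.terminate()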

  • Raw Audio from NAudio

    20 February 2017, by Ken

    I want to record raw audio from the WASAPI loopback with NAudio and pipe it to FFmpeg for streaming, via a memory stream. According to this document, FFmpeg can take raw input. However, the result plays at around 8-10x speed!
    Here is my code:

    waveInput = new WasapiLoopbackCapture();
    waveInput.DataAvailable += new EventHandler<WaveInEventArgs>((object sender, WaveInEventArgs e) =>
    {
       lock (e.Buffer)
       {
           if (waveInput == null)
               return;
           try
           {
               using (System.IO.MemoryStream memoryStream = new System.IO.MemoryStream())
               {
                   memoryStream.Write(e.Buffer, 0, e.Buffer.Length);
                   memoryStream.WriteTo(ffmpeg.StandardInput.BaseStream);
               }
           }
           catch (Exception)
           {
               throw;
           }
       }
    });
    waveInput.StartRecording();

    FFmpeg arguments:

    ffmpegProcess.StartInfo.Arguments = String.Format("-f s16le -i pipe:0 -y output.wav");

    1. Can someone please explain this situation and give me a solution?
    2. Should I add a WAV header to the memory stream and then pipe it to FFmpeg as WAV format?

    The Working Solution

    waveInput = new WasapiLoopbackCapture();
    waveInput.DataAvailable += new EventHandler<WaveInEventArgs>((object sender, WaveInEventArgs e) =>
    {
       lock (e.Buffer)
       {
           if (waveInput == null)
               return;
           try
           {
               using (System.IO.MemoryStream memoryStream = new System.IO.MemoryStream())
               {
                   memoryStream.Write(e.Buffer, 0, e.BytesRecorded);
                   memoryStream.WriteTo(ffmpeg.StandardInput.BaseStream);
               }
           }
           catch (Exception)
           {
               throw;
           }
       }
    });
    waveInput.StartRecording();

    FFmpeg arguments:

    ffmpegProcess.StartInfo.Arguments = string.Format("-f f32le -ac 2 -ar 44.1k -i pipe:0 -c:a copy -y output.wav");
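
    A likely explanation, not spelled out in the original post, is a format mismatch: WasapiLoopbackCapture delivers audio in the shared-mode mix format, which is typically 32-bit IEEE float (f32le) stereo, and e.Buffer.Length counts the whole buffer rather than only the e.BytesRecorded bytes that are valid, so declaring the stream to FFmpeg as s16le makes it misread the sample size and channel count and the output plays at the wrong speed. The same principle can be illustrated with a small, self-contained Python sketch (hypothetical output file name; requires ffmpeg on PATH and numpy): FFmpeg decodes a raw pipe correctly only when the -f/-ac/-ar flags match the bytes actually sent.

    import subprocess as sp
    import numpy as np

    SAMPLE_RATE = 44100
    CHANNELS = 2

    # One second of a 440 Hz tone as 32-bit float stereo, i.e. the same raw
    # layout (f32le, 2 channels) that a WASAPI loopback capture typically produces.
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    tone = (0.2 * np.sin(2 * np.pi * 440 * t)).astype(np.float32)
    stereo = np.column_stack([tone, tone]).tobytes()   # interleaved L/R frames

    # The -f/-ac/-ar flags must describe the bytes being piped in; declaring
    # '-f s16le' for float data would give a wrong duration and playback speed.
    cmd = ['ffmpeg', '-y',
           '-f', 'f32le', '-ac', str(CHANNELS), '-ar', str(SAMPLE_RATE),
           '-i', 'pipe:0',
           '-c:a', 'pcm_s16le', 'tone.wav']            # hypothetical output file

    proc = sp.Popen(cmd, stdin=sp.PIPE)
    proc.stdin.write(stereo)
    proc.stdin.close()
    proc.wait()
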
  • How to fetch live video frame and its timestamp from ffmpeg to python

    15 February 2017, by vijiboy

    Since OpenCV does not provide timestamps for a live camera stream, and my computer vision algorithm needs them, I searched for an alternative and found this excellent article: https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/

    Running the code on Windows, I still couldn't get the frame timestamps.
    I recall reading somewhere on an ffmpeg forum that video filters such as showinfo are bypassed when output is redirected. Is this why the following code does not work as expected?

    Expected: it should write the video frames to disk and also print the timestamp details.
    Actual: it writes the video frames but does not get the timestamp (showinfo) details.

    Here's the code I tried:

    import subprocess as sp
    import numpy
    import cv2

    command = [ 'ffmpeg',
               '-i', 'e:\sample.wmv',
               '-pix_fmt', 'rgb24',
               '-vcodec', 'rawvideo',
               '-vf', 'showinfo', # video filter - showinfo will provide frame timestamps
               '-an','-sn', #-an, -sn disables audio and sub-title processing respectively
               '-f', 'image2pipe', '-'] # we need to output to a pipe

    pipe = sp.Popen(command, stdout = sp.PIPE, stderr = sp.STDOUT) # TODO someone on ffmpeg forum said video filters (e.g. showinfo) are bypassed when stdout is redirected to pipes???

    for i in range(10):
       raw_image = pipe.stdout.read(1280*720*3)
       img_info = pipe.stdout.read(244) # 244 characters is the current output of showinfo video filter
       print "showinfo output", img_info
       image1 =  numpy.fromstring(raw_image, dtype='uint8')
       image2 = image1.reshape((720,1280,3))  

       # write video frame to file just to verify
       videoFrameName = 'Video_Frame{0}.png'.format(i)
       cv2.imwrite(videoFrameName,image2)

       # throw away the data in the pipe's buffer.
       pipe.stdout.flush()

    So how can I still get the frame timestamps from ffmpeg into the Python code so they can be used in my computer vision algorithm? (A sketch of the timestamp parsing follows below.)
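
    Note that with stderr merged into stdout (stderr=sp.STDOUT), the showinfo text gets interleaved with the raw frame bytes, so fixed-size reads cannot cleanly separate the two; keeping stderr as its own pipe and draining it on a separate thread (as sketched under the earlier, near-identical question) avoids that. Once the showinfo lines are available as text, the frame number and timestamp can be pulled out of each line with a small regular expression. This is only a sketch, assuming the usual showinfo line layout with an 'n:' counter and a 'pts_time:' field; the sample line below is purely illustrative.

    import re

    # showinfo writes one line per frame on stderr, roughly:
    # "[Parsed_showinfo_0 @ ...] n:   0 pts:      0 pts_time:0       pos: ... fmt:rgb24 ..."
    SHOWINFO_RE = re.compile(r'n:\s*(?P<n>\d+).*?pts_time:\s*(?P<pts_time>[0-9.]+)')

    def parse_showinfo_line(line):
        """Return (frame_number, timestamp_in_seconds), or None for non-frame lines."""
        match = SHOWINFO_RE.search(line)
        if match is None:
            return None
        return int(match.group('n')), float(match.group('pts_time'))

    # Illustrative usage with a made-up showinfo line:
    sample = "[Parsed_showinfo_0 @ 0x0] n:   3 pts:   3003 pts_time:0.1001  pos: 12345"
    print(parse_showinfo_line(sample))    # -> (3, 0.1001)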