

Other articles (69)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player has been created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used.
    The HTML5 player used was created specifically for MediaSPIP: its appearance can be fully customized to match a chosen theme.
    These technologies make it possible to deliver video and audio both to conventional computers (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct steps.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behavior: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail by extracting a (...) A rough sketch of these two steps is shown below.
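
    The excerpt does not show how SPIPMotion itself implements these two actions. Purely as a rough illustration, here is a minimal Python sketch of the same idea, assuming ffprobe and ffmpeg are available on the PATH; the file names are hypothetical.

    import json
    import subprocess

    def probe_streams(path):
        """Return the stream metadata reported by ffprobe as parsed JSON."""
        result = subprocess.run(
            ["ffprobe", "-v", "error", "-show_streams", "-of", "json", path],
            capture_output=True, check=True, text=True,
        )
        return json.loads(result.stdout)["streams"]

    def extract_thumbnail(path, thumb_path, seek="00:00:01"):
        """Extract a single frame from the video to use as a thumbnail."""
        subprocess.run(
            ["ffmpeg", "-y", "-ss", seek, "-i", path, "-frames:v", "1", thumb_path],
            check=True,
        )

    # Hypothetical usage:
    # streams = probe_streams("source.mp4")
    # extract_thumbnail("source.mp4", "source_thumb.jpg")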

On other sites (8457)

  • How to process a video stream?

    27 April 2016, by sharpener

    I would like to ask experienced multimedia professionals how to proceed with the following task:

    A given URL provides a video stream, and we would like to get access to the decoded frames (a byte stream in memory) in a managed Win7+ application (C#). We don't want to render/present the frames in the standard way. The video format is known but not fixed (it might change between two successive sessions, but we will know the parameters).

    So far, I have found that there are several methods, and I have built the following picture in my mind (a rough sketch of the decode-to-memory idea follows the list):

    1. ffmpeg wrapper
      • Pros
        1. Self-contained (no dependency on Windows technologies)
        2. Powerful
      • Cons
        1. A little more complex to understand
        2. Many different wrapper variants (FFmpeg.NET, ffmpeg-sharp, ffmpeg-shard, FFmpeg.AutoGen, ...)
    2. DirectShow wrapper
      • Pros
        1. Widely used/supported technology (various filters freely available)
        2. Nice/detailed documentation on MSDN
      • Cons
        1. Quite old
        2. Considered obsolete from the author's point of view (available only for the desktop model on runtimes >= Win8)
    3. MediaFoundation wrapper
      • Pros
        1. The theoretical successor to DirectShow, so it should remain available in the future
      • Cons
        1. Does not seem to be as capable as DirectShow
        2. Not very popular, with limited community support
    4. FFmpegInterop wrapper
      • Pros
        1. Microsoft’s open source wrapper alternative
      • Cons
        1. Not available for runtime < Win8
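
    None of these wrappers is used anywhere on this page, so as a rough illustration only, here is a minimal sketch of the decode-to-memory idea behind option 1. It is written in Python rather than C# (to match the other code on this page); the URL, resolution, and pixel format are hypothetical, and a C# wrapper such as FFmpeg.AutoGen would expose the same concept through the libav* APIs.

    import subprocess
    import numpy as np

    # Hypothetical parameters; in practice they come from the session setup.
    url = "rtsp://example.com/stream"
    width, height = 1280, 720

    # Ask an ffmpeg process to decode the stream and pipe raw BGR24 frames to stdout.
    proc = subprocess.Popen(
        ["ffmpeg", "-i", url, "-f", "rawvideo", "-pix_fmt", "bgr24", "pipe:1"],
        stdout=subprocess.PIPE, stderr=subprocess.DEVNULL,
    )

    frame_size = width * height * 3  # bytes per BGR24 frame
    while True:
        buf = proc.stdout.read(frame_size)
        if len(buf) < frame_size:
            break  # end of stream
        frame = np.frombuffer(buf, np.uint8).reshape(height, width, 3)
        # ... process the decoded frame in memory here ...
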
  • How to match the video bitrate of opencv with ffmpeg?

    24 April 2019, by user10890282

    I am trying to read three cameras simultaneously, one of which comes through a video grabber card. I want to be able to stream the data and also write out videos from these sources. I used FFmpeg to write out the data from the video grabber card and the OpenCV writer for the normal USB cameras. But the bit rates of the files do not match, nor do the sizes. This becomes a problem when I post-process the files. I tried converting the OpenCV-written files with FFmpeg afterwards, but the durations of the files still differ even though the bit rates have changed. I would really appreciate pointers on how to handle this from the script itself. The ideal case would be to use FFmpeg for all three sources with the same settings. But how can I do that without spitting all the information out on the console while writing the files, since three sources would have to be written out simultaneously? Could someone tell me how to have multiple FFmpeg threads going for writing out files while streaming videos using OpenCV?

    I have attached my current code, with the OpenCV writer and FFmpeg for one source (a rough sketch of silencing several FFmpeg processes follows the code):

    from threading import Thread
    import cv2
    import time
    import sys
    import subprocess as sp
    import os
    import datetime
    old_stdout=sys.stdout



    maindir= "E:/Trial1/"
    os.chdir(maindir)
    maindir=os.getcwd()

    class VideoWriterWidget(object):
       def __init__(self, video_file_name, src=0):
           if (src==3):
               self.frame_name= "Cam_"+str(src)+"(Right)"
           if(src==2):
               self.frame_name= "Cam_"+str(src)+"(Left)"
           # Create a VideoCapture object
           #self.frame_name =str(src)
           self.video_file = video_file_name+"_"
           self.now=datetime.datetime.now()
           self.ts=datetime.datetime.now()
           self.video_file_nameE="{}.avi".format("Endo_"+self.ts.strftime("%Y%m%d_%H-%M-%S"))
           self.video_file_name = "{}.avi".format(video_file_name+self.ts.strftime("%Y%m%d_%H-%M-%S"))
           self.FFMPEG_BIN = "C:/ffmpeg/bin/ffmpeg.exe"
           self.command=[self.FFMPEG_BIN,'-y','-f','dshow','-rtbufsize','1024M','-video_size','640x480','-i', 'video=Datapath VisionAV Video 01','-pix_fmt', 'bgr24', '-r','60', self.video_file_name]
           self.capture = cv2.VideoCapture(src)

           # Default resolutions of the frame are obtained (system dependent)

           self.frame_width = int(self.capture.get(3))#480
           self.frame_height = int(self.capture.get(4))# 640


           # Set up codec and output video settings
           if(src==2 or src==3):
               self.codec = cv2.VideoWriter_fourcc('M','J','P','G')
               self.output_video = cv2.VideoWriter(self.video_file_name, self.codec, 30, (self.frame_width, self.frame_height))

           # Start the thread to read frames from the video stream
               self.thread = Thread(target=self.update, args=(src,))
               self.thread.daemon = True
               self.thread.start()

           # Start another thread to show/save frames
               self.start_recording()
               print('initialized {}'.format(self.video_file))

           if (src==0):
               self.e_recording_thread = Thread(target=self.endo_recording_thread, args=())
               self.e_recording_thread.daemon = True
               self.e_recording_thread.start()
               print('initialized endo recording')


       def update(self,src):
           # Read the next frame from the stream in a different thread
           while True:
               if self.capture.isOpened():
                   (self.status, self.frame) = self.capture.read()
                   if (src==3):
                       self.frame= cv2.flip(self.frame,-1)


       def show_frame(self):
           # Display frames in main program
           if self.status:
               cv2.namedWindow(self.frame_name, cv2.WINDOW_NORMAL)
               cv2.imshow(self.frame_name, self.frame)


           # Press Q on keyboard to stop recording0000
           key = cv2.waitKey(1)
           if key == ord('q'):#
               self.capture.release()
               self.output_video.release()
               cv2.destroyAllWindows()
               exit(1)

       def save_frame(self):
           # Save obtained frame into video output file
           self.output_video.write(self.frame)

       def start_recording(self):
           # Create another thread to show/save frames
           def start_recording_thread():
               while True:
                   try:
                       self.show_frame()
                       self.save_frame()
                   except AttributeError:
                       pass
           self.recording_thread = Thread(target=start_recording_thread, args=())
           self.recording_thread.daemon = True
           self.recording_thread.start()

       def endo_recording_thread(self):
            self.Pr1=sp.call(self.command)



    if __name__ == '__main__':
       src1 = 'Your link1'
       video_writer_widget1 = VideoWriterWidget('Endo_', 0)
       src2 = 'Your link2'
       video_writer_widget2 = VideoWriterWidget('Camera2_', 2)
       src3 = 'Your link3'
       video_writer_widget3 = VideoWriterWidget('Camera3_', 3)

       # Since each video player is in its own thread, we need to keep the main thread alive.
       # Keep spinning using time.sleep() so the background threads keep running
       # Threads are set to daemon=True so they will automatically die
       # when the main thread dies
       while True:
         time.sleep(5)
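
    The question is only quoted here, not answered. As a rough sketch of one way to address the console-noise part (device names and output files below are placeholders), each FFmpeg capture can be started as its own subprocess with -loglevel error and -nostats, with any remaining output redirected to a per-recording log file, while the OpenCV capture/display loop keeps running in the main process:

    import subprocess

    FFMPEG = "C:/ffmpeg/bin/ffmpeg.exe"

    def start_recorder(device_name, out_file):
        """Start one FFmpeg dshow capture with its console output silenced."""
        cmd = [
            FFMPEG, "-y", "-hide_banner", "-loglevel", "error", "-nostats",
            "-f", "dshow", "-rtbufsize", "1024M",
            "-i", "video=" + device_name,
            "-r", "60", out_file,
        ]
        log = open(out_file + ".log", "w")  # anything left over goes to a log file
        return subprocess.Popen(cmd, stdout=log, stderr=log)

    # Placeholder device names; start one process per source.
    procs = [
        start_recorder("Datapath VisionAV Video 01", "Endo.avi"),
        # start_recorder("USB Camera 2", "Camera2.avi"),
    ]

    # ... later, stop the recordings:
    # for p in procs:
    #     p.terminate()
    #     p.wait()
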
  • Why doesn't the output of ffmpeg-python match the image shape?

    9 November 2019, by Swi Jason

    I used the ffmpeg-python module to convert a video to images. Specifically, I used the code provided in the official git repo of ffmpeg-python, as below:

    import ffmpeg
    import numpy as np

    # in_filename and frame_num are assumed to be defined earlier.
    out, _ = (
       ffmpeg
       .input(in_filename)
       .filter('select', 'gte(n,{})'.format(frame_num))
       .output('pipe:', vframes=1, format='image2', vcodec='mjpeg')
       .run(capture_stdout=True)
    )
    im = np.frombuffer(out, 'uint8')
    print(im.shape[0]/3/1080)
    # 924.907098765432

    The original video is 1920x1080 with pix_fmt 'yuv420p', but the output of the above code is not 1920.

    I have figured out by myself that the output of ffmpeg.run() is not a decoded image array but a byte string encoded in JPEG format. To restore the image to a numpy array, simply use the cv2.imdecode() function. For example,

    im = cv2.imdecode(im, cv2.IMREAD_COLOR)

    However, I can't use OpenCV on my embedded Linux system. So my question is: can I get numpy output from ffmpeg-python directly, without converting it with OpenCV?
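
    The question is left open on this page. As a sketch of one possible answer, following the rawvideo example from the ffmpeg-python README: request the frame as raw RGB bytes instead of JPEG, so numpy can reshape it directly without OpenCV. Here in_filename and frame_num are the same variables as in the snippet above, and the 1920x1080 size is taken from the question (it could also be read with ffmpeg.probe(in_filename)).

    import ffmpeg
    import numpy as np

    width, height = 1920, 1080  # known from the source video

    out, _ = (
        ffmpeg
        .input(in_filename)
        .filter('select', 'gte(n,{})'.format(frame_num))
        .output('pipe:', vframes=1, format='rawvideo', pix_fmt='rgb24')
        .run(capture_stdout=True)
    )
    frame = np.frombuffer(out, np.uint8).reshape(height, width, 3)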