
Other articles (47)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their own information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries: FFmpeg: the main encoder, which can transcode almost every type of video and audio file into formats readable on the Internet. See this tutorial for its installation; Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
    Complementary, optional binaries: flvtool2: (...)
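
    As a quick sanity check, the presence of these binaries on a server can be verified with a short script. A minimal, hypothetical sketch (not part of SPIPmotion; the command names assume oggz-info ships with Oggz-tools and may differ per distribution):

    import shutil

    # Hypothetical check for the binaries listed above
    for tool in ("ffmpeg", "oggz-info", "mediainfo", "flvtool2"):
        path = shutil.which(tool)
        print(f"{tool}: {path or 'NOT FOUND'}")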

On other sites (9226)

  • Why doesn't the output of ffmpeg-python match the image shape?

    9 November 2019, by Swi Jason

    I used the ffmpeg-python module to convert a video to images. Specifically, I used the code provided by the official git repo of ffmpeg-python, as below:

    import ffmpeg
    import numpy as np

    # select frame number `frame_num` and emit it on stdout as a JPEG
    out, _ = (
       ffmpeg
       .input(in_filename)
       .filter('select', 'gte(n,{})'.format(frame_num))
       .output('pipe:', vframes=1, format='image2', vcodec='mjpeg')
       .run(capture_stdout=True)
    )
    im = np.frombuffer(out, 'uint8')
    print(im.shape[0]/3/1080)  # expected 1920, but prints:
    # 924.907098765432

    The original video is 1920x1080 with pix_fmt 'yuv420p', but the output of the above code is not 1920.

    I have figured out by myself that the output of ffmpeg.run() is not a decoded image array but a JPEG-encoded byte string. To restore the image as a numpy array, simply use the cv2.imdecode() function. For example:

    im = cv2.imdecode(im, cv2.IMREAD_COLOR)

    However, I can't use OpenCV on my embedded Linux system. So my question is: can I get numpy output from ffmpeg-python directly, without converting it with OpenCV?
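
    One way, adapted from the rawvideo example in the ffmpeg-python documentation, is to ask FFmpeg for raw pixels instead of JPEG and reshape the buffer yourself. A minimal sketch, assuming the same in_filename and frame_num as above and the 1920x1080 size from the question:

    import ffmpeg
    import numpy as np

    width, height = 1920, 1080  # assumed from the source video

    # rawvideo/rgb24 yields exactly height*width*3 bytes, so no decoder is needed
    out, _ = (
       ffmpeg
       .input(in_filename)
       .filter('select', 'gte(n,{})'.format(frame_num))
       .output('pipe:', vframes=1, format='rawvideo', pix_fmt='rgb24')
       .run(capture_stdout=True)
    )
    im = np.frombuffer(out, np.uint8).reshape(height, width, 3)
    print(im.shape)  # (1080, 1920, 3)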

  • How to match the video bitrate of OpenCV with FFmpeg?

    24 April 2019, by user10890282

    I am trying to read three cameras simultaneously, one of which goes through a video grabber card. I want to be able to stream the data and also write out videos from these sources. I used FFmpeg to write out the data from the video grabber card and the OpenCV writer for the normal USB cameras, but the bit rates of the files do not match, nor do the sizes. This becomes a problem when I post-process the files. I tried converting the OpenCV-written files with FFmpeg afterwards, but the durations of the files still differ even though the bit rates have changed.

    I would really appreciate pointers on how to handle this from the script itself. The ideal case would be to use FFmpeg for all three sources with the same settings. But how can I do this without spilling all the log output onto the console while the files are being written, given that three sources would have to be written out simultaneously? Could someone tell me how to run multiple FFmpeg processes for writing out files while streaming videos with OpenCV?

    I have attached my current code, with the OpenCV writer and FFmpeg for one source (a sketch of a multi-process FFmpeg approach follows after the code):

    from threading import Thread
    import cv2
    import time
    import sys
    import subprocess as sp
    import os
    import datetime
    old_stdout=sys.stdout



    maindir= "E:/Trial1/"
    os.chdir(maindir)
    maindir=os.getcwd()

    class VideoWriterWidget(object):
       def __init__(self, video_file_name, src=0):
           if (src==3):
               self.frame_name= "Cam_"+str(src)+"(Right)"
           if(src==2):
               self.frame_name= "Cam_"+str(src)+"(Left)"
           # Create a VideoCapture object
           #self.frame_name =str(src)
           self.video_file = video_file_name+"_"
           self.now=datetime.datetime.now()
           self.ts=datetime.datetime.now()
           self.video_file_nameE="{}.avi".format("Endo_"+self.ts.strftime("%Y%m%d_%H-%M-%S"))
           self.video_file_name = "{}.avi".format(video_file_name+self.ts.strftime("%Y%m%d_%H-%M-%S"))
           self.FFMPEG_BIN = "C:/ffmpeg/bin/ffmpeg.exe"
           self.command=[self.FFMPEG_BIN,'-y','-f','dshow','-rtbufsize','1024M','-video_size','640x480','-i', 'video=Datapath VisionAV Video 01','-pix_fmt', 'bgr24', '-r','60', self.video_file_name]
           self.capture = cv2.VideoCapture(src)

           # Default resolutions of the frame are obtained (system dependent)

           self.frame_width = int(self.capture.get(3))   # CAP_PROP_FRAME_WIDTH
           self.frame_height = int(self.capture.get(4))  # CAP_PROP_FRAME_HEIGHT


           # Set up codec and output video settings
           if(src==2 or src==3):
               self.codec = cv2.VideoWriter_fourcc('M','J','P','G')
               self.output_video = cv2.VideoWriter(self.video_file_name, self.codec, 30, (self.frame_width, self.frame_height))

           # Start the thread to read frames from the video stream
               self.thread = Thread(target=self.update, args=(src,))
               self.thread.daemon = True
               self.thread.start()

           # Start another thread to show/save frames
               self.start_recording()
               print('initialized {}'.format(self.video_file))

           if (src==0):
               self.e_recording_thread = Thread(target=self.endo_recording_thread, args=())
               self.e_recording_thread.daemon = True
               self.e_recording_thread.start()
               print('initialized endo recording')


       def update(self,src):
           # Read the next frame from the stream in a different thread
           while True:
               if self.capture.isOpened():
                   (self.status, self.frame) = self.capture.read()
                   if (src==3):
                       self.frame= cv2.flip(self.frame,-1)


       def show_frame(self):
           # Display frames in main program
           if self.status:
               cv2.namedWindow(self.frame_name, cv2.WINDOW_NORMAL)
               cv2.imshow(self.frame_name, self.frame)


           # Press Q on the keyboard to stop recording
           key = cv2.waitKey(1)
           if key == ord('q'):
               self.capture.release()
               self.output_video.release()
               cv2.destroyAllWindows()
               exit(1)

       def save_frame(self):
           # Save obtained frame into video output file
           self.output_video.write(self.frame)

       def start_recording(self):
           # Create another thread to show/save frames
           def start_recording_thread():
               while True:
                   try:
                       self.show_frame()
                       self.save_frame()
                   except AttributeError:
                       pass
           self.recording_thread = Thread(target=start_recording_thread, args=())
           self.recording_thread.daemon = True
           self.recording_thread.start()

       def endo_recording_thread(self):
            self.Pr1=sp.call(self.command)



    if __name__ == '__main__':
       src1 = 'Your link1'
       video_writer_widget1 = VideoWriterWidget('Endo_', 0)
       src2 = 'Your link2'
       video_writer_widget2 = VideoWriterWidget('Camera2_', 2)
       src3 = 'Your link3'
       video_writer_widget3 = VideoWriterWidget('Camera3_', 3)

       # Since each video player is in its own thread, we need to keep the main thread alive.
       # Keep spinning using time.sleep() so the background threads keep running
       # Threads are set to daemon=True so they will automatically die
       # when the main thread dies
       while True:
         time.sleep(5)
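
    For the console-noise part of the question: FFmpeg's own logging can be silenced with -loglevel quiet and by redirecting the process's output streams, so one FFmpeg process per source can run in the background. A minimal sketch (not the poster's code; the device and file names below are hypothetical):

    import subprocess as sp

    FFMPEG_BIN = "C:/ffmpeg/bin/ffmpeg.exe"
    sources = ["video=Datapath VisionAV Video 01",  # from the question
               "video=USB Camera 2",                # hypothetical device name
               "video=USB Camera 3"]                # hypothetical device name

    procs = []
    for i, src in enumerate(sources):
        cmd = [FFMPEG_BIN, "-y",
               "-loglevel", "quiet",                # keep the console clean
               "-f", "dshow", "-rtbufsize", "1024M",
               "-video_size", "640x480", "-i", src,
               "-r", "30", "cam{}.avi".format(i)]
        # DEVNULL swallows anything FFmpeg still prints
        procs.append(sp.Popen(cmd, stdout=sp.DEVNULL, stderr=sp.DEVNULL))

    for p in procs:
        p.wait()  # or p.terminate() when recording should stop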

  • FFmpeg av_read_frame not reading frames properly?

    16 June 2019, by Sir DrinksCoffeeALot

    Alright, so I've downloaded some raw UHD sequences in .yuv format and encoded them with ffmpeg into an .mp4 container (H.264 4:4:4, 100% quality, 25 fps). When I use ffprobe to find out how many frames are encoded, I get 600, so that's 24 seconds of video.

    BUT, when I run those encoded video sequences through av_read_frame, I only get about 40-50% of the frames processed before av_read_frame returns error code -12. So my wild guess is that there are some data packets in the middle of the streams which get read by av_read_frame and force the function to return -12.

    My questions are: how should I deal with this problem so I can decode the full number of frames (600)? When av_read_frame returns a value different from 0, should I call av_free_packet and proceed to read the next frame? And since av_read_frame returns values < 0 as error codes, which error code is used for EOF, so I can isolate the end-of-file case?
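
    For reference: av_read_frame reports end of file with AVERROR_EOF, and -12 is most likely AVERROR(ENOMEM) (ENOMEM is 12 on Linux), i.e. an out-of-memory condition rather than a malformed packet. Note also that av_free_packet is deprecated in current FFmpeg in favour of av_packet_unref. A minimal sketch of the usual packet-reading loop, assuming an already-opened AVFormatContext (this is not the poster's code):

    #include <stdio.h>
    #include <libavutil/error.h>
    #include <libavformat/avformat.h>

    /* Drain every packet from an opened AVFormatContext. */
    static int read_all_packets(AVFormatContext *fmt_ctx, int video_stream_idx)
    {
        AVPacket pkt;
        int ret, n = 0;

        while ((ret = av_read_frame(fmt_ctx, &pkt)) >= 0) {
            if (pkt.stream_index == video_stream_idx)
                n++;                   /* a real decoder would consume pkt here */
            av_packet_unref(&pkt);     /* always release the packet */
        }

        if (ret == AVERROR_EOF)
            printf("clean end of file after %d video packets\n", n);
        else
            fprintf(stderr, "av_read_frame error: %s\n", av_err2str(ret));
        return ret;
    }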