
Media (0)


No media matching your criteria is available on this site.

Other articles (78)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Videos

    21 April 2011, by

    Like "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 <video> tag.
    One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name just one) and that each browser natively supports only certain video formats.
    Its main advantage, on the other hand, is that video playback is handled natively by the browser, which makes it possible to do without Flash and (...)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (6924)

  • How to match the video bitrate of OpenCV with FFmpeg?

    24 April 2019, by user10890282

    I am trying to read three cameras simultaneously, one of which goes through a video grabber card. I want to be able to stream the data and also write out videos from these sources. I used FFmpeg to write out the data from the video grabber card and the OpenCV writer for the normal USB cameras, but neither the bit rates nor the sizes of the resulting files match. This becomes a problem when I post-process the files. I tried converting the OpenCV-written files with FFmpeg afterwards, but the durations of the files still differ even though the bit rates have changed. I would really appreciate pointers on how to handle this from the script itself. The ideal case would be to use FFmpeg for all three sources with the same settings. But how can I do that without spitting all of FFmpeg's console output while the files are being written, since three sources would have to be written out simultaneously? Could someone tell me how to have multiple FFmpeg threads writing out files while streaming the videos with OpenCV?

    I have attached my current code with the OpenCV writer and FFmpeg for one source (a rough sketch of a quieter multi-process FFmpeg setup follows the code below):

    from threading import Thread
    import cv2
    import time
    import sys
    import subprocess as sp
    import os
    import datetime
    old_stdout=sys.stdout



    maindir= "E:/Trial1/"
    os.chdir(maindir)
    maindir=os.getcwd()

    class VideoWriterWidget(object):
       def __init__(self, video_file_name, src=0):
           if (src==3):
               self.frame_name= "Cam_"+str(src)+"(Right)"
           if(src==2):
               self.frame_name= "Cam_"+str(src)+"(Left)"
           # Create a VideoCapture object
           #self.frame_name =str(src)
           self.video_file = video_file_name+"_"
           self.now=datetime.datetime.now()
           self.ts=datetime.datetime.now()
           self.video_file_nameE="{}.avi".format("Endo_"+self.ts.strftime("%Y%m%d_%H-%M-%S"))
           self.video_file_name = "{}.avi".format(video_file_name+self.ts.strftime("%Y%m%d_%H-%M-%S"))
           self.FFMPEG_BIN = "C:/ffmpeg/bin/ffmpeg.exe"
           self.command=[self.FFMPEG_BIN,'-y','-f','dshow','-rtbufsize','1024M','-video_size','640x480','-i', 'video=Datapath VisionAV Video 01','-pix_fmt', 'bgr24', '-r','60', self.video_file_name]
           self.capture = cv2.VideoCapture(src)

           # Default resolutions of the frame are obtained (system dependent)

           self.frame_width = int(self.capture.get(3))#480
           self.frame_height = int(self.capture.get(4))# 640


           # Set up codec and output video settings
           if(src==2 or src==3):
               self.codec = cv2.VideoWriter_fourcc('M','J','P','G')
               self.output_video = cv2.VideoWriter(self.video_file_name, self.codec, 30, (self.frame_width, self.frame_height))

           # Start the thread to read frames from the video stream
               self.thread = Thread(target=self.update, args=(src,))
               self.thread.daemon = True
               self.thread.start()

           # Start another thread to show/save frames
               self.start_recording()
               print('initialized {}'.format(self.video_file))

           if (src==0):
               self.e_recording_thread = Thread(target=self.endo_recording_thread, args=())
               self.e_recording_thread.daemon = True
               self.e_recording_thread.start()
               print('initialized endo recording')


       def update(self,src):
           # Read the next frame from the stream in a different thread
           while True:
               if self.capture.isOpened():
                   (self.status, self.frame) = self.capture.read()
                   if (src==3):
                       self.frame= cv2.flip(self.frame,-1)


       def show_frame(self):
           # Display frames in main program
           if self.status:
               cv2.namedWindow(self.frame_name, cv2.WINDOW_NORMAL)
               cv2.imshow(self.frame_name, self.frame)


           # Press Q on the keyboard to stop recording
           key = cv2.waitKey(1)
           if key == ord('q'):#
               self.capture.release()
               self.output_video.release()
               cv2.destroyAllWindows()
               exit(1)

       def save_frame(self):
           # Save obtained frame into video output file
           self.output_video.write(self.frame)

       def start_recording(self):
           # Create another thread to show/save frames
           def start_recording_thread():
               while True:
                   try:
                       self.show_frame()
                       self.save_frame()
                   except AttributeError:
                       pass
           self.recording_thread = Thread(target=start_recording_thread, args=())
           self.recording_thread.daemon = True
           self.recording_thread.start()

       def endo_recording_thread(self):
            self.Pr1=sp.call(self.command)



    if __name__ == '__main__':
       src1 = 'Your link1'
       video_writer_widget1 = VideoWriterWidget('Endo_', 0)
       src2 = 'Your link2'
       video_writer_widget2 = VideoWriterWidget('Camera2_', 2)
       src3 = 'Your link3'
       video_writer_widget3 = VideoWriterWidget('Camera3_', 3)

       # Since each video player is in its own thread, we need to keep the main thread alive.
       # Keep spinning using time.sleep() so the background threads keep running
       # Threads are set to daemon=True so they will automatically die
       # when the main thread dies
       while True:
         time.sleep(5)
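
    For reference, here is a minimal sketch (not the asker's code) of one way to run several FFmpeg capture processes in parallel while keeping the console quiet: -hide_banner, -loglevel error and -nostats are standard FFmpeg options, and stdout/stderr are additionally sent to DEVNULL. The device names for the two USB cameras below are hypothetical; the grabber card name and capture settings are taken from the command in the question.

    import subprocess as sp

    FFMPEG_BIN = "C:/ffmpeg/bin/ffmpeg.exe"

    def start_ffmpeg_writer(device_name, out_file):
        # Quiet flags: -hide_banner, -loglevel error and -nostats suppress the
        # usual banner and progress output; the pipes go to DEVNULL as well.
        cmd = [
            FFMPEG_BIN, "-y", "-hide_banner", "-loglevel", "error", "-nostats",
            "-f", "dshow", "-rtbufsize", "1024M", "-video_size", "640x480",
            "-i", "video=" + device_name,
            "-r", "60", out_file,
        ]
        return sp.Popen(cmd, stdout=sp.DEVNULL, stderr=sp.DEVNULL)

    # One process per source; the two USB camera device names are placeholders.
    writers = [
        start_ffmpeg_writer("Datapath VisionAV Video 01", "Endo.avi"),
        start_ffmpeg_writer("USB Camera Left", "Camera2.avi"),   # hypothetical name
        start_ffmpeg_writer("USB Camera Right", "Camera3.avi"),  # hypothetical name
    ]

    for w in writers:
        w.wait()  # or poll()/terminate() from the main loop when recording should stop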

  • Why doesn't the output of ffmpeg-python match the image shape?

    9 November 2019, by Swi Jason

    I used the ffmpeg-python module to convert a video to images. Specifically, I used the code provided in the official ffmpeg-python repository, as shown below:

    import ffmpeg
    import numpy as np

    # in_filename and frame_num are defined earlier in the script
    out, _ = (
        ffmpeg
        .input(in_filename)
        .filter('select', 'gte(n,{})'.format(frame_num))
        .output('pipe:', vframes=1, format='image2', vcodec='mjpeg')
        .run(capture_stdout=True)
    )
    im = np.frombuffer(out, 'uint8')
    print(im.shape[0]/3/1080)
    # 924.907098765432

    The original video is of size (1920, 1080) with pix_fmt 'yuv420p', but the output of the above code is not 1920.

    I have figured out by myself that the output of ffmpeg.run() is not a decoded image array but a byte string encoded in JPEG format. To restore the image to a numpy array, simply use the cv2.imdecode() function. For example,

    im = cv2.imdecode(im, cv2.IMREAD_COLOR)

    However, I can't use OpenCV on my embedded Linux system. So my question now is: can I get numpy output from ffmpeg-python directly, without the need to convert it with OpenCV?
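
    For what it's worth, the OpenCV decode step can be avoided by asking FFmpeg for raw pixels instead of MJPEG, following the rawvideo pattern from the ffmpeg-python README. A minimal sketch, assuming the 1920x1080 size mentioned above (in_filename and frame_num are placeholders, as in the question):

    import ffmpeg
    import numpy as np

    in_filename = 'input.mp4'   # placeholder path
    frame_num = 0               # placeholder frame index
    width, height = 1920, 1080  # raw frames carry no header, so the size must be known

    out, _ = (
        ffmpeg
        .input(in_filename)
        .filter('select', 'gte(n,{})'.format(frame_num))
        .output('pipe:', vframes=1, format='rawvideo', pix_fmt='rgb24')
        .run(capture_stdout=True)
    )
    # The pipe now carries width*height*3 bytes of RGB data for the single frame.
    frame = np.frombuffer(out, np.uint8).reshape([height, width, 3])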

  • How to use find to match a substring of files to be concatenated?

    23 March 2023, by Russell Batt

    Complete beginner here, so sorry if this is painfully obvious. I'm trying to write a shell script that uses the ffmpeg concat protocol to join a bunch of split video files by looping over the files and dynamically adding the correct parts to be joined. For example, turning this:

    


    titlea-00001234-01.ts
    titlea-00001234-02.ts
    titlea-00001234-03.ts
    titleb-00001234-01.ts
    titleb-00004321-02.ts
    titleb-00004321-03.ts

    into this:

    


    titlea-00001234.mp4
    titleb-00004321.mp4

    


    by doing this

    


    ffmpeg -i "concat:titlea-00001234-01.ts|titlea-00001234-02.ts|titlea-00001234-03.ts" -c copy titlea-00001234.mp4
ffmpeg -i "concat:titleb-00001234-01.ts|titleb-00001234-03.ts|titleb-00001234-03.ts" -c copy titleb-00001234.mp4


    


    But what I'm having trouble with is using find to add the correct parts after "concat:".

    


    Here's my best attempt:

    


    #!/bin/bash
    for i in /path/to/files/*.ts
    do
        if [[ "$i" =~ (-01) ]]
        then
            j="${i##*/}"
            k="${j%-0*}"
            ffmpeg -i `concat:( find /path/to/files/ -type f -name "{$k}*" -exec printf "%s\0" {} + )` -c copy /path/to/output/"{$k%.ts}.mp4"
        else
            echo "no files to process"
            exit 0
        fi
    done


    


    But this gives an error of "No such file or directory".
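
    For comparison with the Python examples above, here is a rough sketch of the same grouping idea expressed in Python rather than shell (this is not the linked answer; the directory paths and the part-number pattern are assumptions):

    import os
    import re
    import subprocess
    from collections import defaultdict

    src_dir = "/path/to/files"    # placeholder, as in the script above
    out_dir = "/path/to/output"   # placeholder

    # Group the .ts parts by everything before the trailing "-NN" part number.
    groups = defaultdict(list)
    for name in sorted(os.listdir(src_dir)):
        m = re.match(r"(.+)-\d+\.ts$", name)   # "titlea-00001234" from "titlea-00001234-01.ts"
        if m:
            groups[m.group(1)].append(os.path.join(src_dir, name))

    # One ffmpeg concat-protocol call per group, e.g. "concat:a-01.ts|a-02.ts|a-03.ts".
    for prefix, parts in groups.items():
        concat_input = "concat:" + "|".join(parts)
        out_file = os.path.join(out_dir, prefix + ".mp4")
        subprocess.run(["ffmpeg", "-i", concat_input, "-c", "copy", out_file], check=True)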

    
Edit: This solution worked perfectly for my needs: https://stackoverflow.com/a/75807616/21403800. Thanks @pjh and everyone else for taking the time to help.