
Media (33)

Keyword: Tags / creative commons

Other articles (34)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded to OGV and WebM (supported by HTML5) and to MP4 (supported by both HTML5 and Flash).
    Audio files are encoded to Ogg (supported by HTML5) and to MP3 (supported by both HTML5 and Flash).
    Where possible, text documents are analysed to extract the data needed for search-engine indexing, and are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Possible deployments

    31 January 2010, by

    Two types of deployment are possible, depending on two aspects: the installation method chosen (standalone or as a farm), and the expected number of daily encodings and amount of traffic.
    Video encoding is a heavy process that consumes a large amount of system resources (CPU and RAM), so all of this must be taken into account. The system is therefore only feasible on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

  • List of compatible distributions

    26 April 2011, by

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution    Version name            Version number
    Debian          Squeeze                 6.x.x
    Debian          Wheezy                  7.x.x
    Debian          Jessie                  8.x.x
    Ubuntu          The Precise Pangolin    12.04 LTS
    Ubuntu          The Trusty Tahr         14.04
    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

On other sites (6251)

  • Split a movie into 1000+ shots using PyAV in a single pass?

    3 May 2019, by Andrew Klaassen

    I need to split a 44-minute MP4 into 1000 shots (i.e. separate MP4s) with ffmpeg. I want to do it quickly (i.e. in a single pass rather than 1000 passes), I need perfect frame accuracy, and I need to do it on Windows.

    The Windows command-line length limit is stopping me from doing this, and I’m wondering if someone could show me an example of how to do this using a library like PyAV or Avpy. (Libraries like ffmpeg-python and ffmpy won’t help, since they simply construct an ffmpeg command line and run it, leading to the same Windows command-line length issue that I already have.)

    After much testing and gnashing of teeth, I've learned that the only way to get perfect frame accuracy from ffmpeg, 100% of the time, is to use the "select" filter. ("-ss" in the newest versions of ffmpeg is frame-accurate 99% of the time; unfortunately, that's not good enough for this application.)

    There are two ways to use "select" for this. There's the slow way, which I'm doing now, and which requires having ffmpeg open the file 1000 times:

    # Slow way: one ffmpeg invocation per shot, so the input is opened and decoded 1000 times.
    for (start, end, name) in shots:
        audio_start = start / frame_rate
        audio_end = (end + 1) / frame_rate  # end of the shot's last frame, in seconds
        cmd = [
            path_to_ffmpeg,
            '-y',
            '-i', input_movie,
            '-vf', r'select=between(n\,%s\,%s),setpts=PTS-STARTPTS' % (start, end),
            '-af', 'atrim=%s:%s,asetpts=PTS-STARTPTS' % (audio_start, audio_end),
            '-c:v', 'libx264',
            '-c:a', 'aac',
            '-write_tmcd', '0',
            '-g', '1',
            '-r', str(frame_rate),
            name + '.mp4',
            '-af', 'atrim=%s:%s' % (audio_start, audio_end),
            name + '.wav',
        ]
        subprocess.call(cmd)

    And there's the fast way, which pushes the command line past the Windows length limit when there are too many shots; the overlong command then fails to run:

    # Fast way: a single ffmpeg invocation with one pair of outputs per shot.
    cmd = [
        path_to_ffmpeg,
        '-y',
        '-i',
        input_movie,
    ]
    for (start, end, name) in shots:
        audio_start = start / frame_rate
        audio_end = (end + 1) / frame_rate  # end of the shot's last frame, in seconds
        cmd.extend([
            '-vf', r'select=between(n\,%s\,%s),setpts=PTS-STARTPTS' % (start, end),
            '-af', 'atrim=%s:%s,asetpts=PTS-STARTPTS' % (audio_start, audio_end),
            '-c:v', 'libx264',
            '-c:a', 'aac',
            '-write_tmcd', '0',
            '-g', '1',
            '-r', str(frame_rate),
            name + '.mp4',
            '-af', 'atrim=%s:%s' % (audio_start, audio_end),
            name + '.wav',
        ])
    subprocess.call(cmd)

    I've looked through the documentation of PyAV and Avpy, but I haven't been able to figure out whether the second form of my function is something I could do there, or how I'd go about doing it. If it is possible, would someone be able to write a function equivalent to my second function, using either library?
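
    For context, the single-pass idea maps onto PyAV roughly as "decode the input once and route each frame to its own per-shot output container". Below is a minimal, video-only sketch of that idea under my own assumptions; it is not from the thread, the encoder settings and timestamp handling are guesses, and audio would still need the same atrim-style treatment as in the ffmpeg commands above:

    import av

    def split_shots_single_pass(input_movie, shots, frame_rate):
        """Video-only sketch: write each (start, end, name) shot to its own MP4
        while decoding the source exactly once."""
        shots = sorted(shots)  # assumes non-overlapping shots in frame order
        container = av.open(input_movie)

        shot_idx = 0
        out = None
        out_stream = None

        for frame_number, frame in enumerate(container.decode(video=0)):
            if shot_idx >= len(shots):
                break
            start, end, name = shots[shot_idx]
            if frame_number < start:
                continue

            if out is None:
                # Open this shot's output lazily when its first frame arrives.
                out = av.open(name + '.mp4', mode='w')
                out_stream = out.add_stream('libx264', rate=frame_rate)
                out_stream.width = frame.width
                out_stream.height = frame.height
                out_stream.pix_fmt = 'yuv420p'

            # Drop the source timestamp so each shot restarts from zero;
            # depending on the PyAV version, explicit pts bookkeeping may be needed instead.
            frame.pts = None
            for packet in out_stream.encode(frame):
                out.mux(packet)

            if frame_number == end:
                # Flush the encoder and close this shot before moving on.
                for packet in out_stream.encode():
                    out.mux(packet)
                out.close()
                out, out_stream = None, None
                shot_idx += 1

        container.close()

    Audio could in principle be handled in the same pass by decoding the audio stream alongside the video and trimming on sample counts, but that is left out of this sketch.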

  • Cutting a movie with ffmpeg results in audio/video desync

    15 May 2020, by T4ng10r

    A while ago I concatenated a set of movies taken during a lecture. Now I want to cut the result into separate files, one per question/answer.

    I do it like this:

    ffmpeg -ss 00:00:34.7 -t 00:10:44.6 -y -i input_movie.mp4 -vcodec copy -acodec copy output_1.mp4

    ffmpeg -ss 00:11:22.2 -y -i input_movie.mp4 -vcodec copy -acodec copy output_2.mp4

    Yet for the second part I can't find a starting point that keeps audio and video in sync.

    Usually I could fix this with small tweaks to the cut start time (.1, .2, and so on). In this case that doesn't work.

    When I play the second cut in mplayer, the video is a few seconds behind the audio (the audio is cut correctly). When I jump forward and back, everything is in sync again.

    Where's the problem? How do I fix it?
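
    One workaround that is often suggested for this symptom (not from this thread, and the codec choices below are only an assumption): with -vcodec copy/-acodec copy, ffmpeg can only start the video on a keyframe, while the audio can be cut at the exact timestamp, so a start time that does not land on a keyframe drifts out of sync. Re-encoding that segment lets the cut start exactly at the requested time, at the cost of encoding speed:

    ffmpeg -ss 00:11:22.2 -y -i input_movie.mp4 -c:v libx264 -c:a aac output_2.mp4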

    


  • Handling an arbitrary number of start and stop time pairings to cut a movie file down

    9 June 2019, by Kieran

    I am writing a function that takes a list of tuples and a file path string as arguments and outputs a cut-down video that only includes the frames that fall inside the start/stop pairings provided.

    I'm getting stuck because I am not sure whether the .trim() method of the 'infile' object alters the existing object, creates a new one, or does something else entirely.

    The list of start/stop frame pairings can be arbitrarily long; every example I have found uses a fixed number of start and stop pairings, and I can't find anything describing what data structure needs to be passed back to ffmpeg.concat().

    My code is displayed below:

    import ffmpeg

    frameStamps = [(50,75),(120,700),(1250,1500)]
    videoFilePath = 'C:/Users/Kieran/Videos/testMovie.mp4'
    outputFolder = 'C:/Users/Kieran/Videos/'

    def slice_video(frameStamps, videoFilePath, outputFolder):

       originalFile = ffmpeg.input(videoFilePath)

       for stamp in frameStamps:
           ffmpeg.concat(originalFile.trim(start_frame=stamp[0], end_frame=stamp[1]))


       ffmpeg.output(outputFolder + 'testoutput.mp4')
       ffmpeg.run()

    slice_video(frameStamps, videoFilePath, outputFolder)

    When I print out the individual originalFile.trim() calls, I get the following output, which the console recognises as "FilterableStream" objects:

    trim(end_frame=75, start_frame=50)[None] <29b4fb0736ec>
    trim(end_frame=700, start_frame=120)[None] <c66c4e1a48f5>
    trim(end_frame=1500, start_frame=1250)[None] <13e0697a5288>

    I have tried passing them back as a list, a dictionary and a tuple, but I haven't been able to get it working.

    Output errors:

     File "C:/Users/Kieran/Example.py", line 21, in slice_video
    ffmpeg.output(outputFolder + 'testoutput.mp4')

     File "C:\ProgramData\Anaconda3\lib\site-packages\ffmpeg\_ffmpeg.py", line 94, in output
    return OutputNode(streams, output.__name__, kwargs=kwargs).stream()

     File "C:\ProgramData\Anaconda3\lib\site-packages\ffmpeg\nodes.py", line 282, in __init__
    kwargs=kwargs

     File "C:\ProgramData\Anaconda3\lib\site-packages\ffmpeg\nodes.py", line 170, in __init__
    self.__check_input_len(stream_map, min_inputs, max_inputs)

     File "C:\ProgramData\Anaconda3\lib\site-packages\ffmpeg\nodes.py", line 149, in __check_input_len
    raise ValueError('Expected at least {} input stream(s); got {}'.format(min_inputs, len(stream_map)))

    ValueError: Expected at least 1 input stream(s); got 0
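
    Not from the thread, but to illustrate the chaining ffmpeg-python expects: trim() does not modify originalFile, it returns a new stream node, so the trimmed segments have to be collected and passed to concat() together, and output() needs both that joined stream and the filename. A video-only sketch along those lines (the setpts step and the variable names are my own additions) might look like:

    import ffmpeg

    frameStamps = [(50, 75), (120, 700), (1250, 1500)]
    videoFilePath = 'C:/Users/Kieran/Videos/testMovie.mp4'
    outputFolder = 'C:/Users/Kieran/Videos/'

    def slice_video(frameStamps, videoFilePath, outputFolder):
        originalFile = ffmpeg.input(videoFilePath)

        # Each trim() call returns a new stream; reset its timestamps so the
        # segments can be concatenated back to back.
        segments = [
            originalFile
            .trim(start_frame=start, end_frame=end)
            .setpts('PTS-STARTPTS')
            for (start, end) in frameStamps
        ]

        # concat() takes the segments as separate positional arguments
        # (video-only here), and its result is the stream that output() needs
        # along with the filename.
        joined = ffmpeg.concat(*segments)
        ffmpeg.output(joined, outputFolder + 'testoutput.mp4').run()

    slice_video(frameStamps, videoFilePath, outputFolder)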