
Other articles (86)

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of all the shared-hosting instances on a regular basis. Combined with a system Cron on the central site of the shared installation, this makes it possible to generate regular visits to the different sites and to prevent the tasks of rarely visited sites from being too (...)
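
    For illustration only, such a system Cron entry could look like the line below; the URL is hypothetical, since the excerpt does not name the central site.

    # Hypothetical crontab entry on the central server: request the site every
    # minute so the super Cron task (gestion_mutu_super_cron) gets triggered.
    * * * * * wget -q -O /dev/null "https://central.example.org/"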

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, Ogv and WebM (supported by HTML5), with MP4 also readable by Flash.
    Audio files are encoded in MP3 and Ogg (supported by HTML5), with MP3 also readable by Flash.
    Where possible, text is analyzed in order to retrieve the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
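
    The excerpt does not show MediaSPIP's actual encoding pipeline; the following is only a rough sketch of the kinds of conversions it describes, with assumed codec choices, using plain ffmpeg:

    # Sketch only: illustrative conversions of the kind MediaSPIP automates.
    ffmpeg -i upload.avi -c:v libtheora -c:a libvorbis out.ogv   # Ogv for HTML5
    ffmpeg -i upload.avi -c:v libvpx -c:a libvorbis out.webm     # WebM for HTML5
    ffmpeg -i upload.avi -c:v libx264 -c:a aac out.mp4           # MP4 for Flash/HTML5
    ffmpeg -i upload.wav -c:a libmp3lame out.mp3                 # MP3 audio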

  • MediaSPIP Player: potential problems

    22 February 2011, by

    The player does not work in Internet Explorer
    In Internet Explorer (8 and 7 at least), the plugin uses the Flash player Flowplayer to play video and sound. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
    If the configuration of this Apache module contains a line resembling the following, try removing it or commenting it out to see whether the player then works correctly: (...)
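
    The excerpt truncates the actual directive, so purely as a hypothetical illustration, a typical mod_deflate line one might try commenting out looks like this:

    # Hypothetical example only; the real line is elided from the excerpt.
    AddOutputFilterByType DEFLATE text/html text/plain text/xml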

On other sites (11712)

  • Crossfade many audio files into one with FFmpeg?

    3 October 2018, by setouk

    Using FFmpeg, I am trying to combine many audio files into one long one, with a crossfade between each of them. To keep the numbers simple, let’s say I have 10 input files, each 5 minutes, and I want a 10 second crossfade between each. (Resulting duration would be 48:30.) Assume all input files have the same codec/bitrate.

    I was pleasantly surprised to find how simple it was to crossfade two files:

    ffmpeg -i 0.mp3 -i 1.mp3 -vn -filter_complex acrossfade=d=10:c1=tri:c2=tri out.mp3

    But the acrossfade filter does not allow 3+ inputs. So my naive solution is to repeatedly run ffmpeg, crossfading the previous intermediate output with the next input file. It's not ideal. It leads me to two questions:

    1. Does acrossfade losslessly copy the streams? (Except where they're actively crossfading, of course.) Or do the entire input streams get reencoded?

    If the input streams are entirely reencoded, then my naive approach is very bad. In the example above (calling acrossfade 9 times), the first 4:50 of the first file would be reencoded 9 times! If I'm combining 50 files, the first file gets reencoded 49 times!

    2. To avoid multiple runs and the reencoding issue, can I achieve the many-crossfade behavior in a single ffmpeg call?

    I imagine I would need some long filtergraph, but I haven't figured it out yet. Does anyone have an example of crossfading just 3 input files? (A sketch is given below.) From that I could automate the filtergraphs for longer chains.

    Thanks for any tips!
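
    As a hedged sketch (not from the original post): acrossfade only ever takes two inputs, but several instances can be chained inside one filter_complex, so a single ffmpeg call can crossfade three files. Note that audio passing through a filtergraph is decoded and re-encoded, so this avoids the repeated generation loss of the naive approach, but not the single re-encode itself.

    # Sketch: crossfade three inputs in one call by chaining two acrossfade
    # filters; [a01] holds the crossfaded result of the first two inputs.
    ffmpeg -i 0.mp3 -i 1.mp3 -i 2.mp3 -vn -filter_complex \
      "[0:a][1:a]acrossfade=d=10:c1=tri:c2=tri[a01]; \
       [a01][2:a]acrossfade=d=10:c1=tri:c2=tri[out]" \
      -map "[out]" out.mp3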

  • Trimming video without encoding

    14 October 2020, by Cristian Jorge A. Kusch

    I need to trim a video without re-encoding into sections that I can later put back together with ffmpeg's concat demuxer.
    I understand that to cut precisely without re-encoding it is necessary to cut at keyframes, so I'm using this command to list them:

    ffprobe -loglevel error -select_streams v:0 -show_entries packet=pts_time,flags -of csv=print_section=0 input.mp4 | awk -F',' '/K/ {print $1}'

    and they also correspond to the times that Avidemux reports for the keyframes.
    I'm using this command to trim the source:

    ffmpeg -ss 00:0:2.4607 -noaccurate_seek -avoid_negative_ts 1 -i input.mp4 -to 00:0:3.545 -c copy /content/losslessffmpeg.mp4

    But it always ends up with different durations. I would expect that if I cut between the keyframe at time 2.460792 and the keyframe at time 3.545208, I would get a video whose length is the difference between those two times, but that is never the case.
    Any help? Thanks.
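
    No answer appears in the excerpt. Two hedged observations: first, because -ss is given before -i (without -copyts), output timestamps are reset to start at the seek point, so the later -to is effectively measured from the cut rather than from the start of the file, which alone would explain a surprising duration. Second, one way to split on keyframes in a single stream-copy pass is ffmpeg's segment muxer, sketched below with the times taken from the question:

    # Sketch: split at the listed keyframe timestamps without re-encoding.
    # The segment muxer cuts at the first keyframe at or after each time.
    ffmpeg -i input.mp4 -c copy -f segment \
      -segment_times 2.460792,3.545208 -reset_timestamps 1 out%03d.mp4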

  • FFMpeg fails to detect input stream when outputting to pipe's stdout

    27 September 2020, by La bla bla

    We have h264 frames as individual files; we read them in a Python wrapper and pipe them to ffmpeg.

    The ffmpeg subprocess is launched using:

    command = ["ffmpeg",
               "-hide_banner",
               "-vcodec", "h264",
               "-i", "pipe:0",
               "-video_size", "5120x3072",
               "-an", "-sn",  # we want to disable audio processing (there is no audio)
               "-pix_fmt", "bgr24",
               "-vcodec", "rawvideo",
               "-f", "image2pipe", "-"]
    pipe = sp.Popen(command, stdin=sp.PIPE, stdout=sp.PIPE, bufsize=10 ** 8)

    Our goal is to use ffmpeg to convert the individual h264 frames into raw BGR data that we can manipulate using OpenCV.
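
    (As an aside, not from the original post: once one frame's worth of bytes has been read from stdout, turning it into an array OpenCV can use might look like the sketch below; numpy and the frame dimensions are assumed.)

    import numpy as np
    # Interpret the raw bgr24 bytes as a height x width x 3 image; OpenCV
    # accepts this ndarray directly, since it expects BGR channel order.
    frame = np.frombuffer(raw_image, dtype=np.uint8).reshape((3072, 5120, 3))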

    The files are read in a background thread and piped using:

    ...
    for path in files:
        with open(path, "rb") as f:
            data = f.read()
            pipe.stdin.write(data)

    When we try to read ffmpeg's output pipe using:

    while True:
        # Capture frame-by-frame
        raw_image = pipe.stdout.read(width * height * 3)

    we get:

    [h264 @ 0x1c31000] Could not find codec parameters for stream 0 (Video: h264, none): unspecified size
    Consider increasing the value for the 'analyzeduration' and 'probesize' options
    pipe:0: could not find codec parameters
    Input #0, h264, from 'pipe:0':
      Duration: N/A, bitrate: N/A
        Stream #0:0: Video: h264, none, 25 tbr, 1200k tbn, 50 tbc
    Output #0, image2pipe, to 'pipe:':
    Output file #0 does not contain any stream

    However, when I change the sp.Popen command to:

    f = open('ffmpeg_output.log', 'wt')
    pipe = sp.Popen(command, stdin=sp.PIPE, stdout=f, bufsize=10 ** 8)  # Note: stdout is now the file f, not a pipe

    we get gibberish (i.e., binary data) in the ffmpeg_output.log file, and the console reads:

    [h264 @ 0xf20000] Stream #0: not enough frames to estimate rate; consider increasing probesize
    [h264 @ 0xf20000] decoding for stream 0 failed
    Input #0, h264, from 'pipe:0':
      Duration: N/A, bitrate: N/A
        Stream #0:0: Video: h264 (Baseline), yuv420p, 5120x3072, 25 fps, 25 tbr, 1200k tbn, 50 tbc
    Output #0, image2pipe, to 'pipe:':
      Metadata:
        encoder         : Lavf56.40.101
        Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24, 5120x3072, q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc
        Metadata:
          encoder         : Lavc56.60.100 rawvideo
    Stream mapping:
      Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
    Invalid UE golomb code
        Last message repeated 89 times
    Invalid UE golomb code
        Last message repeated 29 times
    [...the same "Invalid UE golomb code" / "Last message repeated 29 times" pair repeats six more times, ending with a final "Invalid UE golomb code"...]

    Why does ffmpeg care whether its stdout is a file or a pipe?
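
    The excerpt ends without an answer. Purely as a hedged sketch (the file list, frame size, and the -f h264 input hint are assumptions, not from the source), one commonly suggested shape for this kind of pipeline is: declare the input format explicitly, feed stdin from a separate thread, and close stdin so ffmpeg sees end-of-input, while the main thread reads decoded frames from stdout.

    import glob
    import subprocess as sp
    import threading

    width, height = 5120, 3072                  # frame size from the question
    files = sorted(glob.glob("frames/*.h264"))  # hypothetical input path

    command = ["ffmpeg", "-hide_banner",
               "-f", "h264",        # declare the raw input format up front
               "-i", "pipe:0",
               "-an", "-sn",
               "-pix_fmt", "bgr24",
               "-vcodec", "rawvideo",
               "-f", "rawvideo", "pipe:1"]

    # stderr is left attached to the console; redirecting it to a file keeps
    # ffmpeg's logs out of the binary frame data on stdout.
    pipe = sp.Popen(command, stdin=sp.PIPE, stdout=sp.PIPE)

    def feed():
        # Write every input file to ffmpeg's stdin, then close it: the EOF
        # lets ffmpeg finish probing the stream and flush its last frames.
        for path in files:
            with open(path, "rb") as f:
                pipe.stdin.write(f.read())
        pipe.stdin.close()

    threading.Thread(target=feed, daemon=True).start()

    frame_size = width * height * 3
    while True:
        raw_image = pipe.stdout.read(frame_size)
        if len(raw_image) < frame_size:
            break  # stream ended (or ffmpeg exited early)
        # raw_image now holds one bgr24 frame, ready for OpenCV.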