
Other articles (53)

  • The plugin: Podcasts.

    14 July 2010, by

    The problem of podcasting is once again a problem that highlights the standardization of data transport on the Internet.
    Two interesting formats exist: the one developed by Apple, strongly geared toward the use of iTunes, whose SPEC is here; the "Media RSS Module" format, which is more "open", notably backed by Yahoo and the Miro software;
    Types of files supported in feeds
    Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administer" section of the site.
    From there, in the navigation menu, you can reach a "Language management" section that lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language. Once one has, the language becomes greyed out in the configuration and (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

On other sites (9198)

  • Crossfade many audio files into one with FFmpeg?

    3 October 2018, by setouk

    Using FFmpeg, I am trying to combine many audio files into one long one, with a crossfade between each of them. To keep the numbers simple, let’s say I have 10 input files, each 5 minutes, and I want a 10 second crossfade between each. (Resulting duration would be 48:30.) Assume all input files have the same codec/bitrate.
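
    As a quick sanity check on the arithmetic above (a throwaway sketch, not part of the original question): each of the 9 joins overlaps the clips by 10 seconds, so the crossfades remove 90 seconds from the straight sum.

```python
# 10 files of 5 minutes each, one 10 s crossfade at each of the 9 joins.
n_files, clip_s, fade_s = 10, 300, 10
total_s = n_files * clip_s - (n_files - 1) * fade_s
print(f"{total_s // 60}:{total_s % 60:02d}")  # 48:30
```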

    I was pleasantly surprised to find how simple it was to crossfade two files:

    ffmpeg -i 0.mp3 -i 1.mp3 -vn -filter_complex acrossfade=d=10:c1=tri:c2=tri out.mp3

    But the acrossfade filter does not allow 3+ inputs. So my naive solution is to repeatedly run ffmpeg, crossfading the previous intermediate output with the next input file. This is not ideal, and it leads me to two questions:

    1. Does acrossfade losslessly copy the streams? (Except where they're actively crossfading, of course.) Or do the entire input streams get reencoded?

    If the input streams are entirely reencoded, then my naive approach is very bad. In the example above (calling acrossfade 9 times), the first 4:50 of the first file would be reencoded 9 times! If I'm combining 50 files, the first file gets reencoded 49 times!

    2. To avoid multiple runs and the reencoding issue, can I achieve the many-crossfade behavior in a single ffmpeg call?

    I imagine I would need some long filtergraph, but I haven't figured it out yet. Does anyone have an example of crossfading just 3 input files? From that I could automate the filtergraphs for longer chains.

    Thanks for any tips!
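
    For what it's worth, acrossfade can be chained pairwise inside one -filter_complex, feeding each intermediate label into the next crossfade. The helper below is a hedged sketch (the function name and the [aN] labels are my own invention) that builds such a graph for N inputs, matching the two-file command above:

```python
def build_acrossfade_graph(n_inputs, fade=10):
    """Chain acrossfade pairwise: [0][1]->[a1], [a1][2]->[a2], ...

    Hypothetical helper; the final stage carries no output label,
    so it becomes the mapped output stream.
    """
    stages = []
    prev = "[0]"
    for i in range(1, n_inputs):
        out = f"[a{i}]" if i < n_inputs - 1 else ""
        stages.append(f"{prev}[{i}]acrossfade=d={fade}:c1=tri:c2=tri{out}")
        prev = f"[a{i}]"
    return ";".join(stages)

# For three inputs the resulting call would look like:
#   ffmpeg -i 0.mp3 -i 1.mp3 -i 2.mp3 -filter_complex "<graph>" out.mp3
print(build_acrossfade_graph(3))
```

    Note that a filtergraph necessarily operates on decoded audio, so this addresses question 2 but not question 1: the output still gets encoded once.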

  • Trimming video without encoding

    14 October 2020, by Cristian Jorge A. Kusch

    I need to trim a video without encoding into sections that I can later put back together with ffmpeg's concat demuxer.
    I understand that to cut precisely without re-encoding it is necessary to cut at keyframes. I'm using this command to get the keyframes:

    ffprobe -loglevel error -select_streams v:0 -show_entries packet=pts_time,flags -of csv=print_section=0 input.mp4 | awk -F',' '/K/ {print $1}'

    and they also correspond to the times that Avidemux reports for the keyframes.
    I'm using this command to trim the source:

    ffmpeg -ss 00:0:2.4607 -noaccurate_seek -avoid_negative_ts 1 -i input.mp4 -to 00:0:3.545 -c copy /content/losslessffmpeg.mp4

    But it always ends up with different durations. I would expect that if I cut between the keyframe at time 2.460792 and the keyframe at time 3.545208, I would get a video whose length is the difference between those two times, but that is never the case.
    Any help?
    Thanks
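
    A small sketch that may help debug this (my own helper, assuming the ffprobe CSV shape shown above: one `pts_time,flags` pair per packet, with `K` marking keyframes): parse the output and print the gap between consecutive keyframes, so the expected lossless-cut durations can be compared with what the trimmed files actually report.

```python
def keyframe_gaps(csv_lines):
    """Return (keyframe_time, gap_to_next_keyframe) pairs from ffprobe CSV."""
    times = [float(line.split(",")[0])
             for line in csv_lines
             if "K" in line.split(",")[1]]
    return [(t, round(nxt - t, 6)) for t, nxt in zip(times, times[1:])]

# Fabricated sample in the same shape as the ffprobe output above.
sample = ["0.000000,K__", "1.200000,___", "2.460792,K__", "3.545208,K__"]
print(keyframe_gaps(sample))  # [(0.0, 2.460792), (2.460792, 1.084416)]
```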

  • FFMpeg fails to detect input stream when outputting to pipe's stdout

    27 September 2020, by La bla bla

    We have h264 frames as individual files; we read them in a Python wrapper and pipe them to ffmpeg.

    The ffmpeg subprocess is launched using

    import subprocess as sp

    command = ["ffmpeg",
               "-hide_banner",
               "-vcodec", "h264",
               "-i", "pipe:0",
               "-video_size", "5120x3072",
               "-an", "-sn",  # we want to disable audio processing (there is no audio)
               "-pix_fmt", "bgr24",
               "-vcodec", "rawvideo",
               "-f", "image2pipe", "-"]
    pipe = sp.Popen(command, stdin=sp.PIPE, stdout=sp.PIPE, bufsize=10 ** 8)

    Our goal is to use ffmpeg to convert the individual h264 frames into raw BGR data that we can manipulate using OpenCV.

    The files are read in a background thread and piped using

    ...
    for path in files:
        with open(path, "rb") as f:
            data = f.read()
            pipe.stdin.write(data)

    When we try to read ffmpeg's output pipe using

    while True:
        # Capture frame-by-frame
        raw_image = pipe.stdout.read(width * height * 3)

    we get

    [h264 @ 0x1c31000] Could not find codec parameters for stream 0 (Video: h264, none): unspecified size
    Consider increasing the value for the 'analyzeduration' and 'probesize' options
    pipe:0: could not find codec parameters
    Input #0, h264, from 'pipe:0':
      Duration: N/A, bitrate: N/A
        Stream #0:0: Video: h264, none, 25 tbr, 1200k tbn, 50 tbc
    Output #0, image2pipe, to 'pipe:':
    Output file #0 does not contain any stream

    However, when I change the sp.Popen command to be

    f = open('ffmpeg_output.log', 'wt')
    pipe = sp.Popen(command, stdin=sp.PIPE, stdout=f, bufsize=10 ** 8)  # Note: stdout is now f, not a pipe

    we get gibberish (i.e., binary data) in the ffmpeg_output.log file, and the console reads

    [h264 @ 0xf20000] Stream #0: not enough frames to estimate rate; consider increasing probesize
    [h264 @ 0xf20000] decoding for stream 0 failed
    Input #0, h264, from 'pipe:0':
      Duration: N/A, bitrate: N/A
        Stream #0:0: Video: h264 (Baseline), yuv420p, 5120x3072, 25 fps, 25 tbr, 1200k tbn, 50 tbc
    Output #0, image2pipe, to 'pipe:':
      Metadata:
        encoder         : Lavf56.40.101
        Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24, 5120x3072, q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc
        Metadata:
          encoder         : Lavc56.60.100 rawvideo
    Stream mapping:
      Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
    Invalid UE golomb code
        Last message repeated 89 times
    Invalid UE golomb code
        Last message repeated 29 times
    Invalid UE golomb code
        Last message repeated 29 times
    Invalid UE golomb code
        Last message repeated 29 times
    Invalid UE golomb code
        Last message repeated 29 times
    Invalid UE golomb code
        Last message repeated 29 times
    Invalid UE golomb code
        Last message repeated 29 times
    Invalid UE golomb code
        Last message repeated 29 times
    Invalid UE golomb code

    Why does ffmpeg care whether its stdout is a file or a pipe?
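
    One difference worth noting between the two runs (an assumption on my part, not a confirmed diagnosis): with stdout=sp.PIPE, nothing drains ffmpeg's output pipe while stdin is still being fed, and once an OS pipe buffer fills, both processes can stall; a plain file descriptor never fills up. A minimal sketch of the fed-from-one-thread, drained-from-another pattern, using cat as a stand-in for the ffmpeg command line:

```python
import subprocess as sp
import threading

def feed(proc, chunks):
    # Write every input chunk, then close stdin so the child sees EOF.
    for chunk in chunks:
        proc.stdin.write(chunk)
    proc.stdin.close()

# 'cat' stands in for the ffmpeg command; the plumbing is identical.
proc = sp.Popen(["cat"], stdin=sp.PIPE, stdout=sp.PIPE)
writer = threading.Thread(target=feed, args=(proc, [b"frame1", b"frame2"]))
writer.start()
data = proc.stdout.read()  # main thread drains stdout concurrently
writer.join()
proc.wait()
print(data)
```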