
Media (91)

Other articles (49)

  • Contribute to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    This is done through SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to sign up to the translators' mailing list to ask for more information.
    Currently MediaSPIP is only available in French and (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The HTML5 player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (10625)

  • FFMPEG atempo introducing phasing for multichannel mono audio tracks

    27 September 2021, by BrainNoWerk

    Is this a bug, or expected behaviour? When converting materials from PAL to NTSC I invoke atempo as follows:

    


    -map 0:a:? -af atempo=24000/25025 ^
    -c:a pcm_s24le


    


    I use this in a Windows batch file (hence the caret) as a catch-all for all files that need to be converted, so I don't have to deal with how many audio channels might be present or in what order.

    


    However, when my input was a broadcast MXF with 10-channel mono audio (one channel per stream), it introduced wild phasing between the tracks.

    


    Merging the tracks into a single stream to be processed by atempo resulted in no phasing.

    


    -filter_complex "[0:a:0][0:a:1][0:a:2][0:a:3][0:a:4][0:a:5][0:a:6][0:a:7][0:a:8][0:a:9] amerge=inputs=10, atempo=24000/25025[FRC]" ^
    -map "[FRC]" -c:a pcm_s24le


    


    Is this expected behaviour? I can't see any documentation detailing the need to first use amerge before invoking atempo.

    


    If indeed this step is necessary, is there a way to "wildcard" the amerge operation so that I don't have to manually enter all the audio channels and then the "inputs=" count? That would let me make it more universal.
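
    One possibility, sketched below on the assumption that ffprobe is available alongside ffmpeg, is to have the batch file count the audio streams with ffprobe and assemble the amerge pad list itself; the output-name handling here is only illustrative:

    @echo off
    setlocal EnableDelayedExpansion
    rem Hypothetical helper: count the input's audio streams and build the
    rem "[0:a:0][0:a:1]..." pad list so amerge=inputs=N needs no hand-editing.
    rem "%~1" is the input file passed to the batch script.
    set "PADS="
    set /a N=0
    for /f "delims=" %%s in ('ffprobe -v error -select_streams a -show_entries stream^=index -of csv^=p^=0 "%~1"') do (
        set "PADS=!PADS![0:a:!N!]"
        set /a N+=1
    )
    ffmpeg -i "%~1" ^
     -filter_complex "!PADS!amerge=inputs=!N!, atempo=24000/25025[FRC]" ^
     -map "[FRC]" -c:a pcm_s24le "%~dpn1_23976.mxf" -y

    Whether amerge followed by atempo is actually required to avoid the phasing is a separate question, but a loop like this at least removes the need to hard-code the channel count.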

    


    This is my first question on stack overflow, so please be gentle. I've come here to find so many answers to my FFMPEG questions in the past—but this seems to be an edge case I can't get much detail on.

    


    Thanks!

    


    EDIT:

    


    This output, using the wildcard mapping, produces phasing:

    


    C:\Windows>ffmpeg -ss 00:05:13.0 -r 24000/1001 -i "\\bdfs11\array21\Eps101_1920x1080_20_51_DV_CC_25fps_20210622.mov" -t 00:00:22.0 -map 0:v:0 -c:v mpeg2video -profile:v 0 -level:v 2 -b:v 50000k -minrate 50000k -maxrate 50000k -pix_fmt yuv422p -vtag xd5d -force_key_frames "expr:gte(t,n_forced*1)" -streamid 0:481 -streamid 1:129 -map 0:a:? -af atempo=24000/25025 -c:a pcm_s24le "R:\2_SERIES\%~n1_25to23976_works.%Container%" -y
ffmpeg version N-94566-gddd92ba2c6 Copyright (c) 2000-2019 the FFmpeg developers
      built with gcc 9.1.1 (GCC) 20190807
      configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
      libavutil      56. 33.100 / 56. 33.100
      libavcodec     58. 55.100 / 58. 55.100
      libavformat    58. 30.100 / 58. 30.100
      libavdevice    58.  9.100 / 58.  9.100
      libavfilter     7. 58.100 /  7. 58.100
      libswscale      5.  6.100 /  5.  6.100
      libswresample   3.  6.100 /  3.  6.100
      libpostproc    55.  6.100 / 55.  6.100
    [mov,mp4,m4a,3gp,3g2,mj2 @ 06ea4cc0] Could not find codec parameters for stream 12 (Subtitle: none (c708 / 0x38303763), 1920x1080, 21 kb/s): unknown codec
    Consider increasing the value for the 'analyzeduration' and 'probesize' options
    Guessed Channel Layout for Input Stream #0.2 : mono
    Guessed Channel Layout for Input Stream #0.3 : mono
    Guessed Channel Layout for Input Stream #0.4 : mono
    Guessed Channel Layout for Input Stream #0.5 : mono
    Guessed Channel Layout for Input Stream #0.6 : mono
    Guessed Channel Layout for Input Stream #0.7 : mono
    Guessed Channel Layout for Input Stream #0.8 : mono
    Guessed Channel Layout for Input Stream #0.9 : mono
    Guessed Channel Layout for Input Stream #0.10 : mono
    Guessed Channel Layout for Input Stream #0.11 : mono
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '\\bdfs11\array21\Eps101_1920x1080_20_51_DV_CC_25fps_20210622.mov':
      Metadata:
        major_brand     : qt
        minor_version   : 537199360
        compatible_brands: qt
        creation_time   : 2021-06-22T17:39:50.000000Z
      Duration: 00:59:08.16, start: 0.000000, bitrate: 217983 kb/s
        Stream #0:0(eng): Video: prores (HQ) (apch / 0x68637061), yuv422p10le(tv, bt709, progressive), 1920x1080, 206438 kb/s, SAR 1:1 DAR 16:9, 25 fps, 25 tbr, 25 tbn, 25 tbc (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Video Media Handler
          encoder         : Apple ProRes 422 HQ
          timecode        : 00:59:59:00
        Stream #0:1(eng): Data: none (tmcd / 0x64636D74) (default)
        Metadata:
          rotate          : 0
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Time Code Media Handler
          reel_name       : untitled
          timecode        : 00:59:59:00
        Stream #0:2(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Sound Media Handler
        Stream #0:3(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Sound Media Handler
        Stream #0:4(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Sound Media Handler
        Stream #0:5(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Sound Media Handler
        Stream #0:6(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Sound Media Handler
        Stream #0:7(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Sound Media Handler
        Stream #0:8(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Sound Media Handler
        Stream #0:9(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Sound Media Handler
        Stream #0:10(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Sound Media Handler
        Stream #0:11(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Sound Media Handler
        Stream #0:12(eng): Subtitle: none (c708 / 0x38303763), 1920x1080, 21 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Closed Caption Media Handler
    Stream mapping:
      Stream #0:0 -> #0:0 (prores (native) -> mpeg2video (native))
      Stream #0:2 -> #0:1 (pcm_s24le (native) -> pcm_s24le (native))
      Stream #0:3 -> #0:2 (pcm_s24le (native) -> pcm_s24le (native))
      Stream #0:4 -> #0:3 (pcm_s24le (native) -> pcm_s24le (native))
      Stream #0:5 -> #0:4 (pcm_s24le (native) -> pcm_s24le (native))
      Stream #0:6 -> #0:5 (pcm_s24le (native) -> pcm_s24le (native))
      Stream #0:7 -> #0:6 (pcm_s24le (native) -> pcm_s24le (native))
      Stream #0:8 -> #0:7 (pcm_s24le (native) -> pcm_s24le (native))
      Stream #0:9 -> #0:8 (pcm_s24le (native) -> pcm_s24le (native))
      Stream #0:10 -> #0:9 (pcm_s24le (native) -> pcm_s24le (native))
      Stream #0:11 -> #0:10 (pcm_s24le (native) -> pcm_s24le (native))
    Press [q] to stop, [?] for help
    [mpeg2video @ 06f8aa40] Automatically choosing VBV buffer size of 746 kbyte
    Output #0, mxf, to 'R:\2_SERIES\Eps101_1920x1080_20_51_DV_CC_25fps_20210622_25to23976_works.mxf':
      Metadata:
        major_brand     : qt
        minor_version   : 537199360
        compatible_brands: qt
        encoder         : Lavf58.30.100
        Stream #0:0(eng): Video: mpeg2video (4:2:2) (xd5d / 0x64356478), yuv422p, 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 50000 kb/s, 23.98 fps, 23.98 tbn, 23.98 tbc (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Video Media Handler
          timecode        : 00:59:59:00
          encoder         : Lavc58.55.100 mpeg2video
        Side data:
          cpb: bitrate max/min/avg: 50000000/50000000/50000000 buffer size: 6111232 vbv_delay: 18446744073709551615
        Stream #0:1(eng): Audio: pcm_s24le, 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Sound Media Handler
          encoder         : Lavc58.55.100 pcm_s24le
        Stream #0:2(eng): Audio: pcm_s24le, 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Sound Media Handler
          encoder         : Lavc58.55.100 pcm_s24le
        Stream #0:3(eng): Audio: pcm_s24le, 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Sound Media Handler
          encoder         : Lavc58.55.100 pcm_s24le
        Stream #0:4(eng): Audio: pcm_s24le, 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Sound Media Handler
          encoder         : Lavc58.55.100 pcm_s24le
        Stream #0:5(eng): Audio: pcm_s24le, 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Sound Media Handler
          encoder         : Lavc58.55.100 pcm_s24le
        Stream #0:6(eng): Audio: pcm_s24le, 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Sound Media Handler
          encoder         : Lavc58.55.100 pcm_s24le
        Stream #0:7(eng): Audio: pcm_s24le, 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Sound Media Handler
          encoder         : Lavc58.55.100 pcm_s24le
        Stream #0:8(eng): Audio: pcm_s24le, 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Sound Media Handler
          encoder         : Lavc58.55.100 pcm_s24le
        Stream #0:9(eng): Audio: pcm_s24le, 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Sound Media Handler
          encoder         : Lavc58.55.100 pcm_s24le
        Stream #0:10(eng): Audio: pcm_s24le, 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
        Metadata:
          creation_time   : 2021-06-22T17:39:50.000000Z
          handler_name    : Apple Sound Media Handler
          encoder         : Lavc58.55.100 pcm_s24le
    frame=  527 fps= 52 q=2.0 Lsize=  166106kB time=00:00:22.00 bitrate=61851.7kbits/s speed=2.19x
    video:133971kB audio:30938kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.726204%


    


    This is the output that produces no phasing:

    


    C:\Windows>ffmpeg -ss 00:05:13.0 -r 24000/1001 -i "\\bdfs11\array21\Eps101_1920x1080_20_51_DV_CC_25fps_20210622.mov" -t 00:00:22.0 -map 0:v:0 -c:v mpeg2video -profile:v 0 -level:v 2 -b:v 50000k -minrate 50000k -maxrate 50000k -pix_fmt yuv422p -vtag xd5d -force_key_frames "expr:gte(t,n_forced*1)" -streamid 0:481 -streamid 1:129 -filter_complex "[0:a:0][0:a:1][0:a:2][0:a:3][0:a:4][0:a:5][0:a:6][0:a:7][0:a:8][0:a:9] amerge=inputs=10, atempo=24000/25025[FRC]" -map "[FRC]" -c:a pcm_s24le "R:\2_SERIES\Eps101_1920x1080_20_51_DV_CC_25fps_20210622_25to23976_works.mxf" -y
ffmpeg version N-94566-gddd92ba2c6 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 9.1.1 (GCC) 20190807
  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
  libavutil      56. 33.100 / 56. 33.100
  libavcodec     58. 55.100 / 58. 55.100
  libavformat    58. 30.100 / 58. 30.100
  libavdevice    58.  9.100 / 58.  9.100
  libavfilter     7. 58.100 /  7. 58.100
  libswscale      5.  6.100 /  5.  6.100
  libswresample   3.  6.100 /  3.  6.100
  libpostproc    55.  6.100 / 55.  6.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 064f5580] Could not find codec parameters for stream 12 (Subtitle: none (c708 / 0x38303763), 1920x1080, 21 kb/s): unknown codec
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Guessed Channel Layout for Input Stream #0.2 : mono
Guessed Channel Layout for Input Stream #0.3 : mono
Guessed Channel Layout for Input Stream #0.4 : mono
Guessed Channel Layout for Input Stream #0.5 : mono
Guessed Channel Layout for Input Stream #0.6 : mono
Guessed Channel Layout for Input Stream #0.7 : mono
Guessed Channel Layout for Input Stream #0.8 : mono
Guessed Channel Layout for Input Stream #0.9 : mono
Guessed Channel Layout for Input Stream #0.10 : mono
Guessed Channel Layout for Input Stream #0.11 : mono
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '\\bdfs11\array21\Eps101_1920x1080_20_51_DV_CC_25fps_20210622.mov':
  Metadata:
    major_brand     : qt
    minor_version   : 537199360
    compatible_brands: qt
    creation_time   : 2021-06-22T17:39:50.000000Z
  Duration: 00:59:08.16, start: 0.000000, bitrate: 217983 kb/s
    Stream #0:0(eng): Video: prores (HQ) (apch / 0x68637061), yuv422p10le(tv, bt709, progressive), 1920x1080, 206438 kb/s, SAR 1:1 DAR 16:9, 25 fps, 25 tbr, 25 tbn, 25 tbc (default)
    Metadata:
      creation_time   : 2021-06-22T17:39:50.000000Z
      handler_name    : Apple Video Media Handler
      encoder         : Apple ProRes 422 HQ
      timecode        : 00:59:59:00
    Stream #0:1(eng): Data: none (tmcd / 0x64636D74) (default)
    Metadata:
      rotate          : 0
      creation_time   : 2021-06-22T17:39:50.000000Z
      handler_name    : Time Code Media Handler
      reel_name       : untitled
      timecode        : 00:59:59:00
    Stream #0:2(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
    Metadata:
      creation_time   : 2021-06-22T17:39:50.000000Z
      handler_name    : Apple Sound Media Handler
    Stream #0:3(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
    Metadata:
      creation_time   : 2021-06-22T17:39:50.000000Z
      handler_name    : Apple Sound Media Handler
    Stream #0:4(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
    Metadata:
      creation_time   : 2021-06-22T17:39:50.000000Z
      handler_name    : Apple Sound Media Handler
    Stream #0:5(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
    Metadata:
      creation_time   : 2021-06-22T17:39:50.000000Z
      handler_name    : Apple Sound Media Handler
    Stream #0:6(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
    Metadata:
      creation_time   : 2021-06-22T17:39:50.000000Z
      handler_name    : Apple Sound Media Handler
    Stream #0:7(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
    Metadata:
      creation_time   : 2021-06-22T17:39:50.000000Z
      handler_name    : Apple Sound Media Handler
    Stream #0:8(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
    Metadata:
      creation_time   : 2021-06-22T17:39:50.000000Z
      handler_name    : Apple Sound Media Handler
    Stream #0:9(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
    Metadata:
      creation_time   : 2021-06-22T17:39:50.000000Z
      handler_name    : Apple Sound Media Handler
    Stream #0:10(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
    Metadata:
      creation_time   : 2021-06-22T17:39:50.000000Z
      handler_name    : Apple Sound Media Handler
    Stream #0:11(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
    Metadata:
      creation_time   : 2021-06-22T17:39:50.000000Z
      handler_name    : Apple Sound Media Handler
    Stream #0:12(eng): Subtitle: none (c708 / 0x38303763), 1920x1080, 21 kb/s (default)
    Metadata:
      creation_time   : 2021-06-22T17:39:50.000000Z
      handler_name    : Apple Closed Caption Media Handler
Stream mapping:
  Stream #0:2 (pcm_s24le) -> amerge:in0 (graph 0)
  Stream #0:3 (pcm_s24le) -> amerge:in1 (graph 0)
  Stream #0:4 (pcm_s24le) -> amerge:in2 (graph 0)
  Stream #0:5 (pcm_s24le) -> amerge:in3 (graph 0)
  Stream #0:6 (pcm_s24le) -> amerge:in4 (graph 0)
  Stream #0:7 (pcm_s24le) -> amerge:in5 (graph 0)
  Stream #0:8 (pcm_s24le) -> amerge:in6 (graph 0)
  Stream #0:9 (pcm_s24le) -> amerge:in7 (graph 0)
  Stream #0:10 (pcm_s24le) -> amerge:in8 (graph 0)
  Stream #0:11 (pcm_s24le) -> amerge:in9 (graph 0)
  Stream #0:0 -> #0:0 (prores (native) -> mpeg2video (native))
  atempo (graph 0) -> Stream #0:1 (pcm_s24le)
Press [q] to stop, [?] for help
[Parsed_amerge_0 @ 06e18dc0] No channel layout for input 1
[Parsed_amerge_0 @ 06e18dc0] Input channel layouts overlap: output layout will be determined by the number of distinct input channels
[mpeg2video @ 06dea000] Automatically choosing VBV buffer size of 746 kbyte
Output #0, mxf, to 'R:\2_SERIES\Eps101_1920x1080_20_51_DV_CC_25fps_20210622_25to23976_works.mxf':
  Metadata:
    major_brand     : qt
    minor_version   : 537199360
    compatible_brands: qt
    encoder         : Lavf58.30.100
    Stream #0:0(eng): Video: mpeg2video (4:2:2) (xd5d / 0x64356478), yuv422p, 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 50000 kb/s, 23.98 fps, 23.98 tbn, 23.98 tbc (default)
    Metadata:
      creation_time   : 2021-06-22T17:39:50.000000Z
      handler_name    : Apple Video Media Handler
      timecode        : 00:59:59:00
      encoder         : Lavc58.55.100 mpeg2video
    Side data:
      cpb: bitrate max/min/avg: 50000000/50000000/50000000 buffer size: 6111232 vbv_delay: 18446744073709551615
    Stream #0:1: Audio: pcm_s24le, 48000 Hz, 10 channels (FL+FR+FC+LFE+BL+BR+FLC+FRC+BC+SL), s32, 11520 kb/s (default)
    Metadata:
      encoder         : Lavc58.55.100 pcm_s24le
frame=  527 fps= 61 q=2.0 Lsize=  165571kB time=00:00:22.00 bitrate=61652.6kbits/s speed=2.56x
video:133971kB audio:30938kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.402084%


    


    Let me know if you need more detail than what I've provided.

    


    Tangential questions related to this job and potentially not worth their own thread, even though I've looked extensively and not found the answers (happy to post them individually if that's necessary):

    


      

    1. I can't seem to split any portion of the filter_complex above with a caret (^) within a Windows batch file (no number of spaces before or after resolves the issue). It breaks the chain and the filter graphs complain of no input. (See the sketch below.)

    2. Is FFMBC still the only way to include broadcast closed captioning? Does this functionality not exist within FFMPEG?
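
    For point 1, one workaround (only a sketch, not a documented feature) is to build the long filter graph in a variable over several set lines and pass the variable as a single argument, instead of breaking the quoted -filter_complex string with carets:

    rem Hypothetical batch sketch: assemble the filter graph in pieces first,
    rem then hand it to ffmpeg as one quoted argument.
    rem input.mov and output.mxf are placeholders.
    set "FILTER=[0:a:0][0:a:1][0:a:2][0:a:3][0:a:4]"
    set "FILTER=%FILTER%[0:a:5][0:a:6][0:a:7][0:a:8][0:a:9]"
    set "FILTER=%FILTER% amerge=inputs=10, atempo=24000/25025[FRC]"

    ffmpeg -i input.mov -filter_complex "%FILTER%" ^
     -map "[FRC]" -c:a pcm_s24le output.mxf -y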


    


  • FFmpeg WASM writeFile Stalls and Doesn't Complete in React App with Ant Design

    26 February, by raiyan khan

    I'm using FFmpeg WebAssembly (WASM) in a React app to process and convert a video file before uploading it. The goal is to resize the video to 720p using FFmpeg before sending it to the backend.

    


    Problem:

    


    Everything works up to fetching the file and confirming it's loaded into memory, but FFmpeg hangs at ffmpeg.writeFile() and does not proceed further. No errors are thrown.

    


    Code Snippet:

    


      

    • Loading FFmpeg

      


    const loadFFmpeg = async () => {
        if (loaded) return; // Avoid reloading if already loaded

        const baseURL = 'https://unpkg.com/@ffmpeg/core@0.12.6/dist/umd';
        const ffmpeg = ffmpegRef.current;
        ffmpeg.on('log', ({ message }) => {
            messageRef.current.innerHTML = message;
            console.log(message);
        });
        await ffmpeg.load({
            coreURL: await toBlobURL(`${baseURL}/ffmpeg-core.js`, 'text/javascript'),
            wasmURL: await toBlobURL(`${baseURL}/ffmpeg-core.wasm`, 'application/wasm'),
        });
        setLoaded(true);
    };

    useEffect(() => {
        loadFFmpeg()
    }, [])


      




    • Fetching and Writing File

      


    const convertVideoTo720p = async (videoFile) => {
        console.log("Starting video conversion...");

        const { height } = await getVideoMetadata(videoFile);
        console.log(`Video height: ${height}`);

        if (height <= 720) {
            console.log("No conversion needed.");
            return videoFile;
        }

        const ffmpeg = ffmpegRef.current;
        console.log("FFmpeg instance loaded. Writing file to memory...");

        const fetchedFile = await fetchFile(videoFile);
        console.log("File fetched successfully:", fetchedFile);

        console.log("Checking FFmpeg memory before writing...");
        console.log(`File size: ${fetchedFile.length} bytes (~${(fetchedFile.length / 1024 / 1024).toFixed(2)} MB)`);

        if (!ffmpeg.isLoaded()) {
            console.error("FFmpeg is not fully loaded yet!");
            return;
        }

        console.log("Memory seems okay. Writing file to FFmpeg...");
        await ffmpeg.writeFile('input.mp4', fetchedFile);  // ❌ This line hangs, nothing after runs
        console.log("File successfully written to FFmpeg memory.");
    };


      




    


    Debugging Steps I've Tried:

    


      

    • Ensured FFmpeg is fully loaded before calling writeFile():
      ✅ ffmpeg.isLoaded() returns true.

    • Checked file fetch process:
      ✅ fetchFile(videoFile) successfully returns a Uint8Array.

    • Tried renaming the file to prevent caching issues:
      ✅ Used a unique file name like video_${Date.now()}.mp4, but no change.

    • Checked browser console for errors:
      ❌ No errors are displayed.

    • Tried skipping FFmpeg and uploading the raw file instead:
      ✅ Upload works fine without FFmpeg, so the issue is specific to FFmpeg.


    


    Expected Behavior

    


      

    • ffmpeg.writeFile('input.mp4', fetchedFile); should complete and allow FFmpeg to process the video.


    


    Actual Behavior

    


      

    • Execution stops at writeFile, and no errors are thrown.


    


    Environment:

    


      

    • React: 18.x
    • FFmpeg WASM Version: @ffmpeg/ffmpeg@0.12.15
    • Browser: Chrome 121, Edge 120
    • Operating System: Windows 11


    


    Question:
    Why is FFmpeg's writeFile() stalling and never completing?
    How can I fix or further debug this issue?
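
    One way to make the stall visible (just a debugging sketch that reuses the @ffmpeg/ffmpeg 0.12 calls already shown above; the 15-second limit is an arbitrary choice) is to race writeFile() against a timeout while keeping the log listener attached:

    // Debugging sketch: surface a hung writeFile() as a timeout error instead of
    // a silent stall. Only uses ffmpeg.on('log', ...) and ffmpeg.writeFile(),
    // which appear in the code above; the timeout length is arbitrary.
    const writeWithTimeout = async (ffmpeg, name, data, ms = 15000) => {
        ffmpeg.on('log', ({ message }) => console.log('[ffmpeg]', message));

        const timeout = new Promise((_, reject) =>
            setTimeout(() => reject(new Error(`writeFile timed out after ${ms} ms`)), ms)
        );

        await Promise.race([ffmpeg.writeFile(name, data), timeout]);
        console.log(`writeFile finished for ${name} (${data.length} bytes)`);
    };

    // In convertVideoTo720p, replacing the direct call:
    // await writeWithTimeout(ffmpeg, 'input.mp4', fetchedFile);

    If the timeout fires and no log lines ever arrive, that would point at the load step rather than at writeFile() itself.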

    


    Here is my full code:

    


    

    

import { useNavigate } from "react-router-dom";
import { useEffect, useRef, useState } from 'react';
import { Form, Input, Button, Select, Space } from 'antd';
const { Option } = Select;
import { FaAngleLeft } from "react-icons/fa6";
import { message, Upload } from 'antd';
import { CiCamera } from "react-icons/ci";
import { IoVideocamOutline } from "react-icons/io5";
import { useCreateWorkoutVideoMutation } from "../../../redux/features/workoutVideo/workoutVideoApi";
import { convertVideoTo720p } from "../../../utils/ffmpegHelper";
import { FFmpeg } from '@ffmpeg/ffmpeg';
import { fetchFile, toBlobURL } from '@ffmpeg/util';


const AddWorkoutVideo = () => {
    const [videoFile, setVideoFile] = useState(null);
    const [imageFile, setImageFile] = useState(null);
    const [loaded, setLoaded] = useState(false);
    const ffmpegRef = useRef(new FFmpeg());
    const videoRef = useRef(null);
    const messageRef = useRef(null);
    const [form] = Form.useForm();
    const [createWorkoutVideo, { isLoading }] = useCreateWorkoutVideoMutation()
    const navigate = useNavigate();

    const videoFileRef = useRef(null); // Use a ref instead of state


    // Handle Video Upload
    const handleVideoChange = ({ file }) => {
        setVideoFile(file.originFileObj);
    };

    // Handle Image Upload
    const handleImageChange = ({ file }) => {
        setImageFile(file.originFileObj);
    };

    // Load FFmpeg core if needed (optional if you want to preload)
    const loadFFmpeg = async () => {
        if (loaded) return; // Avoid reloading if already loaded

        const baseURL = 'https://unpkg.com/@ffmpeg/core@0.12.6/dist/umd';
        const ffmpeg = ffmpegRef.current;
        ffmpeg.on('log', ({ message }) => {
            messageRef.current.innerHTML = message;
            console.log(message);
        });
        await ffmpeg.load({
            coreURL: await toBlobURL(`${baseURL}/ffmpeg-core.js`, 'text/javascript'),
            wasmURL: await toBlobURL(`${baseURL}/ffmpeg-core.wasm`, 'application/wasm'),
        });
        setLoaded(true);
    };

    useEffect(() => {
        loadFFmpeg()
    }, [])

    // Helper: Get video metadata (width and height)
    const getVideoMetadata = (file) => {
        return new Promise((resolve, reject) => {
            const video = document.createElement('video');
            video.preload = 'metadata';
            video.onloadedmetadata = () => {
                resolve({ width: video.videoWidth, height: video.videoHeight });
            };
            video.onerror = () => reject(new Error('Could not load video metadata'));
            video.src = URL.createObjectURL(file);
        });
    };

    // Inline conversion helper function
    // const convertVideoTo720p = async (videoFile) => {
    //     // Check the video resolution first
    //     const { height } = await getVideoMetadata(videoFile);
    //     if (height <= 720) {
    //         // No conversion needed
    //         return videoFile;
    //     }
    //     const ffmpeg = ffmpegRef.current;
    //     // Load ffmpeg if not already loaded
    //     // await ffmpeg.load({
    //     //     coreURL: await toBlobURL(`${baseURL}/ffmpeg-core.js`, 'text/javascript'),
    //     //     wasmURL: await toBlobURL(`${baseURL}/ffmpeg-core.wasm`, 'application/wasm'),
    //     // });
    //     // Write the input file to the ffmpeg virtual FS
    //     await ffmpeg.writeFile('input.mp4', await fetchFile(videoFile));
    //     // Convert video to 720p (scale filter maintains aspect ratio)
    //     await ffmpeg.exec(['-i', 'input.mp4', '-vf', 'scale=-1:720', 'output.mp4']);
    //     // Read the output file
    //     const data = await ffmpeg.readFile('output.mp4');
    //     console.log(data, 'data from convertVideoTo720p');
    //     const videoBlob = new Blob([data.buffer], { type: 'video/mp4' });
    //     return new File([videoBlob], 'output.mp4', { type: 'video/mp4' });
    // };
    const convertVideoTo720p = async (videoFile) => {
        console.log("Starting video conversion...");

        // Check the video resolution first
        const { height } = await getVideoMetadata(videoFile);
        console.log(`Video height: ${height}`);

        if (height <= 720) {
            console.log("No conversion needed. Returning original file.");
            return videoFile;
        }

        const ffmpeg = ffmpegRef.current;
        console.log("FFmpeg instance loaded. Writing file to memory...");

        // await ffmpeg.writeFile('input.mp4', await fetchFile(videoFile));
        // console.log("File written. Starting conversion...");
        console.log("Fetching file for FFmpeg:", videoFile);
        const fetchedFile = await fetchFile(videoFile);
        console.log("File fetched successfully:", fetchedFile);
        console.log("Checking FFmpeg memory before writing...");
        console.log(`File size: ${fetchedFile.length} bytes (~${(fetchedFile.length / 1024 / 1024).toFixed(2)} MB)`);

        if (fetchedFile.length > 50 * 1024 * 1024) { // 50MB limit
            console.error("File is too large for FFmpeg WebAssembly!");
            message.error("File too large. Try a smaller video.");
            return;
        }

        console.log("Memory seems okay. Writing file to FFmpeg...");
        const fileName = `video_${Date.now()}.mp4`; // Generate a unique name
        console.log(`Using filename: ${fileName}`);

        await ffmpeg.writeFile(fileName, fetchedFile);
        console.log(`File successfully written to FFmpeg memory as ${fileName}.`);

        await ffmpeg.exec(['-i', 'input.mp4', '-vf', 'scale=-1:720', 'output.mp4']);
        console.log("Conversion completed. Reading output file...");

        const data = await ffmpeg.readFile('output.mp4');
        console.log("File read successful. Creating new File object.");

        const videoBlob = new Blob([data.buffer], { type: 'video/mp4' });
        const convertedFile = new File([videoBlob], 'output.mp4', { type: 'video/mp4' });

        console.log(convertedFile, "converted video from convertVideoTo720p");

        return convertedFile;
    };


    const onFinish = async (values) => {
        // Ensure a video is selected
        if (!videoFileRef.current) {
            message.error("Please select a video file.");
            return;
        }

        // Create FormData
        const formData = new FormData();
        if (imageFile) {
            formData.append("image", imageFile);
        }

        try {
            message.info("Processing video. Please wait...");

            // Convert the video to 720p only if needed
            const convertedVideo = await convertVideoTo720p(videoFileRef.current);
            console.log(convertedVideo, 'convertedVideo from onFinish');

            formData.append("media", videoFileRef.current);

            formData.append("data", JSON.stringify(values));

            // Upload manually to the backend
            const response = await createWorkoutVideo(formData).unwrap();
            console.log(response, 'response from add video');

            message.success("Video added successfully!");
            form.resetFields(); // Reset form
            setVideoFile(null); // Clear file

        } catch (error) {
            message.error(error.data?.message || "Failed to add video.");
        }

        // if (videoFile) {
        //     message.info("Processing video. Please wait...");
        //     try {
        //         // Convert the video to 720p only if needed
        //         const convertedVideo = await convertVideoTo720p(videoFile);
        //         formData.append("media", convertedVideo);
        //     } catch (conversionError) {
        //         message.error("Video conversion failed.");
        //         return;
        //     }
        // }
        // formData.append("data", JSON.stringify(values)); // Convert text fields to JSON

        // try {
        //     const response = await createWorkoutVideo(formData).unwrap();
        //     console.log(response, 'response from add video');

        //     message.success("Video added successfully!");
        //     form.resetFields(); // Reset form
        //     setFile(null); // Clear file
        // } catch (error) {
        //     message.error(error.data?.message || "Failed to add video.");
        // }
    };

    const handleBackButtonClick = () => {
        navigate(-1); // This takes the user back to the previous page
    };

    const videoUploadProps = {
        name: 'video',
        // action: 'https://660d2bd96ddfa2943b33731c.mockapi.io/api/upload',
        // headers: {
        //     authorization: 'authorization-text',
        // },
        // beforeUpload: (file) => {
        //     const isVideo = file.type.startsWith('video/');
        //     if (!isVideo) {
        //         message.error('You can only upload video files!');
        //     }
        //     return isVideo;
        // },
        // onChange(info) {
        //     if (info.file.status === 'done') {
        //         message.success(`${info.file.name} video uploaded successfully`);
        //     } else if (info.file.status === 'error') {
        //         message.error(`${info.file.name} video upload failed.`);
        //     }
        // },
        beforeUpload: (file) => {
            const isVideo = file.type.startsWith('video/');
            if (!isVideo) {
                message.error('You can only upload video files!');
                return Upload.LIST_IGNORE; // Prevents the file from being added to the list
            }
            videoFileRef.current = file; // Store file in ref
            // setVideoFile(file); // Store the file in state instead of uploading it automatically
            return false; // Prevent auto-upload
        },
    };

    const imageUploadProps = {
        name: 'image',
        action: 'https://660d2bd96ddfa2943b33731c.mockapi.io/api/upload',
        headers: {
            authorization: 'authorization-text',
        },
        beforeUpload: (file) => {
            const isImage = file.type.startsWith('image/');
            if (!isImage) {
                message.error('You can only upload image files!');
            }
            return isImage;
        },
        onChange(info) {
            if (info.file.status === 'done') {
                message.success(`${info.file.name} image uploaded successfully`);
            } else if (info.file.status === 'error') {
                message.error(`${info.file.name} image upload failed.`);
            }
        },
    };
    return (
        <>
            <div classname="flex items-center gap-2 text-xl cursor-pointer">
                <faangleleft></faangleleft>
                <h1 classname="font-semibold">Add Video</h1>
            </div>
            <div classname="rounded-lg py-4 border-[#79CDFF] border-2 shadow-lg mt-8 bg-white">
                <div classname="space-y-[24px] min-h-[83vh] bg-light-gray rounded-2xl">
                    <h3 classname="text-2xl text-[#174C6B] mb-4 border-b border-[#79CDFF]/50 pb-3 pl-16 font-semibold">
                        Adding Video
                    </h3>
                    <div classname="w-full px-16">
                        / style={{ maxWidth: 600, margin: '0 auto' }}
                        >
                            {/* Section 1 */}
                            {/* <space direction="vertical" style="{{"> */}
                            {/* <space size="large" direction="horizontal" classname="responsive-space"> */}
                            <div classname="grid grid-cols-2 gap-8 mt-8">
                                <div>
                                    <space size="large" direction="horizontal" classname="responsive-space-section-2">

                                        {/* Video */}
                                        Upload Video}
                                            name="media"
                                            className="responsive-form-item"
                                        // rules={[{ required: true, message: 'Please enter the package amount!' }]}
                                        >
                                            <upload maxcount="{1}">
                                                <button style="{{" solid="solid">
                                                    <span style="{{" 600="600">Select a video</span>
                                                    <iovideocamoutline size="{20}" color="#174C6B"></iovideocamoutline>
                                                </button>
                                            </upload>

                                        {/* Thumbnail */}
                                        Upload Image}
                                            name="image"
                                            className="responsive-form-item"
                                        // rules={[{ required: true, message: 'Please enter the package amount!' }]}
                                        >
                                            <upload maxcount="{1}">
                                                <button style="{{" solid="solid">
                                                    <span style="{{" 600="600">Select an image</span>
                                                    <cicamera size="{25}" color="#174C6B"></cicamera>
                                                </button>
                                            </upload>

                                        {/* Title */}
                                        Video Title}
                                            name="name"
                                            className="responsive-form-item-section-2"
                                        >
                                            <input type="text" placeholder="Enter video title" style="{{" solid="solid" />

                                    </space>
                                </div>
                            </div>

                            {/* </space> */}
                            {/* </space> */}


                            {/* Submit Button */}

                                <div classname="p-4 mt-10 text-center mx-auto flex items-center justify-center">

                                        <span classname="text-white font-semibold">{isLoading ? 'Uploading...' : 'Upload'}</span>

                                </div>

                    </div>
                </div>
            </div>
        >
    )
}

export default AddWorkoutVideo


    Would appreciate any insights or suggestions. Thanks!


  • How to fix a segmentation fault in a C program? [closed]

    13 January 2012, by ipegasus

    Possible Duplicate:
    Segmentation fault

    Currently I am upgrading an open-source program used for HTTP streaming. It needs to support the latest FFmpeg.
    The code compiles fine with no warnings, although I am getting a segmentation fault.
    I would like to know how to fix the issue, and/or the best way to debug it. I have attached only a portion of the code due to its size; I will try to add the project to GitHub :) Thanks in advance!

    Sample Usage

    # segmenter --i out.ts --l 10 --o stream.m3u8 --d segments --f stream
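
    For the debugging side of the question, one common approach (a sketch using the sample invocation above; the Makefile below already compiles with -g) is to reproduce the crash under gdb or valgrind and read the backtrace:

    # Run the crashing invocation under gdb, then print a full backtrace.
    gdb --args ./segmenter --i out.ts --l 10 --o stream.m3u8 --d segments --f stream
    (gdb) run
    (gdb) bt full

    # Alternatively, valgrind reports the first invalid memory access with file/line info.
    valgrind --track-origins=yes ./segmenter --i out.ts --l 10 --o stream.m3u8 --d segments --f stream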

    Makefile

    FFLIBS=`pkg-config --libs libavformat libavcodec libavutil`
    FFFLAGS=`pkg-config --cflags libavformat libavcodec libavutil`

    all:
       gcc -Wall -g segmenter.c -o segmenter ${FFFLAGS} ${FFLIBS}

    segmenter.c

    /*
    * Copyright (c) 2009 Chase Douglas
    *
    * This program is free software; you can redistribute it and/or
    * modify it under the terms of the GNU General Public License version 2
    * as published by the Free Software Foundation.
    *
    * This program is distributed in the hope that it will be useful,
    * but WITHOUT ANY WARRANTY; without even the implied warranty of
    * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    * GNU General Public License for more details.
    *
    * You should have received a copy of the GNU General Public License
    * along with this program; if not, write to the Free Software
    * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
    */
    #include
    #include
    #include
    #include
    #include
    #include "libavformat/avformat.h"

    #include "libavformat/avio.h"

    #include <sys/stat.h>

    #include "segmenter.h"
    #include "libavformat/avformat.h"

    #define IMAGE_ID3_SIZE 9171

    void printUsage() {
       fprintf(stderr, "\nExample: segmenter --i infile --d baseDir --f baseFileName --o playListFile.m3u8 --l 10 \n");
       fprintf(stderr, "\nOptions: \n");
       fprintf(stderr, "--i <infile>.\n");
       fprintf(stderr, "--o <outfile>.\n");
       fprintf(stderr, "--d basedir, the base directory for files.\n");
       fprintf(stderr, "--f baseFileName, output files will be baseFileName-#.\n");
       fprintf(stderr, "--l segment length, the length of each segment.\n");
       fprintf(stderr, "--a,  audio only decode for &lt; 64k streams.\n");
       fprintf(stderr, "--v,  video only decode for &lt; 64k streams.\n");
       fprintf(stderr, "--version, print version details and exit.\n");
       fprintf(stderr, "\n\n");
    }

    void ffmpeg_version() {
       // output build and version numbers
       fprintf(stderr, "  libavutil version:   %s\n", AV_STRINGIFY(LIBAVUTIL_VERSION));
       fprintf(stderr, "  libavutil build:     %d\n", LIBAVUTIL_BUILD);
       fprintf(stderr, "  libavcodec version:  %s\n", AV_STRINGIFY(LIBAVCODEC_VERSION));
       fprintf(stdout, "  libavcodec build:    %d\n", LIBAVCODEC_BUILD);
       fprintf(stderr, "  libavformat version: %s\n", AV_STRINGIFY(LIBAVFORMAT_VERSION));
       fprintf(stderr, "  libavformat build:   %d\n", LIBAVFORMAT_BUILD);
       fprintf(stderr, "  built on " __DATE__ " " __TIME__);
    #ifdef __GNUC__
       fprintf(stderr, ", gcc: " __VERSION__ "\n");
    #else
       fprintf(stderr, ", using a non-gcc compiler\n");
    #endif
    }


    static AVStream *add_output_stream(AVFormatContext *output_format_context, AVStream *input_stream) {
       AVCodecContext *input_codec_context;
       AVCodecContext *output_codec_context;
       AVStream *output_stream;

       output_stream = avformat_new_stream(output_format_context, 0);
       if (!output_stream) {
           fprintf(stderr, "Segmenter error: Could not allocate stream\n");
           exit(1);
       }

       input_codec_context = input_stream->codec;
       output_codec_context = output_stream->codec;

       output_codec_context->codec_id = input_codec_context->codec_id;
       output_codec_context->codec_type = input_codec_context->codec_type;
       output_codec_context->codec_tag = input_codec_context->codec_tag;
       output_codec_context->bit_rate = input_codec_context->bit_rate;
       output_codec_context->extradata = input_codec_context->extradata;
       output_codec_context->extradata_size = input_codec_context->extradata_size;

       if (av_q2d(input_codec_context->time_base) * input_codec_context->ticks_per_frame > av_q2d(input_stream->time_base) && av_q2d(input_stream->time_base) < 1.0 / 1000) {
           output_codec_context->time_base = input_codec_context->time_base;
           output_codec_context->time_base.num *= input_codec_context->ticks_per_frame;
       } else {
           output_codec_context->time_base = input_stream->time_base;
       }

       switch (input_codec_context->codec_type) {
    #ifdef USE_OLD_FFMPEG
           case CODEC_TYPE_AUDIO:
    #else
           case AVMEDIA_TYPE_AUDIO:
    #endif
               output_codec_context->channel_layout = input_codec_context->channel_layout;
               output_codec_context->sample_rate = input_codec_context->sample_rate;
               output_codec_context->channels = input_codec_context->channels;
               output_codec_context->frame_size = input_codec_context->frame_size;
               if ((input_codec_context->block_align == 1 && input_codec_context->codec_id == CODEC_ID_MP3) || input_codec_context->codec_id == CODEC_ID_AC3) {
                   output_codec_context->block_align = 0;
               } else {
                   output_codec_context->block_align = input_codec_context->block_align;
               }
               break;
    #ifdef USE_OLD_FFMPEG
           case CODEC_TYPE_VIDEO:
    #else
           case AVMEDIA_TYPE_VIDEO:
    #endif
               output_codec_context->pix_fmt = input_codec_context->pix_fmt;
               output_codec_context->width = input_codec_context->width;
               output_codec_context->height = input_codec_context->height;
               output_codec_context->has_b_frames = input_codec_context->has_b_frames;

               if (output_format_context->oformat->flags & AVFMT_GLOBALHEADER) {
                   output_codec_context->flags |= CODEC_FLAG_GLOBAL_HEADER;
               }
               break;
           default:
               break;
       }

       return output_stream;
    }

    int write_index_file(const char index[], const char tmp_index[], const unsigned int planned_segment_duration, const unsigned int actual_segment_duration[],
           const char output_directory[], const char output_prefix[], const char output_file_extension[],
           const unsigned int first_segment, const unsigned int last_segment) {
       FILE *index_fp;
       char *write_buf;
       unsigned int i;

       index_fp = fopen(tmp_index, "w");
       if (!index_fp) {
           fprintf(stderr, "Could not open temporary m3u8 index file (%s), no index file will be created\n", tmp_index);
           return -1;
       }

       write_buf = malloc(sizeof (char) * 1024);
       if (!write_buf) {
           fprintf(stderr, "Could not allocate write buffer for index file, index file will be invalid\n");
           fclose(index_fp);
           return -1;
       }

       unsigned int maxDuration = planned_segment_duration;

       for (i = first_segment; i <= last_segment; i++)
           if (actual_segment_duration[i] > maxDuration)
               maxDuration = actual_segment_duration[i];



       snprintf(write_buf, 1024, "#EXTM3U\n#EXT-X-TARGETDURATION:%u\n", maxDuration);

       if (fwrite(write_buf, strlen(write_buf), 1, index_fp) != 1) {
           fprintf(stderr, "Could not write to m3u8 index file, will not continue writing to index file\n");
           free(write_buf);
           fclose(index_fp);
           return -1;
       }

       for (i = first_segment; i <= last_segment; i++) {
           snprintf(write_buf, 1024, "#EXTINF:%u,\n%s-%u%s\n", actual_segment_duration[i], output_prefix, i, output_file_extension);
           if (fwrite(write_buf, strlen(write_buf), 1, index_fp) != 1) {
               fprintf(stderr, "Could not write to m3u8 index file, will not continue writing to index file\n");
               free(write_buf);
               fclose(index_fp);
               return -1;
           }
       }

       snprintf(write_buf, 1024, "#EXT-X-ENDLIST\n");
       if (fwrite(write_buf, strlen(write_buf), 1, index_fp) != 1) {
           fprintf(stderr, "Could not write last file and endlist tag to m3u8 index file\n");
           free(write_buf);
           fclose(index_fp);
           return -1;
       }

       free(write_buf);
       fclose(index_fp);

       return rename(tmp_index, index);
    }

    int main(int argc, const char *argv[]) {
       //input parameters
       char inputFilename[MAX_FILENAME_LENGTH], playlistFilename[MAX_FILENAME_LENGTH], baseDirName[MAX_FILENAME_LENGTH], baseFileName[MAX_FILENAME_LENGTH];
       char baseFileExtension[5]; //either "ts", "aac" or "mp3"
       int segmentLength, outputStreams, verbosity, version;



       char currentOutputFileName[MAX_FILENAME_LENGTH];
       char tempPlaylistName[MAX_FILENAME_LENGTH];


       //these are used to determine the exact length of the current segment
       double prev_segment_time = 0;
       double segment_time;
       unsigned int actual_segment_durations[2048];
       double packet_time = 0;

       //new variables to keep track of output size
       double output_bytes = 0;

       unsigned int output_index = 1;
       AVOutputFormat *ofmt;
       AVFormatContext *ic = NULL;
       AVFormatContext *oc;
       AVStream *video_st = NULL;
       AVStream *audio_st = NULL;
       AVCodec *codec;
       int video_index;
       int audio_index;
       unsigned int first_segment = 1;
       unsigned int last_segment = 0;
       int write_index = 1;
       int decode_done;
       int ret;
       int i;

       unsigned char id3_tag[128];
       unsigned char * image_id3_tag;

       size_t id3_tag_size = 73;
       int newFile = 1; //a boolean value to flag when a new file needs id3 tag info in it

       if (parseCommandLine(inputFilename, playlistFilename, baseDirName, baseFileName, baseFileExtension, &outputStreams, &segmentLength, &verbosity, &version, argc, argv) != 0)
           return 0;

       if (version) {
           ffmpeg_version();
           return 0;
       }


       fprintf(stderr, "%s %s\n", playlistFilename, tempPlaylistName);


       image_id3_tag = malloc(IMAGE_ID3_SIZE);
       if (outputStreams == OUTPUT_STREAM_AUDIO)
           build_image_id3_tag(image_id3_tag);
       build_id3_tag((char *) id3_tag, id3_tag_size);

       snprintf(tempPlaylistName, strlen(playlistFilename) + strlen(baseDirName) + 1, "%s%s", baseDirName, playlistFilename);
       strncpy(playlistFilename, tempPlaylistName, strlen(tempPlaylistName));
       strncpy(tempPlaylistName, playlistFilename, MAX_FILENAME_LENGTH);
       strncat(tempPlaylistName, ".", 1);

       //decide if this is an aac file or a mpegts file.
       //postpone deciding format until later
       /*  ifmt = av_find_input_format("mpegts");
       if (!ifmt)
       {
       fprintf(stderr, "Could not find MPEG-TS demuxer.\n");
       exit(1);
       } */

       av_log_set_level(AV_LOG_DEBUG);

       av_register_all();
       ret = avformat_open_input(&amp;ic, inputFilename, NULL, NULL);
       if (ret != 0) {
           fprintf(stderr, "Could not open input file %s. Error %d.\n", inputFilename, ret);
           exit(1);
       }

       if (avformat_find_stream_info(ic, NULL) < 0) {
           fprintf(stderr, "Could not read stream information.\n");
           exit(1);
       }

       oc = avformat_alloc_context();
       if (!oc) {
           fprintf(stderr, "Could not allocate output context.");
           exit(1);
       }

       video_index = -1;
       audio_index = -1;

       for (i = 0; i < ic->nb_streams && (video_index < 0 || audio_index < 0); i++) {
           switch (ic->streams[i]->codec->codec_type) {

    #ifdef USE_OLD_FFMPEG
               case CODEC_TYPE_VIDEO:
    #else
               case AVMEDIA_TYPE_VIDEO:
    #endif
                   video_index = i;
                   ic->streams[i]->discard = AVDISCARD_NONE;
                   if (outputStreams & OUTPUT_STREAM_VIDEO)
                       video_st = add_output_stream(oc, ic->streams[i]);
                   break;
    #ifdef USE_OLD_FFMPEG
               case CODEC_TYPE_AUDIO:
    #else
               case AVMEDIA_TYPE_AUDIO:
    #endif
                   audio_index = i;
                   ic->streams[i]->discard = AVDISCARD_NONE;
                   if (outputStreams & OUTPUT_STREAM_AUDIO)
                       audio_st = add_output_stream(oc, ic->streams[i]);
                   break;
               default:
                   ic->streams[i]->discard = AVDISCARD_ALL;
                   break;
           }
       }

       if (video_index == -1) {
           fprintf(stderr, "Stream must have video component.\n");
           exit(1);
       }

       //now that we know the audio and video output streams
       //we can decide on an output format.
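       //audio-only output is written as a raw elementary stream (ADTS AAC or MP3, with per-
       //segment ID3 tags); anything containing video is muxed into MPEG-TS segments.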
       if (outputStreams == OUTPUT_STREAM_AUDIO) {
           //the audio output format should be the same as the audio input format
           switch (ic->streams[audio_index]->codec->codec_id) {
               case CODEC_ID_MP3:
                   fprintf(stderr, "Setting output audio to mp3.\n");
                   strncpy(baseFileExtension, ".mp3", strlen(".mp3") + 1);
                   ofmt = av_guess_format("mp3", NULL, NULL);
                   break;
               case CODEC_ID_AAC:
                   fprintf(stderr, "Setting output audio to aac.\n");
                   ofmt = av_guess_format("adts", NULL, NULL);
                   break;
               default:
                   fprintf(stderr, "Audio codec id %d not supported.\n", ic->streams[audio_index]->codec->codec_id);
                   break;
           }
           if (!ofmt) {
               fprintf(stderr, "Could not find audio muxer.\n");
               exit(1);
           }
       } else {
           ofmt = av_guess_format("mpegts", NULL, NULL);
           if (!ofmt) {
               fprintf(stderr, "Could not find MPEG-TS muxer.\n");
               exit(1);
           }
       }
       oc->oformat = ofmt;

       if (outputStreams & OUTPUT_STREAM_VIDEO && oc->oformat->flags & AVFMT_GLOBALHEADER) {
           //the global header flag belongs on the codec context, not the format context
           video_st->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
       }


       /*  Deprecated: pass the options to avformat_write_header directly.
           if (av_set_parameters(oc, NULL) < 0) {
               fprintf(stderr, "Invalid output format parameters.\n");
               exit(1);
           }
        */

       av_dump_format(oc, 0, baseFileName, 1);


       //open the video codec only if there is video data
       if (video_index != -1) {
           if (outputStreams & OUTPUT_STREAM_VIDEO)
               codec = avcodec_find_decoder(video_st->codec->codec_id);
           else
               codec = avcodec_find_decoder(ic->streams[video_index]->codec->codec_id);
           if (!codec) {
               fprintf(stderr, "Could not find video decoder, key frames will not be honored.\n");
           }

           if (outputStreams & OUTPUT_STREAM_VIDEO)
               ret = avcodec_open2(video_st->codec, codec, NULL);
           else
               ret = avcodec_open2(ic->streams[video_index]->codec, codec, NULL);
           if (ret < 0) {
               fprintf(stderr, "Could not open video decoder, key frames will not be honored.\n");
           }
       }

       snprintf(currentOutputFileName, strlen(baseDirName) + strlen(baseFileName) + strlen(baseFileExtension) + 10, "%s%s-%u%s", baseDirName, baseFileName, output_index++, baseFileExtension);

       if (avio_open(&oc->pb, currentOutputFileName, AVIO_FLAG_WRITE) < 0) {
           fprintf(stderr, "Could not open '%s'.\n", currentOutputFileName);
           exit(1);
       }
       newFile = 1;

       int r = avformat_write_header(oc,NULL);
       if (r) {
           fprintf(stderr, "Could not write mpegts header to first output file.\n");
           debugReturnCode(r);
           exit(1);
       }

       //no segment info is written here. This just creates the shell of the playlist file
       write_index = !write_index_file(playlistFilename, tempPlaylistName, segmentLength, actual_segment_durations, baseDirName, baseFileName, baseFileExtension, first_segment, last_segment);

       do {
           AVPacket packet;

           decode_done = av_read_frame(ic, &packet);

           if (decode_done < 0) {
               break;
           }

           if (av_dup_packet(&packet) < 0) {
               fprintf(stderr, "Could not duplicate packet.\n");
               av_free_packet(&packet);
               break;
           }

           //this time is used to check for a break in the segments
           //  if (packet.stream_index == video_index && (packet.flags & PKT_FLAG_KEY))
           //  {
           //    segment_time = (double)video_st->pts.val * video_st->time_base.num / video_st->time_base.den;        
           //  }
    #ifdef USE_OLD_FFMPEG
           if (packet.stream_index == video_index && (packet.flags & PKT_FLAG_KEY))
    #else
           if (packet.stream_index == video_index && (packet.flags & AV_PKT_FLAG_KEY))
    #endif
           {
               segment_time = (double) packet.pts * ic->streams[video_index]->time_base.num / ic->streams[video_index]->time_base.den;
           }
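           //segment_time only advances on video key frames, so every segment file starts on a
           //key frame and can be decoded on its own.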
           //  else if (video_index &lt; 0)
           //  {
           //      segment_time = (double)audio_st->pts.val * audio_st->time_base.num / audio_st->time_base.den;
           //  }

           //get the most recent packet time
           //this time is used when the time for the final segment is printed. It may not be
           //on the edge of a keyframe!
           if (packet.stream_index == video_index)
               packet_time = (double) packet.pts * ic->streams[video_index]->time_base.num / ic->streams[video_index]->time_base.den; //(double)video_st->pts.val * video_st->time_base.num / video_st->time_base.den;
           else if (outputStreams & OUTPUT_STREAM_AUDIO)
               packet_time = (double) audio_st->pts.val * audio_st->time_base.num / audio_st->time_base.den;
           else
               continue;
           //start looking for segment splits for videos one half second before segment duration expires. This is because the
           //segments are split on key frames so we cannot expect all segments to be split exactly equally.
           if (segment_time - prev_segment_time >= segmentLength - 0.5) {
               fprintf(stderr, "looking to print index file at time %lf\n", segment_time);
               avio_flush(oc->pb);
               avio_close(oc->pb);

               if (write_index) {
                   actual_segment_durations[++last_segment] = (unsigned int) rint(segment_time - prev_segment_time);
                   write_index = !write_index_file(playlistFilename, tempPlaylistName, segmentLength, actual_segment_durations, baseDirName, baseFileName, baseFileExtension, first_segment, last_segment);
                   fprintf(stderr, "Writing index file at time %lf\n", packet_time);
               }

               struct stat st;
               stat(currentOutputFileName, &st);
               output_bytes += st.st_size;

               snprintf(currentOutputFileName, strlen(baseDirName) + strlen(baseFileName) + strlen(baseFileExtension) + 10, "%s%s-%u%s", baseDirName, baseFileName, output_index++, baseFileExtension);
               if (avio_open(&oc->pb, currentOutputFileName, AVIO_FLAG_WRITE) < 0) {
                   fprintf(stderr, "Could not open '%s'.\n", currentOutputFileName);
                   break;
               }

               newFile = 1;
               prev_segment_time = segment_time;
           }

           if (outputStreams == OUTPUT_STREAM_AUDIO && packet.stream_index == audio_index) {
               if (newFile && outputStreams == OUTPUT_STREAM_AUDIO) {
                   //add id3 tag info
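                   //the tag carries the first DTS of this audio-only segment (plus an image
                   //ID3 tag), which presumably lets HLS clients keep the audio timeline
                   //continuous across segment boundaries.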
                   //fprintf(stderr, "adding id3tag to file %s\n", currentOutputFileName);
                   //printf("%lf %lld %lld %lld %lld %lld %lf\n", segment_time, audio_st->pts.val, audio_st->cur_dts, audio_st->cur_pkt.pts, packet.pts, packet.dts, packet.dts * av_q2d(ic->streams[audio_index]->time_base) );
                   fill_id3_tag((char*) id3_tag, id3_tag_size, packet.dts);
                   avio_write(oc->pb, id3_tag, id3_tag_size);
                   avio_write(oc->pb, image_id3_tag, IMAGE_ID3_SIZE);
                   avio_flush(oc->pb);
                   newFile = 0;
               }

               packet.stream_index = 0; //only one stream in audio only segments
               ret = av_interleaved_write_frame(oc, &packet);
           } else if (outputStreams & OUTPUT_STREAM_VIDEO) {
               if (newFile) {
                   //fprintf(stderr, "New File: %lld %lld %lld\n", packet.pts, video_st->pts.val, audio_st->pts.val);
                   //printf("%lf %lld %lld %lld %lld %lld %lf\n", segment_time, audio_st->pts.val, audio_st->cur_dts, audio_st->cur_pkt.pts, packet.pts, packet.dts, packet.dts * av_q2d(ic->streams[audio_index]->time_base) );
                   newFile = 0;
               }
               if (outputStreams == OUTPUT_STREAM_VIDEO)
                   ret = av_write_frame(oc, &packet);
               else
                   ret = av_interleaved_write_frame(oc, &packet);
           }

           if (ret < 0) {
               fprintf(stderr, "Warning: Could not write frame of stream.\n");
           } else if (ret > 0) {
               fprintf(stderr, "End of stream requested.\n");
               av_free_packet(&packet);
               break;
           }

           av_free_packet(&packet);
       } while (!decode_done);

       //make sure all packets are written and then close the last file.
       avio_flush(oc->pb);
       av_write_trailer(oc);

       if (video_st && video_st->codec)
           avcodec_close(video_st->codec);

       if (audio_st && audio_st->codec)
           avcodec_close(audio_st->codec);

       for (i = 0; i < oc->nb_streams; i++) {
           av_freep(&oc->streams[i]->codec);
           av_freep(&oc->streams[i]);
       }

       avio_close(oc->pb);
       av_free(oc);

       struct stat st;
       stat(currentOutputFileName, &st);
       output_bytes += st.st_size;


       if (write_index) {
           actual_segment_durations[++last_segment] = (unsigned int) rint(packet_time - prev_segment_time);

           //make sure that the last segment length is not zero
           if (actual_segment_durations[last_segment] == 0)
               actual_segment_durations[last_segment] = 1;

           write_index_file(playlistFilename, tempPlaylistName, segmentLength, actual_segment_durations, baseDirName, baseFileName, baseFileExtension, first_segment, last_segment);

       }

       write_stream_size_file(baseDirName, baseFileName, output_bytes * 8 / segment_time);

       return 0;
    }
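
For reference, the segment_time / packet_time arithmetic in the loop above is just the packet timestamp rescaled by the owning stream's time_base. A minimal standalone sketch of the same conversion (not part of the segmenter; the helper name is illustrative), using av_q2d():

    #include <libavformat/avformat.h>

    /* pts is expressed in units of the stream's time_base, so multiplying by
       time_base.num / time_base.den (what av_q2d() computes) yields seconds. */
    static double packet_time_seconds(const AVPacket *pkt, const AVStream *st)
    {
        if (pkt->pts == AV_NOPTS_VALUE)
            return 0.0; /* no usable timestamp on this packet */
        return pkt->pts * av_q2d(st->time_base);
    }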