
Media (91)
-
Collections - Quick creation form
19 February 2013, by
Updated: February 2013
Language: French
Type: Image
-
Les Miserables
4 June 2012, by
Updated: February 2013
Language: English
Type: Text
-
Do not display certain information: home page
23 November 2011, by
Updated: November 2011
Language: French
Type: Image
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
-
Richard Stallman et la révolution du logiciel libre - Une biographie autorisée (epub version)
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (49)
-
Participate in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to sign up to the translators' mailing list to ask for more information.
Currently MediaSPIP is only available in French and (...) -
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...) -
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (10625)
-
FFMPEG atempo introducing phasing for multichannel mono audio tracks
27 September 2021, by BrainNoWerk
Is this a bug, or expected behaviour? When converting material from PAL to NTSC, I invoke atempo as follows (the tempo ratio 24000/25025 = (24000/1001)/25 slows the audio to match the 23.976 fps conform):


-map 0:a:? -af atempo=24000/25025 ^
-c:a pcm_s24le



I use this in a Windows batch file (hence the caret) as a catch-all for any file that needs converting, so I don't have to deal with how many audio streams might be present or in what order.


However, when my input was a broadcast MXF with 10 channels of mono audio (one per stream), it introduced wild phasing between the tracks.


Merging the tracks into a single stream to be processed by atempo resulted in no phasing.


-filter_complex "[0:a:0][0:a:1][0:a:2][0:a:3][0:a:4][0:a:5][0:a:6][0:a:7][0:a:8][0:a:9] amerge=inputs=10, atempo=24000/25025[FRC]" ^
-map "[FRC]" -c:a pcm_s24le



Is this expected behaviour? I can't see any documentation detailing the need to first use amerge before invoking atempo.


If this step is indeed necessary, is there a way to "wildcard" the amerge operation so that I don't have to manually enter all the audio inputs and then the "inputs=" count? That would let me make it more universal.
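For illustration, the kind of scripted approach I have in mind would probe the audio stream count first and then build the amerge pad list automatically. A rough, untested sketch (Node.js here purely as an example; the input and output paths are placeholders, and the video options from my full command are omitted):

const { execFileSync } = require('child_process');

const src = 'input.mov';   // placeholder input path
const out = 'output.mxf';  // placeholder output path

// Count the audio streams with ffprobe (it prints one stream index per line).
const probe = execFileSync('ffprobe', [
  '-v', 'error', '-select_streams', 'a',
  '-show_entries', 'stream=index', '-of', 'csv=p=0', src,
]).toString().trim();
const n = probe ? probe.split(/\r?\n/).length : 0;

// Build "[0:a:0][0:a:1]...amerge=inputs=N, atempo=24000/25025[FRC]".
const pads = Array.from({ length: n }, (_, i) => `[0:a:${i}]`).join('');
const filter = `${pads}amerge=inputs=${n}, atempo=24000/25025[FRC]`;

// Run ffmpeg with the generated filtergraph.
execFileSync('ffmpeg', [
  '-i', src,
  '-filter_complex', filter,
  '-map', '[FRC]', '-c:a', 'pcm_s24le',
  out, '-y',
], { stdio: 'inherit' });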


This is my first question on Stack Overflow, so please be gentle. I've come here to find answers to so many of my FFmpeg questions in the past, but this seems to be an edge case I can't get much detail on.


Thanks!


EDIT:


This is the output using the wildcard, which produces phasing:


C:\Windows>ffmpeg -ss 00:05:13.0 -r 24000/1001 -i "\\bdfs11\array21\Eps101_1920x1080_20_51_DV_CC_25fps_20210622.mov" -t 00:00:22.0 -map 0:v:0 -c:v mpeg2video -profile:v 0 -level:v 2 -b:v 50000k -minrate 50000k -maxrate 50000k -pix_fmt yuv422p -vtag xd5d -force_key_frames "expr:gte(t,n_forced*1)" -streamid 0:481 -streamid 1:129 -map 0:a:? -af atempo=24000/25025 -c:a pcm_s24le "R:\2_SERIES\%~n1_25to23976_works.%Container%" -y
ffmpeg version N-94566-gddd92ba2c6 Copyright (c) 2000-2019 the FFmpeg developers
 built with gcc 9.1.1 (GCC) 20190807
 configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
 libavutil 56. 33.100 / 56. 33.100
 libavcodec 58. 55.100 / 58. 55.100
 libavformat 58. 30.100 / 58. 30.100
 libavdevice 58. 9.100 / 58. 9.100
 libavfilter 7. 58.100 / 7. 58.100
 libswscale 5. 6.100 / 5. 6.100
 libswresample 3. 6.100 / 3. 6.100
 libpostproc 55. 6.100 / 55. 6.100
 [mov,mp4,m4a,3gp,3g2,mj2 @ 06ea4cc0] Could not find codec parameters for stream 12 (Subtitle: none (c708 / 0x38303763), 1920x1080, 21 kb/s): unknown codec
 Consider increasing the value for the 'analyzeduration' and 'probesize' options
 Guessed Channel Layout for Input Stream #0.2 : mono
 Guessed Channel Layout for Input Stream #0.3 : mono
 Guessed Channel Layout for Input Stream #0.4 : mono
 Guessed Channel Layout for Input Stream #0.5 : mono
 Guessed Channel Layout for Input Stream #0.6 : mono
 Guessed Channel Layout for Input Stream #0.7 : mono
 Guessed Channel Layout for Input Stream #0.8 : mono
 Guessed Channel Layout for Input Stream #0.9 : mono
 Guessed Channel Layout for Input Stream #0.10 : mono
 Guessed Channel Layout for Input Stream #0.11 : mono
 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '\\bdfs11\array21\Eps101_1920x1080_20_51_DV_CC_25fps_20210622.mov':
 Metadata:
 major_brand : qt
 minor_version : 537199360
 compatible_brands: qt
 creation_time : 2021-06-22T17:39:50.000000Z
 Duration: 00:59:08.16, start: 0.000000, bitrate: 217983 kb/s
 Stream #0:0(eng): Video: prores (HQ) (apch / 0x68637061), yuv422p10le(tv, bt709, progressive), 1920x1080, 206438 kb/s, SAR 1:1 DAR 16:9, 25 fps, 25 tbr, 25 tbn, 25 tbc (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Video Media Handler
 encoder : Apple ProRes 422 HQ
 timecode : 00:59:59:00
 Stream #0:1(eng): Data: none (tmcd / 0x64636D74) (default)
 Metadata:
 rotate : 0
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Time Code Media Handler
 reel_name : untitled
 timecode : 00:59:59:00
 Stream #0:2(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 Stream #0:3(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 Stream #0:4(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 Stream #0:5(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 Stream #0:6(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 Stream #0:7(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 Stream #0:8(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 Stream #0:9(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 Stream #0:10(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 Stream #0:11(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 Stream #0:12(eng): Subtitle: none (c708 / 0x38303763), 1920x1080, 21 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Closed Caption Media Handler
 Stream mapping:
 Stream #0:0 -> #0:0 (prores (native) -> mpeg2video (native))
 Stream #0:2 -> #0:1 (pcm_s24le (native) -> pcm_s24le (native))
 Stream #0:3 -> #0:2 (pcm_s24le (native) -> pcm_s24le (native))
 Stream #0:4 -> #0:3 (pcm_s24le (native) -> pcm_s24le (native))
 Stream #0:5 -> #0:4 (pcm_s24le (native) -> pcm_s24le (native))
 Stream #0:6 -> #0:5 (pcm_s24le (native) -> pcm_s24le (native))
 Stream #0:7 -> #0:6 (pcm_s24le (native) -> pcm_s24le (native))
 Stream #0:8 -> #0:7 (pcm_s24le (native) -> pcm_s24le (native))
 Stream #0:9 -> #0:8 (pcm_s24le (native) -> pcm_s24le (native))
 Stream #0:10 -> #0:9 (pcm_s24le (native) -> pcm_s24le (native))
 Stream #0:11 -> #0:10 (pcm_s24le (native) -> pcm_s24le (native))
 Press [q] to stop, [?] for help
 [mpeg2video @ 06f8aa40] Automatically choosing VBV buffer size of 746 kbyte
 Output #0, mxf, to 'R:\2_SERIES\Eps101_1920x1080_20_51_DV_CC_25fps_20210622_25to23976_works.mxf':
 Metadata:
 major_brand : qt
 minor_version : 537199360
 compatible_brands: qt
 encoder : Lavf58.30.100
 Stream #0:0(eng): Video: mpeg2video (4:2:2) (xd5d / 0x64356478), yuv422p, 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 50000 kb/s, 23.98 fps, 23.98 tbn, 23.98 tbc (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Video Media Handler
 timecode : 00:59:59:00
 encoder : Lavc58.55.100 mpeg2video
 Side data:
 cpb: bitrate max/min/avg: 50000000/50000000/50000000 buffer size: 6111232 vbv_delay: 18446744073709551615
 Stream #0:1(eng): Audio: pcm_s24le, 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 encoder : Lavc58.55.100 pcm_s24le
 Stream #0:2(eng): Audio: pcm_s24le, 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 encoder : Lavc58.55.100 pcm_s24le
 Stream #0:3(eng): Audio: pcm_s24le, 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 encoder : Lavc58.55.100 pcm_s24le
 Stream #0:4(eng): Audio: pcm_s24le, 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 encoder : Lavc58.55.100 pcm_s24le
 Stream #0:5(eng): Audio: pcm_s24le, 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 encoder : Lavc58.55.100 pcm_s24le
 Stream #0:6(eng): Audio: pcm_s24le, 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 encoder : Lavc58.55.100 pcm_s24le
 Stream #0:7(eng): Audio: pcm_s24le, 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 encoder : Lavc58.55.100 pcm_s24le
 Stream #0:8(eng): Audio: pcm_s24le, 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 encoder : Lavc58.55.100 pcm_s24le
 Stream #0:9(eng): Audio: pcm_s24le, 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 encoder : Lavc58.55.100 pcm_s24le
 Stream #0:10(eng): Audio: pcm_s24le, 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 encoder : Lavc58.55.100 pcm_s24le
 frame= 527 fps= 52 q=2.0 Lsize= 166106kB time=00:00:22.00 bitrate=61851.7kbits/s speed=2.19x
 video:133971kB audio:30938kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.726204%



This is the output that produces no phasing:


C:\Windows>ffmpeg -ss 00:05:13.0 -r 24000/1001 -i "\\bdfs11\array21\Eps101_1920x1080_20_51_DV_CC_25fps_20210622.mov" -t 00:00:22.0 -map 0:v:0 -c:v mpeg2video -profile:v 0 -level:v 2 -b:v 50000k -minrate 50000k -maxrate 50000k -pix_fmt yuv422p -vtag xd5d -force_key_frames "expr:gte(t,n_forced*1)" -streamid 0:481 -streamid 1:129 -filter_complex "[0:a:0][0:a:1][0:a:2][0:a:3][0:a:4][0:a:5][0:a:6][0:a:7][0:a:8][0:a:9] amerge=inputs=10, atempo=24000/25025[FRC]" -map "[FRC]" -c:a pcm_s24le "R:\2_SERIES\Eps101_1920x1080_20_51_DV_CC_25fps_20210622_25to23976_works.mxf" -y
ffmpeg version N-94566-gddd92ba2c6 Copyright (c) 2000-2019 the FFmpeg developers
 built with gcc 9.1.1 (GCC) 20190807
 configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
 libavutil 56. 33.100 / 56. 33.100
 libavcodec 58. 55.100 / 58. 55.100
 libavformat 58. 30.100 / 58. 30.100
 libavdevice 58. 9.100 / 58. 9.100
 libavfilter 7. 58.100 / 7. 58.100
 libswscale 5. 6.100 / 5. 6.100
 libswresample 3. 6.100 / 3. 6.100
 libpostproc 55. 6.100 / 55. 6.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 064f5580] Could not find codec parameters for stream 12 (Subtitle: none (c708 / 0x38303763), 1920x1080, 21 kb/s): unknown codec
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Guessed Channel Layout for Input Stream #0.2 : mono
Guessed Channel Layout for Input Stream #0.3 : mono
Guessed Channel Layout for Input Stream #0.4 : mono
Guessed Channel Layout for Input Stream #0.5 : mono
Guessed Channel Layout for Input Stream #0.6 : mono
Guessed Channel Layout for Input Stream #0.7 : mono
Guessed Channel Layout for Input Stream #0.8 : mono
Guessed Channel Layout for Input Stream #0.9 : mono
Guessed Channel Layout for Input Stream #0.10 : mono
Guessed Channel Layout for Input Stream #0.11 : mono
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '\\bdfs11\array21\Eps101_1920x1080_20_51_DV_CC_25fps_20210622.mov':
 Metadata:
 major_brand : qt
 minor_version : 537199360
 compatible_brands: qt
 creation_time : 2021-06-22T17:39:50.000000Z
 Duration: 00:59:08.16, start: 0.000000, bitrate: 217983 kb/s
 Stream #0:0(eng): Video: prores (HQ) (apch / 0x68637061), yuv422p10le(tv, bt709, progressive), 1920x1080, 206438 kb/s, SAR 1:1 DAR 16:9, 25 fps, 25 tbr, 25 tbn, 25 tbc (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Video Media Handler
 encoder : Apple ProRes 422 HQ
 timecode : 00:59:59:00
 Stream #0:1(eng): Data: none (tmcd / 0x64636D74) (default)
 Metadata:
 rotate : 0
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Time Code Media Handler
 reel_name : untitled
 timecode : 00:59:59:00
 Stream #0:2(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 Stream #0:3(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 Stream #0:4(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 Stream #0:5(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 Stream #0:6(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 Stream #0:7(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 Stream #0:8(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 Stream #0:9(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 Stream #0:10(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 Stream #0:11(eng): Audio: pcm_s24le (lpcm / 0x6D63706C), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Sound Media Handler
 Stream #0:12(eng): Subtitle: none (c708 / 0x38303763), 1920x1080, 21 kb/s (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Closed Caption Media Handler
Stream mapping:
 Stream #0:2 (pcm_s24le) -> amerge:in0 (graph 0)
 Stream #0:3 (pcm_s24le) -> amerge:in1 (graph 0)
 Stream #0:4 (pcm_s24le) -> amerge:in2 (graph 0)
 Stream #0:5 (pcm_s24le) -> amerge:in3 (graph 0)
 Stream #0:6 (pcm_s24le) -> amerge:in4 (graph 0)
 Stream #0:7 (pcm_s24le) -> amerge:in5 (graph 0)
 Stream #0:8 (pcm_s24le) -> amerge:in6 (graph 0)
 Stream #0:9 (pcm_s24le) -> amerge:in7 (graph 0)
 Stream #0:10 (pcm_s24le) -> amerge:in8 (graph 0)
 Stream #0:11 (pcm_s24le) -> amerge:in9 (graph 0)
 Stream #0:0 -> #0:0 (prores (native) -> mpeg2video (native))
 atempo (graph 0) -> Stream #0:1 (pcm_s24le)
Press [q] to stop, [?] for help
[Parsed_amerge_0 @ 06e18dc0] No channel layout for input 1
[Parsed_amerge_0 @ 06e18dc0] Input channel layouts overlap: output layout will be determined by the number of distinct input channels
[mpeg2video @ 06dea000] Automatically choosing VBV buffer size of 746 kbyte
Output #0, mxf, to 'R:\2_SERIES\Eps101_1920x1080_20_51_DV_CC_25fps_20210622_25to23976_works.mxf':
 Metadata:
 major_brand : qt
 minor_version : 537199360
 compatible_brands: qt
 encoder : Lavf58.30.100
 Stream #0:0(eng): Video: mpeg2video (4:2:2) (xd5d / 0x64356478), yuv422p, 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 50000 kb/s, 23.98 fps, 23.98 tbn, 23.98 tbc (default)
 Metadata:
 creation_time : 2021-06-22T17:39:50.000000Z
 handler_name : Apple Video Media Handler
 timecode : 00:59:59:00
 encoder : Lavc58.55.100 mpeg2video
 Side data:
 cpb: bitrate max/min/avg: 50000000/50000000/50000000 buffer size: 6111232 vbv_delay: 18446744073709551615
 Stream #0:1: Audio: pcm_s24le, 48000 Hz, 10 channels (FL+FR+FC+LFE+BL+BR+FLC+FRC+BC+SL), s32, 11520 kb/s (default)
 Metadata:
 encoder : Lavc58.55.100 pcm_s24le
frame= 527 fps= 61 q=2.0 Lsize= 165571kB time=00:00:22.00 bitrate=61652.6kbits/s speed=2.56x
video:133971kB audio:30938kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.402084%



Let me know if you need more detail than what I've provided.


Tangential questions related to this job, and potentially not worth their own threads, even though I've looked extensively and not found the answers (happy to post them individually if that's necessary):


- I can't seem to split any portion of the filter_complex above with a caret (^) within a Windows batch file (no number of spaces before or after resolves this). It breaks the chain and the filter graphs complain of having no input.
- Is FFMBC still the only way to include broadcast closed captioning? Does this functionality not exist within FFmpeg?








-
FFmpeg WASM writeFile Stalls and Doesn't Complete in React App with Ant Design
26 February, by raiyan khan
I'm using FFmpeg WebAssembly (WASM) in a React app to process and convert a video file before uploading it. The goal is to resize the video to 720p using FFmpeg before sending it to the backend.


Problem:


Everything works up to fetching the file and confirming it's loaded into memory, but FFmpeg hangs at ffmpeg.writeFile() and does not proceed further. No errors are thrown.


Code Snippets:


- Loading FFmpeg


const loadFFmpeg = async () => {
  if (loaded) return; // Avoid reloading if already loaded

  const baseURL = 'https://unpkg.com/@ffmpeg/core@0.12.6/dist/umd';
  const ffmpeg = ffmpegRef.current;
  ffmpeg.on('log', ({ message }) => {
    messageRef.current.innerHTML = message;
    console.log(message);
  });
  await ffmpeg.load({
    coreURL: await toBlobURL(`${baseURL}/ffmpeg-core.js`, 'text/javascript'),
    wasmURL: await toBlobURL(`${baseURL}/ffmpeg-core.wasm`, 'application/wasm'),
  });
  setLoaded(true);
};

useEffect(() => {
  loadFFmpeg()
}, [])



- Fetching and Writing File


const convertVideoTo720p = async (videoFile) => {
  console.log("Starting video conversion...");

  const { height } = await getVideoMetadata(videoFile);
  console.log(`Video height: ${height}`);

  if (height <= 720) {
    console.log("No conversion needed.");
    return videoFile;
  }

  const ffmpeg = ffmpegRef.current;
  console.log("FFmpeg instance loaded. Writing file to memory...");

  const fetchedFile = await fetchFile(videoFile);
  console.log("File fetched successfully:", fetchedFile);

  console.log("Checking FFmpeg memory before writing...");
  console.log(`File size: ${fetchedFile.length} bytes (~${(fetchedFile.length / 1024 / 1024).toFixed(2)} MB)`);

  if (!ffmpeg.isLoaded()) {
    console.error("FFmpeg is not fully loaded yet!");
    return;
  }

  console.log("Memory seems okay. Writing file to FFmpeg...");
  await ffmpeg.writeFile('input.mp4', fetchedFile); // ❌ This line hangs, nothing after runs
  console.log("File successfully written to FFmpeg memory.");
};









Debugging Steps I've Tried:


- Ensured FFmpeg is fully loaded before calling writeFile(): ✅ ffmpeg.isLoaded() returns true.
- Checked the file fetch process: ✅ fetchFile(videoFile) successfully returns a Uint8Array.
- Tried renaming the file to prevent caching issues: ✅ used a unique file name like video_${Date.now()}.mp4, but no change.
- Checked the browser console for errors: ❌ no errors are displayed.
- Tried skipping FFmpeg and uploading the raw file instead: ✅ upload works fine without FFmpeg, so the issue is specific to FFmpeg.












Expected Behavior


- ffmpeg.writeFile('input.mp4', fetchedFile); should complete and allow FFmpeg to process the video.




Actual Behavior


- Execution stops at writeFile, and no errors are thrown.




Environment:


- React: 18.x
- FFmpeg WASM version: @ffmpeg/ffmpeg@0.12.15
- Browser: Chrome 121, Edge 120
- Operating System: Windows 11










Question:
Why is FFmpeg's writeFile() stalling and never completing?
How can I fix or further debug this issue?
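
One thing I have been considering for further debugging is wrapping the call in a timeout so the silent hang at least surfaces as an error instead of stalling quietly. A small diagnostic sketch (plain Promise.race, nothing FFmpeg-specific; the 10-second limit is an arbitrary choice):

// Diagnostic helper: rejects if the wrapped promise has not settled within `ms`.
const withTimeout = (promise, ms, label) =>
  Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error(`${label} timed out after ${ms} ms`)), ms)
    ),
  ]);

// Usage inside convertVideoTo720p, in place of the bare writeFile call:
await withTimeout(ffmpeg.writeFile('input.mp4', fetchedFile), 10000, 'writeFile');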

Here is my full code:




import { useNavigate } from "react-router-dom";
import { useEffect, useRef, useState } from 'react';
import { Form, Input, Button, Select, Space } from 'antd';
const { Option } = Select;
import { FaAngleLeft } from "react-icons/fa6";
import { message, Upload } from 'antd';
import { CiCamera } from "react-icons/ci";
import { IoVideocamOutline } from "react-icons/io5";
import { useCreateWorkoutVideoMutation } from "../../../redux/features/workoutVideo/workoutVideoApi";
import { convertVideoTo720p } from "../../../utils/ffmpegHelper";
import { FFmpeg } from '@ffmpeg/ffmpeg';
import { fetchFile, toBlobURL } from '@ffmpeg/util';


const AddWorkoutVideo = () => {
 const [videoFile, setVideoFile] = useState(null);
 const [imageFile, setImageFile] = useState(null);
 const [loaded, setLoaded] = useState(false);
 const ffmpegRef = useRef(new FFmpeg());
 const videoRef = useRef(null);
 const messageRef = useRef(null);
 const [form] = Form.useForm();
 const [createWorkoutVideo, { isLoading }] = useCreateWorkoutVideoMutation()
 const navigate = useNavigate();

 const videoFileRef = useRef(null); // Use a ref instead of state


 // Handle Video Upload
 const handleVideoChange = ({ file }) => {
 setVideoFile(file.originFileObj);
 };

 // Handle Image Upload
 const handleImageChange = ({ file }) => {
 setImageFile(file.originFileObj);
 };

 // Load FFmpeg core if needed (optional if you want to preload)
 const loadFFmpeg = async () => {
 if (loaded) return; // Avoid reloading if already loaded

 const baseURL = 'https://unpkg.com/@ffmpeg/core@0.12.6/dist/umd';
 const ffmpeg = ffmpegRef.current;
 ffmpeg.on('log', ({ message }) => {
 messageRef.current.innerHTML = message;
 console.log(message);
 });
 await ffmpeg.load({
 coreURL: await toBlobURL(`${baseURL}/ffmpeg-core.js`, 'text/javascript'),
 wasmURL: await toBlobURL(`${baseURL}/ffmpeg-core.wasm`, 'application/wasm'),
 });
 setLoaded(true);
 };

 useEffect(() => {
 loadFFmpeg()
 }, [])

 // Helper: Get video metadata (width and height)
 const getVideoMetadata = (file) => {
 return new Promise((resolve, reject) => {
 const video = document.createElement('video');
 video.preload = 'metadata';
 video.onloadedmetadata = () => {
 resolve({ width: video.videoWidth, height: video.videoHeight });
 };
 video.onerror = () => reject(new Error('Could not load video metadata'));
 video.src = URL.createObjectURL(file);
 });
 };

 // Inline conversion helper function
 // const convertVideoTo720p = async (videoFile) => {
 // // Check the video resolution first
 // const { height } = await getVideoMetadata(videoFile);
 // if (height <= 720) {
 // // No conversion needed
 // return videoFile;
 // }
 // const ffmpeg = ffmpegRef.current;
 // // Load ffmpeg if not already loaded
 // // await ffmpeg.load({
 // // coreURL: await toBlobURL(`${baseURL}/ffmpeg-core.js`, 'text/javascript'),
 // // wasmURL: await toBlobURL(`${baseURL}/ffmpeg-core.wasm`, 'application/wasm'),
 // // });
 // // Write the input file to the ffmpeg virtual FS
 // await ffmpeg.writeFile('input.mp4', await fetchFile(videoFile));
 // // Convert video to 720p (scale filter maintains aspect ratio)
 // await ffmpeg.exec(['-i', 'input.mp4', '-vf', 'scale=-1:720', 'output.mp4']);
 // // Read the output file
 // const data = await ffmpeg.readFile('output.mp4');
 // console.log(data, 'data from convertVideoTo720p');
 // const videoBlob = new Blob([data.buffer], { type: 'video/mp4' });
 // return new File([videoBlob], 'output.mp4', { type: 'video/mp4' });
 // };
 const convertVideoTo720p = async (videoFile) => {
 console.log("Starting video conversion...");

 // Check the video resolution first
 const { height } = await getVideoMetadata(videoFile);
 console.log(`Video height: ${height}`);

 if (height <= 720) {
 console.log("No conversion needed. Returning original file.");
 return videoFile;
 }

 const ffmpeg = ffmpegRef.current;
 console.log("FFmpeg instance loaded. Writing file to memory...");

 // await ffmpeg.writeFile('input.mp4', await fetchFile(videoFile));
 // console.log("File written. Starting conversion...");
 console.log("Fetching file for FFmpeg:", videoFile);
 const fetchedFile = await fetchFile(videoFile);
 console.log("File fetched successfully:", fetchedFile);
 console.log("Checking FFmpeg memory before writing...");
 console.log(`File size: ${fetchedFile.length} bytes (~${(fetchedFile.length / 1024 / 1024).toFixed(2)} MB)`);

 if (fetchedFile.length > 50 * 1024 * 1024) { // 50MB limit
 console.error("File is too large for FFmpeg WebAssembly!");
 message.error("File too large. Try a smaller video.");
 return;
 }

 console.log("Memory seems okay. Writing file to FFmpeg...");
 const fileName = `video_${Date.now()}.mp4`; // Generate a unique name
 console.log(`Using filename: ${fileName}`);

 await ffmpeg.writeFile(fileName, fetchedFile);
 console.log(`File successfully written to FFmpeg memory as ${fileName}.`);

 await ffmpeg.exec(['-i', 'input.mp4', '-vf', 'scale=-1:720', 'output.mp4']);
 console.log("Conversion completed. Reading output file...");

 const data = await ffmpeg.readFile('output.mp4');
 console.log("File read successful. Creating new File object.");

 const videoBlob = new Blob([data.buffer], { type: 'video/mp4' });
 const convertedFile = new File([videoBlob], 'output.mp4', { type: 'video/mp4' });

 console.log(convertedFile, "converted video from convertVideoTo720p");

 return convertedFile;
 };


 const onFinish = async (values) => {
 // Ensure a video is selected
 if (!videoFileRef.current) {
 message.error("Please select a video file.");
 return;
 }

 // Create FormData
 const formData = new FormData();
 if (imageFile) {
 formData.append("image", imageFile);
 }

 try {
 message.info("Processing video. Please wait...");

 // Convert the video to 720p only if needed
 const convertedVideo = await convertVideoTo720p(videoFileRef.current);
 console.log(convertedVideo, 'convertedVideo from onFinish');

 formData.append("media", videoFileRef.current);

 formData.append("data", JSON.stringify(values));

 // Upload manually to the backend
 const response = await createWorkoutVideo(formData).unwrap();
 console.log(response, 'response from add video');

 message.success("Video added successfully!");
 form.resetFields(); // Reset form
 setVideoFile(null); // Clear file

 } catch (error) {
 message.error(error.data?.message || "Failed to add video.");
 }

 // if (videoFile) {
 // message.info("Processing video. Please wait...");
 // try {
 // // Convert the video to 720p only if needed
 // const convertedVideo = await convertVideoTo720p(videoFile);
 // formData.append("media", convertedVideo);
 // } catch (conversionError) {
 // message.error("Video conversion failed.");
 // return;
 // }
 // }
 // formData.append("data", JSON.stringify(values)); // Convert text fields to JSON

 // try {
 // const response = await createWorkoutVideo(formData).unwrap();
 // console.log(response, 'response from add video');

 // message.success("Video added successfully!");
 // form.resetFields(); // Reset form
 // setFile(null); // Clear file
 // } catch (error) {
 // message.error(error.data?.message || "Failed to add video.");
 // }
 };

 const handleBackButtonClick = () => {
 navigate(-1); // This takes the user back to the previous page
 };

 const videoUploadProps = {
 name: 'video',
 // action: 'https://660d2bd96ddfa2943b33731c.mockapi.io/api/upload',
 // headers: {
 // authorization: 'authorization-text',
 // },
 // beforeUpload: (file) => {
 // const isVideo = file.type.startsWith('video/');
 // if (!isVideo) {
 // message.error('You can only upload video files!');
 // }
 // return isVideo;
 // },
 // onChange(info) {
 // if (info.file.status === 'done') {
 // message.success(`${info.file.name} video uploaded successfully`);
 // } else if (info.file.status === 'error') {
 // message.error(`${info.file.name} video upload failed.`);
 // }
 // },
 beforeUpload: (file) => {
 const isVideo = file.type.startsWith('video/');
 if (!isVideo) {
 message.error('You can only upload video files!');
 return Upload.LIST_IGNORE; // Prevents the file from being added to the list
 }
 videoFileRef.current = file; // Store file in ref
 // setVideoFile(file); // Store the file in state instead of uploading it automatically
 return false; // Prevent auto-upload
 },
 };

 const imageUploadProps = {
 name: 'image',
 action: 'https://660d2bd96ddfa2943b33731c.mockapi.io/api/upload',
 headers: {
 authorization: 'authorization-text',
 },
 beforeUpload: (file) => {
 const isImage = file.type.startsWith('image/');
 if (!isImage) {
 message.error('You can only upload image files!');
 }
 return isImage;
 },
 onChange(info) {
 if (info.file.status === 'done') {
 message.success(`${info.file.name} image uploaded successfully`);
 } else if (info.file.status === 'error') {
 message.error(`${info.file.name} image upload failed.`);
 }
 },
 };
 return (
 <>
 <div classname="flex items-center gap-2 text-xl cursor-pointer">
 <faangleleft></faangleleft>
 <h1 classname="font-semibold">Add Video</h1>
 </div>
 <div classname="rounded-lg py-4 border-[#79CDFF] border-2 shadow-lg mt-8 bg-white">
 <div classname="space-y-[24px] min-h-[83vh] bg-light-gray rounded-2xl">
 <h3 classname="text-2xl text-[#174C6B] mb-4 border-b border-[#79CDFF]/50 pb-3 pl-16 font-semibold">
 Adding Video
 </h3>
 <div classname="w-full px-16">
 / style={{ maxWidth: 600, margin: '0 auto' }}
 >
 {/* Section 1 */}
 {/* <space direction="vertical" style="{{"> */}
 {/* <space size="large" direction="horizontal" classname="responsive-space"> */}
 <div classname="grid grid-cols-2 gap-8 mt-8">
 <div>
 <space size="large" direction="horizontal" classname="responsive-space-section-2">

 {/* Video */}
 Upload Video}
 name="media"
 className="responsive-form-item"
 // rules={[{ required: true, message: 'Please enter the package amount!' }]}
 >
 <upload maxcount="{1}">
 <button style="{{" solid="solid">
 <span style="{{" 600="600">Select a video</span>
 <iovideocamoutline size="{20}" color="#174C6B"></iovideocamoutline>
 </button>
 </upload>
 

 {/* Thumbnail */}
 Upload Image}
 name="image"
 className="responsive-form-item"
 // rules={[{ required: true, message: 'Please enter the package amount!' }]}
 >
 <upload maxcount="{1}">
 <button style="{{" solid="solid">
 <span style="{{" 600="600">Select an image</span>
 <cicamera size="{25}" color="#174C6B"></cicamera>
 </button>
 </upload>
 

 {/* Title */}
 Video Title}
 name="name"
 className="responsive-form-item-section-2"
 >
 <input type="text" placeholder="Enter video title" style="{{&#xA;" solid="solid" />
 
 </space>
 </div>
 </div>

 {/* </space> */}
 {/* </space> */}


 {/* Submit Button */}
 
 <div classname="p-4 mt-10 text-center mx-auto flex items-center justify-center">
 
 <span classname="text-white font-semibold">{isLoading ? 'Uploading...' : 'Upload'}</span>
 
 </div>
 
 
 </div>
 </div>
 </div>
 >
 )
}

export default AddWorkoutVideo







Would appreciate any insights or suggestions. Thanks!


-
How to fix a segmentation fault in a C program? [closed]
13 January 2012, by ipegasus
Possible Duplicate: Segmentation fault

Currently I am upgrading an open-source program used for HTTP streaming. It needs to support the latest FFmpeg.
The code compiles fine with no warnings, although I am getting a segmentation fault.
I would like to know how to fix the issue and/or the best way to debug it. Please find attached a portion of the code (due to size); I will try to add the project to GitHub :) Thanks in advance!

Sample Usage
# segmenter --i out.ts --l 10 --o stream.m3u8 --d segments --f stream
Makefile
FFLIBS=`pkg-config --libs libavformat libavcodec libavutil`
FFFLAGS=`pkg-config --cflags libavformat libavcodec libavutil`
all:
	gcc -Wall -g segmenter.c -o segmenter ${FFFLAGS} ${FFLIBS}

segmenter.c
/*
* Copyright (c) 2009 Chase Douglas
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License version 2
* as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include
#include
#include
#include
#include
#include "libavformat/avformat.h"
#include "libavformat/avio.h"
#include <sys/stat.h>
#include "segmenter.h"
#include "libavformat/avformat.h"
#define IMAGE_ID3_SIZE 9171
void printUsage() {
fprintf(stderr, "\nExample: segmenter --i infile --d baseDir --f baseFileName --o playListFile.m3u8 --l 10 \n");
fprintf(stderr, "\nOptions: \n");
fprintf(stderr, "--i <infile>.\n");
fprintf(stderr, "--o <outfile>.\n");
fprintf(stderr, "--d basedir, the base directory for files.\n");
fprintf(stderr, "--f baseFileName, output files will be baseFileName-#.\n");
fprintf(stderr, "--l segment length, the length of each segment.\n");
fprintf(stderr, "--a, audio only decode for < 64k streams.\n");
fprintf(stderr, "--v, video only decode for < 64k streams.\n");
fprintf(stderr, "--version, print version details and exit.\n");
fprintf(stderr, "\n\n");
}
void ffmpeg_version() {
// output build and version numbers
fprintf(stderr, " libavutil version: %s\n", AV_STRINGIFY(LIBAVUTIL_VERSION));
fprintf(stderr, " libavutil build: %d\n", LIBAVUTIL_BUILD);
fprintf(stderr, " libavcodec version: %s\n", AV_STRINGIFY(LIBAVCODEC_VERSION));
fprintf(stdout, " libavcodec build: %d\n", LIBAVCODEC_BUILD);
fprintf(stderr, " libavformat version: %s\n", AV_STRINGIFY(LIBAVFORMAT_VERSION));
fprintf(stderr, " libavformat build: %d\n", LIBAVFORMAT_BUILD);
fprintf(stderr, " built on " __DATE__ " " __TIME__);
#ifdef __GNUC__
fprintf(stderr, ", gcc: " __VERSION__ "\n");
#else
fprintf(stderr, ", using a non-gcc compiler\n");
#endif
}
static AVStream *add_output_stream(AVFormatContext *output_format_context, AVStream *input_stream) {
AVCodecContext *input_codec_context;
AVCodecContext *output_codec_context;
AVStream *output_stream;
output_stream = avformat_new_stream(output_format_context, 0);
if (!output_stream) {
fprintf(stderr, "Segmenter error: Could not allocate stream\n");
exit(1);
}
input_codec_context = input_stream->codec;
output_codec_context = output_stream->codec;
output_codec_context->codec_id = input_codec_context->codec_id;
output_codec_context->codec_type = input_codec_context->codec_type;
output_codec_context->codec_tag = input_codec_context->codec_tag;
output_codec_context->bit_rate = input_codec_context->bit_rate;
output_codec_context->extradata = input_codec_context->extradata;
output_codec_context->extradata_size = input_codec_context->extradata_size;
if (av_q2d(input_codec_context->time_base) * input_codec_context->ticks_per_frame > av_q2d(input_stream->time_base) && av_q2d(input_stream->time_base) < 1.0 / 1000) {
output_codec_context->time_base = input_codec_context->time_base;
output_codec_context->time_base.num *= input_codec_context->ticks_per_frame;
} else {
output_codec_context->time_base = input_stream->time_base;
}
switch (input_codec_context->codec_type) {
#ifdef USE_OLD_FFMPEG
case CODEC_TYPE_AUDIO:
#else
case AVMEDIA_TYPE_AUDIO:
#endif
output_codec_context->channel_layout = input_codec_context->channel_layout;
output_codec_context->sample_rate = input_codec_context->sample_rate;
output_codec_context->channels = input_codec_context->channels;
output_codec_context->frame_size = input_codec_context->frame_size;
if ((input_codec_context->block_align == 1 && input_codec_context->codec_id == CODEC_ID_MP3) || input_codec_context->codec_id == CODEC_ID_AC3) {
output_codec_context->block_align = 0;
} else {
output_codec_context->block_align = input_codec_context->block_align;
}
break;
#ifdef USE_OLD_FFMPEG
case CODEC_TYPE_VIDEO:
#else
case AVMEDIA_TYPE_VIDEO:
#endif
output_codec_context->pix_fmt = input_codec_context->pix_fmt;
output_codec_context->width = input_codec_context->width;
output_codec_context->height = input_codec_context->height;
output_codec_context->has_b_frames = input_codec_context->has_b_frames;
if (output_format_context->oformat->flags & AVFMT_GLOBALHEADER) {
output_codec_context->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
break;
default:
break;
}
return output_stream;
}
int write_index_file(const char index[], const char tmp_index[], const unsigned int planned_segment_duration, const unsigned int actual_segment_duration[],
const char output_directory[], const char output_prefix[], const char output_file_extension[],
const unsigned int first_segment, const unsigned int last_segment) {
FILE *index_fp;
char *write_buf;
unsigned int i;
index_fp = fopen(tmp_index, "w");
if (!index_fp) {
fprintf(stderr, "Could not open temporary m3u8 index file (%s), no index file will be created\n", tmp_index);
return -1;
}
write_buf = malloc(sizeof (char) * 1024);
if (!write_buf) {
fprintf(stderr, "Could not allocate write buffer for index file, index file will be invalid\n");
fclose(index_fp);
return -1;
}
unsigned int maxDuration = planned_segment_duration;
for (i = first_segment; i <= last_segment; i++)
if (actual_segment_duration[i] > maxDuration)
maxDuration = actual_segment_duration[i];
snprintf(write_buf, 1024, "#EXTM3U\n#EXT-X-TARGETDURATION:%u\n", maxDuration);
if (fwrite(write_buf, strlen(write_buf), 1, index_fp) != 1) {
fprintf(stderr, "Could not write to m3u8 index file, will not continue writing to index file\n");
free(write_buf);
fclose(index_fp);
return -1;
}
for (i = first_segment; i <= last_segment; i++) {
snprintf(write_buf, 1024, "#EXTINF:%u,\n%s-%u%s\n", actual_segment_duration[i], output_prefix, i, output_file_extension);
if (fwrite(write_buf, strlen(write_buf), 1, index_fp) != 1) {
fprintf(stderr, "Could not write to m3u8 index file, will not continue writing to index file\n");
free(write_buf);
fclose(index_fp);
return -1;
}
}
snprintf(write_buf, 1024, "#EXT-X-ENDLIST\n");
if (fwrite(write_buf, strlen(write_buf), 1, index_fp) != 1) {
fprintf(stderr, "Could not write last file and endlist tag to m3u8 index file\n");
free(write_buf);
fclose(index_fp);
return -1;
}
free(write_buf);
fclose(index_fp);
return rename(tmp_index, index);
}
int main(int argc, const char *argv[]) {
//input parameters
char inputFilename[MAX_FILENAME_LENGTH], playlistFilename[MAX_FILENAME_LENGTH], baseDirName[MAX_FILENAME_LENGTH], baseFileName[MAX_FILENAME_LENGTH];
char baseFileExtension[5]; //either "ts", "aac" or "mp3"
int segmentLength, outputStreams, verbosity, version;
char currentOutputFileName[MAX_FILENAME_LENGTH];
char tempPlaylistName[MAX_FILENAME_LENGTH];
//these are used to determine the exact length of the current segment
double prev_segment_time = 0;
double segment_time;
unsigned int actual_segment_durations[2048];
double packet_time = 0;
//new variables to keep track of output size
double output_bytes = 0;
unsigned int output_index = 1;
AVOutputFormat *ofmt;
AVFormatContext *ic = NULL;
AVFormatContext *oc;
AVStream *video_st = NULL;
AVStream *audio_st = NULL;
AVCodec *codec;
int video_index;
int audio_index;
unsigned int first_segment = 1;
unsigned int last_segment = 0;
int write_index = 1;
int decode_done;
int ret;
int i;
unsigned char id3_tag[128];
unsigned char * image_id3_tag;
size_t id3_tag_size = 73;
int newFile = 1; //a boolean value to flag when a new file needs id3 tag info in it
if (parseCommandLine(inputFilename, playlistFilename, baseDirName, baseFileName, baseFileExtension, &outputStreams, &segmentLength, &verbosity, &version, argc, argv) != 0)
return 0;
if (version) {
ffmpeg_version();
return 0;
}
fprintf(stderr, "%s %s\n", playlistFilename, tempPlaylistName);
image_id3_tag = malloc(IMAGE_ID3_SIZE);
if (outputStreams == OUTPUT_STREAM_AUDIO)
build_image_id3_tag(image_id3_tag);
build_id3_tag((char *) id3_tag, id3_tag_size);
snprintf(tempPlaylistName, strlen(playlistFilename) + strlen(baseDirName) + 1, "%s%s", baseDirName, playlistFilename);
strncpy(playlistFilename, tempPlaylistName, strlen(tempPlaylistName));
strncpy(tempPlaylistName, playlistFilename, MAX_FILENAME_LENGTH);
strncat(tempPlaylistName, ".", 1);
//decide if this is an aac file or a mpegts file.
//postpone deciding format until later
/* ifmt = av_find_input_format("mpegts");
if (!ifmt)
{
fprintf(stderr, "Could not find MPEG-TS demuxer.\n");
exit(1);
} */
av_log_set_level(AV_LOG_DEBUG);
av_register_all();
ret = avformat_open_input(&ic, inputFilename, NULL, NULL);
if (ret != 0) {
fprintf(stderr, "Could not open input file %s. Error %d.\n", inputFilename, ret);
exit(1);
}
if (avformat_find_stream_info(ic, NULL) < 0) {
fprintf(stderr, "Could not read stream information.\n");
exit(1);
}
oc = avformat_alloc_context();
if (!oc) {
fprintf(stderr, "Could not allocate output context.");
exit(1);
}
video_index = -1;
audio_index = -1;
for (i = 0; i < ic->nb_streams && (video_index < 0 || audio_index < 0); i++) {
switch (ic->streams[i]->codec->codec_type) {
#ifdef USE_OLD_FFMPEG
case CODEC_TYPE_VIDEO:
#else
case AVMEDIA_TYPE_VIDEO:
#endif
video_index = i;
ic->streams[i]->discard = AVDISCARD_NONE;
if (outputStreams & OUTPUT_STREAM_VIDEO)
video_st = add_output_stream(oc, ic->streams[i]);
break;
#ifdef USE_OLD_FFMPEG
case CODEC_TYPE_AUDIO:
#else
case AVMEDIA_TYPE_AUDIO:
#endif
audio_index = i;
ic->streams[i]->discard = AVDISCARD_NONE;
if (outputStreams & OUTPUT_STREAM_AUDIO)
audio_st = add_output_stream(oc, ic->streams[i]);
break;
default:
ic->streams[i]->discard = AVDISCARD_ALL;
break;
}
}
if (video_index == -1) {
fprintf(stderr, "Stream must have video component.\n");
exit(1);
}
//now that we know the audio and video output streams
//we can decide on an output format.
if (outputStreams == OUTPUT_STREAM_AUDIO) {
//the audio output format should be the same as the audio input format
switch (ic->streams[audio_index]->codec->codec_id) {
case CODEC_ID_MP3:
fprintf(stderr, "Setting output audio to mp3.");
strncpy(baseFileExtension, ".mp3", strlen(".mp3"));
ofmt = av_guess_format("mp3", NULL, NULL);
break;
case CODEC_ID_AAC:
fprintf(stderr, "Setting output audio to aac.");
ofmt = av_guess_format("adts", NULL, NULL);
break;
default:
fprintf(stderr, "Codec id %d not supported.\n", ic->streams[audio_index]->id);
}
if (!ofmt) {
fprintf(stderr, "Could not find audio muxer.\n");
exit(1);
}
} else {
ofmt = av_guess_format("mpegts", NULL, NULL);
if (!ofmt) {
fprintf(stderr, "Could not find MPEG-TS muxer.\n");
exit(1);
}
}
oc->oformat = ofmt;
if (outputStreams & OUTPUT_STREAM_VIDEO && oc->oformat->flags & AVFMT_GLOBALHEADER) {
oc->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
/* Deprecated: pass the options to avformat_write_header directly.
if (av_set_parameters(oc, NULL) < 0) {
fprintf(stderr, "Invalid output format parameters.\n");
exit(1);
}
*/
av_dump_format(oc, 0, baseFileName, 1);
//open the video codec only if there is video data
if (video_index != -1) {
if (outputStreams & OUTPUT_STREAM_VIDEO)
codec = avcodec_find_decoder(video_st->codec->codec_id);
else
codec = avcodec_find_decoder(ic->streams[video_index]->codec->codec_id);
if (!codec) {
fprintf(stderr, "Could not find video decoder, key frames will not be honored.\n");
}
if (outputStreams & OUTPUT_STREAM_VIDEO)
ret = avcodec_open2(video_st->codec, codec, NULL);
else
avcodec_open2(ic->streams[video_index]->codec, codec, NULL);
if (ret < 0) {
fprintf(stderr, "Could not open video decoder, key frames will not be honored.\n");
}
}
snprintf(currentOutputFileName, strlen(baseDirName) + strlen(baseFileName) + strlen(baseFileExtension) + 10, "%s%s-%u%s", baseDirName, baseFileName, output_index++, baseFileExtension);
if (avio_open(&oc->pb, currentOutputFileName, URL_WRONLY) < 0) {
fprintf(stderr, "Could not open '%s'.\n", currentOutputFileName);
exit(1);
}
newFile = 1;
int r = avformat_write_header(oc,NULL);
if (r) {
fprintf(stderr, "Could not write mpegts header to first output file.\n");
debugReturnCode(r);
exit(1);
}
//no segment info is written here. This just creates the shell of the playlist file
write_index = !write_index_file(playlistFilename, tempPlaylistName, segmentLength, actual_segment_durations, baseDirName, baseFileName, baseFileExtension, first_segment, last_segment);
do {
AVPacket packet;
decode_done = av_read_frame(ic, &packet);
if (decode_done < 0) {
break;
}
if (av_dup_packet(&packet) < 0) {
fprintf(stderr, "Could not duplicate packet.");
av_free_packet(&packet);
break;
}
//this time is used to check for a break in the segments
// if (packet.stream_index == video_index && (packet.flags & PKT_FLAG_KEY))
// {
// segment_time = (double)video_st->pts.val * video_st->time_base.num / video_st->time_base.den;
// }
#if USE_OLD_FFMPEG
if (packet.stream_index == video_index && (packet.flags & PKT_FLAG_KEY))
#else
if (packet.stream_index == video_index && (packet.flags & AV_PKT_FLAG_KEY))
#endif
{
segment_time = (double) packet.pts * ic->streams[video_index]->time_base.num / ic->streams[video_index]->time_base.den;
}
// else if (video_index < 0)
// {
// segment_time = (double)audio_st->pts.val * audio_st->time_base.num / audio_st->time_base.den;
// }
//get the most recent packet time
//this time is used when the time for the final segment is printed. It may not be on the edge of
//of a keyframe!
if (packet.stream_index == video_index)
packet_time = (double) packet.pts * ic->streams[video_index]->time_base.num / ic->streams[video_index]->time_base.den; //(double)video_st->pts.val * video_st->time_base.num / video_st->time_base.den;
else if (outputStreams & OUTPUT_STREAM_AUDIO)
packet_time = (double) audio_st->pts.val * audio_st->time_base.num / audio_st->time_base.den;
else
continue;
//start looking for segment splits for videos one half second before segment duration expires. This is because the
//segments are split on key frames so we cannot expect all segments to be split exactly equally.
if (segment_time - prev_segment_time >= segmentLength - 0.5) {
fprintf(stderr, "looking to print index file at time %lf\n", segment_time);
avio_flush(oc->pb);
avio_close(oc->pb);
if (write_index) {
actual_segment_durations[++last_segment] = (unsigned int) rint(segment_time - prev_segment_time);
write_index = !write_index_file(playlistFilename, tempPlaylistName, segmentLength, actual_segment_durations, baseDirName, baseFileName, baseFileExtension, first_segment, last_segment);
fprintf(stderr, "Writing index file at time %lf\n", packet_time);
}
struct stat st;
stat(currentOutputFileName, &st);
output_bytes += st.st_size;
snprintf(currentOutputFileName, strlen(baseDirName) + strlen(baseFileName) + strlen(baseFileExtension) + 10, "%s%s-%u%s", baseDirName, baseFileName, output_index++, baseFileExtension);
if (avio_open(&oc->pb, currentOutputFileName, URL_WRONLY) < 0) {
fprintf(stderr, "Could not open '%s'\n", currentOutputFileName);
break;
}
newFile = 1;
prev_segment_time = segment_time;
}
if (outputStreams == OUTPUT_STREAM_AUDIO && packet.stream_index == audio_index) {
if (newFile && outputStreams == OUTPUT_STREAM_AUDIO) {
//add id3 tag info
//fprintf(stderr, "adding id3tag to file %s\n", currentOutputFileName);
//printf("%lf %lld %lld %lld %lld %lld %lf\n", segment_time, audio_st->pts.val, audio_st->cur_dts, audio_st->cur_pkt.pts, packet.pts, packet.dts, packet.dts * av_q2d(ic->streams[audio_index]->time_base) );
fill_id3_tag((char*) id3_tag, id3_tag_size, packet.dts);
avio_write(oc->pb, id3_tag, id3_tag_size);
avio_write(oc->pb, image_id3_tag, IMAGE_ID3_SIZE);
avio_flush(oc->pb);
newFile = 0;
}
packet.stream_index = 0; //only one stream in audio only segments
ret = av_interleaved_write_frame(oc, &packet);
} else if (outputStreams & OUTPUT_STREAM_VIDEO) {
if (newFile) {
//fprintf(stderr, "New File: %lld %lld %lld\n", packet.pts, video_st->pts.val, audio_st->pts.val);
//printf("%lf %lld %lld %lld %lld %lld %lf\n", segment_time, audio_st->pts.val, audio_st->cur_dts, audio_st->cur_pkt.pts, packet.pts, packet.dts, packet.dts * av_q2d(ic->streams[audio_index]->time_base) );
newFile = 0;
}
if (outputStreams == OUTPUT_STREAM_VIDEO)
ret = av_write_frame(oc, &packet);
else
ret = av_interleaved_write_frame(oc, &packet);
}
if (ret < 0) {
fprintf(stderr, "Warning: Could not write frame of stream.\n");
} else if (ret > 0) {
fprintf(stderr, "End of stream requested.\n");
av_free_packet(&packet);
break;
}
av_free_packet(&packet);
} while (!decode_done);
//make sure all packets are written and then close the last file.
avio_flush(oc->pb);
av_write_trailer(oc);
if (video_st && video_st->codec)
avcodec_close(video_st->codec);
if (audio_st && audio_st->codec)
avcodec_close(audio_st->codec);
for (i = 0; i < oc->nb_streams; i++) {
av_freep(&oc->streams[i]->codec);
av_freep(&oc->streams[i]);
}
avio_close(oc->pb);
av_free(oc);
struct stat st;
stat(currentOutputFileName, &st);
output_bytes += st.st_size;
if (write_index) {
actual_segment_durations[++last_segment] = (unsigned int) rint(packet_time - prev_segment_time);
//make sure that the last segment length is not zero
if (actual_segment_durations[last_segment] == 0)
actual_segment_durations[last_segment] = 1;
write_index_file(playlistFilename, tempPlaylistName, segmentLength, actual_segment_durations, baseDirName, baseFileName, baseFileExtension, first_segment, last_segment);
}
write_stream_size_file(baseDirName, baseFileName, output_bytes * 8 / segment_time);
return 0;
}