
Media (3)
-
The Slip - Artworks
26 September 2011
Updated: September 2011
Language: English
Type: Text
-
Podcasting Legal guide
16 May 2011
Updated: May 2011
Language: English
Type: Text
-
Creative Commons informational flyer
16 May 2011
Updated: July 2013
Language: English
Type: Text
Other articles (46)
-
General document management
13 May 2011. MediaSPIP never modifies the original document that is put online.
For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original downloadable in case it cannot be read directly in a web browser; and retrieving the original document's metadata to describe the file textually (a rough sketch of this two-step idea follows after this list).
The tables below explain what MediaSPIP can do (...) -
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out. -
Sites built with MediaSPIP
2 May 2011. This page presents some of the sites running MediaSPIP.
You can of course add yours using the form at the bottom of the page.
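The two-step handling described above under "General document management" could, in spirit, look something like the sketch below. This is not MediaSPIP's actual code, just an illustration of the idea using ffmpeg/ffprobe, with made-up file names.
# Not MediaSPIP's actual code: a sketch of the two described steps, with assumed file names.
# 1) create an additional, browser-friendly copy while leaving the original untouched
ffmpeg -i original.avi -c:v libx264 -c:a aac -movflags +faststart web-version.mp4
# 2) read the original's metadata to describe the file textually
ffprobe -v quiet -print_format json -show_format -show_streams original.avi > metadata.json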
On other sites (7505)
-
FFMpeg is slow on Android [duplicate]
28 June 2017, by Oleg Filimonov
This question already has an answer here:
I’m using this library: https://github.com/WritingMinds/ffmpeg-android
Executing this command:
ffmpeg -y -i video.mp4 -ignore_loop 0 -i overlay.gif -filter_complex `
>> "[0][1]overlay=x=mod((1080/4 + sin(t)*1080/4)\,1920):y=mod(t*300\,1080):shortest=1[res];`
>> [res][1]overlay=x=mod((200*1 + 1080/4 - sin(2*t)*1080/4)\,1920):y=mod(100 + t*300\,1080):shortest=1[res];`
>> [res][1]overlay=x=mod((200*2 + 1080/4 - sin(-90 + t/2)*1080/4)\,1920):y=mod(150 + t*250\,1080):shortest=1[res];`
>> [res][1]overlay=x=mod((200*3 + 1080/4 - sin(90 + 3*t)*1080/4)\,1920):y=mod(200 + t*100\,1080):shortest=1[res];`
>> [res][1]overlay=x=mod((200*4 + 1080/4 - sin(180 + t)*1080/4)\,1920):y=mod(t*250\,1080):shortest=1[res];`
>> [res][1]overlay=x=mod((200*5 + 1080/4 - sin(-180 + t)*1080/4)\,1920):y=mod(t*100\,1080):shortest=1[res];`
>> [res][1]overlay=x=mod((200*6 + 1080/4 - sin(2*t)*1080/4)\,1920):y=100 + mod(100 + t*300\,1080):shortest=1[res]`
>> " -map "[res]" -preset ultrafast out.mp4
(Basically, add a bunch of animated overlays to the video.)
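To make the overlay expressions easier to follow, here is a stripped-down, single-overlay variant of the same idea, written as one POSIX-shell line (input names as above, only the first overlay chain kept): x oscillates around 1080/4 with a sine term and is wrapped by mod(...,1920) to stay inside the frame width, while y scrolls linearly with t modulo 1080.
# Single animated overlay, same expression as the first chain of the full command
ffmpeg -y -i video.mp4 -ignore_loop 0 -i overlay.gif \
  -filter_complex "[0][1]overlay=x=mod((1080/4 + sin(t)*1080/4)\,1920):y=mod(t*300\,1080):shortest=1[res]" \
  -map "[res]" -preset ultrafast out.mp4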
On a PC the full command takes about a second to execute (render speed is about 278 fps), but on an Android device with a Snapdragon 821 it takes about 10 seconds (about 30 fps).
The curious thing is that while the command is executing on the PC, CPU load is around 90%, but on Android it’s less than 1% (see screenshot: the initial spike is UI work plus loading binaries; afterwards ffmpeg was executing until the 10-second mark).
I tried executing the same command, but writing to /dev/null instead of the file:
ffmpeg -y -i video.mp4 -ignore_loop 0 -i overlay.gif -filter_complex `
>> "[0][1]overlay=x=mod((1080/4 + sin(t)*1080/4)\,1920):y=mod(t*300\,1080):shortest=1[res];`
>> [res][1]overlay=x=mod((200*1 + 1080/4 - sin(2*t)*1080/4)\,1920):y=mod(100 + t*300\,1080):shortest=1[res];`
>> [res][1]overlay=x=mod((200*2 + 1080/4 - sin(-90 + t/2)*1080/4)\,1920):y=mod(150 + t*250\,1080):shortest=1[res];`
>> [res][1]overlay=x=mod((200*3 + 1080/4 - sin(90 + 3*t)*1080/4)\,1920):y=mod(200 + t*100\,1080):shortest=1[res];`
>> [res][1]overlay=x=mod((200*4 + 1080/4 - sin(180 + t)*1080/4)\,1920):y=mod(t*250\,1080):shortest=1[res];`
>> [res][1]overlay=x=mod((200*5 + 1080/4 - sin(-180 + t)*1080/4)\,1920):y=mod(t*100\,1080):shortest=1[res];`
>> [res][1]overlay=x=mod((200*6 + 1080/4 - sin(2*t)*1080/4)\,1920):y=100 + mod(100 + t*300\,1080):shortest=1[res]`
>> " -map "[res]" -preset ultrafast -f null /dev/null
This reduced the time by 3 seconds.
I also tried replacing the GIF overlay with a PNG overlay, but speed didn’t improve significantly.
Here’s the log of the command execution on Android:
ffmpeg version n3.0.1 Copyright (c) 2000-2016 the FFmpeg developers
built with gcc 4.8 (GCC)
configuration: --target-os=linux --cross-prefix=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi- --arch=arm --cpu=cortex-a8 --enable-runtime-cpudetect --sysroot=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/sysroot --enable-pic --enable-libx264 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-fontconfig --enable-pthreads --disable-debug --disable-ffserver --enable-version3 --enable-hardcoded-tables --disable-ffplay --disable-ffprobe --enable-gpl --enable-yasm --disable-doc --disable-shared --enable-static --pkg-config=/home/vagrant/SourceCode/ffmpeg-android/ffmpeg-pkg-config --prefix=/home/vagrant/SourceCode/ffmpeg-android/build/armeabi-v7a --extra-cflags='-I/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all' --extra-ldflags='-L/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie' --extra-libs='-lpng -lexpat -lm' --extra-cxxflags=
libavutil 55. 17.103 / 55. 17.103
libavcodec 57. 24.102 / 57. 24.102
libavformat 57. 25.100 / 57. 25.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 31.100 / 6. 31.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/data/user/0/com.example.user.proofofconcept/files/in_10s.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
creation_time : 2017-06-27 18:07:54
Duration: 00:00:10.47, start: 0.000000, bitrate: 14043 kb/s
Stream #0:0(eng): Video: h264 (Baseline) (avc1 / 0x31637661), yuvj420p(pc, smpte170m), 1280x720, 13971 kb/s, 29.86 fps, 30.01 tbr, 90k tbn, 180k tbc (default)
Metadata:
creation_time : 2017-06-27 18:07:53
handler_name : VideoHandle
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 96 kb/s (default)
Metadata:
creation_time : 2017-06-27 18:07:54
handler_name : SoundHandle
Input #1, gif, from '/data/user/0/com.example.user.proofofconcept/files/overlay.gif':
Duration: N/A, bitrate: N/A
Stream #1:0: Video: gif, bgra, 130x98, 6.17 fps, 4.92 tbr, 100 tbn, 100 tbc
[swscaler @ 0xeddb3000] deprecated pixel format used, make sure you did set range correctly
[libx264 @ 0xee1fdc00] using cpu capabilities: none!
[libx264 @ 0xee1fdc00] profile Constrained Baseline, level 3.2
[libx264 @ 0xee1fdc00] 264 - core 148 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=25 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0
Output #0, mp4, to '/storage/emulated/0/temp/out.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.25.100
Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 1280x720, q=-1--1, 30.01 fps, 90k tbn, 30.01 tbc (default)
Metadata:
encoder : Lavc57.24.102 libx264
Side data:
unknown side data type 10 (24 bytes)
Stream mapping:
Stream #0:0 (h264) -> overlay:main
Stream #1:0 (gif) -> overlay:overlay
Stream #1:0 (gif) -> overlay:overlay
Stream #1:0 (gif) -> overlay:overlay
Stream #1:0 (gif) -> overlay:overlay
Stream #1:0 (gif) -> overlay:overlay
Stream #1:0 (gif) -> overlay:overlay
Stream #1:0 (gif) -> overlay:overlay
overlay -> Stream #0:0 (libx264)
Press [q] to stop, [?] for help
frame= 16 fps=0.0 q=24.0 size= 227kB time=00:00:00.29 bitrate=6210.2kbits/s dup=2 drop=0 speed=0.56x
frame= 29 fps= 28 q=24.0 size= 451kB time=00:00:00.73 bitrate=5042.1kbits/s dup=2 drop=0 speed=0.707x
frame= 40 fps= 26 q=24.0 size= 637kB time=00:00:01.09 bitrate=4744.5kbits/s dup=2 drop=0 speed=0.71x
frame= 57 fps= 28 q=24.0 size= 903kB time=00:00:01.66 bitrate=4441.3kbits/s dup=2 drop=0 speed=0.812x
frame= 75 fps= 29 q=24.0 size= 1186kB time=00:00:02.26 bitrate=4288.0kbits/s dup=2 drop=0 speed=0.885x
frame= 94 fps= 31 q=24.0 size= 1456kB time=00:00:02.89 bitrate=4113.4kbits/s dup=2 drop=0 speed=0.945x
frame= 113 fps= 32 q=25.0 size= 1780kB time=00:00:03.53 bitrate=4127.3kbits/s dup=2 drop=0 speed=0.985x
frame= 131 fps= 32 q=24.0 size= 2101kB time=00:00:04.13 bitrate=4165.9kbits/s dup=2 drop=0 speed=1.01x
frame= 150 fps= 33 q=24.0 size= 2461kB time=00:00:04.76 bitrate=4231.2kbits/s dup=2 drop=0 speed=1.03x
frame= 165 fps= 32 q=24.0 size= 2721kB time=00:00:05.26 bitrate=4234.1kbits/s dup=2 drop=0 speed=1.03x
frame= 185 fps= 33 q=24.0 size= 3028kB time=00:00:05.93 bitrate=4182.4kbits/s dup=2 drop=0 speed=1.05x
frame= 205 fps= 33 q=24.0 size= 3329kB time=00:00:06.59 bitrate=4133.3kbits/s dup=2 drop=0 speed=1.07x
frame= 224 fps= 34 q=24.0 size= 3610kB time=00:00:07.23 bitrate=4089.7kbits/s dup=2 drop=0 speed=1.08x
frame= 243 fps= 34 q=24.0 size= 3901kB time=00:00:07.86 bitrate=4063.3kbits/s dup=2 drop=0 speed=1.09x
frame= 264 fps= 34 q=24.0 size= 4297kB time=00:00:08.56 bitrate=4110.6kbits/s dup=2 drop=0 speed=1.11x
frame= 284 fps= 35 q=24.0 size= 4601kB time=00:00:09.23 bitrate=4083.5kbits/s dup=2 drop=0 speed=1.12x
frame= 304 fps= 35 q=24.0 size= 4886kB time=00:00:09.89 bitrate=4044.4kbits/s dup=2 drop=0 speed=1.14x
frame= 314 fps= 35 q=-1.0 Lsize= 5138kB time=00:00:10.46 bitrate=4022.4kbits/s dup=2 drop=0 speed=1.16x
video:5135kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.040961%
[libx264 @ 0xee1fdc00] frame I:2 Avg QP:20.50 size: 84700
[libx264 @ 0xee1fdc00] frame P:312 Avg QP:24.06 size: 16310
[libx264 @ 0xee1fdc00] mb I I16..4: 100.0% 0.0% 0.0%
[libx264 @ 0xee1fdc00] mb P I16..4: 7.7% 0.0% 0.0% P16..4: 54.9% 0.0% 0.0% 0.0% 0.0% skip:37.4%
[libx264 @ 0xee1fdc00] coded y,uvDC,uvAC intra: 62.7% 22.9% 3.4% inter: 26.1% 8.4% 0.9%
[libx264 @ 0xee1fdc00] i16 v,h,dc,p: 13% 14% 59% 13%
[libx264 @ 0xee1fdc00] i8c dc,h,v,p: 47% 24% 22% 6%
[libx264 @ 0xee1fdc00] kb/s:4020.26
Success
time used: 9843
So what could be the reason for such low CPU load?
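This is not from the original thread, but one hedged way to probe the low CPU load is to pin the encoder thread count explicitly and let ffmpeg report its own timing; -threads and -benchmark are standard ffmpeg options, and the rest is the question's command simplified to one overlay with a null output.
# Force a fixed encoder thread count and print ffmpeg's own "bench:" timing line
ffmpeg -benchmark -y -i video.mp4 -ignore_loop 0 -i overlay.gif \
  -filter_complex "[0][1]overlay=x=mod((1080/4 + sin(t)*1080/4)\,1920):y=mod(t*300\,1080):shortest=1[res]" \
  -map "[res]" -preset ultrafast -threads 4 -f null /dev/null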
-
ffmpeg got stuck while trying to crossfade merge two videos
30 June 2017, by Jeflopo
I’m trying to do a crossfade merge (1 s) between two videos: an intro video (39 seconds long) and the main video. When I execute the command it starts working without throwing errors, but at some frame ffmpeg gets stuck.
I’ve read a lot of Q&As here on Stack Overflow, and the official docs, but I can’t solve this, so:
This is the command:
ffmpeg -i "inputs/intro.mp4" -i "inputs/240p.mp4" -an -filter_complex \
"[0:v]trim=start=0:end=38,setpts=PTS-STARTPTS[firstclip]; \
[0:v]trim=start=38:end=39,setpts=PTS-STARTPTS[fadeoutsrc]; \
[1:v]trim=start=1,setpts=PTS-STARTPTS[secondclip]; \
[1:v]trim=start=0:end=1,setpts=PTS-STARTPTS[fadeinsrc]; \
[fadeinsrc]format=pix_fmts=yuva420p, fade=t=in:st=0:d=1:alpha=1[fadein]; \
[fadeoutsrc]format=pix_fmts=yuva420p, fade=t=out:st=0:d=1:alpha=1[fadeout]; \
[fadein]fifo[fadeinfifo]; \
[fadeout]fifo[fadeoutfifo]; \
[fadeoutfifo][fadeinfifo]overlay[crossfade]; \
[firstclip][crossfade][secondclip]concat=n=3[output]; \
[0:a][1:a] acrossfade=d=1 [audio]" -vcodec libx264 -map "[output]" -map "[audio]" "outputs/240p.mp4"
Here’s the raw command (the exact command I used):
ffmpeg -i "inputs/intro.mp4" -i "inputs/240p.mp4" -an -filter_complex "[0:v]trim=start=0:end=38,setpts=PTS-STARTPTS[firstclip]; [0:v]trim=start=38:end=39,setpts=PTS-STARTPTS[fadeoutsrc]; [1:v]trim=start=1,setpts=PTS-STARTPTS[secondclip]; [1:v]trim=start=0:end=1,setpts=PTS-STARTPTS[fadeinsrc]; [fadeinsrc]format=pix_fmts=yuva420p, fade=t=in:st=0:d=1:alpha=1[fadein]; [fadeoutsrc]format=pix_fmts=yuva420p, fade=t=out:st=0:d=1:alpha=1[fadeout]; [fadein]fifo[fadeinfifo]; [fadeout]fifo[fadeoutfifo]; [fadeoutfifo][fadeinfifo]overlay[crossfade]; [firstclip][crossfade][secondclip]concat=n=3[output]; [0:a][1:a] acrossfade=d=1 [audio]" -vcodec libx264 -map "[output]" -map "[audio]" "outputs/240p.mp4"
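For readability, here is the same filtergraph re-assembled in a bash-style shell so that each chain can carry a comment. The filters, labels and timings are exactly the ones from the command above; the bash quoting is an assumption (the original was run from PowerShell), and -an is omitted here since the audio is mapped explicitly.
# Same graph as above, built up chain by chain so each step can be commented (bash)
FG="[0:v]trim=start=0:end=38,setpts=PTS-STARTPTS[firstclip];"     # intro body: 0-38 s
FG+="[0:v]trim=start=38:end=39,setpts=PTS-STARTPTS[fadeoutsrc];"  # intro's last second, to fade out
FG+="[1:v]trim=start=1,setpts=PTS-STARTPTS[secondclip];"          # main video from 1 s on
FG+="[1:v]trim=start=0:end=1,setpts=PTS-STARTPTS[fadeinsrc];"     # main video's first second, to fade in
FG+="[fadeinsrc]format=pix_fmts=yuva420p,fade=t=in:st=0:d=1:alpha=1[fadein];"    # alpha fade-in
FG+="[fadeoutsrc]format=pix_fmts=yuva420p,fade=t=out:st=0:d=1:alpha=1[fadeout];" # alpha fade-out
FG+="[fadein]fifo[fadeinfifo];[fadeout]fifo[fadeoutfifo];"        # buffer the two one-second pieces
FG+="[fadeoutfifo][fadeinfifo]overlay[crossfade];"                # overlay them: the crossfade second
FG+="[firstclip][crossfade][secondclip]concat=n=3[output];"       # intro body + crossfade + main body
FG+="[0:a][1:a]acrossfade=d=1[audio]"                             # 1 s audio crossfade
ffmpeg -i "inputs/intro.mp4" -i "inputs/240p.mp4" -filter_complex "$FG" \
  -vcodec libx264 -map "[output]" -map "[audio]" "outputs/240p.mp4"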
The "error" is reproducible with and without the -an and acrossfade filters.
This is the output:
PS C:\scripts\ffmpeg> ffmpeg -i "inputs/intro.mp4" -i "inputs/240p.mp4" -an -filter_complex "[0:v]trim=start=0:end=38,setpts=PTS-STARTPTS[firstclip]; [0:v]trim=start=38:end=39,setpts=PTS-STARTPTS[fadeoutsrc]; [1:v]trim=start=1,setpts=PTS-STARTPTS[secondclip]; [1:v]trim=start=0:end=1,setpts=PTS-STARTPTS[fadeinsrc]; [fadeinsrc]format=pix_fmts=yuva420p, fade=t=in:st=0:d=1:alpha=1[fadein]; [fadeoutsrc]format=pix_fmts=yuva420p, fade=t=out:st=0:d=1:alpha=1[fadeout]; [fadein]fifo[fadeinfifo]; [fadeout]fifo[fadeoutfifo]; [fadeoutfifo][fadeinfifo]overlay[crossfade]; [firstclip][crossfade][secondclip]concat=n=3[output]; [0:a][1:a] acrossfade=d=1 [audio]" -map "[output]" -map "[audio]" "outputs/240p.mp4"
ffmpeg version N-86669-gc1d1274 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 7.1.0 (GCC)
configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
libavutil 55. 67.100 / 55. 67.100
libavcodec 57.100.102 / 57.100.102
libavformat 57. 75.100 / 57. 75.100
libavdevice 57. 7.100 / 57. 7.100
libavfilter 6. 94.100 / 6. 94.100
libswscale 4. 7.101 / 4. 7.101
libswresample 2. 8.100 / 2. 8.100
libpostproc 54. 6.100 / 54. 6.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'inputs/intro.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.72.101
Duration: 00:06:24.45, start: 0.000000, bitrate: 491 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 426x240 [SAR 1:1 DAR 71:40], 353 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 130 kb/s (default)
Metadata:
handler_name : SoundHandler
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 'inputs/240p.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf56.40.101
Duration: 00:06:24.43, start: 0.000000, bitrate: 375 kb/s
Stream #1:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 426x240 [SAR 1:1 DAR 71:40], 243 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #1:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 125 kb/s (default)
Metadata:
handler_name : SoundHandler
Stream mapping:
Stream #0:0 (h264) -> trim
Stream #0:0 (h264) -> trim
Stream #0:1 (aac) -> acrossfade:crossfade0
Stream #1:0 (h264) -> trim
Stream #1:0 (h264) -> trim
Stream #1:1 (aac) -> acrossfade:crossfade1
concat -> Stream #0:0 (libx264)
acrossfade -> Stream #0:1 (aac)
Press [q] to stop, [?] for help
[libx264 @ 00000000026b2240] using SAR=1/1
[libx264 @ 00000000026b2240] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 00000000026b2240] profile High, level 2.1
[libx264 @ 00000000026b2240] 264 - core 152 r2851 ba24899 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=7 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'outputs/240p.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.75.100
Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 426x240 [SAR 1:1 DAR 71:40], q=-1--1, 25 fps, 12800 tbn, 25 tbc (default)
Metadata:
encoder : Lavc57.100.102 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
Stream #0:1: Audio: aac (LC) ([64][0][0][0] / 0x0040), 44100 Hz, stereo, fltp, 128 kb/s (default)
Metadata:
encoder : Lavc57.100.102 aac
frame=10369 fps=503 q=28.0 size= 24064kB time=00:06:55.68 bitrate= 474.2kbits/s speed=20.2x
At around frame 10000 it gets stuck... I waited for an hour but it stays stuck.
I’ve updated ffmpeg :
ffmpeg -version
ffmpeg version N-86669-gc1d1274 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 7.1.0 (GCC)
configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
libavutil 55. 67.100 / 55. 67.100
libavcodec 57.100.102 / 57.100.102
libavformat 57. 75.100 / 57. 75.100
libavdevice 57. 7.100 / 57. 7.100
libavfilter 6. 94.100 / 6. 94.100
libswscale 4. 7.101 / 4. 7.101
libswresample 2. 8.100 / 2. 8.100
libpostproc 54. 6.100 / 54. 6.100
I used these references:
-
ADD Image overlay to ffmpeg video stream
1 July 2017, by Chris
I am new to ffmpeg and want to add a HUD to the video stream, so I have a few questions:
- What file do I need to edit?
- What do I need to do to achieve this?
Thanks in advance. Also, I am very new to all of this, so I will need step-by-step instructions.
I saw other questions saying to add this:
ffmpeg -n -i video.mp4 -i logo.png -filter_complex "[0:v]setsar=sar=1[v];[v][1]blend=all_mode='overlay':all_opacity=0.7" -movflags +faststart tmb/video.mp4
But I don’t know where to put it. I entered it in the terminal and got this:
pi@raspberrypi:~ $ ffmpeg -n -i video.mp4 -i logo.png -filter_complex "[0:v]setsar=sar=1[v];[v][1]blend=all_mode='overlay':all_opacity=0.7" -movflags +faststart tmb/video.mp4
ffmpeg version N-86215-gb5228e4 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 4.9.2 (Raspbian 4.9.2-10)
configuration: --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree --extra-libs=-ldl
libavutil 55. 63.100 / 55. 63.100
libavcodec 57. 96.101 / 57. 96.101
libavformat 57. 72.101 / 57. 72.101
libavdevice 57. 7.100 / 57. 7.100
libavfilter 6. 90.100 / 6. 90.100
libswscale 4. 7.101 / 4. 7.101
libswresample 2. 8.100 / 2. 8.100
libpostproc 54. 6.100 / 54. 6.100
video.mp4: No such file or directory
I don’t understand what I am supposed to do with the video.mp4?
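For context (not from the original answers): video.mp4 and logo.png in that command are just input paths, so "video.mp4: No such file or directory" only means there was no file with that name in the directory the command was run from. Below is a minimal sketch with explicit, made-up paths, using the overlay filter commonly used to pin a logo/HUD at a fixed position.
# Pin logo.png in the top-left corner of an existing file; all paths are examples only
ffmpeg -i /home/pi/videos/input.mp4 -i /home/pi/images/logo.png \
  -filter_complex "[0:v][1:v]overlay=10:10" \
  -codec:a copy /home/pi/videos/output.mp4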
Here is the script that sends the video:
import subprocess
import shlex
import re
import os
import time
import urllib2
import platform
import json
import sys
import base64
import random
import argparse
parser = argparse.ArgumentParser(description='robot control')
parser.add_argument('camera_id')
parser.add_argument('video_device_number', default=0, type=int)
parser.add_argument('--kbps', default=450, type=int)
parser.add_argument('--brightness', default=75, type=int, help='camera brightness')
parser.add_argument('--contrast', default=75, type=int, help='camera contrast')
parser.add_argument('--saturation', default=15, type=int, help='camera saturation')
parser.add_argument('--rotate180', default=False, type=bool, help='rotate image 180 degrees')
parser.add_argument('--env', default="prod")
args = parser.parse_args()
server = "runmyrobot.com"
#server = "52.52.213.92"
from socketIO_client import SocketIO, LoggingNamespace
# enable raspicam driver in case a raspicam is being used
os.system("sudo modprobe bcm2835-v4l2")
if args.env == "dev":
print "using dev port 8122"
port = 8122
elif args.env == "prod":
print "using prod port 8022"
port = 8022
else:
print "invalid environment"
sys.exit(0)
print "initializing socket io"
print "server:", server
print "port:", port
socketIO = SocketIO(server, port, LoggingNamespace)
print "finished initializing socket io"
#ffmpeg -f qtkit -i 0 -f mpeg1video -b 400k -r 30 -s 320x240 http://52.8.81.124:8082/hello/320/240/
def onHandleCameraCommand(*args):
#thread.start_new_thread(handle_command, args)
print args
socketIO.on('command_to_camera', onHandleCameraCommand)
def onHandleTakeSnapshotCommand(*args):
print "taking snapshot"
inputDeviceID = streamProcessDict['device_answer']
snapShot(platform.system(), inputDeviceID)
with open ("snapshot.jpg", 'rb') as f:
data = f.read()
print "emit"
socketIO.emit('snapshot', {'image':base64.b64encode(data)})
socketIO.on('take_snapshot_command', onHandleTakeSnapshotCommand)
def randomSleep():
"""A short wait is good for quick recovery, but sometimes a longer delay is needed or it will just keep trying and failing short intervals, like because the system thinks the port is still in use and every retry makes the system think it's still in use. So, this has a high likelihood of picking a short interval, but will pick a long one sometimes."""
timeToWait = random.choice((0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 5))
print "sleeping", timeToWait
time.sleep(timeToWait)
def getVideoPort():
url = 'http://%s/get_video_port/%s' % (server, cameraIDAnswer)
for retryNumber in range(2000):
try:
print "GET", url
response = urllib2.urlopen(url).read()
break
except:
print "could not open url ", url
time.sleep(2)
return json.loads(response)['mpeg_stream_port']
def getAudioPort():
url = 'http://%s/get_audio_port/%s' % (server, cameraIDAnswer)
for retryNumber in range(2000):
try:
print "GET", url
response = urllib2.urlopen(url).read()
break
except:
print "could not open url ", url
time.sleep(2)
return json.loads(response)['audio_stream_port']
def runFfmpeg(commandLine):
print commandLine
ffmpegProcess = subprocess.Popen(shlex.split(commandLine))
print "command started"
return ffmpegProcess
def handleDarwin(deviceNumber, videoPort, audioPort):
p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "qtkit", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
print err
deviceAnswer = raw_input("Enter the number of the camera device for your robot from the list above: ")
commandLine = 'ffmpeg -f qtkit -i %s -f mpeg1video -b 400k -r 30 -s 320x240 http://%s:%s/hello/320/240/' % (deviceAnswer, server, videoPort)
process = runFfmpeg(commandLine)
return {'process': process, 'device_answer': deviceAnswer}
def handleLinux(deviceNumber, videoPort, audioPort):
print "sleeping to give the camera time to start working"
randomSleep()
print "finished sleeping"
#p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "qtkit", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
#out, err = p.communicate()
#print err
os.system("v4l2-ctl -c brightness={brightness} -c contrast={contrast} -c saturation={saturation}".format(brightness=args.brightness,
contrast=args.contrast,
saturation=args.saturation))
if deviceNumber is None:
deviceAnswer = raw_input("Enter the number of the camera device for your robot: ")
else:
deviceAnswer = str(deviceNumber)
#commandLine = '/usr/local/bin/ffmpeg -s 320x240 -f video4linux2 -i /dev/video%s -f mpeg1video -b 1k -r 20 http://runmyrobot.com:%s/hello/320/240/' % (deviceAnswer, videoPort)
#commandLine = '/usr/local/bin/ffmpeg -s 640x480 -f video4linux2 -i /dev/video%s -f mpeg1video -b 150k -r 20 http://%s:%s/hello/640/480/' % (deviceAnswer, server, videoPort)
# For new JSMpeg
#commandLine = '/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video%s -f mpegts -codec:v mpeg1video -s 640x480 -b:v 250k -bf 0 http://%s:%s/hello/640/480/' % (deviceAnswer, server, videoPort) # ClawDaddy
#commandLine = '/usr/local/bin/ffmpeg -s 1280x720 -f video4linux2 -i /dev/video%s -f mpeg1video -b 1k -r 20 http://runmyrobot.com:%s/hello/1280/720/' % (deviceAnswer, videoPort)
if args.rotate180:
rotationOption = "-vf transpose=2,transpose=2"
else:
rotationOption = ""
# video with audio
videoCommandLine = '/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video%s %s -f mpegts -codec:v mpeg1video -s 640x480 -b:v %dk -bf 0 -muxdelay 0.001 http://%s:%s/hello/640/480/' % (deviceAnswer, rotationOption, args.kbps, server, videoPort)
audioCommandLine = '/usr/local/bin/ffmpeg -f alsa -ar 44100 -ac 1 -i hw:1 -f mpegts -codec:a mp2 -b:a 32k -muxdelay 0.001 http://%s:%s/hello/640/480/' % (server, audioPort)
print videoCommandLine
print audioCommandLine
videoProcess = runFfmpeg(videoCommandLine)
audioProcess = runFfmpeg(audioCommandLine)
return {'video_process': videoProcess, 'audioProcess': audioProcess, 'device_answer': deviceAnswer}
def handleWindows(deviceNumber, videoPort):
p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "dshow", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
lines = err.split('\n')
count = 0
devices = []
for line in lines:
#if "] \"" in line:
# print "line:", line
m = re.search('.*\\"(.*)\\"', line)
if m != None:
#print line
if m.group(1)[0:1] != '@':
print count, m.group(1)
devices.append(m.group(1))
count += 1
if deviceNumber is None:
deviceAnswer = raw_input("Enter the number of the camera device for your robot from the list above: ")
else:
deviceAnswer = str(deviceNumber)
device = devices[int(deviceAnswer)]
commandLine = 'ffmpeg -s 640x480 -f dshow -i video="%s" -f mpegts -codec:v mpeg1video -b 200k -r 20 http://%s:%s/hello/640/480/' % (device, server, videoPort)
process = runFfmpeg(commandLine)
return {'process': process, 'device_answer': device}
def handleWindowsScreenCapture(deviceNumber, videoPort):
p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "dshow", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
lines = err.split('\n')
count = 0
devices = []
for line in lines:
#if "] \"" in line:
# print "line:", line
m = re.search('.*\\"(.*)\\"', line)
if m != None:
#print line
if m.group(1)[0:1] != '@':
print count, m.group(1)
devices.append(m.group(1))
count += 1
if deviceNumber is None:
deviceAnswer = raw_input("Enter the number of the camera device for your robot from the list above: ")
else:
deviceAnswer = str(deviceNumber)
device = devices[int(deviceAnswer)]
commandLine = 'ffmpeg -f dshow -i video="screen-capture-recorder" -vf "scale=640:480" -f mpeg1video -b 50k -r 20 http://%s:%s/hello/640/480/' % (server, videoPort)
print "command line:", commandLine
process = runFfmpeg(commandLine)
return {'process': process, 'device_answer': device}
def snapShot(operatingSystem, inputDeviceID, filename="snapshot.jpg"):
try:
os.remove('snapshot.jpg')
except:
print "did not remove file"
commandLineDict = {
'Darwin': 'ffmpeg -y -f qtkit -i %s -vframes 1 %s' % (inputDeviceID, filename),
'Linux': '/usr/local/bin/ffmpeg -y -f video4linux2 -i /dev/video%s -vframes 1 -q:v 1000 -vf scale=320:240 %s' % (inputDeviceID, filename),
'Windows': 'ffmpeg -y -s 320x240 -f dshow -i video="%s" -vframes 1 %s' % (inputDeviceID, filename)}
print commandLineDict[operatingSystem]
os.system(commandLineDict[operatingSystem])
def startVideoCapture():
videoPort = getVideoPort()
audioPort = getAudioPort()
print "video port:", videoPort
print "audio port:", audioPort
#if len(sys.argv) >= 3:
# deviceNumber = sys.argv[2]
#else:
# deviceNumber = None
deviceNumber = args.video_device_number
result = None
if platform.system() == 'Darwin':
result = handleDarwin(deviceNumber, videoPort, audioPort)
elif platform.system() == 'Linux':
result = handleLinux(deviceNumber, videoPort, audioPort)
elif platform.system() == 'Windows':
#result = handleWindowsScreenCapture(deviceNumber, videoPort)
result = handleWindows(deviceNumber, videoPort, audioPort)
else:
print "unknown platform", platform.system()
return result
def timeInMilliseconds():
return int(round(time.time() * 1000))
def main():
print "main"
streamProcessDict = None
twitterSnapCount = 0
while True:
socketIO.emit('send_video_status', {'send_video_process_exists': True,
'camera_id':cameraIDAnswer})
if streamProcessDict is not None:
print "stopping previously running ffmpeg (needs to happen if this is not the first iteration)"
streamProcessDict['process'].kill()
print "starting process just to get device result" # this should be a separate function so you don't have to do this
streamProcessDict = startVideoCapture()
inputDeviceID = streamProcessDict['device_answer']
print "stopping video capture"
streamProcessDict['process'].kill()
#print "sleeping"
#time.sleep(3)
#frameCount = int(round(time.time() * 1000))
videoWithSnapshots = False
while videoWithSnapshots:
frameCount = timeInMilliseconds()
print "taking single frame image"
snapShot(platform.system(), inputDeviceID, filename="single_frame_image.jpg")
with open ("single_frame_image.jpg", 'rb') as f:
# every so many frames, post a snapshot to twitter
#if frameCount % 450 == 0:
if frameCount % 6000 == 0:
data = f.read()
print "emit"
socketIO.emit('snapshot', {'frame_count':frameCount, 'image':base64.b64encode(data)})
data = f.read()
print "emit"
socketIO.emit('single_frame_image', {'frame_count':frameCount, 'image':base64.b64encode(data)})
time.sleep(0)
#frameCount += 1
if False:
if platform.system() != 'Windows':
print "taking snapshot"
snapShot(platform.system(), inputDeviceID)
with open ("snapshot.jpg", 'rb') as f:
data = f.read()
print "emit"
# skip sending the first image because it's mostly black, maybe completely black
#todo: should find out why this black image happens
if twitterSnapCount > 0:
socketIO.emit('snapshot', {'image':base64.b64encode(data)})
print "starting video capture"
streamProcessDict = startVideoCapture()
# This loop counts out a delay that occurs between twitter snapshots.
# Every 50 seconds, it kills and restarts ffmpeg.
# Every 40 seconds, it sends a signal to the server indicating status of processes.
period = 2*60*60 # period in seconds between snaps
for count in range(period):
time.sleep(1)
if count % 20 == 0:
socketIO.emit('send_video_status', {'send_video_process_exists': True,
'camera_id':cameraIDAnswer})
if count % 40 == 30:
print "stopping video capture just in case it has reached a state where it's looping forever, not sending video, and not dying as a process, which can happen"
streamProcessDict['video_process'].kill()
streamProcessDict['audio_process'].kill()
time.sleep(1)
if count % 80 == 75:
print "send status about this process and its child process ffmpeg"
ffmpegProcessExists = streamProcessDict['process'].poll() is None
socketIO.emit('send_video_status', {'send_video_process_exists': True,
'ffmpeg_process_exists': ffmpegProcessExists,
'camera_id':cameraIDAnswer})
#if count % 190 == 180:
# print "reboot system in case the webcam is not working"
# os.system("sudo reboot")
# if the video stream process dies, restart it
if streamProcessDict['video_process'].poll() is not None or streamProcessDict['audio_process'].poll():
# wait before trying to start ffmpeg
print "ffmpeg process is dead, waiting before trying to restart"
randomSleep()
streamProcessDict = startVideoCapture()
twitterSnapCount += 1
if __name__ == "__main__":
#if len(sys.argv) > 1:
# cameraIDAnswer = sys.argv[1]
#else:
# cameraIDAnswer = raw_input("Enter the Camera ID for your robot, you can get it by pointing a browser to the runmyrobot server %s: " % server)
cameraIDAnswer = args.camera_id
main()
ERROR:
ffmpeg -n -f mpegts -i http://54.183.232.63:12221 -i logo.png -filter_complex "[0:v]setsar=sar=1[v];[v][1]blend=all_mode='overlay':all_opacity=0.7" -movflags +faststart tmb/video.mp4
ffmpeg version N-86215-gb5228e4 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 4.9.2 (Raspbian 4.9.2-10)
configuration: --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree --extra-libs=-ldl
libavutil 55. 63.100 / 55. 63.100
libavcodec 57. 96.101 / 57. 96.101
libavformat 57. 72.101 / 57. 72.101
libavdevice 57. 7.100 / 57. 7.100
libavfilter 6. 90.100 / 6. 90.100
libswscale 4. 7.101 / 4. 7.101
libswresample 2. 8.100 / 2. 8.100
libpostproc 54. 6.100 / 54. 6.100
[mpegts @ 0x1a57390] Could not detect TS packet size, defaulting to non-FEC/DVHS
http://54.183.232.63:12221: could not find codec parameters