
Other articles (36)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document will automatically be attached; objet, the type of object to which (...)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If necessary, contact your MédiaSpip administrator to find out.
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...)
On other sites (5187)
-
Trim video and extract thumbnail simultaneously
12 September 2019, by Robin
I am using the following two ffmpeg commands.
Trim video:
ffmpeg -i input.mp4 -ss 0 -t 100 -c copy -f mp4 output.mp4
Create thumbnail:
ffmpeg -i input.mp4 -ss 1 -vframes 1 -f mjpeg output
Both work as expected and are super fast, but when I combine the two like so:
ffmpeg -i input.mp4 -ss 0 -t 100 -c copy -f mp4 output.mp4 -i input.mp4 -ss 1 -vframes 1 -f mjpeg output.jpeg
ffmpeg runs a re-encode rather than using -c copy. I really would like to execute both of these functions in a single ffmpeg command. Below is the terminal output while the thumbnail is being generated, which is where the re-encoding appears to happen (a possible workaround is sketched after the log).
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.83.100
Stream #1:0(und): Video: mjpeg, yuvj420p(pc), 1280x720 [SAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc (default)
Metadata:
creation_time : 1970-01-01T00:00:00.000000Z
handler_name : VideoHandler
encoder : Lavc57.107.100 mjpeg
Side data:
cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
frame= 324 fps=0.0 q=-1.0 q=6.3 size= 1792kB time=00:00:12.94 bitrate=1133.7kbits/s dup=0 drop=285 speed=2
frame= 661 fps=660 q=-1.0 q=6.3 size= 4096kB time=00:00:26.43 bitrate=1269.5kbits/s dup=0 drop=622 speed=2
frame= 1013 fps=674 q=-1.0 q=6.3 size= 6912kB time=00:00:40.51 bitrate=1397.7kbits/s dup=0 drop=974 speed=
frame= 1356 fps=677 q=-1.0 q=6.3 size= 8704kB time=00:00:54.22 bitrate=1314.9kbits/s dup=0 drop=1317 speed=
frame= 1694 fps=677 q=-1.0 q=6.3 size= 11264kB time=00:01:07.75 bitrate=1361.9kbits/s dup=0 drop=1655 speed=
frame= 2031 fps=676 q=-1.0 q=6.3 size= 14080kB time=00:01:21.23 bitrate=1419.8kbits/s dup=0 drop=1992 speed=
frame= 2360 fps=673 q=-1.0 q=6.3 size= 16384kB time=00:01:34.37 bitrate=1422.1kbits/s dup=0 drop=2321 speed=
frame= 2511 fps=672 q=-1.0 Lq=6.3 size= 17695kB time=00:01:40.40 bitrate=1443.8kbits/s dup=0 drop=2474 speed=26.9x
video:13036kB audio:4684kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
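A possible way to get both outputs from one command without re-encoding the trimmed file is to give ffmpeg the same file twice with an input-side seek on the second copy, and map each output explicitly. This is only a sketch of that idea; the MP4 output still uses -c copy and only the single MJPEG frame is re-encoded:

# Sketch: fast input-side seeks, explicit -map per output, stream copy for the trim.
ffmpeg -ss 0 -t 100 -i input.mp4 -ss 1 -i input.mp4 \
 -map 0 -c copy -f mp4 output.mp4 \
 -map 1:v -frames:v 1 -f mjpeg output.jpeg

-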
ffmpeg stream chrome kiosk mode ubuntu 16.04 server
15 February 2021, by Raul
I have a weird out-of-sync issue while using ffmpeg to stream a Chrome browser to YouTube Live from an Ubuntu 16.04 server.



Issue: the video streamed to YouTube has audio and video out of sync, sometimes by as much as 3 seconds.



Current flow:



1) Start pulseaudio. We are using something like this to start it:



pulseaudio --start -vvv --disallow-exit --log-target=syslog --high-priority --exit-idle-time=-1 --daemonize




2) Start Xvfb:



Xvfb :0 -ac -screen 0 1920x1080x24




3) Start Chrome (Linux) in kiosk mode:



google-chrome --kiosk --disable-gpu --incognito --no-first-run --disable-java --disable-plugins --disable-translate --disk-cache-size=$((1024 * 1024)) --disk-cache-dir=/tmp/chrome/ --user-data-dir=/tmp/chrome/ --force-device-scale-factor=1 --window-size=1920,1080 --window-position=0,0 LOCATION_URL




4) Start ffmpeg:



ffmpeg -y \
 -thread_queue_size 8192 -rtbufsize 250M -f x11grab -video_size 1920x1080 -framerate 24 -i :0 \
 -thread_queue_size 8192 -channel_layout stereo -f alsa -i pulse \
 -c:v libx264 -pix_fmt yuv420p -c:v libx264 -g 48 -crf 24 -filter:v fps=24 -preset ultrafast -tune zerolatency \
 -c:a aac -strict -2 -channel_layout stereo -ab 96k -ac 2 -flags +global_header \
 -f flv YOUTUBE_LIVE_STREAMING_RTMP




Note: this is running on an Amazon EC2 instance, meaning there is no sound card, so ALSA and pulseaudio create a dummy audio card. However, the latency does not come from there. Logs:



Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Adjust latency mode enabled, configuring sink latency to half of overall latency.
Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Requested latency=23.22 ms, Received latency=23.22 ms
Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Final latency 69.66 ms = 23.22 ms + 2*11.61 ms + 23.22 ms




At this point, here's what we observed:



- If we start ffmpeg exactly after issuing the command to start Chrome, we see DTS errors from ffmpeg. Audio is out of sync with the video and is 3-5 seconds AHEAD, and the offset stays the same for the full duration of the stream.
- If we start ffmpeg after around 10 seconds, audio and video are almost in sync. We then manually added -itsoffset -0.125 to the ffmpeg command (placed as sketched below) and everything is perfect.

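For reference, -itsoffset is an input option and applies to the input that follows it, so one possible placement is in front of the pulse/ALSA input of the command from step 4 (the value is just the one mentioned above and would need tuning):

# Sketch only: shift the audio input timestamps relative to the X11 grab.
ffmpeg -y \
 -thread_queue_size 8192 -rtbufsize 250M -f x11grab -video_size 1920x1080 -framerate 24 -i :0 \
 -itsoffset -0.125 -thread_queue_size 8192 -channel_layout stereo -f alsa -i pulse \
 ... (remaining encoding and output options unchanged)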
Questions:

- Why would ffmpeg have so much lag if it's started right after Chrome?
- Is starting ffmpeg after 10 s (or X seconds) the expected behavior? That is, is it because the system needs to wait for the audio/video signals to be "ready"?
- Is there a way to reliably know when Chrome is fully ready and only then start ffmpeg? We found it sometimes takes 5 s, sometimes 10, depending on the URL we load. (One idea is sketched below.)
- Besides the DTS errors that ffmpeg throws, is there any other way to know if audio/video is out of sync? Sometimes we have a delay of between 0.5 and 1 s, but ffmpeg does not report anything, and a restart is required to "re-balance" the audio/video inputs and get them back in sync.
- Could pulseaudio be the problem in this scenario?
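On the "when is Chrome ready" question, one approach is to poll pulseaudio until Chrome has actually opened a playback stream on the dummy sink and only then launch ffmpeg. This is a sketch that assumes Chrome's sink input shows up with an application.name of "Google Chrome" or "Chromium"; check pactl list sink-inputs on the instance for the real value:

# Hypothetical readiness check: wait for Chrome's sink input to appear,
# then allow a small extra margin before starting the capture.
until pactl list sink-inputs 2>/dev/null | grep -qE 'application\.name = "(Google Chrome|Chromium)"'; do
 sleep 0.5
done
sleep 1
ffmpeg -y ... # same ffmpeg command as in step 4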
Thank you

UPDATE Dec 20

We were able to do some tricks to force Chrome to start the audio on page load, which makes it connect to pulseaudio right away. Doing that, plus adding a 3 s delay before starting ffmpeg, there is no more delay when ffmpeg starts.
However, our app is a WebRTC app, and we see an even STRANGER thing happening: if we start the page with no webcam/audio, then once the webcam/audio is enabled, ffmpeg (while showing no errors) runs with a delay of about 2 s. As we keep talking, within at most 30 s that delay is GONE.

So the new questions are:

- Besides the DTS errors that ffmpeg throws, is there any other way to know if audio/video is out of sync? Sometimes we have a delay of between 0.5 and 1 s, but ffmpeg does not report anything.
- What could cause the audio/video to be out of sync initially and then catch up?

-
Mix original clip audio with audio of an overlay clip
28 October 2019, by Mr. Messy
I have a video clip on which I want to overlay commentary videos (someone talking in a bubble).
I have 3 commentary videos I need to insert at specific times. The video rendering is working well, but I can't seem to add the audio tracks.
I tried both amix and amerge, but I got the same issue. When I added "[0:1][2:1]amerge;" I get the following and the process freezes.
The full ffmpeg command is as follows (one possible audio mapping is sketched after it):
ffmpeg -y -i story.mp4
-loop 1 -i mask.png
-itsoffset 10 -i commentary1.mp4
-itsoffset 22 -i commentary2.mp4
-itsoffset 34 -i commentary3.mp4
-filter_complex "[0:v]scale=w=1/2*in_w:h=1/2*in_h[vid1],
[2:v]crop=w=480:h=480:x=0:y=120[vid2in],
[1:v]fifo[2af],[2af]alphaextract[alf2],[vid2in][alf2]alphamerge[vid2alf],
[vid2alf]format=yuva420p,fade=in:st=10:d=0.5:alpha=1,fade=out:st=22.7294:d=0.5:alpha=1[vid2fade],
[vid2fade]scale=w=-1:h=160[vid2],
[vid1][vid2]overlay=790:10:enable='between(t\,10,21)'[out2],
[3:v]crop=w=480:h=480:x=0:y=120[vid3in],
[1:v]fifo[3af],[3af]alphaextract[alf3],[vid3in][alf3]alphamerge[vid3alf],
[vid3alf]format=yuva420p,fade=in:st=22:d=0.5:alpha=1,fade=out:st=32.768733:d=0.5:alpha=1[vid3fade],
[vid3fade]scale=w=-1:h=160[vid3],
[out2][vid3]overlay=790:10:enable='between(t\,22,33)'[out3],
[4:v]crop=w=480:h=480:x=0:y=120[vid4in],
[1:v]fifo[4af],[4af]alphaextract[alf4],[vid4in][alf4]alphamerge[vid4alf],
[vid4alf]format=yuva420p,fade=in:st=34:d=0.5:alpha=1,fade=out:st=44.598189:d=0.5:alpha=1[vid4fade],
[vid4fade]scale=w=-1:h=160[vid4],
[out3][vid4]overlay=790:10:enable='between(t\,34,45)'[out4]"
-map [out4] -pix_fmt yuv420p -c:v libx264 -crf 18
final_video.mp4

(mask.png is a circle on a transparent image that crops the video to a bubble.)
Thank you for your help.
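On the audio side, one direction to try (only a sketch, not verified against this exact project) is to mix the main clip's audio with the offset commentary audio inside the same filter_complex and map the result explicitly. It assumes each commentary file actually contains an audio stream and that the -itsoffset values delay those audio streams as intended:

# Sketch: add an audio branch to the existing filter graph and map it.
# [0:a] is the main clip, [2:a]/[3:a]/[4:a] are the offset commentary tracks.
-filter_complex "...existing video chains..., \
 [0:a][2:a][3:a][4:a]amix=inputs=4:duration=first:dropout_transition=0[aout]" \
-map "[out4]" -map "[aout]" -c:v libx264 -crf 18 -c:a aac -pix_fmt yuv420p final_video.mp4

Note that amix scales its inputs down to avoid clipping, so a volume filter on [aout] may be needed afterwards.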