
Media (91)
-
Spitfire Parade - Crisis
15 May 2011
Updated: September 2011
Language: English
Type: Audio
-
Wired NextMusic
14 May 2011
Updated: February 2012
Language: English
Type: Video
-
Video d’abeille en portrait
14 May 2011
Updated: February 2012
Language: French
Type: Video
-
Sintel MP4 Surround 5.1 Full
13 May 2011
Updated: February 2012
Language: English
Type: Video
-
Carte de Schillerkiez
13 May 2011
Updated: September 2011
Language: English
Type: Text
-
Publier une image simplement
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (88)
-
Personalize by adding your logo, banner or background image
5 September 2013 - Some themes support three customization elements: adding a logo; adding a banner; adding a background image;
-
Writing a news item
21 June 2013 - Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the news item creation form.
News item creation form: For a news-type document, the default fields are: publication date (customize the publication date) (...)
-
What is an editorial
21 June 2013 - Write your point of view in an article. It will be filed in a section set up for this purpose.
An editorial is a text-only article. Its purpose is to gather points of view in a dedicated section. A single editorial is featured on the home page. To read previous ones, see the dedicated section.
You can customize the editorial creation form.
Editorial creation form: For an editorial-type document, the (...)
On other sites (10370)
-
FFmpeg with Nvidia GPU - full HW transcode with 50i to 50p deinterlacing
5 January 2018, by Jernej Stopinšek

I'm trying to do a full hardware transcode of a UDP stream to HLS with 50i to 50p deinterlacing. I'm using ffmpeg and an Nvidia GPU. Since HLS requires deinterlacing, I would like to deinterlace the interlaced source stream and preserve as much smooth motion and picture quality as possible.

My hardware, software and driver info:
GPU: Tesla P100-PCIE-12GB
Nvidia Driver Version: 387.26
Cuda compilation tools, release 9.1, V9.1.85
FFmpeg from git on 20171218

ffmpeg version N-89520-g3f88744067 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 6.3.0 (Debian 6.3.0-18) 20170516
configuration: --enable-gpl --enable-cuda-sdk --enable-libx264 --enable-libx265 --enable-nonfree --enable-libnpp --enable-opengl --enable-opencl --enable-libfreetype --enable-openssl --enable-libzvbi --enable-libfontconfig --enable-libfreetype --enable-libfribidi --extra-cflags=-I/usr/local/cuda/include --extra-ldflags=-L/usr/local/cuda/lib64 --arch=x86_64
libavutil      56.  6.100 / 56.  6.100
libavcodec     58.  8.100 / 58.  8.100
libavformat    58.  3.100 / 58.  3.100
libavdevice    58.  0.100 / 58.  0.100
libavfilter     7.  7.100 /  7.  7.100
libswscale      5.  0.101 /  5.  0.101
libswresample   3.  0.101 /  3.  0.101
libpostproc    55.  0.100 / 55.  0.100

Input stream info:
ffmpeg -t 00:05:00 -i udp://xxx.xxx.xxx.xxx:xxxx -map 0:0 -vf idet -c rawvideo -y -f rawvideo /dev/null
Input #0, mpegts, from 'udp://xxx.xxx.xxx.xxx:xxxx':
  Duration: N/A, start: 49634.159411, bitrate: N/A
  Program xxxxx
    Metadata:
      service_name    :
      service_provider:
    Stream #0:0[0x44d]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709, top first), 1920x1080 [SAR 1:1 DAR 16:9], 25 fps, 50 tbr, 90k tbn, 50 tbc
    Stream #0:1[0x19de]: Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, s16p, 192 kb/s
    Stream #0:2[0x19e1]: Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
Output #0, rawvideo, to '/dev/null':
  Metadata:
    encoder         : Lavf58.3.100
    Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 622080 kb/s, 25 fps, 25 tbn, 25 tbc
    Metadata:
      encoder         : Lavc58.8.100 rawvideo
frame= 7538 fps= 25 q=-0.0 Lsize=22896675kB time=00:05:01.52 bitrate=622080.0kbits/s dup=38 drop=0 speed=1.02x
video:22896675kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000000%
[Parsed_idet_0 @ 0x56370b3c5080] Repeated Fields: Neither: 7458 Top: 24 Bottom: 18
[Parsed_idet_0 @ 0x56370b3c5080] Single frame detection: TFF: 281 BFF: 13 Progressive: 5639 Undetermined: 1567
[Parsed_idet_0 @ 0x56370b3c5080] Multi frame detection: TFF: 380 BFF: 0 Progressive: 7120 Undetermined: 0
This is my command for adaptive hardware deinterlacing. It gives great picture results, but the sound is out of sync.
ffmpeg -y -err_detect ignore_err -loglevel debug -vsync -1 -hwaccel cuvid -hwaccel_device 1 -c:v h264_cuvid -deint adaptive -r:v 50 -gpu:v 1 -i "udp://xxx.xxx.xxx.xxx:xxxx=?overrun_nonfatal=1&fifo_size=84450&buffer_size=33554432" -map 0:0 -map 0:1 -c:a aac -b:a 196k -c:v h264_nvenc -flags -global_header+cgop -gpu:v 1 -g:v 50 -bf:v 4 -coder:v cabac -b_adapt:v false -b:v 5184000 -minrate:v 5184000 -maxrate:v 5184000 -bufsize:v 2488320 -rc:v cbr_hq -2pass:v true -rc-lookahead:v 25 -no-scenecut:v 1 -profile:v high -preset:v slow -color_range:v 1 -color_trc:v 1 -color_primaries:v 1 -colorspace:v 1 -f hls -hls_time 5 -hls_list_size 3 -start_number 0 -hls_flags delete_segments /srv/hls/program_01/1080p/index.m3u8
If I add the option "-drop_second_field 1" to h264_cuvid, remove -r:v 50 from the input and move it to h264_nvenc, then the transcoded stream has synced audio, but I think I'm losing quality due to the drop_second_field option.
ffmpeg -y -err_detect ignore_err -loglevel debug -vsync -1 -hwaccel cuvid -hwaccel_device 1 -c:v h264_cuvid -deint adaptive -drop_second_field 1 -gpu:v 1 -i "udp://xxx.xxx.xxx.xxx:xxxx=?overrun_nonfatal=1&fifo_size=84450&buffer_size=33554432" -map 0:0 -map 0:1 -c:a aac -b:a 196k -c:v h264_nvenc -flags -global_header+cgop -gpu:v 1 -g:v 50 -r:v 50 -bf:v 4 -coder:v cabac -b_adapt:v false -b:v 5184000 -minrate:v 5184000 -maxrate:v 5184000 -bufsize:v 2488320 -rc:v cbr_hq -2pass:v true -rc-lookahead:v 25 -no-scenecut:v 1 -profile:v high -preset:v slow -color_range:v 1 -color_trc:v 1 -color_primaries:v 1 -colorspace:v 1 -f hls -hls_time 5 -hls_list_size 3 -start_number 0 -hls_flags delete_segments /srv/hls/program_01/1080p/index.m3u8
Could someone please point me in the right direction on how to properly deinterlace with cuvid with the least possible loss of quality?
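One possible direction (not from the original post, so treat it as a sketch): newer FFmpeg builds ship a CUDA deinterlacer, yadif_cuda, whose send_field mode outputs one frame per field, giving 50p from 50i without -drop_second_field and without forcing -r on the input. The command below assumes such a build (the N-89520 build above predates the filter) and reuses the bitrates and HLS settings from the question only as placeholders.

# Sketch: full-rate GPU deinterlacing with yadif_cuda (assumes a build that includes it).
# mode=send_field emits one output frame per field, so 25 fps interlaced becomes 50p.
ffmpeg -y -hwaccel cuda -hwaccel_output_format cuda -hwaccel_device 1 \
  -i "udp://xxx.xxx.xxx.xxx:xxxx?overrun_nonfatal=1&fifo_size=84450" \
  -map 0:0 -map 0:1 \
  -vf "yadif_cuda=mode=send_field" \
  -c:v h264_nvenc -gpu:v 1 -preset:v slow -profile:v high \
  -b:v 5184000 -maxrate:v 5184000 -bufsize:v 2488320 -g:v 50 \
  -c:a aac -b:a 196k \
  -f hls -hls_time 5 -hls_list_size 3 -hls_flags delete_segments \
  /srv/hls/program_01/1080p/index.m3u8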
-
ffmpeg 4 : Using the stream_loop parameter to loop the audio during a video ends up with an infinite loop
17 June 2020, by JarsOfJam-Scheduler

Summary

1. Context
2. The software I use
3. The problem
4. Results
4.1. Actual Results
4.2. Expected Results
5. What did I try to fix the bug?
6. How to reproduce this bug: minimal and testable example with the provided required data
7. The question
8. Sources
Context

I want to set a WAV audio file as the background sound of a WEBM video. The video can be shorter or longer than the audio. At the moment I add the audio over the video, I don't know the length of either stream. The audio must repeat until the video ends (the audio can be truncated if the video ends before the last repetition of the audio finishes).

The software I use

I use ffmpeg version 4.2.2-1ubuntu1 18.04.sav0.

The problem

ffmpeg seems to enter an infinite loop when it processes the mixing of the audio and the video. Also, the length of the output file being generated (which contains both video and audio) equals the length of the audio instead of the length of the video.

The problem seems to be triggered by this command line:

ffmpeg -i directory_1/video.webm -stream_loop -1 -fflags +shortest -max_interleave_delta 50000 -i directory_2/audio.wav directory_3/video_and_audio.webm
Results

Actual Results

Three things:

- The infinite loop of the ffmpeg process: I must stop the ffmpeg process manually.
- The output video file with music (still being generated, but written anyway): it contains both audio and video, but its length equals the length of the audio instead of the length of the video.
- The following output logs:
ffmpeg version 4.2.2-1ubuntu1 18.04.sav0 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 7 (Ubuntu 7.5.0-3ubuntu1 18.04)
  configuration: --prefix=/usr --extra-version='1ubuntu1 18.04.sav0' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Input #0, matroska,webm, from 'youtubed/my_youtube_video.webm':
  Metadata:
    encoder         : Chrome
  Duration: N/A, start: 0.000000, bitrate: N/A
    Stream #0:0(eng): Video: vp8, yuv420p(progressive), 3200x1608, SAR 1:1 DAR 400:201, 1k tbr, 1k tbn, 1k tbc (default)
    Metadata:
      alpha_mode      : 1
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, wav, from 'tmp_music/original_music.wav':
  Duration: 00:00:11.78, bitrate: 1411 kb/s
    Stream #1:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (vp8 (native) -> vp9 (libvpx-vp9))
  Stream #1:0 -> #0:1 (pcm_s16le (native) -> opus (libopus))
Press [q] to stop, [?] for help
[libvpx-vp9 @ 0x5645268aed80] v1.8.2
[libopus @ 0x5645268b09c0] No bit rate set. Defaulting to 96000 bps.
Output #0, webm, to 'youtubed/my_youtube_video_with_music.webm':
  Metadata:
    encoder         : Lavf58.29.100
    Stream #0:0(eng): Video: vp9 (libvpx-vp9), yuv420p(progressive), 3200x1608 [SAR 1:1 DAR 400:201], q=-1--1, 200 kb/s, 1k fps, 1k tbn, 1k tbc (default)
    Metadata:
      alpha_mode      : 1
      encoder         : Lavc58.54.100 libvpx-vp9
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
    Stream #0:1: Audio: opus (libopus), 48000 Hz, stereo, s16, 96 kb/s
    Metadata:
      encoder         : Lavc58.54.100 libopus




Expected Results

- No infinite loop during the ffmpeg process.
- Concerning the output logs, I don't know what they should look like.
- The output file with the audio and the video should behave as follows:

3.1. If the video is longer than the audio, the audio is repeated until it exactly fits the video. The audio can be truncated.

3.2. If the video is shorter than the audio, the audio is truncated and exactly fits the video.

3.3. If both video and audio have the same length, the audio exactly fits the video.

How to reproduce this bug? (+ required data)

1. Download the following files (resp. audio and video) (I must refresh these download links every 24 hours):

1.1. https://a.uguu.se/dmgsmItjJMDq_audio.wav

2. Move them into the directory/directories of your choice.
3. Open your CLI, move to the appropriate directory and copy/paste/execute the command given in the part "The problem" (don't forget to adjust the directories in that command according to step 2).
4. You'll face my problem.

What did I try to fix the bug?

Nothing, since I don't even understand why the bug occurs.

The question

How can I correct my command so that it mixes these audio and video streams without ffmpeg entering an infinite loop, keeping in mind that I don't know the length of either stream in advance, and that the audio must be repeated to fit the video, even if the last repetition of the audio has to be truncated because the video stream ends first?
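One commonly suggested direction (a sketch, not from the original post): keep -stream_loop -1 on the audio input, but add the -shortest output option so muxing stops when the finite stream (the video) ends; -fflags +shortest and a large -max_interleave_delta on the output are the usual companions so the muxer does not keep buffering the looped audio. The codec choices below are assumptions: -c:v copy keeps the VP8 video as is, and libopus is used because raw PCM cannot be stored in WebM.

# Sketch of a possible fix: loop the audio, stop writing when the video ends.
ffmpeg -i directory_1/video.webm \
       -stream_loop -1 -i directory_2/audio.wav \
       -map 0:v -map 1:a \
       -c:v copy -c:a libopus \
       -shortest -fflags +shortest -max_interleave_delta 100M \
       directory_3/video_and_audio.webm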



Sources



The source is the command line given in the part "The problem".


-
Manual encoding into MPEG-TS
4 July 2014, by Lane

SO...
I am trying to take an H264 Annex B byte stream video and encode it into MPEG-TS in pure Java. My goal is to create a minimal, valid, single-program MPEG-TS stream that does not include any timing information (PCR, PTS, DTS).
I am currently at the point where my generated file can be passed to ffmpeg (ffmpeg -i myVideo.ts) and ffmpeg reports...
[NULL @ 0x7f8103022600] start time is not set in estimate_timings_from_pts
Input #0, mpegts, from 'video.ts':
Duration: N/A, bitrate: N/A
Program 1
Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709), 1280x720 [SAR 1:1 DAR 16:9], 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc
...it seems like this warning about the start time is not a big deal... and ffmpeg is unable to determine how long the video is. If I create another MPEG-TS file from my video file (ffmpeg -i myVideo.ts -vcodec copy validVideo.ts) and run ffmpeg -i validVideo.ts, I get...
Input #0, mpegts, from 'video2.ts':
Duration: 00:00:11.61, start: 1.400000, bitrate: 3325 kb/s
Program 1
Metadata:
service_name    : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709), 1280x720 [SAR 1:1 DAR 16:9], 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc
...so you can see the timing information and bitrate are there, and so is the metadata.
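One way to see exactly what the hand-written file is missing (a debugging sketch, not from the original post) is to dump per-packet information from both files with ffprobe and diff them; absent PTS/DTS values and odd packet positions show up immediately.

# Sketch: compare packet-level metadata of the hand-made TS and the ffmpeg-remuxed one.
ffprobe -hide_banner -show_packets -select_streams v:0 myVideo.ts    > mine.txt
ffprobe -hide_banner -show_packets -select_streams v:0 validVideo.ts > valid.txt
diff mine.txt valid.txt | head -n 40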
My H264 video consists of only I and P Frames (with the SPS and PPS preceding the I Frame, of course), and the way I am creating my MPEG-TS stream is...
- Write a single PAT at the beginning of the file
- Write a single PMT at the beginning of the file
- Create TS and PES packets from SPS, PPS and I Frame (AUD NALs too, if this is required ?)
- Create TS and PES packets from P Frame (again, AUD NALs too, if required)
- For the last payload of either an I Frame or P Frame, add filler bytes to an adaptation field to make sure it fits into a full TS packet
- Repeat 3-5 for the entire file
...my PAT looks like this...
4740 0010 0000 b00d 0001 c100 0000 01f0
002a b104 b2ff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff
...and my PMT looks like this...
4750 0010
0002 b012 0001 c100 00ff fff0 001b e100
f000 c15b 41e0 ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff
...notice after the c100 00, the "ff ff", f0... says that we are not using a PCR... Also notice that I have updated my CRC to reflect this change to the PMT. My first I Frame packet looks like...
4741 0010 0000 01e0
0000 8000 0000 0000 0109 f000 0000 0127
4d40 288d 8d60 2802 dd80 b501 0101 4000
00fa 4000 3a98 3a18 00b7 2000 3380 2ef2
e343 0016 e400 0670 05de 5c16 345d c000
0000 0128 ee3c 8000 0000 0165 8880 0020
0000 4fe5 63b5 4e90 b11c 9f8f f891 10f3
13b1 666b 9fc6 03e9 e321 36bf 1788 347b
eb23 fc89 5772 6e2e 1714 96df ed16 9b30
252d ceb7 07e9 a0c7 c6e7 9515 be87 2df1
81f3 b9d2 ba5f 243e 2d5c cba2 8ca5 b798
6bec 8c43 0b5d bbda bc5b 6e7c e15c 84e8
2f13 be84
...you'll notice after the 01e0 0000, 8000 00 is the PES header extension where I specify no PTS / DTS and the remaining length is zero. My first P Frame packet looks like...
4741 001d
0000 01e0 0000 8000 0000 0000 0109 f000
0000 0141 9a00 0200 0593 ff45 a7ae 1acd
f2d7 f9ec 557f cdb6 ba38 60d6 a626 5edb
4bb9 9783 89e2 d7e1 102e 4625 2fbf ce16
f952 d8c9 f027 e55a 6b2a 81c3 48d4 6a45
050a f355 fbec db01 6562 6405 04aa e011
50ec 0b45 45e5 0df7 2fed a3f8 ac13 2e69
6739 6d81 f13d 2455 e6ca 1c6b dc96 65d5
3bad f250 7dab 42e4 7ba9 f564 ee61 29fb
1b2c 974c 6924 1a1f 99ef 063c b99a c507
8c22 b0f8 b14c 3e4d 01d0 6120 4e19 8725
2fda 6550 f907 3f87
...and whenever an I Frame or P Frame is ending, I have a TS packet with an adaptation field like...
4701 003c b000 ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff
...where the first b0 bytes are the adaptation field stuffing bytes and the remaining ones are the final bytes of the I or P Frame. So, as you can tell, I can use ffmpeg and pass it my file to create a valid movie in any format. However, I need the file I create to be in the proper format on its own, and I cannot quite figure out what the last piece I am missing is. Any ideas?
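A debugging sketch (not from the original post) that often helps with hand-rolled muxers: let ffmpeg's own mpegts muxer produce a reference file from the same Annex B elementary stream and compare it with the generated one. video.h264 below is a hypothetical filename for the raw Annex B input.

# Sketch: build a reference MPEG-TS from the raw H264 stream and inspect both files.
ffmpeg -i video.h264 -c:v copy reference.ts
# Dump the first kilobyte of each file (PAT, PMT and the start of the first PES packet).
xxd reference.ts | head -n 64
xxd myVideo.ts   | head -n 64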