
Media (91)
-
#3 The Safest Place
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#2 Typewriter Dance
15 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#1 The Wires
11 October 2011
Updated: February 2013
Language: English
Type: Audio
-
ED-ME-5 1-DVD
11 October 2011
Updated: October 2011
Language: English
Type: Audio
-
Revolution of Open-source and film making towards open film making
6 October 2011
Updated: July 2013
Language: English
Type: Text
Other articles (100)
-
MediaSPIP 0.1 Beta version
25 April 2011 — MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
Emballe médias: what is it for?
4 February 2011 — This plugin is designed to manage sites for publishing documents of all types online.
It creates "media" items, namely: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether it is audio, video, image or text; only a single document can be linked to a "media" article;
-
HTML5 audio and video support
13 April 2011 — MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
Sur d’autres sites (8617)
-
fluent-ffmpeg concatenate files ends up with wrong length
24 July 2021, by Hugo Cox — I have the following input file:


ffconcat version 1.0
file '../tmp/59bd6a7896654d0b0c00705f/vidR/intro.mp4' #0
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out003.mp4' #1
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #2
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #2
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #2
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #2
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #2
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #2
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out007.mp4' #3
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #4
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #4
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #4
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #4
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #4
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #4
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #4
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #4
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out013.mp4' #5
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #6
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #6
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #6
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #6
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #6
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #6
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out017.mp4' #7
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #8
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #8
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #8
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #8
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #8
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #8
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out021.mp4' #9
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #10
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #10
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #10
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #10
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #10
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #10
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out025.mp4' #11
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #12
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #12
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #12
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #12
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out028.mp4' #13
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #14
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #14
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #14
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #14
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #14
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #14
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out032.mp4' #15
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #16
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #16
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out034.mp4' #17
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #18
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #18
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #18
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #18
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out037.mp4' #19
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #20
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #20
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out039.mp4' #21
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #22
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #22
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out041.mp4' #23
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #24
file '../intersegment/paddingsegment_h264_2.6s_1920x1080_30fps.mp4' #24
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out043.mp4' #25
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #26
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #26
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out045.mp4' #27
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #28
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #28
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out047.mp4' #29
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #30
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #30
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out049.mp4' #31
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #32
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #32
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out051.mp4' #33
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #34
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #34
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out053.mp4' #35
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #36
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #36
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out055.mp4' #37
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #38
file '../intersegment/paddingsegment_h264_2.6s_1920x1080_30fps.mp4' #38
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out057.mp4' #39
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #40
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #40
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out059.mp4' #41
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #42
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #42
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out061.mp4' #43
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #44
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #44
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out063.mp4' #45
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #46
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #46
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #46
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out065.mp4' #47
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #48
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #48
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #48
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out067.mp4' #49
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #50
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #50
file '../intersegment/paddingsegment_h264_2.6s_1920x1080_30fps.mp4' #50
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out070.mp4' #51
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #52
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #52
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #52
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #52
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #52
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #52
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out074.mp4' #53
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #54
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #54
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out076.mp4' #55
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #56
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #56
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #56
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #56
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #56
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out080.mp4' #57
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #58
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #58
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #58
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #58
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #58
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #58
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out084.mp4' #59
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #60
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #60
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #60
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #60
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #60
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #60
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out089.mp4' #61
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #62
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #62
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #62
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #62
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #62
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #62
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out093.mp4' #63
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #64
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #64
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #64
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #64
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out096.mp4' #65
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #66
file '../intersegment/paddingsegment_h264_2.6s_1920x1080_30fps.mp4' #66
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out098.mp4' #67
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #68
file '../intersegment/paddingsegment_h264_2.6s_1920x1080_30fps.mp4' #68
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out100.mp4' #69
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #70
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #70
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out102.mp4' #71
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #72
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #72
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out104.mp4' #73
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #74
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #74
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #74
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #74
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #74
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #74
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #74
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out109.mp4' #75
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #76
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #76
file '../intersegment/paddingsegment_h264_2.6s_1920x1080_30fps.mp4' #76
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out112.mp4' #77
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #78
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #78
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out114.mp4' #79
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #80
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #80
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #80
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #80
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #80
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #80
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #80
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #80
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out119.mp4' #81
file '../tmp/59bd6a7896654d0b0c00705f/vidR/outro.mp4' #82



However, when I use the following command:


ffmpeg(__dirname + '/log/vidR_concatenate.txt')
 .inputFormat('concat')
 .inputOptions([
 '-safe 0'
 ]).outputOptions([
 '-c copy'
 ]).output(__dirname + '/output/' + ID + '/video/1080p/' + ID + '-R-1080p.mp4')
 .on('start', function (commandLine) {
 console.log('Spawned Ffmpeg with command: ' + commandLine);
 })
 .on('error', function (err, stdout, stderr) {
 console.log('An error occurred: ' + err.message, err, stderr);
 })
 .on('progress', function (progress) {
 console.log('Processing: ' + progress.percent + '% done')
 })
 .on('end', function (err, stdout, stderr) {
 console.log('Finished vidR processing!' /*, err, stdout, stderr*/)
 resolve()
 })
 .run()



I do not end up with a video whose length is the sum of all the individual videos!


ffprobe -i ../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4:


Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'intersegment/intersegment_h264_3s_1920x1080_30fps.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.29.100
 Duration: 00:00:03.00, start: 0.000000, bitrate: 24 kb/s
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 19 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
 Metadata:
 handler_name : VideoHandler



ffprobe -i ../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4:


Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.29.100
 Duration: 00:00:00.60, start: 0.000000, bitrate: 45 kb/s
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 30 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
 Metadata:
 handler_name : VideoHandler



ffprobe -i ../tmp/59bd6a7896654d0b0c00705f/vidR/intro.mp4:


Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'tmp/59bd6a7896654d0b0c00705f/vidR/intro.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.29.100
 Duration: 00:00:14.80, start: 0.000000, bitrate: 21 kb/s
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 17 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
 Metadata:
 handler_name : VideoHandler



ffprobe -i ../tmp/59bd6a7896654d0b0c00705f/vidR/outro.mp4:


Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'tmp/59bd6a7896654d0b0c00705f/vidR/outro.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.29.100
 Duration: 00:00:05.30, start: 0.000000, bitrate: 22 kb/s
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 18 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
 Metadata:
 handler_name : VideoHandler



ffprobe -i ../tmp/59bd6a7896654d0b0c00705f/vidR/out003.mp4:


Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'tmp/59bd6a7896654d0b0c00705f/vidR/out003.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 title : Big Buck Bunny, Sunflower version
 artist : Blender Foundation 2008, Janus Bager Kristensen 2013
 composer : Sacha Goedegebure
 encoder : Lavf58.29.100
 comment : Creative Commons Attribution 3.0 - http://bbb3d.renderfarming.net
 genre : Animation
 Duration: 00:00:08.40, start: 0.000000, bitrate: 3703 kb/s
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 3699 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
 Metadata:
 handler_name : GPAC ISO Video Handler



I thought the fps, tbr, tbn and tbc were all the same, so what is the problem?
The output is off by several seconds from the total sum of the individual files!


-
ffmpeg: libavformat/libswresample to transcode and resample at same time
21 February 2024, by whatdoido — I want to transcode and down/re-sample the audio for output using ffmpeg's libav*/libswresample. I am using ffmpeg's (4.x) transcode_aac.c and resample_audio.c as references, but the code produces audio with glitches that is clearly not what ffmpeg itself would produce (i.e. ffmpeg -i foo.wav -ar 22050 foo.m4a).


Based on the ffmpeg examples, to resample audio it appears that I need to set the output AVCodecContext and SwrContext sample_rate to the rate I want, and to ensure that swr_convert() is given the correct number of output samples, computed with av_rescale_rnd(swr_get_delay(), ...) once I have decoded input audio. I've taken care to ensure all the relevant output-sample calculations are taken into account in the merged code (below):


- open_output_file(): AVCodecContext.sample_rate (the avctx variable) is set to our target (down-sampled) sample rate.
- read_decode_convert_and_store() is where the work happens: input audio is decoded into an AVFrame and this input frame is converted before being encoded.
  - init_converted_samples() and av_samples_alloc() use the input frame's nb_samples.
  - ADDED: calculate the number of output samples via av_rescale_rnd() and swr_get_delay() (see the standalone sketch below).
  - UPDATED: convert_samples() and swr_convert() take the input frame's samples and our calculated output sample count as parameters.
However, the resulting audio file is produced with audio glitches. Does the community know of any references for how transcoding AND resampling should be done, or what is missing in this example?
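For reference, here is a minimal standalone sketch of the output-sample calculation mentioned above (it is not taken from the merged code; the sample rates, frame size and delay value are illustrative assumptions):

/* Minimal sketch of the output-sample calculation.
 * Compile with: gcc sketch.c -lavutil
 * The rates, frame size and delay below are illustrative assumptions. */
#include <stdio.h>
#include <inttypes.h>
#include <libavutil/mathematics.h>

int main(void)
{
    int64_t in_rate    = 44100;  /* assumed input sample rate                   */
    int64_t out_rate   = 22050;  /* desired output sample rate                  */
    int64_t in_samples = 1024;   /* samples in one decoded frame (assumption)   */
    int64_t delay      = 16;     /* what swr_get_delay() might report (assumption) */

    /* Same formula as in the merged code below: round up so the destination
     * buffer is never smaller than what swr_convert() can emit. */
    int64_t out_samples = av_rescale_rnd(delay + in_samples,
                                         out_rate, in_rate, AV_ROUND_UP);

    printf("%"PRId64" buffered + %"PRId64" new input samples -> at most %"PRId64" output samples\n",
           delay, in_samples, out_samples);
    return 0;
}

Rounding up is what guarantees the conversion buffer is at least as large as anything swr_convert() can return for that frame.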


/* compile and run:
 gcc -I/usr/include/ffmpeg transcode-swr-aac.c -lavformat -lavutil -lavcodec -lswresample -lm
 ./a.out foo.wav foo.m4a
 */

/*
 * Copyright (c) 2013-2018 Andreas Unterweger
 * 
 * This file is part of FFmpeg. 
 ... ...
 * 
 * @example transcode_aac.c 
 * Convert an input audio file to AAC in an MP4 container using FFmpeg. 
 * Formats other than MP4 are supported based on the output file extension. 
 * @author Andreas Unterweger (xxxx@xxxxx.com)
 */ 
 #include <stdio.h>
 #include <stdlib.h>
 

 #include "libavformat/avformat.h"
 #include "libavformat/avio.h"
 
 #include "libavcodec/avcodec.h"
 
 #include "libavutil/audio_fifo.h"
 #include "libavutil/avassert.h"
 #include "libavutil/avstring.h"
 #include "libavutil/channel_layout.h"
 #include "libavutil/frame.h"
 #include "libavutil/opt.h"
 
 #include "libswresample/swresample.h"
 
 #define OUTPUT_BIT_RATE 128000
 #define OUTPUT_CHANNELS 2
 
 static int open_input_file(const char *filename,
 AVFormatContext **input_format_context,
 AVCodecContext **input_codec_context)
 {
 AVCodecContext *avctx;
 const AVCodec *input_codec;
 const AVStream *stream;
 int error;
 
 if ((error = avformat_open_input(input_format_context, filename, NULL,
 NULL)) < 0) {
 fprintf(stderr, "Could not open input file '%s' (error '%s')\n",
 filename, av_err2str(error));
 *input_format_context = NULL;
 return error;
 }
 

 if ((error = avformat_find_stream_info(*input_format_context, NULL)) < 0) {
 fprintf(stderr, "Could not open find stream info (error '%s')\n",
 av_err2str(error));
 avformat_close_input(input_format_context);
 return error;
 }
 
 if ((*input_format_context)->nb_streams != 1) {
 fprintf(stderr, "Expected one audio input stream, but found %d\n",
 (*input_format_context)->nb_streams);
 avformat_close_input(input_format_context);
 return AVERROR_EXIT;
 }
 
 stream = (*input_format_context)->streams[0];
 
 if (!(input_codec = avcodec_find_decoder(stream->codecpar->codec_id))) {
 fprintf(stderr, "Could not find input codec\n");
 avformat_close_input(input_format_context);
 return AVERROR_EXIT;
 }
 
 avctx = avcodec_alloc_context3(input_codec);
 if (!avctx) {
 fprintf(stderr, "Could not allocate a decoding context\n");
 avformat_close_input(input_format_context);
 return AVERROR(ENOMEM);
 }
 
 /* Initialize the stream parameters with demuxer information. */
 error = avcodec_parameters_to_context(avctx, stream->codecpar);
 if (error < 0) {
 avformat_close_input(input_format_context);
 avcodec_free_context(&avctx);
 return error;
 }
 
 /* Open the decoder for the audio stream to use it later. */
 if ((error = avcodec_open2(avctx, input_codec, NULL)) < 0) {
 fprintf(stderr, "Could not open input codec (error '%s')\n",
 av_err2str(error));
 avcodec_free_context(&avctx);
 avformat_close_input(input_format_context);
 return error;
 }
 
 /* Set the packet timebase for the decoder. */
 avctx->pkt_timebase = stream->time_base;
 
 /* Save the decoder context for easier access later. */
 *input_codec_context = avctx;
 
 return 0;
 }
 
 static int open_output_file(const char *filename,
 AVCodecContext *input_codec_context,
 AVFormatContext **output_format_context,
 AVCodecContext **output_codec_context)
 {
 AVCodecContext *avctx = NULL;
 AVIOContext *output_io_context = NULL;
 AVStream *stream = NULL;
 const AVCodec *output_codec = NULL;
 int error;
 

 if ((error = avio_open(&output_io_context, filename,
 AVIO_FLAG_WRITE)) < 0) {
 fprintf(stderr, "Could not open output file '%s' (error '%s')\n",
 filename, av_err2str(error));
 return error;
 }
 

 if (!(*output_format_context = avformat_alloc_context())) {
 fprintf(stderr, "Could not allocate output format context\n");
 return AVERROR(ENOMEM);
 }
 

 (*output_format_context)->pb = output_io_context;
 

 if (!((*output_format_context)->oformat = av_guess_format(NULL, filename,
 NULL))) {
 fprintf(stderr, "Could not find output file format\n");
 goto cleanup;
 }
 
 if (!((*output_format_context)->url = av_strdup(filename))) {
 fprintf(stderr, "Could not allocate url.\n");
 error = AVERROR(ENOMEM);
 goto cleanup;
 }
 

 if (!(output_codec = avcodec_find_encoder(AV_CODEC_ID_AAC))) {
 fprintf(stderr, "Could not find an AAC encoder.\n");
 goto cleanup;
 }
 
 /* Create a new audio stream in the output file container. */
 if (!(stream = avformat_new_stream(*output_format_context, NULL))) {
 fprintf(stderr, "Could not create new stream\n");
 error = AVERROR(ENOMEM);
 goto cleanup;
 }
 
 avctx = avcodec_alloc_context3(output_codec);
 if (!avctx) {
 fprintf(stderr, "Could not allocate an encoding context\n");
 error = AVERROR(ENOMEM);
 goto cleanup;
 }
 
 /* Set the basic encoder parameters.
 * SET OUR DESIRED output sample_rate here
 */
 avctx->channels = OUTPUT_CHANNELS;
 avctx->channel_layout = av_get_default_channel_layout(OUTPUT_CHANNELS);
 // avctx->sample_rate = input_codec_context->sample_rate;
 avctx->sample_rate = 22050;
 avctx->sample_fmt = output_codec->sample_fmts[0];
 avctx->bit_rate = OUTPUT_BIT_RATE;
 
 avctx->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;
 
 /* Set the sample rate for the container. */
 stream->time_base.den = avctx->sample_rate;
 stream->time_base.num = 1;
 
 if ((*output_format_context)->oformat->flags & AVFMT_GLOBALHEADER)
 avctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
 
 if ((error = avcodec_open2(avctx, output_codec, NULL)) < 0) {
 fprintf(stderr, "Could not open output codec (error '%s')\n",
 av_err2str(error));
 goto cleanup;
 }
 
 error = avcodec_parameters_from_context(stream->codecpar, avctx);
 if (error < 0) {
 fprintf(stderr, "Could not initialize stream parameters\n");
 goto cleanup;
 }
 
 /* Save the encoder context for easier access later. */
 *output_codec_context = avctx;
 
 return 0;
 
 cleanup:
 avcodec_free_context(&avctx);
 avio_closep(&(*output_format_context)->pb);
 avformat_free_context(*output_format_context);
 *output_format_context = NULL;
 return error < 0 ? error : AVERROR_EXIT;
 }
 
 /**
 * Initialize one data packet for reading or writing.
 */
 static int init_packet(AVPacket **packet)
 {
 if (!(*packet = av_packet_alloc())) {
 fprintf(stderr, "Could not allocate packet\n");
 return AVERROR(ENOMEM);
 }
 return 0;
 }
 
 static int init_input_frame(AVFrame **frame)
 {
 if (!(*frame = av_frame_alloc())) {
 fprintf(stderr, "Could not allocate input frame\n");
 return AVERROR(ENOMEM);
 }
 return 0;
 }
 
 static int init_resampler(AVCodecContext *input_codec_context,
 AVCodecContext *output_codec_context,
 SwrContext **resample_context)
 {
 int error;

 /**
 * create the resample, including ref to the desired output sample rate
 */
 *resample_context = swr_alloc_set_opts(NULL,
 av_get_default_channel_layout(output_codec_context->channels),
 output_codec_context->sample_fmt,
 output_codec_context->sample_rate,
 av_get_default_channel_layout(input_codec_context->channels),
 input_codec_context->sample_fmt,
 input_codec_context->sample_rate,
 0, NULL);
 if (!*resample_context) {
 fprintf(stderr, "Could not allocate resample context\n");
 return AVERROR(ENOMEM);
 }
 
 if ((error = swr_init(*resample_context)) < 0) {
 fprintf(stderr, "Could not open resample context\n");
 swr_free(resample_context);
 return error;
 }
 return 0;
 }
 
 static int init_fifo(AVAudioFifo **fifo, AVCodecContext *output_codec_context)
 {
 if (!(*fifo = av_audio_fifo_alloc(output_codec_context->sample_fmt,
 output_codec_context->channels, 1))) {
 fprintf(stderr, "Could not allocate FIFO\n");
 return AVERROR(ENOMEM);
 }
 return 0;
 }
 
 static int write_output_file_header(AVFormatContext *output_format_context)
 {
 int error;
 if ((error = avformat_write_header(output_format_context, NULL)) < 0) {
 fprintf(stderr, "Could not write output file header (error '%s')\n",
 av_err2str(error));
 return error;
 }
 return 0;
 }
 
 static int decode_audio_frame(AVFrame *frame,
 AVFormatContext *input_format_context,
 AVCodecContext *input_codec_context,
 int *data_present, int *finished)
 {
 AVPacket *input_packet;
 int error;
 
 error = init_packet(&input_packet);
 if (error < 0)
 return error;
 
 *data_present = 0;
 *finished = 0;

 if ((error = av_read_frame(input_format_context, input_packet)) < 0) {
 if (error == AVERROR_EOF)
 *finished = 1;
 else {
 fprintf(stderr, "Could not read frame (error '%s')\n",
 av_err2str(error));
 goto cleanup;
 }
 }
 
 if ((error = avcodec_send_packet(input_codec_context, input_packet)) < 0) {
 fprintf(stderr, "Could not send packet for decoding (error '%s')\n",
 av_err2str(error));
 goto cleanup;
 }
 
 error = avcodec_receive_frame(input_codec_context, frame);
 if (error == AVERROR(EAGAIN)) {
 error = 0;
 goto cleanup;
 } else if (error == AVERROR_EOF) {
 *finished = 1;
 error = 0;
 goto cleanup;
 } else if (error < 0) {
 fprintf(stderr, "Could not decode frame (error '%s')\n",
 av_err2str(error));
 goto cleanup;
 } else {
 *data_present = 1;
 goto cleanup;
 }
 
 cleanup:
 av_packet_free(&input_packet);
 return error;
 }
 
 static int init_converted_samples(uint8_t ***converted_input_samples,
 AVCodecContext *output_codec_context,
 int frame_size)
 {
 int error;
 
 if (!(*converted_input_samples = calloc(output_codec_context->channels,
 sizeof(**converted_input_samples)))) {
 fprintf(stderr, "Could not allocate converted input sample pointers\n");
 return AVERROR(ENOMEM);
 }
 

 if ((error = av_samples_alloc(*converted_input_samples, NULL,
 output_codec_context->channels,
 frame_size,
 output_codec_context->sample_fmt, 0)) < 0) {
 fprintf(stderr,
 "Could not allocate converted input samples (error '%s')\n",
 av_err2str(error));
 av_freep(&(*converted_input_samples)[0]);
 free(*converted_input_samples);
 return error;
 }
 return 0;
 }
 
 static int convert_samples(const uint8_t **input_data, const int input_nb_samples,
 uint8_t **converted_data, const int output_nb_samples,
 SwrContext *resample_context)
 {
 int error;
 
 if ((error = swr_convert(resample_context,
 converted_data, output_nb_samples,
 input_data , input_nb_samples)) < 0) {
 fprintf(stderr, "Could not convert input samples (error '%s')\n",
 av_err2str(error));
 return error;
 }
 
 return 0;
 }
 
 static int add_samples_to_fifo(AVAudioFifo *fifo,
 uint8_t **converted_input_samples,
 const int frame_size)
 {
 int error;
 
 if ((error = av_audio_fifo_realloc(fifo, av_audio_fifo_size(fifo) + frame_size)) < 0) {
 fprintf(stderr, "Could not reallocate FIFO\n");
 return error;
 }
 
 if (av_audio_fifo_write(fifo, (void **)converted_input_samples,
 frame_size) < frame_size) {
 fprintf(stderr, "Could not write data to FIFO\n");
 return AVERROR_EXIT;
 }
 return 0;
 }
 
 static int read_decode_convert_and_store(AVAudioFifo *fifo,
 AVFormatContext *input_format_context,
 AVCodecContext *input_codec_context,
 AVCodecContext *output_codec_context,
 SwrContext *resampler_context,
 int *finished)
 {
 AVFrame *input_frame = NULL;
 uint8_t **converted_input_samples = NULL;
 int data_present;
 int ret = AVERROR_EXIT;
 

 if (init_input_frame(&input_frame))
 goto cleanup;

 if (decode_audio_frame(input_frame, input_format_context,
 input_codec_context, &data_present, finished))
 goto cleanup;

 if (*finished) {
 ret = 0;
 goto cleanup;
 }

 if (data_present) {
 /* Initialize the temporary storage for the converted input samples. */
 if (init_converted_samples(&converted_input_samples, output_codec_context,
 input_frame->nb_samples))
 goto cleanup;
 
 /* figure out how many samples are required for target sample_rate incl
 * any items left in the swr buffer
 */ 
 int output_nb_samples = av_rescale_rnd(
 swr_get_delay(resampler_context, input_codec_context->sample_rate) + input_frame->nb_samples,
 output_codec_context->sample_rate, 
 input_codec_context->sample_rate,
 AV_ROUND_UP);
 
 /* ignore, just to ensure we've got enough buffer alloc'd for conversion buffer */
 av_assert1(input_frame->nb_samples > output_nb_samples);
 
 /* Convert the input samples to the desired output sample format, via swr_convert().
 */
 if (convert_samples((const uint8_t**)input_frame->extended_data, input_frame->nb_samples,
 converted_input_samples, output_nb_samples,
 resampler_context))
 goto cleanup;
 
 /* Add the converted input samples to the FIFO buffer for later processing. */
 if (add_samples_to_fifo(fifo, converted_input_samples,
 output_nb_samples))
 goto cleanup;
 ret = 0;
 }
 ret = 0;
 
 cleanup:
 if (converted_input_samples) {
 av_freep(&converted_input_samples[0]);
 free(converted_input_samples);
 }
 av_frame_free(&input_frame);
 
 return ret;
 }
 
 static int init_output_frame(AVFrame **frame,
 AVCodecContext *output_codec_context,
 int frame_size)
 {
 int error;
 
 if (!(*frame = av_frame_alloc())) {
 fprintf(stderr, "Could not allocate output frame\n");
 return AVERROR_EXIT;
 }
 
 /* Set the frame's parameters, especially its size and format.
 * av_frame_get_buffer needs this to allocate memory for the
 * audio samples of the frame.
 * Default channel layouts based on the number of channels
 * are assumed for simplicity. */
 (*frame)->nb_samples = frame_size;
 (*frame)->channel_layout = output_codec_context->channel_layout;
 (*frame)->format = output_codec_context->sample_fmt;
 (*frame)->sample_rate = output_codec_context->sample_rate;
 
 /* Allocate the samples of the created frame. This call will make
 * sure that the audio frame can hold as many samples as specified. */
 if ((error = av_frame_get_buffer(*frame, 0)) < 0) {
 fprintf(stderr, "Could not allocate output frame samples (error '%s')\n",
 av_err2str(error));
 av_frame_free(frame);
 return error;
 }
 
 return 0;
 }
 
 /* Global timestamp for the audio frames. */
 static int64_t pts = 0;
 
 /**
 * Encode one frame worth of audio to the output file.
 */
 static int encode_audio_frame(AVFrame *frame,
 AVFormatContext *output_format_context,
 AVCodecContext *output_codec_context,
 int *data_present)
 {
 AVPacket *output_packet;
 int error;
 
 error = init_packet(&output_packet);
 if (error < 0)
 return error;
 
 /* Set a timestamp based on the sample rate for the container. */
 if (frame) {
 frame->pts = pts;
 pts += frame->nb_samples;
 }
 
 *data_present = 0;
 error = avcodec_send_frame(output_codec_context, frame);
 if (error < 0 && error != AVERROR_EOF) {
 fprintf(stderr, "Could not send packet for encoding (error '%s')\n",
 av_err2str(error));
 goto cleanup;
 }
 

 error = avcodec_receive_packet(output_codec_context, output_packet);
 if (error == AVERROR(EAGAIN)) {
 error = 0;
 goto cleanup;
 } else if (error == AVERROR_EOF) {
 error = 0;
 goto cleanup;
 } else if (error < 0) {
 fprintf(stderr, "Could not encode frame (error '%s')\n",
 av_err2str(error));
 goto cleanup;
 } else {
 *data_present = 1;
 }
 
 /* Write one audio frame from the temporary packet to the output file. */
 if (*data_present &&
 (error = av_write_frame(output_format_context, output_packet)) < 0) {
 fprintf(stderr, "Could not write frame (error '%s')\n",
 av_err2str(error));
 goto cleanup;
 }
 
 cleanup:
 av_packet_free(&output_packet);
 return error;
 }
 
 /**
 * Load one audio frame from the FIFO buffer, encode and write it to the
 * output file.
 */
 static int load_encode_and_write(AVAudioFifo *fifo,
 AVFormatContext *output_format_context,
 AVCodecContext *output_codec_context)
 {
 AVFrame *output_frame;
 /* Use the maximum number of possible samples per frame.
 * If there is less than the maximum possible frame size in the FIFO
 * buffer use this number. Otherwise, use the maximum possible frame size. */
 const int frame_size = FFMIN(av_audio_fifo_size(fifo),
 output_codec_context->frame_size);
 int data_written;
 
 if (init_output_frame(&output_frame, output_codec_context, frame_size))
 return AVERROR_EXIT;
 
 /* Read as many samples from the FIFO buffer as required to fill the frame.
 * The samples are stored in the frame temporarily. */
 if (av_audio_fifo_read(fifo, (void **)output_frame->data, frame_size) < frame_size) {
 fprintf(stderr, "Could not read data from FIFO\n");
 av_frame_free(&output_frame);
 return AVERROR_EXIT;
 }
 
 /* Encode one frame worth of audio samples. */
 if (encode_audio_frame(output_frame, output_format_context,
 output_codec_context, &data_written)) {
 av_frame_free(&output_frame);
 return AVERROR_EXIT;
 }
 av_frame_free(&output_frame);
 return 0;
 }
 
 /**
 * Write the trailer of the output file container.
 */
 static int write_output_file_trailer(AVFormatContext *output_format_context)
 {
 int error;
 if ((error = av_write_trailer(output_format_context)) < 0) {
 fprintf(stderr, "Could not write output file trailer (error '%s')\n",
 av_err2str(error));
 return error;
 }
 return 0;
 }
 
 int main(int argc, char **argv)
 {
 AVFormatContext *input_format_context = NULL, *output_format_context = NULL;
 AVCodecContext *input_codec_context = NULL, *output_codec_context = NULL;
 SwrContext *resample_context = NULL;
 AVAudioFifo *fifo = NULL;
 int ret = AVERROR_EXIT;
 
 if (argc != 3) {
 fprintf(stderr, "Usage: %s <input file="file" /> <output file="file">\n", argv[0]);
 exit(1);
 }
 

 if (open_input_file(argv[1], &input_format_context,
 &input_codec_context))
 goto cleanup;

 if (open_output_file(argv[2], input_codec_context,
 &output_format_context, &output_codec_context))
 goto cleanup;

 if (init_resampler(input_codec_context, output_codec_context,
 &resample_context))
 goto cleanup;

 if (init_fifo(&fifo, output_codec_context))
 goto cleanup;

 if (write_output_file_header(output_format_context))
 goto cleanup;
 
 while (1) {
 /* Use the encoder's desired frame size for processing. */
 const int output_frame_size = output_codec_context->frame_size;
 int finished = 0;
 
 while (av_audio_fifo_size(fifo) < output_frame_size) {
 /* Decode one frame worth of audio samples, convert it to the
 * output sample format and put it into the FIFO buffer. */
 if (read_decode_convert_and_store(fifo, input_format_context,
 input_codec_context,
 output_codec_context,
 resample_context, &finished))
 goto cleanup;
 
 if (finished)
 break;
 }
 
 while (av_audio_fifo_size(fifo) >= output_frame_size ||
 (finished && av_audio_fifo_size(fifo) > 0))
 if (load_encode_and_write(fifo, output_format_context,
 output_codec_context))
 goto cleanup;
 
 if (finished) {
 int data_written;
 do {
 if (encode_audio_frame(NULL, output_format_context,
 output_codec_context, &data_written))
 goto cleanup;
 } while (data_written);
 break;
 }
 }
 
 if (write_output_file_trailer(output_format_context))
 goto cleanup;
 ret = 0;
 
 cleanup:
 if (fifo)
 av_audio_fifo_free(fifo);
 swr_free(&resample_context);
 if (output_codec_context)
 avcodec_free_context(&output_codec_context);
 if (output_format_context) {
 avio_closep(&output_format_context->pb);
 avformat_free_context(output_format_context);
 }
 if (input_codec_context)
 avcodec_free_context(&input_codec_context);
 if (input_format_context)
 avformat_close_input(&input_format_context);
 
 return ret;
 }
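One detail that is easy to overlook with this kind of setup (a hedged sketch under assumptions, not a confirmed diagnosis of the glitches): libswresample buffers samples internally when the rates differ, and whatever is still buffered when the input ends has to be drained explicitly by calling swr_convert() with a NULL input. A hypothetical helper (drain_resampler is not part of the code above) that could be called once decoding has finished, before encoding the remaining FIFO contents, might look like this:

/* Hedged sketch only: drain samples still buffered inside the resampler
 * once the input is finished, and push them into the FIFO. */
#include <libswresample/swresample.h>
#include <libavcodec/avcodec.h>
#include <libavutil/audio_fifo.h>
#include <libavutil/error.h>
#include <libavutil/samplefmt.h>

static int drain_resampler(SwrContext *swr, AVAudioFifo *fifo,
                           AVCodecContext *output_codec_context)
{
    uint8_t **buf = NULL;
    int error = 0;

    /* Ask the resampler how many samples it is still holding back. */
    int max_out = swr_get_out_samples(swr, 0);
    if (max_out <= 0)
        return 0;

    if (av_samples_alloc_array_and_samples(&buf, NULL,
                                           output_codec_context->channels,
                                           max_out,
                                           output_codec_context->sample_fmt, 0) < 0)
        return AVERROR(ENOMEM);

    /* A NULL input tells swr_convert() to flush its internal buffer. */
    int got = swr_convert(swr, buf, max_out, NULL, 0);
    if (got < 0)
        error = got;
    else if (got > 0 &&
             av_audio_fifo_write(fifo, (void **)buf, got) < got)
        error = AVERROR_EXIT;

    av_freep(&buf[0]);
    av_freep(&buf);
    return error;
}

Whether this is actually the missing piece here is uncertain; it is simply the flush step that the original transcode_aac.c example never needs, because it keeps the input and output sample rates identical.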


-
Building FFmpeg with NVIDIA GPU Hardware Acceleration in a Docker image, cannot load libnvcuvid.so.1 and libnvidia-encode.so.1
22 March 2023, by konovification — I'm trying to build FFmpeg with NVIDIA GPU hardware acceleration following these instructions: https://docs.nvidia.com/video-technologies/video-codec-sdk/ffmpeg-with-nvidia-gpu/index.html#compiling-for-linux. The Docker image I'm using is
nvidia/cuda:12.0.1-devel-ubuntu20.04


Running the test command
ffmpeg -y -vsync 0 -hwaccel cuda -hwaccel_output_format cuda -i bbb.mp4 -c:a copy -c:v h264_nvenc -b:v 5M output.mp4
I get the following output:

ffmpeg version N-109965-ge50a02b0f6 Copyright (c) 2000-2023 the FFmpeg developers 
 built with gcc 9 (Ubuntu 9.4.0-1ubuntu1~20.04.1) 
 configuration: --enable-nonfree --enable-cuda-nvcc --enable-libnpp --extra-cflags=-I/usr/local/cuda/include --extra-ldflags=-L/usr/local/cuda/lib64 --disable-static --enable-shared 
 libavutil 58. 3.100 / 58. 3.100 
 libavcodec 60. 6.100 / 60. 6.100 
 libavformat 60. 4.100 / 60. 4.100 
 libavdevice 60. 2.100 / 60. 2.100 
 libavfilter 9. 4.100 / 9. 4.100 
 libswscale 7. 2.100 / 7. 2.100 
 libswresample 4. 11.100 / 4. 11.100 
-vsync is deprecated. Use -fps_mode 
Passing a number to -vsync is deprecated, use a string argument as described in the manual. 
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'bbb.mp4': 
 Metadata: 
 major_brand : isom 
 minor_version : 1 
 compatible_brands: isomavc1 
 creation_time : 2013-12-16T17:44:39.000000Z 
 title : Big Buck Bunny, Sunflower version 
 artist : Blender Foundation 2008, Janus Bager Kristensen 2013 
 comment : Creative Commons Attribution 3.0 - http://bbb3d.renderfarming.net 
 genre : Animation 
 composer : Sacha Goedegebure 
 Duration: 00:10:34.60, start: 0.000000, bitrate: 3481 kb/s 
 Stream #0:0[0x1](und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 2998 kb/s, 30 fps, 30 tbr, 30k tbn (default)
 Metadata: 
 creation_time : 2013-12-16T17:44:39.000000Z 
 handler_name : GPAC ISO Video Handler 
 vendor_id : [0][0][0][0] 
 Stream #0:1[0x2](und): Audio: mp3 (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 160 kb/s (default) 
 Metadata: 
 creation_time : 2013-12-16T17:44:42.000000Z 
 handler_name : GPAC ISO Audio Handler 
 vendor_id : [0][0][0][0] 
 Stream #0:2[0x3](und): Audio: ac3 (ac-3 / 0x332D6361), 48000 Hz, 5.1(side), fltp, 320 kb/s (default) 
 Metadata: 
 creation_time : 2013-12-16T17:44:42.000000Z 
 handler_name : GPAC ISO Audio Handler 
 vendor_id : [0][0][0][0] 
 Side data: 
 audio service type: main 
Stream mapping: 
 Stream #0:0 -> #0:0 (h264 (native) -> h264 (h264_nvenc)) 
 Stream #0:2 -> #0:1 (copy) 
Press [q] to stop, [?] for help 
[h264 @ 0x55bd878c2d80] Cannot load libnvcuvid.so.1 
[h264 @ 0x55bd878c2d80] Failed loading nvcuvid. 
[h264 @ 0x55bd878c2d80] Failed setup for format cuda: hwaccel initialisation returned error. 
[h264_nvenc @ 0x55bd86f5e680] Cannot load libnvidia-encode.so.1 
[h264_nvenc @ 0x55bd86f5e680] The minimum required Nvidia driver for nvenc is 520.56.06 or newer 
[vost#0:0/h264_nvenc @ 0x55bd86f5e1c0] Error initializing output stream: Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Conversion failed! 



Output from nvidia-smi:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.85.05 Driver Version: 525.85.05 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 On | N/A |
| 27% 43C P8 12W / 250W | 500MiB / 11264MiB | 1% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
 
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+



The shared libraries are not part of the Docker image. What are my options for adding them?