Other articles (75)

  • Contribute to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use the SPIP translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
    At the moment, MediaSPIP is only available in French and (...)

  • User profiles

    12 April 2011

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
    Users can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administrer" (administration) section of the site.
    From there, the navigation menu gives access to a "Gestion des langues" (language management) section where support for new languages can be enabled.
    Each newly added language can still be disabled as long as no object has been created in that language; once one has, the language is greyed out in the configuration and (...)

On other sites (6541)

  • How to use FFMPEG with Nvidia Acceleration (Nvenc) to hardcode subtitles

    21 February 2021, by Ibraheem Nofal

    So I'm trying to use Nvenc to accelerate video encoding. The aim is to have 1 input video file and 1 input subtitle, and to get multiple outputs at different resolutions with subtitles hardcoded or burned into the video. I've tried multiple approaches but can't figure out how to do it.

    


    Here's the command that I'm currently using:

    


    ffmpeg -hwaccel cuvid -i 3030025890-TEST.mp4 -i output_ar.srt  -filter_complex "[0:v]scale_npp=1920:1080, hwdownload,format=nv12[base], [base]subtitles=output_ar.srt[marked]" -map "[marked]" -c:v h264_nvenc -map 0:v:0 -map 0:a:0 -g 50 -b:v 5M -maxrate 5.5M -minrate 4M -bufsize 5M -preset fast 1080_output.mp4


    


    and here's the output:

    


    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '3030025890-TEST.mp4':
  Metadata:
    major_brand     : M4V 
    minor_version   : 1
    compatible_brands: isomavc1mp42
    creation_time   : 2021-01-05T13:45:58.000000Z
  Duration: 00:45:04.28, start: 0.000000, bitrate: 5574 kb/s
    Stream #0:0(und): Video: h264 (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 5000 kb/s, 25 fps, 25 tbr, 25k tbn, 50 tbc (default)
    Metadata:
      creation_time   : 2021-01-05T13:45:58.000000Z
      handler_name    : ETI ISO Video Media Handler
      encoder         : Elemental H.264
    Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 93 kb/s (default)
    Metadata:
      creation_time   : 2021-01-05T13:45:58.000000Z
      handler_name    : ETI ISO Audio Media Handler
    Stream #0:2(und): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 93 kb/s (default)
    Metadata:
      creation_time   : 2021-01-05T13:45:58.000000Z
      handler_name    : ETI ISO Audio Media Handler
    Stream #0:3(und): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 93 kb/s (default)
    Metadata:
      creation_time   : 2021-01-05T13:45:58.000000Z
      handler_name    : ETI ISO Audio Media Handler
    Stream #0:4(und): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 93 kb/s (default)
    Metadata:
      creation_time   : 2021-01-05T13:45:58.000000Z
      handler_name    : ETI ISO Audio Media Handler
    Stream #0:5(und): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 93 kb/s (default)
    Metadata:
      creation_time   : 2021-01-05T13:45:58.000000Z
      handler_name    : ETI ISO Audio Media Handler
    Stream #0:6(und): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 93 kb/s (default)
    Metadata:
      creation_time   : 2021-01-05T13:45:58.000000Z
      handler_name    : ETI ISO Audio Media Handler
Input #1, srt, from 'output_ar.srt':
  Duration: N/A, bitrate: N/A
    Stream #1:0: Subtitle: subrip
[Parsed_subtitles_3 @ 0x5601070b1dc0] Shaper: FriBidi 0.19.7 (SIMPLE)
Fontconfig error: Cannot load default config file
[Parsed_subtitles_3 @ 0x5601070b1dc0] No usable fontconfig configuration file found, using fallback.
Fontconfig error: Cannot load default config file
[Parsed_subtitles_3 @ 0x5601070b1dc0] Using font provider fontconfig
Stream mapping:
  Stream #0:0 (h264) -> scale_npp (graph 0)
  subtitles (graph 0) -> Stream #0:0 (h264_nvenc)
  Stream #0:0 -> #0:1 (h264 (native) -> h264 (h264_nvenc))
  Stream #0:1 -> #0:2 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
[h264 @ 0x56010792f980] Error creating a NVDEC decoder: 1
[h264 @ 0x56010792f980] Failed setup for format cuda: hwaccel initialisation returned error.
[Parsed_subtitles_3 @ 0x560107364cc0] Shaper: FriBidi 0.19.7 (SIMPLE)
Fontconfig error: Cannot load default config file
[Parsed_subtitles_3 @ 0x560107364cc0] No usable fontconfig configuration file found, using fallback.
Fontconfig error: Cannot load default config file
[Parsed_subtitles_3 @ 0x560107364cc0] Using font provider fontconfig
Impossible to convert between the formats supported by the filter 'graph 0 input from stream 0:0' and the filter 'auto_scaler_0'
Error reinitializing filters!
Failed to inject frame into filter network: Function not implemented
Error while processing the decoded data for stream #0:0
[aac @ 0x56010734d400] Qavg: 65536.000
[aac @ 0x56010734d400] 2 frames left in the queue on closing
Conversion failed!
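
    A quick way to narrow this down, sketched here untested, is to check whether NVDEC decoding works at all outside the filter graph, since the log above reports "Error creating a NVDEC decoder: 1" before the format-conversion failure:

    ffmpeg -hwaccel cuda -i 3030025890-TEST.mp4 -f null -

    If that already fails, the problem lies in the hardware decoder setup (driver or ffmpeg build) rather than in the scale_npp/subtitles chain.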


    


    Edit: Made some progress. I no longer get an error, but on viewing the output there are no subtitles burned in.
Command:

    


    ffmpeg -hwaccel cuvid -c:v h264_cuvid -i 3030025890-TEST.mp4  -c:v h264_nvenc -map 0:v:0 -map 0:a:0 -g 50 -b:v 5M -maxrate 5.5M -minrate 4M -bufsize 5M -vf "scale_npp=1920:1080, hwdownload, format=nv12, subtitles=output_ar.srt, hwupload" -preset fast 1080_output.mp4
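
    If the subtitles still do not show up with the command above, a possible fallback, sketched here untested, is to keep decoding and subtitle burning on the CPU and only move frames to the GPU for scaling and NVENC encoding. It assumes an ffmpeg build with the hwupload_cuda and scale_npp filters; the 720p bitrate values are placeholders, and the split branch is only there to illustrate the "multiple outputs at different resolutions" goal:

    ffmpeg -i 3030025890-TEST.mp4 -filter_complex \
      "[0:v]subtitles=output_ar.srt,split=2[s1][s2]; \
       [s1]format=nv12,hwupload_cuda,scale_npp=1920:1080[v1080]; \
       [s2]format=nv12,hwupload_cuda,scale_npp=1280:720[v720]" \
      -map "[v1080]" -map 0:a:0 -c:v h264_nvenc -g 50 -b:v 5M -maxrate 5.5M -bufsize 5M -preset fast 1080_output.mp4 \
      -map "[v720]" -map 0:a:0 -c:v h264_nvenc -g 50 -b:v 3M -maxrate 3.5M -bufsize 3M -preset fast 720_output.mp4

    Since the subtitles filter then always receives ordinary software frames, the format-conversion error between the decoder and scale_npp no longer applies; the trade-off is giving up NVDEC-accelerated decoding.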


    


  • Mixing various audio and video sources into a single video

    18 February 2021, by Basj

    I've already read FFmpeg - Overlay one video onto another video?, How to overlay 2 videos at different time over another video in single ffmpeg command?, FFmpeg - Multiple videos with 4 areas and different play times (and many similar questions tagged [ffmpeg] about setpts), and the following code works, but I'm sure it can be simplified into a more elegant solution.

    


    I'd like to mix multiple sources (image and sound), with different starting points:

    


    t (seconds)           0   1   2   3   4   5   6   7   8   9  10  11  12  13    
test.png              [-------------------------------]
a.mp3                         [-------]
without_sound.mp4                                 [-------------------]        (overlay at x,y=200,200)
b.mp3                                     [---]
with_sound.mp4                    [---------------------------------------]    (overlay at x,y=100,100)


    


    This works:

    


    ffmpeg -i test.png 
       -t 2 -i a.mp3 
       -t 5 -i without_sound.mp4 
       -t 1 -i b.mp3 
       -t 10 -i with_sound.mp4 
       -filter_complex "
            [0]setpts=PTS-STARTPTS[s0];
            [1]adelay=2000^|2000[s1];
            [2]setpts=PTS-STARTPTS+7/TB[s2];
            [3]adelay=5000^|5000[s3];
            [4]setpts=PTS-STARTPTS+3/TB[s4];
            [4:a]adelay=3000^|3000[t4];
            [s1][s3][t4]amix=inputs=3[outa];
            [s0][s4]overlay=100:100[o2];
            [o2][s2]overlay=200:200[outv]
       " -map [outa] -map [outv]
       out.mp4 -y


    


    but:

    


      

    • Is it normal that we have to use both setpts and adelay? I have tried without adelay, and then the sound is not shifted. Said differently, is there a way to simplify the following pair?

      [4]setpts=PTS-STARTPTS+3/TB[s4];
      [4:a]adelay=3000^|3000[t4];

    • Is there a way to do it with setpts and asetpts only? When I replaced adelay=5000|5000 with asetpts=PTS-STARTPTS+5/TB (and likewise for the other one), it didn't give the expected time-shifting (see below).

    • In similar questions/answers I often see overlay=...:enable='between(t,...,...)'; here it seems it is not needed. Why?
    


    More generally, how would you simplify this "mix multiple audio and video" ffmpeg code?

    



    


    More details about the second bullet point: if we replace adelay with asetpts,

    


    -filter_complex "
            [0]setpts=PTS-STARTPTS[s0];
            [1]asetpts=PTS-STARTPTS+2/TB[s1];
            [2]setpts=PTS-STARTPTS+7/TB[s2];
            [3]asetpts=PTS-STARTPTS+5/TB[s3];
            [4]setpts=PTS-STARTPTS+3/TB[s4];
            [4:a]asetpts=PTS-STARTPTS+3/TB[t4];
            [s1][s3][t4]amix=inputs=3[outa];
            [s0][s4]overlay=100:100[o2];
            [o2][s2]overlay=200:200[outv]


    


    it doesn't work: [3] should begin at 0'05" and [4:a] at 0'03", but they all begin at the same time as [1], i.e. at 0'02".

    


    It seems that amix only takes the first asetpts into consideration and discards the others; is that true?
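
    For the first bullet point, a slightly simpler form is possible with a relatively recent ffmpeg (the all option of adelay was added around 4.2), so the delay does not have to be repeated and escaped per channel. This is an untested sketch, not a full replacement for the graph above:

      [4]setpts=PTS-STARTPTS+3/TB[s4];
      [4:a]adelay=delays=3000:all=1[t4];

    The adelay stage itself still appears to be needed next to setpts: adelay inserts real silence samples, while asetpts only relabels timestamps, and amix mixes whatever samples it receives rather than honouring shifted start times, which would also explain the asetpts behaviour described above.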

    


  • Mix audio from various sources, regardless if an input video has sound or not

    17 February 2021, by Basj

    The following code:

    


    ffmpeg -i test.png            
       -t 2 -i a.mp3          
       -t 4 -i video.mp4      
       -t 1 -i b.mp3          
       -filter_complex [1]adelay=2000|2000[s1];[3]adelay=5000|5000[s3];[s1][2][s3]amix=inputs=3[outa];[0][2]overlay[outv]^
       -map [outa] -map [outv]^
       out.mp4 -y


    


    works, and mixes the audio from the MP3s (time-shifted, as desired) and from the MP4 video.

    


    But it fails if the MP4 has no audio channel (i.e. a video with no sound):

    


    


    Stream specifier '' in filtergraph description ... matches no stream

    


    


    I'd like my script to work in both cases, whether the video has audio or not.

    


    How can I include [2] in the amix if and only if the video has sound?

    



    


    Note: A good approach would be to load an MP4 that always has a sound stream: the original audio if the video has sound, and a silent track if it has none. Is this possible with a single ffmpeg command?
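
    Regarding the note above, one possible single-command normalisation step, sketched here untested, is to remux the MP4 together with a silent anullsrc track, mapping the original audio only if it exists (video_with_audio.mp4 is a placeholder name):

    ffmpeg -i video.mp4 \
       -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 \
       -map 0:v:0 -map 0:a:0? -map 1:a:0 \
       -c:v copy -c:a aac -shortest \
       video_with_audio.mp4

    The trailing ? makes -map 0:a:0 optional, so the command succeeds whether or not the source has audio, and the first audio stream of the result is the original sound when it exists and silence otherwise; the main mixing command can then always reference that stream.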