Advanced search

Media (0)


No media matching your criteria is available on this site.

Other articles (68)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources, in the standalone version.
    To get a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a "farm mode" installation, you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Improving the base version

    13 September 2013

    A nicer multiple select
    The Chosen plugin improves the usability of multiple-select fields. See the two images that follow for a comparison.
    To set it up, enable the Chosen plugin (Site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-select lists (...) An illustrative initialization is sketched below.
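
    For reference (an editorial sketch, not part of the original article): Chosen is a jQuery plugin, and the SPIP plugin essentially automates an initialization like the following for each configured selector. The option values shown are illustrative.

    // assumes jQuery and Chosen are already loaded on the page; SPIP's Chosen
    // plugin runs the equivalent of this for the selectors configured above
    $('select[multiple]').chosen({
      width: '100%',                    // render at the container's width
      no_results_text: 'No match for'   // text shown when a search matches nothing
    });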

On other sites (8039)

  • FFmpeg stdout emits more frames than it should

    29 December 2022, by distante

    I am running this command to get two images per second from an RTSP stream:

    


    const ffmpeg = spawn(this.ffmpeg, [
      // Set the frame rate of the input video to 2 frames per second
      '-r',
      '2',
      // Specify that the RTSP stream should use TCP as the transport protocol
      '-rtsp_transport',
      'tcp',
      // Set the input source to the RTSP stream specified by the `rtspUrl` variable
      '-i',
      rtspUrl,
      // Set the video filter to only output two frames per second (final video)
      '-vf',
      'fps=2',
      // Set the output format to "image2pipe," which specifies that the output should be written to stdout as a series of images in a pipe
      '-f',
      'image2pipe',
      // Overwrite any existing output file without prompting for confirmation
      '-y',
      // Set the output destination to stdout
      '-',
    ]);



    


    I subscribe to stdout and save each emitted chunk into a file (just for testing; I need to process the data later):

    


    let i = 0;
    from(ffmpeg.stdout)
      .pipe(
        concatMap((data) => {
          i++
          return writeFile(`file_${i}`, data).then(() => i);
        }),
      )
      .subscribe((i) => {
        this.imageLogger.log(`file_${i} saved`);
      });
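
    (Editorial note, not part of the original question:) this behavior is expected. With -f image2pipe, ffmpeg writes one continuous byte stream to stdout, and Node emits one 'data' event per pipe chunk (typically up to 64 KB), not one per image, so the number of stdout emits has no fixed relation to the frame count. A minimal sketch, reusing the `ffmpeg` handle from above and assuming MJPEG output (which the log below confirms), that reassembles whole images by scanning for the JPEG SOI (FF D8) and EOI (FF D9) markers:

    const SOI = Buffer.from([0xff, 0xd8]); // marker that starts a JPEG image
    const EOI = Buffer.from([0xff, 0xd9]); // marker that ends a JPEG image
    let pending = Buffer.alloc(0);

    ffmpeg.stdout.on('data', (chunk) => {
      pending = Buffer.concat([pending, chunk]);
      let start;
      while ((start = pending.indexOf(SOI)) !== -1) {
        const end = pending.indexOf(EOI, start + 2);
        if (end === -1) break; // image not complete yet, wait for more chunks
        const frame = pending.subarray(start, end + 2); // one whole JPEG
        pending = pending.subarray(end + 2);
        // process `frame` here, e.g. write it to its own file
      }
    });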


    


    The thing is, I am getting a lot more than 2 images per second:

    


api       | [Nest] 431  - 12/28/2022, 1:16:09 PM     LOG [Image Logger] stderr: ffmpeg version 5.0.1-static https://johnvansickle.com/ffmpeg/  Copyright (c) 2000-2022 the FFmpeg developers
api       |   built with gcc 8 (Debian 8.3.0-6)
api       |   configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d --enable-libxvid --enable-libzvbi --enable-libzimg
api       |   libavutil      57. 17.100 / 57. 17.100
api       |   libavcodec     59. 18.100 / 59. 18.100
api       |   libavformat    59. 16.100 / 59. 16.100
api       |   libavdevice    59.  4.100 / 59.  4.100
api       |   libavfilter     8. 24.100 /  8. 24.100
api       |   libswscale      6.  4.100 /  6.  4.100
api       |   libswresample   4.  3.100 /  4.  3.100
api       |   libpostproc    56.  3.100 / 56.  3.100
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] stderr: Guessed Channel Layout for Input Stream #0.1 : mono
api       | Input #0, rtsp, from 'rtsp://user:password@192.68.45.54:554/stream2':
api       |   Metadata:
api       |     title           : Session streamed by "TP-LINK RTSP Server"
api       |     comment         : stream2
api       |   Duration: N/A, start: 0.000000, bitrate: N/A
api       |   Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 640x360, 15 fps, 28.58 tbr, 90k tbn
api       |   Stream #0:1: Audio: pcm_alaw, 8000 Hz, mono, s16, 64 kb/s
api       |
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] stderr: Stream mapping:
api       |   Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
api       | Press [q] to stop, [?] for help
api       |
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] stderr: Output #0, image2pipe, to 'pipe:':
api       |   Metadata:
api       |     title           : Session streamed by "TP-LINK RTSP Server"
api       |     comment         : stream2
api       |     encoder         : Lavf59.16.100
api       |   Stream #0:0: Video: mjpeg, yuvj420p(pc, bt709, progressive), 640x360, q=2-31, 200 kb/s, 2 fps, 2 tbn
api       |
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] stderr:     Metadata:
api       |       encoder         : Lavc59.18.100 mjpeg
api       |     Side data:
api       |       cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: N/A
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] stderr: frame=    1 fps=0.0 q=5.0 size=      23kB time=00:00:00.50 bitrate= 383.9kbits/s speed=59.4x
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_1.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_2.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_3.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_4.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_5.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_6.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_7.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_8.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_9.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_10.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_11.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_12.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_13.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_14.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_15.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_16.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_17.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_18.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_19.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_20.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_21.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_22.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_23.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_24.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_25.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_26.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] stderr: frame=   27 fps=0.0 q=16.0 size=     559kB time=00:00:13.50 bitrate= 339.3kbits/s speed=25.3x
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_27.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_28.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_29.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_30.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_31.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:12 PM     LOG [Image Logger] file_32.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:13 PM     LOG [Image Logger] file_33.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:13 PM     LOG [Image Logger] stderr: frame=   34 fps= 33 q=16.6 size=     632kB time=00:00:17.00 bitrate= 304.6kbits/s speed=16.3x
api       | [Nest] 431  - 12/28/2022, 1:16:13 PM     LOG [Image Logger] file_34.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:13 PM     LOG [Image Logger] file_35.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:13 PM     LOG [Image Logger] file_36.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:13 PM     LOG [Image Logger] file_37.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:13 PM     LOG [Image Logger] file_38.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:13 PM     LOG [Image Logger] file_39.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:13 PM     LOG [Image Logger] file_40.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:13 PM     LOG [Image Logger] file_41.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:13 PM     LOG [Image Logger] stderr: frame=   41 fps= 26 q=16.5 size=     705kB time=00:00:20.50 bitrate= 281.7kbits/s speed=13.2x
api       | [Nest] 431  - 12/28/2022, 1:16:13 PM     LOG [Image Logger] file_42.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:13 PM     LOG [Image Logger] file_43.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:13 PM     LOG [Image Logger] file_44.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:13 PM     LOG [Image Logger] file_45.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:13 PM     LOG [Image Logger] file_46.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:14 PM     LOG [Image Logger] file_47.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:14 PM     LOG [Image Logger] file_48.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:14 PM     LOG [Image Logger] stderr: frame=   48 fps= 23 q=16.4 size=     779kB time=00:00:24.00 bitrate= 265.9kbits/s speed=11.6x
api       | [Nest] 431  - 12/28/2022, 1:16:14 PM     LOG [Image Logger] file_49.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:14 PM     LOG [Image Logger] file_50.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:14 PM     LOG [Image Logger] file_51.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:14 PM     LOG [Image Logger] file_52.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:14 PM     LOG [Image Logger] file_53.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:14 PM     LOG [Image Logger] file_54.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:14 PM     LOG [Image Logger] file_55.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:14 PM     LOG [Image Logger] stderr: frame=   56 fps= 22 q=16.1 size=     865kB time=00:00:28.00 bitrate= 253.0kbits/s speed=10.9x
api       | [Nest] 431  - 12/28/2022, 1:16:14 PM     LOG [Image Logger] file_56.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:14 PM     LOG [Image Logger] file_57.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:14 PM     LOG [Image Logger] file_58.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:14 PM     LOG [Image Logger] file_59.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:14 PM     LOG [Image Logger] file_60.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:15 PM     LOG [Image Logger] file_61.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:15 PM     LOG [Image Logger] file_62.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:15 PM     LOG [Image Logger] file_63.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:15 PM     LOG [Image Logger] stderr: frame=   63 fps= 20 q=15.5 size=     939kB time=00:00:31.50 bitrate= 244.3kbits/s speed=10.2x
api       | [Nest] 431  - 12/28/2022, 1:16:15 PM     LOG [Image Logger] file_64.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:15 PM     LOG [Image Logger] file_65.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:15 PM     LOG [Image Logger] file_66.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:15 PM     LOG [Image Logger] file_67.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:15 PM     LOG [Image Logger] file_68.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:15 PM     LOG [Image Logger] file_69.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:15 PM     LOG [Image Logger] file_70.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:15 PM     LOG [Image Logger] stderr: frame=   70 fps= 19 q=15.4 size=    1017kB time=00:00:35.00 bitrate= 238.0kbits/s speed= 9.7x
api       | [Nest] 431  - 12/28/2022, 1:16:15 PM     LOG [Image Logger] file_71.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:15 PM     LOG [Image Logger] file_72.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:15 PM     LOG [Image Logger] file_73.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:15 PM     LOG [Image Logger] file_74.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:15 PM     LOG [Image Logger] file_75.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:16 PM     LOG [Image Logger] file_76.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:16 PM     LOG [Image Logger] file_77.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:16 PM     LOG [Image Logger] stderr: frame=   78 fps= 19 q=15.1 size=    1106kB time=00:00:39.00 bitrate= 232.4kbits/s speed=9.48x
api       | [Nest] 431  - 12/28/2022, 1:16:16 PM     LOG [Image Logger] file_78.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:16 PM     LOG [Image Logger] file_79.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:16 PM     LOG [Image Logger] file_80.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:16 PM     LOG [Image Logger] file_81.jpg saved
api       | [Nest] 431  - 12/28/2022, 1:16:16 PM     LOG [Image Logger] file_82.jpg saved


    


    Am I missing something? Why do I get more than 2 stdout events per second?

    


    Edit:

    


    ffmpeg version 5.0.1-static
node: 18.12.1

    


    Edit 2:

    


    When I run this command in the terminal:

    


    ffmpeg -r 2 -rtsp_transport tcp -i "rtsp://user:password@192.68.45.54:554/stream2" -vf fps=2 -timecode 00:00:00:00 test.mp4


    


    The generated video has a frame rate of 2, but it looks like it is in slow motion. Since the source video also shows a timer, I can see that it takes something like 12 "frames" to go from one second to the next.

    


    This is the output:

    


    ffmpeg version 5.1.2 Copyright (c) 2000-2022 the FFmpeg developers
  built with Apple clang version 14.0.0 (clang-1400.0.29.202)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/5.1.2_1 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox
  libavutil      57. 28.100 / 57. 28.100
  libavcodec     59. 37.100 / 59. 37.100
  libavformat    59. 27.100 / 59. 27.100
  libavdevice    59.  7.100 / 59.  7.100
  libavfilter     8. 44.100 /  8. 44.100
  libswscale      6.  7.100 /  6.  7.100
  libswresample   4.  7.100 /  4.  7.100
  libpostproc    56.  6.100 / 56.  6.100
Input #0, rtsp, from 'rtsp://user:password@192.68.45.54:554/stream2':
  Metadata:
    title           : Session streamed by "TP-LINK RTSP Server"
    comment         : stream2
  Duration: N/A, start: 0.000000, bitrate: N/A
  Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 640x360, 15 fps, 28.58 tbr, 90k tbn
  Stream #0:1: Audio: pcm_alaw, 8000 Hz, mono, s16, 64 kb/s
File 'test.mp4' already exists. Overwrite? [y/N] y
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #0:1 (pcm_alaw (native) -> aac (native))
Press [q] to stop, [?] for help
[aac @ 0x7fa79fb061c0] Too many bits 8832.000000 > 6144 per frame requested, clamping to max
[libx264 @ 0x7fa79fb04e00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x7fa79fb04e00] profile High, level 2.2, 4:2:0, 8-bit
[libx264 @ 0x7fa79fb04e00] 264 - core 164 r3095 baee400 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=11 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=2 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
[mp4 @ 0x7fa78f104780] Using non-standard frame rate 2/1
    Last message repeated 1 times
Output #0, mp4, to 'test.mp4':
  Metadata:
    title           : Session streamed by "TP-LINK RTSP Server"
    comment         : stream2
    timecode        : 00:00:00:00
    encoder         : Lavf59.27.100
  Stream #0:0: Video: h264 (avc1 / 0x31637661), yuvj420p(pc, bt709, progressive), 640x360, q=2-31, 2 fps, 16384 tbn
    Metadata:
      encoder         : Lavc59.37.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
  Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 8000 Hz, mono, fltp, 48 kb/s
    Metadata:
      encoder         : Lavc59.37.100 aac
frame=  186 fps= 28 q=20.0 size=     256kB time=00:01:04.00 bitrate=  32.8kbits/s speed=9.58x

[q] command received. Exiting.

frame=  190 fps= 27 q=-1.0 Lsize=     749kB time=00:01:33.50 bitrate=  65.6kbits/s speed=13.2x
video:708kB audio:35kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.836704%
[libx264 @ 0x7fa79fb04e00] frame I:1     Avg QP:10.13  size: 45352
[libx264 @ 0x7fa79fb04e00] frame P:74    Avg QP:12.27  size:  7161
[libx264 @ 0x7fa79fb04e00] frame B:115   Avg QP:22.28  size:  1294
[libx264 @ 0x7fa79fb04e00] consecutive B-frames: 14.7%  9.5% 12.6% 63.2%
[libx264 @ 0x7fa79fb04e00] mb I  I16..4: 15.0% 14.0% 71.0%
[libx264 @ 0x7fa79fb04e00] mb P  I16..4:  0.6%  1.5%  2.4%  P16..4: 20.4%  4.9%  5.9%  0.0%  0.0%    skip:64.3%
[libx264 @ 0x7fa79fb04e00] mb B  I16..4:  0.0%  0.1%  0.3%  B16..8: 15.9%  2.4%  1.5%  direct: 0.9%  skip:78.9%  L0:45.5% L1:48.6% BI: 5.9%
[libx264 @ 0x7fa79fb04e00] 8x8 transform intra:27.4% inter:64.7%
[libx264 @ 0x7fa79fb04e00] coded y,uvDC,uvAC intra: 81.7% 60.2% 46.5% inter: 9.4% 4.2% 1.5%
[libx264 @ 0x7fa79fb04e00] i16 v,h,dc,p: 44% 10% 19% 27%
[libx264 @ 0x7fa79fb04e00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 52% 14% 13%  3%  2%  3%  3%  5%  4%
[libx264 @ 0x7fa79fb04e00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 35% 18% 13%  6%  5%  6%  5%  6%  7%
[libx264 @ 0x7fa79fb04e00] i8c dc,h,v,p: 45% 18% 27%  9%
[libx264 @ 0x7fa79fb04e00] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x7fa79fb04e00] ref P L0: 78.9%  7.6% 11.4%  2.1%
[libx264 @ 0x7fa79fb04e00] ref B L0: 90.8%  8.0%  1.2%
[libx264 @ 0x7fa79fb04e00] ref B L1: 97.0%  3.0%
[libx264 @ 0x7fa79fb04e00] kb/s:60.98
[aac @ 0x7fa79fb061c0] Qavg: 64944.246
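
    (Editorial note, not part of the original post:) the slow motion is consistent with what -r does as an input option: placed before -i, it overrides the source timestamps and re-stamps the camera's 15 fps frames as if they had been captured at 2 fps, so playback is stretched by a factor of 15/2. Assuming the RTSP demuxer honors the option here, the same applies to the image2pipe command above. Dropping the input-side -r and keeping only the fps=2 output filter should sample the stream at two frames per second of real time, e.g.:

    ffmpeg -rtsp_transport tcp -i "rtsp://user:password@192.68.45.54:554/stream2" -vf fps=2 test.mp4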


    


  • Node.js Readable: maximizing throughput/performance for a compute-intense Readable - the Writable doesn't pull data fast enough

    31 December 2022, by flohall

    General setup

    


    I developed an application using AWS Lambda with Node.js 14.
I use a custom Readable implementation, FrameCreationStream, that uses node-canvas to draw images, SVGs and more on a canvas. The result is then extracted as a raw image buffer in BGRA. A single image buffer contains 1920 * 1080 * 4 bytes = 8,294,400 bytes, about 8 MB.
This is then piped to the stdin of a child_process running ffmpeg.
The highWaterMark of my Readable in objectMode: true is set to 25, so that the internal buffer can use up to 8 MB * 25 = 200 MB.

    


    All this works fine and doesn't consume too much RAM either. But I noticed after some time that the performance is not ideal.

    


    Performance not optimal

    


    I have an example input that generates a video of 315 frames. If I set highWaterMark to a value above 25, the performance keeps increasing, up to the point where I set it to 315 or above.

    


    For some reason, ffmpeg doesn't start to pull any data until highWaterMark is reached. Obviously that's not what I want. ffmpeg should always consume data as soon as at least 1 frame is cached in the Readable and it has finished processing the previous frame. And the Readable should keep producing frames as long as highWaterMark hasn't been reached or until the last frame has been produced. Ideally, the Readable and the Writable would be busy all the time.

    


    I found another way to improve the speed: if I add a timeout of, say, 100 ms in the _read() method of the Readable after every tenth frame, the ffmpeg Writable uses this pause to write some frames to ffmpeg.

    


    It seems like frames aren't passed to ffmpeg during frame creation because the Node.js main thread is busy?

    


    The fastest result I get is when I increase highWaterMark above the number of frames, but that doesn't work for longer videos, as it would make the AWS Lambda RAM explode, and it defeats the whole streaming idea. Using timeouts always gives me a stomach ache; a well-fitting timeout may also differ between execution environments. Any ideas?
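
    (Editorial sketch, assuming the diagnosis above is right: the synchronous canvas work monopolizes the event loop, so the callbacks that drain the pipe into ffmpeg's stdin rarely get a turn.) Instead of a fixed timeout after every tenth frame, yielding to the event loop once per frame with setImmediate lets already-queued I/O run without an arbitrary, environment-dependent delay. The helper below is hypothetical, a drop-in for optionalTimeout():

    // hypothetical replacement for optionalTimeout(): no fixed delay, just a yield
    // so pending pipe writes and other queued I/O callbacks can run between frames
    function yieldToEventLoop<T>(value: T): Promise<T> {
        return new Promise(resolve => setImmediate(() => resolve(value)));
    }

    // in _read(): this.createImageBuffer().then(buffer => yieldToEventLoop(buffer))...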

    


    FrameCreationStream

    


    import canvas from 'canvas';
    import {Readable} from 'stream';
    import {IMAGE_STREAM_BUFFER_SIZE, PerformanceUtil, RenderingLibraryError, VideoRendererInput} from 'vm-rendering-backend-commons';
    import {AnimationAssets, BufferType, DrawingService, FullAnimationData} from 'vm-rendering-library';

    /**
     * This is a proper back-pressure-compatible implementation of Readable for having a stream to read single frames from.
     * Whenever read() is called, a new frame is created and added to the stream.
     * read() will be called internally until options.highWaterMark has been reached;
     * then calling read will be paused until one frame is read from the stream.
     */
    export class FrameCreationStream extends Readable {

        drawingService: DrawingService;
        endFrameIndex: number;
        currentFrameIndex: number = 0;
        startFrameIndex: number;
        frameTimer: [number, number];
        readTimer: [number, number];
        fullAnimationData: FullAnimationData;

        constructor(animationAssets: AnimationAssets, fullAnimationData: FullAnimationData, videoRenderingInput: VideoRendererInput, frameTimer: [number, number]) {
            super({highWaterMark: IMAGE_STREAM_BUFFER_SIZE, objectMode: true});

            this.frameTimer = frameTimer;
            this.readTimer = PerformanceUtil.startTimer();

            this.fullAnimationData = fullAnimationData;

            this.startFrameIndex = Math.floor(videoRenderingInput.startFrameId);
            this.currentFrameIndex = this.startFrameIndex;
            this.endFrameIndex = Math.floor(videoRenderingInput.endFrameId);

            this.drawingService = new DrawingService(animationAssets, fullAnimationData, videoRenderingInput, canvas);
            console.time("read");
        }

        /**
         * this method is only overwritten for debugging
         * @param size
         */
        read(size?: number): string | Buffer {
            console.log("read(" + size + ")");
            const buffer = super.read(size);
            console.log(buffer);
            console.log(buffer?.length);
            if (buffer) {
                console.timeLog("read");
            }
            return buffer;
        }

        // _read() will be called when the stream wants to pull more data in.
        // _read() will be called again after each call to this.push(dataChunk) once the stream is ready to accept more data. https://nodejs.org/api/stream.html#readable_readsize
        // this way it is ensured that, even though this.createImageBuffer() is async, only one frame is created at a time and the order is kept
        _read(): void {
            // as frame numbers are consecutive and unique, we have to draw each frame number (also the first and the last one)
            if (this.currentFrameIndex <= this.endFrameIndex) {
                PerformanceUtil.logTimer(this.readTimer, 'WAIT   -> READ\t');
                this.createImageBuffer()
                    .then(buffer => this.optionalTimeout(buffer))
                    // push means adding a buffered raw frame to the stream
                    .then((buffer: Buffer) => {
                        this.readTimer = PerformanceUtil.startTimer();
                        // the following two frame numbers start with 1 as first value
                        const processedFrameNumberOfScene = 1 + this.currentFrameIndex - this.startFrameIndex;
                        const totalFrameNumberOfScene = 1 + this.endFrameIndex - this.startFrameIndex;
                        // the overall frameId or frameIndex starts with frameId 0
                        const processedFrameIndex = this.currentFrameIndex;
                        this.currentFrameIndex++;
                        this.push(buffer); // nothing besides logging should happen after calling this.push(buffer)
                        console.log(processedFrameNumberOfScene + ' of ' + totalFrameNumberOfScene + ' processed - full video frameId: ' + processedFrameIndex + ' - buffered frames: ' + this.readableLength);
                    })
                    .catch(err => {
                        // errors will be finally handled when subscribing to the frameCreation stream in the ffmpeg service
                        // this log is just generated for tracing errors and in case the handling in the ffmpeg service doesn't work
                        console.log("createImageBuffer: ", err);
                        this.emit("error", err);
                    });
            } else {
                // push(null) makes clear that this stream has ended
                this.push(null);
                PerformanceUtil.logTimer(this.frameTimer, 'FRAME_STREAM');
            }
        }

        private optionalTimeout(buffer: Buffer): Promise<Buffer> {
            if (this.currentFrameIndex % 10 === 0) {
                return new Promise(resolve => setTimeout(() => resolve(buffer), 140));
            }
            return Promise.resolve(buffer);
        }

        // prevent memory leaks - without this, lambda memory will increase with every call
        _destroy(): void {
            this.drawingService.destroyStage();
        }

        /**
         * This creates a raw pixel buffer that contains a single frame of the video drawn by the rendering library
         */
        public async createImageBuffer(): Promise<Buffer> {
            const drawTimer = PerformanceUtil.startTimer();
            try {
                await this.drawingService.drawForFrame(this.currentFrameIndex);
            } catch (err: any) {
                throw new RenderingLibraryError(err);
            }

            PerformanceUtil.logTimer(drawTimer, 'DRAW   -> FRAME\t');

            const bufferTimer = PerformanceUtil.startTimer();
            // Creates a raw pixel buffer containing simple binary data.
            // The exact same information (BGRA/screen ratio) has to be provided to ffmpeg, because ffmpeg cannot detect the format for raw input.
            const buffer = await this.drawingService.toBuffer(BufferType.RAW);
            PerformanceUtil.logTimer(bufferTimer, 'CANVAS -> BUFFER');

            return buffer;
        }
    }


    FfmpegService


    import {ChildProcess, execFile} from 'child_process';
    import {Readable} from 'stream';
    import {FPS, StageSize} from 'vm-rendering-library';
    import {
        FfmpegError,
        LOCAL_MERGE_VIDEOS_TEXT_FILE, LOCAL_SOUND_FILE_PATH,
        LOCAL_VIDEO_FILE_PATH,
        LOCAL_VIDEO_SOUNDLESS_MERGE_FILE_PATH
    } from "vm-rendering-backend-commons";

    /**
     * This class bundles all ffmpeg usages for rendering one scene.
     * FFmpeg is a console program which can transcode nearly all types of sounds, images and videos from one to another.
     */
    export class FfmpegService {

        ffmpegPath: string = null;

        constructor(ffmpegPath: string) {
            this.ffmpegPath = ffmpegPath;
        }

        /**
         * Convert a stream of raw images into an .mp4 video using the command line program ffmpeg.
         *
         * @param inputStream an input stream containing images in raw format BGRA
         * @param stageSize the size of a single frame in pixels (minimum is 2*2)
         * @param outputPath the filepath to write the resulting video to
         */
        public imageToVideo(inputStream: Readable, stageSize: StageSize, outputPath: string): Promise<void> {
            const args: string[] = [
                '-f',
                'rawvideo',
                '-r',
                `${FPS}`,
                '-pix_fmt',
                'bgra',
                '-s',
                `${stageSize.width}x${stageSize.height}`,
                '-i',
                // input "-" means input will be passed via pipe (streamed)
                '-',
                // codec that also the QuickTime player can understand
                '-vcodec',
                'libx264',
                '-pix_fmt',
                'yuv420p',
                /*
                 * "-movflags faststart":
                 * metadata at beginning of file
                 * needs more RAM
                 * file will be broken if not finished properly
                 * higher application compatibility
                 * better for browser streaming
                 */
                '-movflags',
                'faststart',
                // "-preset ultrafast", // use this to speed up compression, but the quality/compression ratio gets worse
                // don't overwrite an existing file here,
                // but delete the file at the beginning of execution in index.ts
                // (this is better for local testing, believe me)
                outputPath
            ];

            return this.execFfmpegPromise(args, inputStream);
        }

        private execFfmpegPromise(args: string[], inputStream?: Readable): Promise<void> {
            const ffmpegServiceSelf = this;
            return new Promise(function (resolve, reject) {
                const executionProcess: ChildProcess = execFile(ffmpegServiceSelf.ffmpegPath, args, (err) => {
                    if (err) {
                        reject(new FfmpegError(err));
                    } else {
                        console.log('ffmpeg finished');
                        resolve();
                    }
                });
                if (inputStream) {
                    // it's important to listen for errors on the input stream before piping it into the write stream;
                    // if we don't do this here, we get an unhandled promise exception for every issue in the input stream
                    inputStream.on("error", err => {
                        reject(err);
                    });
                    // don't reject the promise here, as the error will also be thrown inside execFile and will contain more debugging info
                    // this log is just generated for tracing errors and in case the handling in execFile doesn't work
                    inputStream.pipe(executionProcess.stdin).on("error", err => console.log("pipe stream: ", err));
                }
            });
        }
    }


  • FFMpeg CUDA yuvj420p frame conversion to cv::Mat layers shifted

    26 February 2023, by AcidTonic

    I am trying to retrieve hardware-decoded H264 frames from the CUDA backend of ffmpeg and display them as a cv::Mat. I got decently far and was able to get color images, but the conversion is not quite right: the image I get has a green bar at the top, and if you look closely, the blue parts of the image are offset down and to the right a little, making everything look slightly wonky.


    [Image: the correct output, as shown by ffplay using the same driver]
    [Image: the output I am getting]


    Here is the full source code, in the hope that someone can help me get the correct image here...


    #include
    #include
    #include
    #include
    #include
    #include

    #include <iostream>
    #include <fstream>
    #include <cstdlib>
    #include <chrono>
    #include <cstring>

    extern "C" {
      // Linker errors if not inside extern. FFMPEG headers are not C++ aware
      #include <libavcodec/avcodec.h>
      #include <libavformat/avformat.h>
      #include <libavutil/pixdesc.h>
      #include <libavutil/hwcontext.h>
      #include <libavutil/opt.h>
      #include <libavutil/avassert.h>
      #include <libavutil/imgutils.h>
    }

    #include <iomanip>
    #include <string>
    #include <sstream>
    #include <opencv2/opencv.hpp>

    #ifdef __cplusplus
    extern "C" {
    #endif // __cplusplus
      #include <libavdevice/avdevice.h>
      #include <libavfilter/avfilter.h>
      #include <libavformat/avio.h>
      #include <libavutil/avutil.h>
      #include <libpostproc/postprocess.h>
      #include <libswresample/swresample.h>
      #include <libswscale/swscale.h>
    #ifdef __cplusplus
    } // end extern "C".
    #endif // __cplusplus

    static AVBufferRef *hw_device_ctx = NULL;
    static enum AVPixelFormat hw_pix_fmt;
    static FILE *output_file_fd = NULL;
    cv::Mat output_mat;
    int bgr_size;

    static int hw_decoder_init(AVCodecContext *ctx,
      const enum AVHWDeviceType type) {
      int err = 0;

      if ((err = av_hwdevice_ctx_create(&hw_device_ctx, type,
          NULL, NULL, 0)) < 0) {
        fprintf(stderr, "Failed to create specified HW device.\n");
        return err;
      }
      ctx->hw_device_ctx = av_buffer_ref(hw_device_ctx);

      return err;
    }

    static enum AVPixelFormat get_hw_format(AVCodecContext *ctx,
      const enum AVPixelFormat *pix_fmts) {
      const enum AVPixelFormat *p;

      for (p = pix_fmts; *p != -1; p++) {
        if (*p == hw_pix_fmt)
          return *p;
      }

      fprintf(stderr, "Failed to get HW surface format.\n");
      return AV_PIX_FMT_NONE;
    }

    static int decode_write(AVCodecContext *avctx, AVPacket *packet) {
      AVFrame *frame = NULL, *sw_frame = NULL;
      AVFrame *tmp_frame = NULL;
      uint8_t *buffer = NULL;
      int size;
      int ret = 0;

      ret = avcodec_send_packet(avctx, packet);
      if (ret < 0) {
        fprintf(stderr, "Error during decoding\n");
        return ret;
      }

      while (1) {
        if (!(frame = av_frame_alloc()) || !(sw_frame = av_frame_alloc())) {
          fprintf(stderr, "Can not alloc frame\n");
          ret = AVERROR(ENOMEM);
          av_frame_free(&frame);
          av_frame_free(&sw_frame);
          av_freep(&buffer);
          if (ret < 0) {
            return ret;
          }
        }

        ret = avcodec_receive_frame(avctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
          av_frame_free(&frame);
          av_frame_free(&sw_frame);
          return 0;
        } else if (ret < 0) {
          fprintf(stderr, "Error while decoding\n");
          av_frame_free(&frame);
          av_frame_free(&sw_frame);
          av_freep(&buffer);
          if (ret < 0) {
            return ret;
          }
        }

        if (frame->format == hw_pix_fmt) {
          /* retrieve data from GPU to CPU */
          if ((ret = av_hwframe_transfer_data(sw_frame, frame, 0)) < 0) {
            fprintf(stderr, "Error transferring the data to system memory\n");
            av_frame_free(&frame);
            av_frame_free(&sw_frame);
            av_freep(&buffer);
            if (ret < 0) {
              return ret;
            }
          }
          tmp_frame = sw_frame;
        } else {
          tmp_frame = frame;
        }

        AVPixelFormat format_to_use = AV_PIX_FMT_YUVJ420P;
        cv::Mat mat_src = cv::Mat(sw_frame->height + (sw_frame->height / 2), sw_frame->width, CV_8UC1, sw_frame->data[0]);
        cv::Mat out_mat;
        cv::cvtColor(mat_src, out_mat, cv::COLOR_YUV2RGB_NV21);

        output_mat = out_mat;

        if (output_mat.empty() == false) {
          cv::imshow("image", output_mat);
          cv::waitKey(1);
        }

        av_frame_free(&frame);
        av_frame_free(&sw_frame);
        av_freep(&buffer);
        return ret;
      }
    }

    TEST_CASE("CUDAH264", "Tests hardware h264 decoding") {

      AVFormatContext *input_ctx = NULL;
      int video_stream, ret;
      AVStream *video = NULL;
      AVCodecContext *decoder_ctx = NULL;
      AVCodec *decoder = NULL;
      AVPacket *packet = NULL;
      enum AVHWDeviceType type;
      int i;

      std::string device_type = "cuda";
      std::string input_file = "rtsp://10.100.2.152"; // My H264 network stream here...

      /* The stream data is below...
      Input #0, rtsp, from 'rtsp://10.100.2.152':
        Metadata:
          title           : VCP IPC Realtime stream
        Duration: N/A, start: 0.000000, bitrate: N/A
        Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 1920x1080, 10 fps, 10 tbr, 90k tbn, 20 tbc
      */

      type = av_hwdevice_find_type_by_name(device_type.c_str());
      if (type == AV_HWDEVICE_TYPE_NONE) {
        fprintf(stderr, "Device type %s is not supported.\n", device_type.c_str());
        fprintf(stderr, "Available device types:");
        while ((type = av_hwdevice_iterate_types(type)) != AV_HWDEVICE_TYPE_NONE)
          fprintf(stderr, " %s", av_hwdevice_get_type_name(type));
        fprintf(stderr, "\n");
        throw std::runtime_error("Error");
      }

      packet = av_packet_alloc();
      if (!packet) {
        fprintf(stderr, "Failed to allocate AVPacket\n");
        throw std::runtime_error("Error");
      }

      /* open the input file */
      if (avformat_open_input(&input_ctx, input_file.c_str(), NULL, NULL) != 0) {
        fprintf(stderr, "Cannot open input file '%s'\n", input_file.c_str());
        throw std::runtime_error("Error");
      }

      if (avformat_find_stream_info(input_ctx, NULL) < 0) {
        fprintf(stderr, "Cannot find input stream information.\n");
        throw std::runtime_error("Error");
      }

      av_dump_format(input_ctx, 0, input_file.c_str(), 0);

      for (int i = 0; i < input_ctx->nb_streams; i++) {
        auto pCodec = avcodec_find_decoder(input_ctx->streams[i]->codecpar->codec_id);
        auto pCodecCtx = avcodec_alloc_context3(pCodec);
        avcodec_parameters_to_context(pCodecCtx, input_ctx->streams[i]->codecpar);

        printf("Found Video stream with ID: %d\n", input_ctx->streams[i]->id);
        printf("\t Stream Index: %d\n", input_ctx->streams[i]->index);

        AVCodecParameters *codecpar = input_ctx->streams[i]->codecpar;
        printf("\t Codec Type: %s\n", av_get_media_type_string(codecpar->codec_type));
        printf("\t Side data count: %d\n", input_ctx->streams[i]->nb_side_data);
        printf("\t Pixel format: %i\n", input_ctx->streams[i]->codecpar->format);
        printf("\t Pixel Format Name: %s\n", av_get_pix_fmt_name((AVPixelFormat) input_ctx->streams[i]->codecpar->format));
        printf("\t Metadata count: %d\n", av_dict_count(input_ctx->streams[i]->metadata));
      }

      /* find the video stream information */
      ret = av_find_best_stream(input_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &decoder, 0);
      if (ret < 0) {
        fprintf(stderr, "Cannot find a video stream in the input file\n");
        throw std::runtime_error("Error");
      }

      video_stream = ret;

      for (i = 0;; i++) {
        const AVCodecHWConfig *config = avcodec_get_hw_config(decoder, i);
        if (!config) {
          fprintf(stderr, "Decoder %s does not support device type %s.\n",
            decoder->name, av_hwdevice_get_type_name(type));
          throw std::runtime_error("Error");
        }
        if (config->methods & AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX &&
          config->device_type == type) {
          hw_pix_fmt = config->pix_fmt;
          break;
        }
      }

      if (!(decoder_ctx = avcodec_alloc_context3(decoder))) {
        throw std::runtime_error("NO MEMORY");
      }

      video = input_ctx->streams[video_stream];
      if (avcodec_parameters_to_context(decoder_ctx, video->codecpar) < 0) {
        throw std::runtime_error("Error");
      }

      decoder_ctx->get_format = get_hw_format;

      if (hw_decoder_init(decoder_ctx, type) < 0) {
        throw std::runtime_error("Error");
      }

      if ((ret = avcodec_open2(decoder_ctx, decoder, NULL)) < 0) {
        fprintf(stderr, "Failed to open codec for stream #%u\n", video_stream);
        throw std::runtime_error("Error");
      }

      /* actual decoding and dump the raw data */
      while (ret >= 0) {
        if ((ret = av_read_frame(input_ctx, packet)) < 0)
          break;

        if (video_stream == packet->stream_index)
          ret = decode_write(decoder_ctx, packet);

        av_packet_unref(packet);
      }

      /* flush the decoder */
      ret = decode_write(decoder_ctx, NULL);

      if (output_file_fd) {
        fclose(output_file_fd);
      }
      av_packet_free(&packet);
      avcodec_free_context(&decoder_ctx);
      avformat_close_input(&input_ctx);
      av_buffer_unref(&hw_device_ctx);
    }
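
    (Editorial sketch, not the poster's confirmed fix:) after av_hwframe_transfer_data() from a CUDA device, sw_frame is normally NV12: the Y plane in data[0], the interleaved UV plane in data[1], each plane padded to linesize[] bytes per row. Building the cv::Mat from data[0] alone treats the frame as one contiguous buffer, which would explain the green band (chroma read from the Y plane's padding) and the shifted colors. Copying both planes row by row into one contiguous buffer and converting with the NV12 variant would look like this:

    // assumes sw_frame->format is AV_PIX_FMT_NV12 (worth checking at runtime)
    cv::Mat nv12(sw_frame->height * 3 / 2, sw_frame->width, CV_8UC1);
    // Y plane: height rows of width bytes, source stride linesize[0]
    for (int y = 0; y < sw_frame->height; y++)
      memcpy(nv12.ptr(y), sw_frame->data[0] + y * sw_frame->linesize[0], sw_frame->width);
    // interleaved UV plane: height/2 rows, source stride linesize[1]
    for (int y = 0; y < sw_frame->height / 2; y++)
      memcpy(nv12.ptr(sw_frame->height + y), sw_frame->data[1] + y * sw_frame->linesize[1], sw_frame->width);
    cv::Mat out_mat;
    cv::cvtColor(nv12, out_mat, cv::COLOR_YUV2BGR_NV12);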
