Media (91)

Other articles (47)

  • Customizing categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of type category, the fields offered by default are: Texte
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Descriptif rapide
    Moreover, it is in this configuration area that you can indicate the (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used.
    The HTML5 player used was created specifically for MediaSPIP: it is fully customizable graphically to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (9645)

  • Merge file without data loss using FFmpeg inside of WASM

    9 September 2023, by Deji

    Edit: I'm rewriting this entire question

    Goal: To reconstruct a video from its pieces/chunks from a network stream inside of an @ffmpeg/ffmpeg worker

    Problems:

    1. Video chunks/pieces which come after the first piece/chunk are reported by @ffmpeg/ffmpeg to have invalid data, as seen in the log below:

    {
      "type": "stderr",
      "message": "video-0_chunk-1.part: Invalid data found when processing input"
    }

    2. How would I merge these chunks/pieces to reconstruct the full video using @ffmpeg/ffmpeg (after solving the first issue above)?

    My current code situation:

    1. For merging the video pieces:

const constructFile = async (chunks: Uint8Array[], queueId: number) => {
  await Promise.all(
    chunks.map(async (chunk, index) => {
      const chunkFile = `video-${queueId}_chunk-${index}`;
      await ffmpeg.writeFile(chunkFile, chunk);

      // Probe the newly written chunk (its info is reported through the log listener)
      await ffmpeg.exec(["-i", chunkFile]);
    })
  );
};

    I'm reading the logs/output for ffmpeg.exec(['-i', chunkFile]) using ffmpeg.on('log', (log) => console.log(log)).

    2. For fetching the videos using streams:

await useFetch(Capacitor.convertFileSrc(file.path), {
  responseType: "stream",

  onResponse: async ({ response }) => {
    if (response.body) {
      const reader = response.body.getReader();

      while (true) {
        const { done, value } = await reader.read();

        if (done) break;
        file.chunks.push(value);
      }
      reader.releaseLock();
    }
  },
});


    


    Note: file.chunks is linked to a reactive value which is passed to constructFile() when initialized.

    These are the logs I get from the code currently above:

chunk-4OF65L5M.js:2710 <suspense> is an experimental feature and its API will likely change.
(index):298 native App.addListener (#25407936)
(index):298 native FilePicker.pickVideos (#25407937)
(index):272 result FilePicker.pickVideos (#25407937)
(index):298 native VideoEditor.thumbnail (#25407938)
(index):272 result VideoEditor.thumbnail (#25407938)
Processing.vue:135 {type: 'stderr', message: 'ffmpeg version 5.1.3 Copyright (c) 2000-2022 the FFmpeg developers'}
Processing.vue:135 {type: 'stderr', message: '  built with emcc (Emscripten gcc/clang-like repla…3.1.40 (5c27e79dd0a9c4e27ef2326841698cdd4f6b5784)'}
Processing.vue:135 {type: 'stderr', message: '  configuration: --target-os=none --arch=x86_32 --…e-libfreetype --enable-libfribidi --enable-libass'}
Processing.vue:135 {type: 'stderr', message: '  libavutil      57. 28.100 / 57. 28.100'}
Processing.vue:135 {type: 'stderr', message: '  libavcodec     59. 37.100 / 59. 37.100'}
Processing.vue:135 {type: 'stderr', message: '  libavformat    59. 27.100 / 59. 27.100'}
Processing.vue:135 {type: 'stderr', message: '  libavdevice    59.  7.100 / 59.  7.100'}
Processing.vue:135 {type: 'stderr', message: '  libavfilter     8. 44.100 /  8. 44.100'}
Processing.vue:135 {type: 'stderr', message: '  libswscale      6.  7.100 /  6.  7.100'}
Processing.vue:135 {type: 'stderr', message: '  libswresample   4.  7.100 /  4.  7.100'}
Processing.vue:135 {type: 'stderr', message: '  libpostproc    56.  6.100 / 56.  6.100'}
Processing.vue:135 {type: 'stderr', message: "Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video-0_chunk-0':"}
Processing.vue:135 {type: 'stderr', message: '  Metadata:'}
Processing.vue:135 {type: 'stderr', message: '    major_brand     : mp42'}
Processing.vue:135 {type: 'stderr', message: '    minor_version   : 0'}
Processing.vue:135 {type: 'stderr', message: '    compatible_brands: isommp42'}
Processing.vue:135 {type: 'stderr', message: '    creation_time   : 2022-11-29T14:46:32.000000Z'}
Processing.vue:135 {type: 'stderr', message: '  Duration: 00:00:51.50, start: 0.000000, bitrate: 81 kb/s'}
Processing.vue:135 {type: 'stderr', message: '  Stream #0:0[0x1](und): Video: h264 (High) (avc1 …6], 259 kb/s, 30 fps, 30 tbr, 15360 tbn (default)'}
Processing.vue:135 {type: 'stderr', message: '    Metadata:'}
Processing.vue:135 {type: 'stderr', message: '      creation_time   : 2022-11-29T14:46:32.000000Z'}
Processing.vue:135 {type: 'stderr', message: '      handler_name    : ISO Media file produced by Google Inc. Created on: 11/29/2022.'}
Processing.vue:135 {type: 'stderr', message: '      vendor_id       : [0][0][0][0]'}
Processing.vue:135 {type: 'stderr', message: '  Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0…706D), 44100 Hz, stereo, fltp, 127 kb/s (default)'}
Processing.vue:135 {type: 'stderr', message: '    Metadata:'}
Processing.vue:135 {type: 'stderr', message: '      creation_time   : 2022-11-29T14:46:32.000000Z'}
Processing.vue:135 {type: 'stderr', message: '      handler_name    : ISO Media file produced by Google Inc. Created on: 11/29/2022.'}
Processing.vue:135 {type: 'stderr', message: '      vendor_id       : [0][0][0][0]'}
Processing.vue:135 {type: 'stderr', message: 'At least one output file must be specified'}
Processing.vue:135 {type: 'stderr', message: 'Aborted()'}
Processing.vue:135 {type: 'stderr', message: 'ffmpeg version 5.1.3 Copyright (c) 2000-2022 the FFmpeg developers'}
Processing.vue:135 {type: 'stderr', message: '  built with emcc (Emscripten gcc/clang-like repla…3.1.40 (5c27e79dd0a9c4e27ef2326841698cdd4f6b5784)'}
Processing.vue:135 {type: 'stderr', message: '  configuration: --target-os=none --arch=x86_32 --…e-libfreetype --enable-libfribidi --enable-libass'}
Processing.vue:135 {type: 'stderr', message: '  libavutil      57. 28.100 / 57. 28.100'}
Processing.vue:135 {type: 'stderr', message: '  libavcodec     59. 37.100 / 59. 37.100'}
Processing.vue:135 {type: 'stderr', message: '  libavformat    59. 27.100 / 59. 27.100'}
Processing.vue:135 {type: 'stderr', message: '  libavdevice    59.  7.100 / 59.  7.100'}
Processing.vue:135 {type: 'stderr', message: '  libavfilter     8. 44.100 /  8. 44.100'}
Processing.vue:135 {type: 'stderr', message: '  libswscale      6.  7.100 /  6.  7.100'}
Processing.vue:135 {type: 'stderr', message: '  libswresample   4.  7.100 /  4.  7.100'}
Processing.vue:135 {type: 'stderr', message: '  libpostproc    56.  6.100 / 56.  6.100'}
Processing.vue:135 {type: 'stderr', message: 'video-0_chunk-1: Invalid data found when processing input'}
Processing.vue:135 {type: 'stderr', message: 'Aborted()'}
Processing.vue:135 {type: 'stderr', message: 'ffmpeg version 5.1.3 Copyright (c) 2000-2022 the FFmpeg developers'}
Processing.vue:135 {type: 'stderr', message: '  built with emcc (Emscripten gcc/clang-like repla…3.1.40 (5c27e79dd0a9c4e27ef2326841698cdd4f6b5784)'}
Processing.vue:135 {type: 'stderr', message: '  configuration: --target-os=none --arch=x86_32 --…e-libfreetype --enable-libfribidi --enable-libass'}
Processing.vue:135 {type: 'stderr', message: '  libavutil      57. 28.100 / 57. 28.100'}
Processing.vue:135 {type: 'stderr', message: '  libavcodec     59. 37.100 / 59. 37.100'}
Processing.vue:135 {type: 'stderr', message: '  libavformat    59. 27.100 / 59. 27.100'}
Processing.vue:135 {type: 'stderr', message: '  libavdevice    59.  7.100 / 59.  7.100'}
Processing.vue:135 {type: 'stderr', message: '  libavfilter     8. 44.100 /  8. 44.100'}
Processing.vue:135 {type: 'stderr', message: '  libswscale      6.  7.100 /  6.  7.100'}
Processing.vue:135 {type: 'stderr', message: '  libswresample   4.  7.100 /  4.  7.100'}
Processing.vue:135 {type: 'stderr', message: '  libpostproc    56.  6.100 / 56.  6.100'}
Processing.vue:135 {type: 'stderr', message: 'video-0_chunk-2: Invalid data found when processing input'}
Processing.vue:135 {type: 'stderr', message: 'Aborted()'}
Processing.vue:135 {type: 'stderr', message: 'ffmpeg version 5.1.3 Copyright (c) 2000-2022 the FFmpeg developers'}
Processing.vue:135 {type: 'stderr', message: '  built with emcc (Emscripten gcc/clang-like repla…3.1.40 (5c27e79dd0a9c4e27ef2326841698cdd4f6b5784)'}
Processing.vue:135 {type: 'stderr', message: '  configuration: --target-os=none --arch=x86_32 --…e-libfreetype --enable-libfribidi --enable-libass'}
Processing.vue:135 {type: 'stderr', message: '  libavutil      57. 28.100 / 57. 28.100'}
Processing.vue:135 {type: 'stderr', message: '  libavcodec     59. 37.100 / 59. 37.100'}
Processing.vue:135 {type: 'stderr', message: '  libavformat    59. 27.100 / 59. 27.100'}
Processing.vue:135 {type: 'stderr', message: '  libavdevice    59.  7.100 / 59.  7.100'}
Processing.vue:135 {type: 'stderr', message: '  libavfilter     8. 44.100 /  8. 44.100'}
Processing.vue:135 {type: 'stderr', message: '  libswscale      6.  7.100 /  6.  7.100'}
Processing.vue:135 {type: 'stderr', message: '  libswresample   4.  7.100 /  4.  7.100'}
Processing.vue:135 {type: 'stderr', message: '  libpostproc    56.  6.100 / 56.  6.100'}
Processing.vue:135 {type: 'stderr', message: 'video-0_chunk-3: Invalid data found when processing input'}
Processing.vue:135 {type: 'stderr', message: 'Aborted()'}
Processing.vue:135 {type: 'stderr', message: 'ffmpeg version 5.1.3 Copyright (c) 2000-2022 the FFmpeg developers'}
Processing.vue:135 {type: 'stderr', message: '  built with emcc (Emscripten gcc/clang-like repla…3.1.40 (5c27e79dd0a9c4e27ef2326841698cdd4f6b5784)'}
Processing.vue:135 {type: 'stderr', message: '  configuration: --target-os=none --arch=x86_32 --…e-libfreetype --enable-libfribidi --enable-libass'}
Processing.vue:135 {type: 'stderr', message: '  libavutil      57. 28.100 / 57. 28.100'}
Processing.vue:135 {type: 'stderr', message: '  libavcodec     59. 37.100 / 59. 37.100'}
Processing.vue:135 {type: 'stderr', message: '  libavformat    59. 27.100 / 59. 27.100'}
Processing.vue:135 {type: 'stderr', message: '  libavdevice    59.  7.100 / 59.  7.100'}
Processing.vue:135 {type: 'stderr', message: '  libavfilter     8. 44.100 /  8. 44.100'}
Processing.vue:135 {type: 'stderr', message: '  libswscale      6.  7.100 /  6.  7.100'}
Processing.vue:135 {type: 'stderr', message: '  libswresample   4.  7.100 /  4.  7.100'}
Processing.vue:135 {type: 'stderr', message: '  libpostproc    56.  6.100 / 56.  6.100'}
Processing.vue:135 {type: 'stderr', message: 'video-0_chunk-4: Invalid data found when processing input'}
Processing.vue:135 {type: 'stderr', message: 'Aborted()'}

    Notes:

    1. The sections which start with Processing.vue come from the logging system I've set up.
    2. The pieces/chunks received from the network were stored in exactly the same order in which they arrived.
    3. If you've seen the old question, the ReferenceError happens as a result of HMR by Vite.
       - Similarly, some logs were repeated because I was actively changing some things and the component had to rerun from the start.

    Summary: If my problem is still not clear, you could instead suggest another way of fetching a large file (video) from the network, loading it into memory, and passing the file data to @ffmpeg/ffmpeg for further processing.
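
    For illustration, a minimal sketch (not from the original post) of the direction an answer could take, assuming the chunks are contiguous byte ranges of one MP4 file rather than independently playable files: only the first piece carries the container headers, so the pieces would be concatenated into a single buffer and written as one file before ffmpeg probes or processes it. The helper name writeWholeFile is hypothetical; ffmpeg is assumed to be the same loaded @ffmpeg/ffmpeg instance used by constructFile above.

// Sketch only: assumes `chunks` are contiguous byte ranges of one MP4 file,
// in arrival order, and `ffmpeg` is an already-loaded @ffmpeg/ffmpeg instance.
const writeWholeFile = async (chunks: Uint8Array[], queueId: number) => {
  // Concatenate every piece into a single buffer
  const total = chunks.reduce((sum, chunk) => sum + chunk.length, 0);
  const merged = new Uint8Array(total);
  let offset = 0;
  for (const chunk of chunks) {
    merged.set(chunk, offset);
    offset += chunk.length;
  }

  // Write one complete file and probe it once, instead of probing each piece
  const fileName = `video-${queueId}.mp4`;
  await ffmpeg.writeFile(fileName, merged);
  await ffmpeg.exec(["-i", fileName]);
  return fileName;
};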


  • ffmpeg command not working from bash script but working from command line [duplicate]

    6 October 2023, by SamTech

    I wrote a bash script that generates a command to encode a video and create an HLS playlist.

    Here is the script. To make it short and clear, I fill the variables with hard-coded values.

#!/bin/bash

# below variables are generated from some other script

basic="-hide_banner -y -i Sample.mp4 -profile:v main -crf 20 -sc_threshold 0 -g 50 -keyint_min 50 -hls_time 6"

renditions="-c:v:0 h264 -filter:v:0 scale=w=426:h=240:force_original_aspect_ratio=decrease -b:v:0 128k -maxrate:v:0 136k -bufsize:v:0 192k -c:v:1 h264 -filter:v:1 scale=w=640:h=360:force_original_aspect_ratio=decrease -b:v:1 256k -maxrate:v:1 273k -bufsize:v:1 384k -b:a 128k -c:a aac -ar 48000 -ac 2"

mapping="-map 0:v -map 0:v -map 0:a"
stream_map="-var_stream_map \"v:0,agroup:audio v:1,agroup:audio a:0,agroup:audio\""

hls="-hls_list_size 0 -f hls -hls_playlist_type vod -hls_segment_type mpegts -master_pl_name master.m3u8 -y %v_seg.m3u8"

# show final command
echo "Command:"
echo "ffmpeg ${basic} ${renditions} ${mapping} ${stream_map} ${hls}"

# execute command
ffmpeg ${basic} ${renditions} ${mapping} ${stream_map} ${hls}

    Here is the complete output from the script, along with the error it gives.

[samtech@SAM tmp]$ ./hls-debug.sh 
Command:
ffmpeg -hide_banner -y -i Sample.mp4 -profile:v main -crf 20 -sc_threshold 0 -g 50 -keyint_min 50 -hls_time 10 -c:v:0 h264 -filter:v:0 scale=w=426:h=240:force_original_aspect_ratio=decrease -b:v:0 128k -maxrate:v:0 136k -bufsize:v:0 192k -c:v:1 h264 -filter:v:1 scale=w=640:h=360:force_original_aspect_ratio=decrease -b:v:1 256k -maxrate:v:1 273k -bufsize:v:1 384k -b:a 128k -c:a aac -ar 48000 -ac 2 -map 0:v -map 0:v -map 0:a -var_stream_map "v:0,agroup:audio v:1,agroup:audio a:0,agroup:audio" -hls_list_size 0 -f hls -hls_playlist_type vod -hls_segment_type mpegts -master_pl_name master.m3u8 -y %v_seg.m3u8
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x562fabc34480] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 640x480, 997 kb/s): unspecified pixel format
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Sample.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf60.3.100
  Duration: 00:00:31.00, start: 0.002000, bitrate: 1146 kb/s
  Stream #0:0[0x1](und): Video: h264 (avc1 / 0x31637661), none, 640x480, 997 kb/s, SAR 1:1 DAR 4:3, 25 fps, 25 tbr, 12800 tbn (default)
    Metadata:
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
  Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, 5.1, fltp, 383 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
      vendor_id       : [0][0][0][0]
[NULL @ 0x562fabc69680] Unable to find a suitable output format for 'v:1,agroup:audio'
v:1,agroup:audio: Invalid argument

    And here is the output when I copy the command and run it in the terminal:

[samtech@SAM tmp]$ ffmpeg -hide_banner -y -i Sample.mp4 -profile:v main -crf 20 -sc_threshold 0 -g 50 -keyint_min 50 -hls_time 10 -c:v:0 h264 -filter:v:0 scale=w=426:h=240:force_original_aspect_ratio=decrease -b:v:0 128k -maxrate:v:0 136k -bufsize:v:0 192k -c:v:1 h264 -filter:v:1 scale=w=640:h=360:force_original_aspect_ratio=decrease -b:v:1 256k -maxrate:v:1 273k -bufsize:v:1 384k -b:a 128k -c:a aac -ar 48000 -ac 2 -map 0:v -map 0:v -map 0:a -var_stream_map "v:0,agroup:audio v:1,agroup:audio a:0,agroup:audio" -hls_list_size 0 -f hls -hls_playlist_type vod -hls_segment_type mpegts -master_pl_name master.m3u8 -y %v_seg.m3u8
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x55eeed0c7500] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 640x480, 997 kb/s): unspecified pixel format
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Sample.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf60.3.100
  Duration: 00:00:31.00, start: 0.002000, bitrate: 1146 kb/s
  Stream #0:0[0x1](und): Video: h264 (avc1 / 0x31637661), none, 640x480, 997 kb/s, SAR 1:1 DAR 4:3, 25 fps, 25 tbr, 12800 tbn (default)
    Metadata:
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
  Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, 5.1, fltp, 383 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
      vendor_id       : [0][0][0][0]
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
  Stream #0:0 -> #0:1 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #0:2 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
[libx264 @ 0x55eeed0ff5c0] using SAR=1/1
[libx264 @ 0x55eeed0ff5c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x55eeed0ff5c0] profile Main, level 1.3, 4:2:0, 8-bit
[libx264 @ 0x55eeed0ff5c0] 264 - core 164 r3095 baee400 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x1:0x111 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=7 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=50 keyint_min=26 scenecut=0 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=20.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=136 vbv_bufsize=192 crf_max=0.0 nal_hrd=none filler=0 ip_ratio=1.40 aq=1:1.00
[libx264 @ 0x55eeed106f40] using SAR=1/1
[libx264 @ 0x55eeed106f40] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x55eeed106f40] profile Main, level 2.1, 4:2:0, 8-bit
[libx264 @ 0x55eeed106f40] 264 - core 164 r3095 baee400 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x1:0x111 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=11 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=50 keyint_min=26 scenecut=0 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=20.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=273 vbv_bufsize=384 crf_max=0.0 nal_hrd=none filler=0 ip_ratio=1.40 aq=1:1.00
[mpegts @ 0x55eef094a180] frame size not set
Output #0, hls, to '%v_seg.m3u8':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf60.3.100
  Stream #0:0(und): Video: h264, yuv420p(tv, progressive), 320x240 [SAR 1:1 DAR 4:3], q=2-31, 128 kb/s, 25 fps, 90k tbn (default)
    Metadata:
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
      encoder         : Lavc60.3.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 136000/0/128000 buffer size: 192000 vbv_delay: N/A
  Stream #0:1(und): Video: h264, yuv420p(tv, progressive), 480x360 [SAR 1:1 DAR 4:3], q=2-31, 256 kb/s, 25 fps, 90k tbn (default)
    Metadata:
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
      encoder         : Lavc60.3.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 273000/0/256000 buffer size: 384000 vbv_delay: N/A
  Stream #0:2(und): Audio: aac (LC), 48000 Hz, stereo, fltp, 128 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
      vendor_id       : [0][0][0][0]
      encoder         : Lavc60.3.100 aac
[hls @ 0x55eeed0fc700] Opening '0_seg0.ts' for writing10.98 bitrate=N/A dup=370 drop=0 speed=11.2x    
[hls @ 0x55eeed0fc700] Opening '1_seg0.ts' for writing
[hls @ 0x55eeed0fc700] Opening '2_seg0.ts' for writing
[hls @ 0x55eeed0fc700] Opening '0_seg1.ts' for writing17.53 bitrate=N/A dup=370 drop=0 speed=11.8x    
[hls @ 0x55eeed0fc700] Opening '1_seg1.ts' for writing
[hls @ 0x55eeed0fc700] Opening '2_seg1.ts' for writing
[hls @ 0x55eeed0fc700] Opening '0_seg2.ts' for writing28.75 bitrate=N/A dup=370 drop=0 speed=11.6x    
[hls @ 0x55eeed0fc700] Opening '1_seg2.ts' for writing
[hls @ 0x55eeed0fc700] Opening '2_seg2.ts' for writing
[hls @ 0x55eeed0fc700] Opening '0_seg3.ts' for writing
[hls @ 0x55eeed0fc700] Opening '0_seg.m3u8' for writing
[hls @ 0x55eeed0fc700] Opening '1_seg3.ts' for writing
[hls @ 0x55eeed0fc700] Opening '1_seg.m3u8' for writing
[hls @ 0x55eeed0fc700] Opening '2_seg3.ts' for writing
[hls @ 0x55eeed0fc700] Opening '2_seg.m3u8' for writing
[hls @ 0x55eeed0fc700] Opening 'master.m3u8' for writing
frame=  775 fps=264 q=-1.0 Lq=-1.0 size=N/A time=00:00:30.97 bitrate=N/A dup=370 drop=0 speed=10.6x    
video:1227kB audio:489kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
[libx264 @ 0x55eeed0ff5c0] frame I:16    Avg QP:17.53  size:  9672
[libx264 @ 0x55eeed0ff5c0] frame P:328   Avg QP:20.42  size:   690
[libx264 @ 0x55eeed0ff5c0] frame B:431   Avg QP:18.51  size:   125
[libx264 @ 0x55eeed0ff5c0] consecutive B-frames: 20.6% 13.2%  7.4% 58.8%
[libx264 @ 0x55eeed0ff5c0] mb I  I16..4: 33.1%  0.0% 66.9%
[libx264 @ 0x55eeed0ff5c0] mb P  I16..4:  0.8%  0.0%  2.4%  P16..4: 15.3%  7.5%  4.3%  0.0%  0.0%    skip:69.7%
[libx264 @ 0x55eeed0ff5c0] mb B  I16..4:  0.1%  0.0%  0.0%  B16..8: 10.8%  1.7%  0.4%  direct: 0.4%  skip:86.6%  L0:40.4% L1:51.0% BI: 8.7%
[libx264 @ 0x55eeed0ff5c0] coded y,uvDC,uvAC intra: 70.8% 80.4% 61.6% inter: 3.8% 4.7% 0.5%
[libx264 @ 0x55eeed0ff5c0] i16 v,h,dc,p: 17% 39% 23% 21%
[libx264 @ 0x55eeed0ff5c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 15% 26% 18%  7%  7%  6%  7%  6%  7%
[libx264 @ 0x55eeed0ff5c0] i8c dc,h,v,p: 44% 31% 14% 11%
[libx264 @ 0x55eeed0ff5c0] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x55eeed0ff5c0] ref P L0: 73.7% 16.3%  7.4%  2.6%
[libx264 @ 0x55eeed0ff5c0] ref B L0: 94.3%  4.9%  0.9%
[libx264 @ 0x55eeed0ff5c0] ref B L1: 98.3%  1.7%
[libx264 @ 0x55eeed0ff5c0] kb/s:112.27
[libx264 @ 0x55eeed106f40] frame I:16    Avg QP:15.14  size: 18394
[libx264 @ 0x55eeed106f40] frame P:260   Avg QP:19.00  size:  1507
[libx264 @ 0x55eeed106f40] frame B:499   Avg QP:18.80  size:   269
[libx264 @ 0x55eeed106f40] consecutive B-frames:  9.7% 11.4%  6.2% 72.8%
[libx264 @ 0x55eeed106f40] mb I  I16..4: 36.5%  0.0% 63.5%
[libx264 @ 0x55eeed106f40] mb P  I16..4:  1.2%  0.0%  3.1%  P16..4: 15.5%  7.1%  3.2%  0.0%  0.0%    skip:69.9%
[libx264 @ 0x55eeed106f40] mb B  I16..4:  0.1%  0.0%  0.1%  B16..8: 11.4%  1.6%  0.3%  direct: 0.4%  skip:86.2%  L0:39.1% L1:53.5% BI: 7.4%
[libx264 @ 0x55eeed106f40] coded y,uvDC,uvAC intra: 65.1% 75.8% 54.1% inter: 3.1% 4.5% 0.3%
[libx264 @ 0x55eeed106f40] i16 v,h,dc,p: 35% 27% 12% 26%
[libx264 @ 0x55eeed106f40] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 18% 25% 15%  7%  8%  7%  8%  6%  6%
[libx264 @ 0x55eeed106f40] i8c dc,h,v,p: 46% 29% 14% 11%
[libx264 @ 0x55eeed106f40] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x55eeed106f40] ref P L0: 71.6% 15.9%  8.6%  3.9%
[libx264 @ 0x55eeed106f40] ref B L0: 93.9%  4.6%  1.5%
[libx264 @ 0x55eeed106f40] ref B L1: 98.3%  1.7%
[libx264 @ 0x55eeed106f40] kb/s:211.68
[aac @ 0x55eeed1009c0] Qavg: 401.380

    What is going wrong with that script?

    I have been pulling my hair out over it for the last few hours; any help would be appreciated.
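
    For context, a minimal sketch (not part of the original post) of the usual fix for this class of problem: when the final command is expanded from plain string variables, bash word-splits them but never performs quote removal, so the escaped quotes inside ${stream_map} reach ffmpeg literally and the -var_stream_map value is split into several arguments; that is why ffmpeg ends up treating 'v:1,agroup:audio' as an output file. Building the command as a bash array keeps each argument intact:

#!/bin/bash
# Sketch: keep each ffmpeg argument as its own array element so that the
# -var_stream_map value survives as a single argument.

cmd=(ffmpeg -hide_banner -y -i Sample.mp4
     -profile:v main -crf 20 -sc_threshold 0 -g 50 -keyint_min 50 -hls_time 6
     -c:v:0 h264 -filter:v:0 "scale=w=426:h=240:force_original_aspect_ratio=decrease"
     -b:v:0 128k -maxrate:v:0 136k -bufsize:v:0 192k
     -c:v:1 h264 -filter:v:1 "scale=w=640:h=360:force_original_aspect_ratio=decrease"
     -b:v:1 256k -maxrate:v:1 273k -bufsize:v:1 384k
     -b:a 128k -c:a aac -ar 48000 -ac 2
     -map 0:v -map 0:v -map 0:a
     -var_stream_map "v:0,agroup:audio v:1,agroup:audio a:0,agroup:audio"
     -hls_list_size 0 -f hls -hls_playlist_type vod -hls_segment_type mpegts
     -master_pl_name master.m3u8 %v_seg.m3u8)

# show the final command (each argument quoted as the shell will pass it)
echo "Command:"
printf '%q ' "${cmd[@]}"; echo

# execute the command
"${cmd[@]}"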


  • extracting video and data streams from MPEG2 TS over RTP in real-time

    10 January 2024, by Tejal Barnwal

    I have an H264 video stream and KLV metadata encapsulated inside an MPEG2 TS container, sent over RTP over UDP from a camera. I intend to do the following:

    1. Extract both video and data streams from RTP
    2. Process the video feed using opencv in a separate thread
    3. Process the KLV metadata in a separate thread

    My problem: what exact arguments should I provide to ffmpeg so as to read the h264 video stream and show the images frame by frame using opencv?

    With the help of some previous posts like Simultaneously map video and data streams to one subprocess pipeline in real-time, I was able to get some idea of how I could proceed to process the stream over RTP.

    I started out by using the following script:

#!/usr/bin/env python3
from asyncio import streams
from logging.handlers import QueueListener
import klvdata
import subprocess as sp
import shlex
import threading
import numpy as np
import cv2
import time
from io import BytesIO

# Video reader thread.
def video_reader(pipe):
    cols, rows = 1280, 720  # Assume we know frame size is 1280x720

    counter = 0
    while True:
        print("read image")
        raw_image = pipe.read(cols*rows*3)  # Read raw video frame

        # Break the loop when length is too small
        if len(raw_image) < cols*rows*3:
            break

        if (counter % 10) == 0:
            # Show a video frame every 10 frames
            image = np.frombuffer(raw_image, np.uint8).reshape([rows, cols, 3])
            cv2.imshow('Video', image)  # Show video image for testing
            cv2.waitKey(1)
        counter += 1
        print("image showed on window")
        time.sleep(0.25)


# https://github.com/paretech/klvdata/tree/master/klvdata
def bytes_to_int(value, signed=False):
    """Return integer given bytes."""
    return int.from_bytes(bytes(value), byteorder='big', signed=signed)


# Data reader thread (read KLV data).
def data_reader(pipe):
    key_length = 16  # Assume key length is 16 bytes.

    f = open('data.bin', 'wb')  # For testing - store the KLV data to data.bin (binary file)

    while True:
        # https://en.wikipedia.org/wiki/KLV
        # The first few bytes are the Key, much like a key in a standard hash table data structure.
        # Keys can be 1, 2, 4, or 16 bytes in length.
        # Presumably in a separate specification document you would agree on a key length for a given application.
        key = pipe.read(key_length)  # Read the key

        if len(key) < key_length:
            break  # Break the loop when length is too small
        f.write(key)  # Write data to binary file for testing

        # https://github.com/paretech/klvdata/tree/master/klvdata
        # Length field
        len_byte = pipe.read(1)

        if len(len_byte) < 1:
            break  # Break the loop when length is too small
        f.write(len_byte)  # Write data to binary file for testing

        byte_length = bytes_to_int(len_byte)

        # https://github.com/paretech/klvdata/tree/master/klvdata
        if byte_length < 128:
            # BER Short Form
            length = byte_length
            ber_len_bytes = b''
        else:
            # BER Long Form
            ber_len = byte_length - 128
            ber_len_bytes = pipe.read(ber_len)

            if len(ber_len_bytes) < ber_len:
                break  # Break the loop when length is too small
            f.write(ber_len_bytes)  # Write ber_len_bytes to binary file for testing

            length = bytes_to_int(ber_len_bytes)

        # Read the value (length bytes)
        value = pipe.read(length)
        if len(value) < length:
            break  # Break the loop when length is too small
        f.write(value)  # Write data to binary file for testing

        klv_data = key + len_byte + ber_len_bytes + value  # Concatenate key, length and data
        klv_data_as_bytes_io = BytesIO(klv_data)  # Wrap klv_data with BytesIO (before parsing)

        # Parse the KLV data
        for packet in klvdata.StreamParser(klv_data_as_bytes_io):
            metadata = packet.MetadataList()
            for key, value in metadata.items():
                print(key, value)

            print("\n")  # New line

# Execute FFmpeg as sub-process
# Map the video to stderr and map the data to stdout
process = sp.Popen(shlex.split('ffmpeg -hide_banner -loglevel quiet '          # Set loglevel to quiet to disable the prints to stderr
                               '-i "rtp://192.168.0.141:11024" '               # Input RTP stream
                               '-map 0:v -c:v rawvideo -pix_fmt bgr24 -f:v rawvideo pipe:2 '  # rawvideo format is mapped to stderr pipe (raw video codec with bgr24 pixel format)
                               '-map 0:d -c copy -copy_unknown -f:d data pipe:1 '             # Copy the data without decoding.
                               '-report'),                                     # Create a log file (because we can't see the statuses that are usually printed to stderr).
                   stdout=sp.PIPE, stderr=sp.PIPE)


# Start video reader thread (pass stderr pipe as argument).
video_thread = threading.Thread(target=video_reader, args=(process.stderr,))
video_thread.start()

# Start data reader thread (pass stdout pipe as argument).
data_thread = threading.Thread(target=data_reader, args=(process.stdout,))
data_thread.start()


# Wait for threads (and process) to finish.
video_thread.join()
data_thread.join()
process.wait()

    With the above script, I was facing two issues:

    1. The second thread resulted in an attribute error:

Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "video_data_extraction.py", line 97, in data_reader
    print(packet.MetadataList())
AttributeError: 'UnknownElement' object has no attribute 'MetadataList'

    2. Despite this, I was continuously able to see the following output on the terminal regarding reading the images:

read image
image showed on window
read image
image showed on window
read image
image showed on window
read image
image showed on window
read image
image showed on window
read image
image showed on window

    The imshow window wasn't updating properly! It seemed stuck after a few frames.

    Digging further with the help of the following command, I concluded that the video stream I am reading is H264-encoded:

    ffprobe -i rtp://192.168.0.141:11024 -show_streams -show_formats

    Output of the above command:

ffprobe version 4.2.7-0ubuntu0.1 Copyright (c) 2007-2022 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.4.0-1ubuntu1~20.04.1)
  configuration: --prefix=/usr --extra-version=0ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/aarch64-linux-gnu --incdir=/usr/include/aarch64-linux-gnu --arch=arm64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
[rtp @ 0xaaaac81ecce0] PES packet size mismatch
    Last message repeated 62 times
[NULL @ 0xaaaac81f09b0] non-existing PPS 0 referenced
[h264 @ 0xaaaac81f09b0] non-existing PPS 0 referenced
[h264 @ 0xaaaac81f09b0] decode_slice_header error
[h264 @ 0xaaaac81f09b0] no frame!
[h264 @ 0xaaaac81f09b0] non-existing PPS 0 referenced
    Last message repeated 1 times
[h264 @ 0xaaaac81f09b0] decode_slice_header error
[h264 @ 0xaaaac81f09b0] no frame!
[h264 @ 0xaaaac81f09b0] non-existing PPS 0 referenced
    Last message repeated 1 times
[h264 @ 0xaaaac81f09b0] decode_slice_header error
[h264 @ 0xaaaac81f09b0] no frame!
[h264 @ 0xaaaac81f09b0] non-existing PPS 0 referenced
    Last message repeated 1 times
[h264 @ 0xaaaac81f09b0] decode_slice_header error
[h264 @ 0xaaaac81f09b0] no frame!
[h264 @ 0xaaaac81f09b0] non-existing PPS 0 referenced
    Last message repeated 1 times
[h264 @ 0xaaaac81f09b0] decode_slice_header error
[h264 @ 0xaaaac81f09b0] no frame!
[h264 @ 0xaaaac81f09b0] non-existing PPS 0 referenced
    Last message repeated 1 times
[h264 @ 0xaaaac81f09b0] decode_slice_header error
[h264 @ 0xaaaac81f09b0] no frame!
[h264 @ 0xaaaac81f09b0] non-existing PPS 0 referenced
    Last message repeated 1 times
[h264 @ 0xaaaac81f09b0] decode_slice_header error
[h264 @ 0xaaaac81f09b0] no frame!
[h264 @ 0xaaaac81f09b0] non-existing PPS 0 referenced
    Last message repeated 1 times
[h264 @ 0xaaaac81f09b0] decode_slice_header error
[h264 @ 0xaaaac81f09b0] no frame!
[h264 @ 0xaaaac81f09b0] non-existing PPS 0 referenced
    Last message repeated 1 times
[h264 @ 0xaaaac81f09b0] decode_slice_header error
[h264 @ 0xaaaac81f09b0] no frame!
[h264 @ 0xaaaac81f09b0] non-existing PPS 0 referenced
    Last message repeated 1 times
[h264 @ 0xaaaac81f09b0] decode_slice_header error
[h264 @ 0xaaaac81f09b0] no frame!
[h264 @ 0xaaaac81f09b0] non-existing PPS 0 referenced
    Last message repeated 1 times
[h264 @ 0xaaaac81f09b0] decode_slice_header error
[h264 @ 0xaaaac81f09b0] no frame!
[h264 @ 0xaaaac81f09b0] non-existing PPS 0 referenced
    Last message repeated 1 times
[h264 @ 0xaaaac81f09b0] decode_slice_header error
[h264 @ 0xaaaac81f09b0] no frame!
[h264 @ 0xaaaac81f09b0] non-existing PPS 0 referenced
    Last message repeated 1 times
[h264 @ 0xaaaac81f09b0] decode_slice_header error
[h264 @ 0xaaaac81f09b0] no frame!
[h264 @ 0xaaaac81f09b0] non-existing PPS 0 referenced
    Last message repeated 1 times
[h264 @ 0xaaaac81f09b0] decode_slice_header error
[h264 @ 0xaaaac81f09b0] no frame!
[h264 @ 0xaaaac81f09b0] non-existing PPS 0 referenced
    Last message repeated 1 times
[h264 @ 0xaaaac81f09b0] decode_slice_header error
[h264 @ 0xaaaac81f09b0] no frame!
[rtp @ 0xaaaac81ecce0] PES packet size mismatch
[h264 @ 0xaaaac81f09b0] non-existing PPS 0 referenced
    Last message repeated 1 times
[h264 @ 0xaaaac81f09b0] decode_slice_header error
[h264 @ 0xaaaac81f09b0] no frame!
[rtp @ 0xaaaac81ecce0] PES packet size mismatch
    Last message repeated 1 times
[h264 @ 0xaaaac81f09b0] non-existing PPS 0 referenced
    Last message repeated 1 times
[h264 @ 0xaaaac81f09b0] decode_slice_header error
[h264 @ 0xaaaac81f09b0] no frame!
[rtp @ 0xaaaac81ecce0] PES packet size mismatch
    Last message repeated 187 times
Input #0, rtp, from 'rtp://192.168.0.141:11024':
  Duration: N/A, start: 1317.040656, bitrate: N/A
  Program 1 
    Stream #0:1: Video: h264 (Constrained Baseline) ([27][0][0][0] / 0x001B), yuv420p(progressive), 1280x720, 25 fps, 25 tbr, 90k tbn
    Stream #0:0: Data: klv (KLVA / 0x41564C4B)
Unsupported codec with id 100356 for input stream 0
[STREAM]
index=0
codec_name=klv
codec_long_name=SMPTE 336M Key-Length-Value (KLV) metadata
profile=unknown
codec_type=data
codec_tag_string=KLVA
codec_tag=0x41564c4b
id=N/A
r_frame_rate=0/0
avg_frame_rate=0/0
time_base=1/90000
start_pts=118533659
start_time=1317.040656
duration_ts=N/A
duration=N/A
bit_rate=N/A
max_bit_rate=N/A
bits_per_raw_sample=N/A
nb_frames=N/A
nb_read_frames=N/A
nb_read_packets=N/A
DISPOSITION:default=0
DISPOSITION:dub=0
DISPOSITION:original=0
DISPOSITION:comment=0
DISPOSITION:lyrics=0
DISPOSITION:karaoke=0
DISPOSITION:forced=0
DISPOSITION:hearing_impaired=0
DISPOSITION:visual_impaired=0
DISPOSITION:clean_effects=0
DISPOSITION:attached_pic=0
DISPOSITION:timed_thumbnails=0
[/STREAM]
[STREAM]
index=1
codec_name=h264
codec_long_name=H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
profile=Constrained Baseline
codec_type=video
codec_time_base=1/50
codec_tag_string=[27][0][0][0]
codec_tag=0x001b
width=1280
height=720
coded_width=1280
coded_height=720
has_b_frames=0
sample_aspect_ratio=N/A
display_aspect_ratio=N/A
pix_fmt=yuv420p
level=31
color_range=unknown
color_space=unknown
color_transfer=unknown
color_primaries=unknown
chroma_location=left
field_order=progressive
timecode=N/A
refs=1
is_avc=false
nal_length_size=0
id=N/A
r_frame_rate=25/1
avg_frame_rate=25/1
time_base=1/90000
start_pts=118533659
start_time=1317.040656
duration_ts=N/A
duration=N/A
bit_rate=N/A
max_bit_rate=N/A
bits_per_raw_sample=8
nb_frames=N/A
nb_read_frames=N/A
nb_read_packets=N/A
DISPOSITION:default=0
DISPOSITION:dub=0
DISPOSITION:original=0
DISPOSITION:comment=0
DISPOSITION:lyrics=0
DISPOSITION:karaoke=0
DISPOSITION:forced=0
DISPOSITION:hearing_impaired=0
DISPOSITION:visual_impaired=0
DISPOSITION:clean_effects=0
DISPOSITION:attached_pic=0
DISPOSITION:timed_thumbnails=0
[/STREAM]
[FORMAT]
filename=rtp://192.168.0.141:11024
nb_streams=2
nb_programs=1
format_name=rtp
format_long_name=RTP input
start_time=1317.040656
duration=N/A
size=N/A
bit_rate=N/A
probe_score=100
[/FORMAT]

    Further, in the log output, I see a lot of statements regarding missed packets and PES packet size mismatches:

[rtp @ 0xaaaaf31896c0] max delay reached. need to consume packet
[rtp @ 0xaaaaf31896c0] RTP: missed 98 packets
[rtp @ 0xaaaaf31896c0] Continuity check failed for pid 40 expected 14 got 10
[rtp @ 0xaaaaf31896c0] PES packet size mismatch
rtp://192.168.0.141:11024: corrupt input packet in stream 0
frame=  124 fps=2.6 q=-0.0 size=  334800kB time=00:00:05.32 bitrate=515406.0kbits/s dup=97 drop=0 speed=0.111x

    What arguments do I provide to ffmpeg, and in what order (given that my stream 0 is metadata and stream 1 is video), so as to display the images frame by frame with opencv? I would be grateful for any help that you could provide.
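
    For illustration only (not from the original post), one possible way to pin the mapping down, given the ffprobe output above where stream #0:0 is the KLV data and stream #0:1 is the video: select the streams by index rather than by type, so each pipe receives the intended stream regardless of their order inside the TS. A sketch of just the ffmpeg invocation that would go inside the sp.Popen call, untested against this particular stream, with -f given once per output:

ffmpeg -hide_banner -loglevel quiet -i "rtp://192.168.0.141:11024" \
    -map 0:1 -c:v rawvideo -pix_fmt bgr24 -f rawvideo pipe:2 \
    -map 0:0 -c copy -copy_unknown -f data pipe:1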

    Further, I also have a query regarding how ffmpeg knows that it has to first convert the RTP packets into MPEG2 TS packets before segregating the video stream and the data stream.