Advanced search

Media (0)

Keyword: - Tags -/configuration

No media matching your criteria is available on this site.

Other articles (57)

  • Support for all media types

    10 April 2011

    Unlike many other modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheets, presentations), web (HTML, CSS), LaTeX, Google Earth) (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • Submitting improvements and additional plugins

    10 April 2011

    If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the official distribution will be considered.
    You can use the development mailing list to announce it or to ask for help with building the plugin. Since MediaSPIP is based on SPIP, it is also possible to use SPIP's SPIP-zone mailing list to (...)

On other sites (5633)

  • ffmpeg piped output producing incorrect metadata frame count

    8 December 2024, by Xorgon

    The short version: using piped output from ffmpeg produces a file with incorrect metadata.

    Run ffmpeg -y -i .\test_mp4.mp4 -f avi -c:v libx264 - > output.avi to make an AVI file using the pipe output.

    Then run ffprobe -v error -count_frames -show_entries stream=duration,nb_read_frames,r_frame_rate .\output.avi.

    The output will show that the metadata does not match the actual frames contained in the video.

    Details below.

    Using Python, I am attempting to use ffmpeg to compress videos and put them in a PowerPoint. This works great; however, the video files themselves have incorrect frame counts, which can cause issues when I read from those videos in other code.

    Edit for clarification: by "frame count" I mean the metadata frame count. The actual number of frames contained in the video is correct, but querying the metadata gives an incorrect frame count.

    Having eliminated the PowerPoint aspect of the code, I've narrowed this down to the following minimal reproducing example of saving an output from an ffmpeg pipe:

from subprocess import Popen, PIPE

video_path = 'test_mp4.mp4'

ffmpeg_pipe = Popen(['ffmpeg',
                     '-y',  # Overwrite files
                     '-i', f'{video_path}',  # Input from file
                     '-f', 'avi',  # Output format
                     '-c:v', 'libx264',  # Codec
                     '-'],  # Output to pipe
                    stdout=PIPE)

new_path = "piped_video.avi"
vid_file = open(new_path, "wb")
vid_file.write(ffmpeg_pipe.stdout.read())
vid_file.close()

    I've tested several different videos. One small example video that I've tested can be found here.

    I've tried a few different codecs with the avi format, and tried libvpx with the webm format. For the avi outputs, the frame count usually reads as 1073741824 (2^30). Weirdly, for the webm format, the frame count reads as -276701161105643264.

    Edit: this issue can also be reproduced with just ffmpeg in a command prompt using the following command:
ffmpeg -y -i .\test_mp4.mp4 -f avi -c:v libx264 - > output.avi

    This is a snippet I used to read the frame count, but one could also see the error by opening the video details in Windows Explorer and seeing the total time as something like 9942 hours, 3 minutes, and 14 seconds.

    import cv2

video_path = 'test_mp4.mp4'
new_path = "piped_video.webm"

cap = cv2.VideoCapture(video_path)
print(f"Original video frame count: = {int(cap.get(cv2.CAP_PROP_FRAME_COUNT)):d}")
cap.release()

cap = cv2.VideoCapture(new_path)
print(f"Piped video frame count: = {int(cap.get(cv2.CAP_PROP_FRAME_COUNT)):d}")
cap.release()

    The error can also be observed using ffprobe with the following command: ffprobe -v error -count_frames -show_entries stream=duration,nb_read_frames,r_frame_rate .\output.avi. Note that the frame rate and number of frames counted by ffprobe do not match the duration from the metadata.

    For completeness, here is the ffmpeg output:

    ffmpeg version 2023-06-11-git-09621fd7d9-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
  libavutil      58. 13.100 / 58. 13.100
  libavcodec     60. 17.100 / 60. 17.100
  libavformat    60.  6.100 / 60.  6.100
  libavdevice    60.  2.100 / 60.  2.100
  libavfilter     9.  8.101 /  9.  8.101
  libswscale      7.  3.100 /  7.  3.100
  libswresample   4. 11.100 /  4. 11.100
  libpostproc    57.  2.100 / 57.  2.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test_mp4.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: isommp42
    creation_time   : 2022-08-10T12:54:09.000000Z
  Duration: 00:00:06.67, start: 0.000000, bitrate: 567 kb/s
  Stream #0:0[0x1](eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 384x264 [SAR 1:1 DAR 16:11], 563 kb/s, 30 fps, 30 tbr, 30k tbn (default)
    Metadata:
      creation_time   : 2022-08-10T12:54:09.000000Z
      handler_name    : Mainconcept MP4 Video Media Handler
      vendor_id       : [0][0][0][0]
      encoder         : AVC Coding
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0000018c68c8b9c0] using SAR=1/1
[libx264 @ 0000018c68c8b9c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0000018c68c8b9c0] profile High, level 2.1, 4:2:0, 8-bit
Output #0, avi, to 'pipe:':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: isommp42
    ISFT            : Lavf60.6.100
  Stream #0:0(eng): Video: h264 (H264 / 0x34363248), yuv420p(progressive), 384x264 [SAR 1:1 DAR 16:11], q=2-31, 30 fps, 30 tbn (default)
    Metadata:
      creation_time   : 2022-08-10T12:54:09.000000Z
      handler_name    : Mainconcept MP4 Video Media Handler
      vendor_id       : [0][0][0][0]
      encoder         : Lavc60.17.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
[out#0/avi @ 0000018c687f47c0] video:82kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 3.631060%
frame=  200 fps=0.0 q=-1.0 Lsize=      85kB time=00:00:06.56 bitrate= 106.5kbits/s speed=76.2x    
[libx264 @ 0000018c68c8b9c0] frame I:1     Avg QP:16.12  size:  3659
[libx264 @ 0000018c68c8b9c0] frame P:80    Avg QP:21.31  size:   647
[libx264 @ 0000018c68c8b9c0] frame B:119   Avg QP:26.74  size:   243
[libx264 @ 0000018c68c8b9c0] consecutive B-frames:  3.0% 53.0%  0.0% 44.0%
[libx264 @ 0000018c68c8b9c0] mb I  I16..4: 17.6% 70.6% 11.8%
[libx264 @ 0000018c68c8b9c0] mb P  I16..4:  0.8%  1.7%  0.6%  P16..4: 17.6%  4.6%  3.3%  0.0%  0.0%    skip:71.4%
[libx264 @ 0000018c68c8b9c0] mb B  I16..4:  0.1%  0.3%  0.2%  B16..8: 11.7%  1.4%  0.4%  direct: 0.6%  skip:85.4%  L0:32.0% L1:59.7% BI: 8.3%
[libx264 @ 0000018c68c8b9c0] 8x8 transform intra:59.6% inter:62.4%
[libx264 @ 0000018c68c8b9c0] coded y,uvDC,uvAC intra: 48.5% 0.0% 0.0% inter: 3.5% 0.0% 0.0%
[libx264 @ 0000018c68c8b9c0] i16 v,h,dc,p: 19% 39% 25% 17%
[libx264 @ 0000018c68c8b9c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 25% 30%  3%  3%  4%  4%  4%  5%
[libx264 @ 0000018c68c8b9c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 20% 16%  6%  8%  8%  8%  5%  6%
[libx264 @ 0000018c68c8b9c0] i8c dc,h,v,p: 100%  0%  0%  0%
[libx264 @ 0000018c68c8b9c0] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0000018c68c8b9c0] ref P L0: 76.2%  7.9% 11.2%  4.7%
[libx264 @ 0000018c68c8b9c0] ref B L0: 85.6% 12.9%  1.5%
[libx264 @ 0000018c68c8b9c0] ref B L1: 97.7%  2.3%
[libx264 @ 0000018c68c8b9c0] kb/s:101.19

    So the question is: why does this happen, and how can one avoid it?
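
    One plausible explanation (an assumption, not something stated in the post): the AVI (RIFF) header stores the total frame count, and ffmpeg normally seeks back to patch the header once encoding finishes. A pipe is not seekable, so the placeholder value written up front is never corrected. Below is a minimal sketch of one workaround under that assumption, letting ffmpeg write the file itself so the output stays seekable (direct_path is an arbitrary, hypothetical name):

import subprocess

video_path = 'test_mp4.mp4'
direct_path = 'direct_video.avi'  # hypothetical output name

# Let ffmpeg write to a real file: with a seekable output it can rewind
# and patch the AVI header with the true frame count after encoding.
subprocess.run(['ffmpeg',
                '-y',
                '-i', video_path,
                '-f', 'avi',
                '-c:v', 'libx264',
                direct_path],
               check=True)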

  • FFmpeg RTSP drop rate increases when frame rate is reduced

    13 April 2024, by Avishka Perera

    I need to read an RTSP stream, process the images individually in Python, and then write the images back to an RTSP stream. As the RTSP server, I am using Mediamtx [1]. For streaming, I am using FFmpeg [2].

    I have the following code that works perfectly fine. For simplification purposes, I am streaming three generated images.

    import time
import numpy as np
import subprocess

width, height = 640, 480
fps = 25
rtsp_server_address = f"rtsp://localhost:8554/mystream"

ffmpeg_cmd = [
    "ffmpeg",
    "-re",
    "-f",
    "rawvideo",
    "-pix_fmt",
    "rgb24",
    "-s",
    f"{width}x{height}",
    "-i",
    "-",
    "-r",
    str(fps),
    "-avoid_negative_ts",
    "make_zero",
    "-vcodec",
    "libx264",
    "-threads",
    "4",
    "-f",
    "rtsp",
    rtsp_server_address,
]
colors = np.array(
    [
        [255, 0, 0],
        [0, 255, 0],
        [0, 0, 255],
    ]
).reshape(3, 1, 1, 3)
images = (np.ones((3, height, width, 3)) * colors).astype(np.uint8)  # raw frames are height x width

if __name__ == "__main__":

    process = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE)
    start = time.time()
    exported = 0
    while True:
        exported += 1
        next_time = start + exported / fps
        now = time.time()
        if next_time > now:
            sleep_dur = next_time - now
            time.sleep(sleep_dur)

        image = images[exported % 3]
        image_bytes = image.tobytes()

        process.stdin.write(image_bytes)
        process.stdin.flush()

    process.stdin.close()
    process.wait()

    The issue is that I need to run this at 10 fps, because the processing step is heavy and can only manage 10 fps. As I reduce the frame rate from 25 to 10, the drop rate increases from 0% to 100%, and after a few iterations I get a BrokenPipeError: [Errno 32] Broken pipe. Refer to the appendix for the complete log.

    As an alternative, I could use OpenCV compiled from source with GStreamer [3], but I prefer using FFmpeg to keep shipping simple, since compiling OpenCV from source can be tedious and system-dependent.

    References

    [1] Mediamtx (formerly rtsp-simple-server): https://github.com/bluenviron/mediamtx

    [2] FFmpeg: https://github.com/FFmpeg/FFmpeg

    [3] Compile OpenCV with GStreamer: https://github.com/bluenviron/mediamtx?tab=readme-ov-file#opencv

    Appendix

    Creating the source stream

    To instantiate the unprocessed stream, I use the following command, which streams the content of my webcam as an RTSP stream.

    ffmpeg -video_size 1280x720 -i /dev/video0  -avoid_negative_ts make_zero -vcodec libx264 -r 10 -f rtsp rtsp://localhost:8554/webcam

    Error log

    ffmpeg version 6.1.1 Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 12.3.0 (conda-forge gcc 12.3.0-5)
  configuration: --prefix=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac --cc=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-cc --cxx=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-c++ --nm=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-nm --ar=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-ar --disable-doc --disable-openssl --enable-demuxer=dash --enable-hardcoded-tables --enable-libfreetype --enable-libharfbuzz --enable-libfontconfig --enable-libopenh264 --enable-libdav1d --enable-gnutls --enable-libmp3lame --enable-libvpx --enable-libass --enable-pthreads --enable-vaapi --enable-libopenvino --enable-gpl --enable-libx264 --enable-libx265 --enable-libaom --enable-libsvtav1 --enable-libxml2 --enable-pic --enable-shared --disable-static --enable-version3 --enable-zlib --enable-libopus --pkg-config=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/pkg-config
  libavutil      58. 29.100 / 58. 29.100
  libavcodec     60. 31.102 / 60. 31.102
  libavformat    60. 16.100 / 60. 16.100
  libavdevice    60.  3.100 / 60.  3.100
  libavfilter     9. 12.100 /  9. 12.100
  libswscale      7.  5.100 /  7.  5.100
  libswresample   4. 12.100 /  4. 12.100
  libpostproc    57.  3.100 / 57.  3.100
Input #0, rawvideo, from 'fd:':
  Duration: N/A, start: 0.000000, bitrate: 184320 kb/s
  Stream #0:0: Video: rawvideo (RGB[24] / 0x18424752), rgb24, 640x480, 184320 kb/s, 25 tbr, 25 tbn
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
[libx264 @ 0x5e2ef8b01340] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x5e2ef8b01340] profile High 4:4:4 Predictive, level 2.2, 4:4:4, 8-bit
[libx264 @ 0x5e2ef8b01340] 264 - core 164 r3095 baee400 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=4 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=10 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, rtsp, to 'rtsp://localhost:8554/mystream':
  Metadata:
    encoder         : Lavf60.16.100
  Stream #0:0: Video: h264, yuv444p(tv, progressive), 640x480, q=2-31, 10 fps, 90k tbn
    Metadata:
      encoder         : Lavc60.31.102 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
[vost#0:0/libx264 @ 0x5e2ef8b01080] Error submitting a packet to the muxer: Broken pipe
[out#0/rtsp @ 0x5e2ef8afd780] Error muxing a packet
[out#0/rtsp @ 0x5e2ef8afd780] video:1kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
frame=    1 fps=0.1 q=-1.0 Lsize=N/A time=00:00:04.70 bitrate=N/A dup=0 drop=70 speed=0.389x
[libx264 @ 0x5e2ef8b01340] frame I:16    Avg QP: 6.00  size:   147
[libx264 @ 0x5e2ef8b01340] frame P:17    Avg QP: 9.94  size:   101
[libx264 @ 0x5e2ef8b01340] frame B:17    Avg QP: 9.94  size:    64
[libx264 @ 0x5e2ef8b01340] consecutive B-frames: 50.0%  0.0% 42.0%  8.0%
[libx264 @ 0x5e2ef8b01340] mb I  I16..4: 81.3% 18.7%  0.0%
[libx264 @ 0x5e2ef8b01340] mb P  I16..4: 52.9%  0.0%  0.0%  P16..4:  0.0%  0.0%  0.0%  0.0%  0.0%    skip:47.1%
[libx264 @ 0x5e2ef8b01340] mb B  I16..4:  0.0%  5.9%  0.0%  B16..8:  0.1%  0.0%  0.0%  direct: 0.0%  skip:94.0%  L0:56.2% L1:43.8% BI: 0.0%
[libx264 @ 0x5e2ef8b01340] 8x8 transform intra:15.4% inter:100.0%
[libx264 @ 0x5e2ef8b01340] coded y,u,v intra: 0.0% 0.0% 0.0% inter: 0.0% 0.0% 0.0%
[libx264 @ 0x5e2ef8b01340] i16 v,h,dc,p: 97%  0%  3%  0%
[libx264 @ 0x5e2ef8b01340] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu:  0%  0% 100%  0%  0%  0%  0%  0%  0%
[libx264 @ 0x5e2ef8b01340] Weighted P-Frames: Y:52.9% UV:52.9%
[libx264 @ 0x5e2ef8b01340] ref P L0: 88.9%  0.0%  0.0% 11.1%
[libx264 @ 0x5e2ef8b01340] kb/s:8.27
Conversion failed!
Traceback (most recent call last):
  File "/home/avishka/projects/read-process-stream/minimal-ffmpeg-error.py", line 58, in <module>
    process.stdin.write(image_bytes)
BrokenPipeError: [Errno 32] Broken pipe

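    Not part of the original post, but one observation from the log above: the rawvideo input is still timestamped at the default 25 tbr while frames are only written at 10 fps, which matches the dup/drop counters in the log. Below is a sketch of the command list with the input rate declared to the rawvideo demuxer; the added -framerate option is an assumption to test, not a confirmed fix:

import subprocess

width, height = 640, 480
fps = 10
rtsp_server_address = "rtsp://localhost:8554/mystream"

# Same pipeline as above, except the input frame rate is declared before "-i -"
# so the rawvideo demuxer timestamps stdin frames at 10 fps instead of 25.
ffmpeg_cmd = [
    "ffmpeg",
    "-re",
    "-f", "rawvideo",
    "-pix_fmt", "rgb24",
    "-s", f"{width}x{height}",
    "-framerate", str(fps),  # input option: rate at which frames arrive on stdin
    "-i", "-",
    "-r", str(fps),
    "-avoid_negative_ts", "make_zero",
    "-vcodec", "libx264",
    "-threads", "4",
    "-f", "rtsp",
    rtsp_server_address,
]
process = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE)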

  • FFMPEG. Read frame, process it, put it to output video. Copy sound stream unchanged

    9 December 2016, by Andrey Smorodov

    I want to apply processing to a video clip with a sound track, extracting and processing it frame by frame and writing the result to an output file. The number of frames, the frame size, and the speed should remain unchanged in the output clip. I also want to keep the same audio track as in the source.

    I can read the clip, decode frames, and process them using OpenCV. Audio packets also write fine. I'm stuck on forming the output video stream.

    The minimal runnable code I have for now (sorry it's not so short, but I can't make it shorter):

    extern "C" {
    #include <libavutil/timestamp.h>
    #include <libavformat/avformat.h>
    #include "libavcodec/avcodec.h"
    #include <libavutil/opt.h>
    #include <libavdevice/avdevice.h>
    #include <libswscale/swscale.h>
    }
    #include "opencv2/opencv.hpp"

    #if LIBAVCODEC_VERSION_INT < AV_VERSION_INT(55,28,1)
    #define av_frame_alloc  avcodec_alloc_frame
    #endif

    using namespace std;
    using namespace cv;

    static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt, const char *tag)
    {
       AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;

       // Render pts/dts/duration both as raw ticks and as time strings,
       // each into its own buffer
       char buf1[AV_TS_MAX_STRING_SIZE] = { 0 };
       av_ts_make_string(buf1, pkt->pts);
       char buf2[AV_TS_MAX_STRING_SIZE] = { 0 };
       av_ts_make_string(buf2, pkt->dts);
       char buf3[AV_TS_MAX_STRING_SIZE] = { 0 };
       av_ts_make_string(buf3, pkt->duration);

       char buf4[AV_TS_MAX_STRING_SIZE] = { 0 };
       av_ts_make_time_string(buf4, pkt->pts, time_base);
       char buf5[AV_TS_MAX_STRING_SIZE] = { 0 };
       av_ts_make_time_string(buf5, pkt->dts, time_base);
       char buf6[AV_TS_MAX_STRING_SIZE] = { 0 };
       av_ts_make_time_string(buf6, pkt->duration, time_base);

       printf("pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
           buf1, buf4,
           buf2, buf5,
           buf3, buf6,
           pkt->stream_index);

    }


    int main(int argc, char **argv)
    {
       AVOutputFormat *ofmt = NULL;
       AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
       AVPacket pkt;
       AVFrame *pFrame = NULL;
       AVFrame *pFrameRGB = NULL;
       int frameFinished = 0;
       pFrame = av_frame_alloc();
       pFrameRGB = av_frame_alloc();

       const char *in_filename, *out_filename;
       int ret, i;
       in_filename = "../../TestClips/Audio Video Sync Test.mp4";
       out_filename = "out.mp4";

       // Initialize FFMPEG
       av_register_all();
       // Get input file format context
       if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0)
       {
           fprintf(stderr, "Could not open input file '%s'", in_filename);
           goto end;
       }
       // Extract streams description
       if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0)
       {
           fprintf(stderr, "Failed to retrieve input stream information");
           goto end;
       }
       // Print detailed information about the input or output format,
       // such as duration, bitrate, streams, container, programs, metadata, side data, codec and time base.
       av_dump_format(ifmt_ctx, 0, in_filename, 0);

       // Allocate an AVFormatContext for an output format.
       avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
       if (!ofmt_ctx)
       {
           fprintf(stderr, "Could not create output context\n");
           ret = AVERROR_UNKNOWN;
           goto end;
       }

       // The output container format.
       ofmt = ofmt_ctx->oformat;

       // Allocating output streams
       for (i = 0; i < ifmt_ctx->nb_streams; i++)
       {
           AVStream *in_stream = ifmt_ctx->streams[i];
           AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
           if (!out_stream)
           {
               fprintf(stderr, "Failed allocating output stream\n");
               ret = AVERROR_UNKNOWN;
               goto end;
           }
           ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
           if (ret < 0)
           {
               fprintf(stderr, "Failed to copy context from input to output stream codec context\n");
               goto end;
           }
           out_stream->codec->codec_tag = 0;
           if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
           {
               out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
           }
       }

       // Show output format info
       av_dump_format(ofmt_ctx, 0, out_filename, 1);

       // Open output file
       if (!(ofmt->flags & AVFMT_NOFILE))
       {
           ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
           if (ret < 0)
           {
               fprintf(stderr, "Could not open output file '%s'", out_filename);
               goto end;
           }
       }
       // Write output file header
       ret = avformat_write_header(ofmt_ctx, NULL);
       if (ret < 0)
       {
           fprintf(stderr, "Error occurred when opening output file\n");
           goto end;
       }

       // Search for input video codec info
       AVCodec *in_codec = nullptr;
       AVCodecContext* avctx = nullptr;

       int video_stream_index = -1;
       for (int i = 0; i < ifmt_ctx->nb_streams; i++)
       {
           if (ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
           {
               video_stream_index = i;
               avctx = ifmt_ctx->streams[i]->codec;
               in_codec = avcodec_find_decoder(avctx->codec_id);
               if (!in_codec)
               {
                   fprintf(stderr, "in codec not found\n");
                   exit(1);
               }
               break;
           }
       }

       // Search for output video codec info
       AVCodec *out_codec = nullptr;
       AVCodecContext* o_avctx = nullptr;

       int o_video_stream_index = -1;
       for (int i = 0; i < ofmt_ctx->nb_streams; i++)
       {
           if (ofmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
           {
               o_video_stream_index = i;
               o_avctx = ofmt_ctx->streams[i]->codec;
               out_codec = avcodec_find_encoder(o_avctx->codec_id);
               if (!out_codec)
               {
                   fprintf(stderr, "out codec not found\n");
                   exit(1);
               }
               break;
           }
       }

       // openCV pixel format
       AVPixelFormat pFormat = AV_PIX_FMT_RGB24;
       // Data size
       int numBytes = avpicture_get_size(pFormat, avctx->width, avctx->height);
       // allocate buffer
       uint8_t *buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
       // fill frame structure
       avpicture_fill((AVPicture *)pFrameRGB, buffer, pFormat, avctx->width, avctx->height);
       // frame area
       int y_size = avctx->width * avctx->height;
       // Open input codec
       avcodec_open2(avctx, in_codec, NULL);
       // Main loop
       while (1)
       {
           AVStream *in_stream, *out_stream;
           ret = av_read_frame(ifmt_ctx, &pkt);
           if (ret < 0)
           {
               break;
           }
           in_stream = ifmt_ctx->streams[pkt.stream_index];
           out_stream = ofmt_ctx->streams[pkt.stream_index];
           log_packet(ifmt_ctx, &pkt, "in");
           // copy packet
           pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
           pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
           pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
           pkt.pos = -1;

           log_packet(ofmt_ctx, &amp;pkt, "out");
           if (pkt.stream_index == video_stream_index)
           {
               avcodec_decode_video2(avctx, pFrame, &frameFinished, &pkt);
               if (frameFinished)
               {
                   struct SwsContext *img_convert_ctx;
                   img_convert_ctx = sws_getCachedContext(NULL,
                       avctx->width,
                       avctx->height,
                       avctx->pix_fmt,
                       avctx->width,
                       avctx->height,
                       AV_PIX_FMT_BGR24,
                       SWS_BICUBIC,
                       NULL,
                       NULL,
                       NULL);
                   sws_scale(img_convert_ctx,
                       ((AVPicture*)pFrame)->data,
                       ((AVPicture*)pFrame)->linesize,
                       0,
                       avctx->height,
                       ((AVPicture *)pFrameRGB)->data,
                       ((AVPicture *)pFrameRGB)->linesize);

                   sws_freeContext(img_convert_ctx);

                   // Do some image processing
                   cv::Mat img(pFrame->height, pFrame->width, CV_8UC3, pFrameRGB->data[0],false);
                   cv::GaussianBlur(img,img,Size(5,5),3);
                   cv::imshow("Display", img);
                   cv::waitKey(5);
                   // --------------------------------
                   // Transform back to initial format
                   // --------------------------------
                   img_convert_ctx = sws_getCachedContext(NULL,
                       avctx->width,
                       avctx->height,
                       AV_PIX_FMT_BGR24,
                       avctx->width,
                       avctx->height,
                       avctx->pix_fmt,
                       SWS_BICUBIC,
                       NULL,
                       NULL,
                       NULL);
                   sws_scale(img_convert_ctx,
                       ((AVPicture*)pFrameRGB)->data,
                       ((AVPicture*)pFrameRGB)->linesize,
                       0,
                       avctx->height,
                       ((AVPicture *)pFrame)->data,
                       ((AVPicture *)pFrame)->linesize);
                       // --------------------------------------------
                       // Something must be here
                       // --------------------------------------------
                       //
                    // Write video frame (how to write the frame to the output stream? see the sketch below)
                       //
                       // --------------------------------------------
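                        // Not from the original post: a rough sketch of one way to
                        // fill this gap, kept to the same (deprecated) API generation
                        // as the rest of the listing. It assumes the output encoder
                        // was opened earlier with avcodec_open2(o_avctx, out_codec, NULL).
                        AVPacket enc_pkt;
                        av_init_packet(&enc_pkt);
                        enc_pkt.data = NULL; // let the encoder allocate the payload
                        enc_pkt.size = 0;
                        int got_packet = 0;
                        // keep the decoded frame's timestamp for the re-encoded frame
                        pFrame->pts = av_frame_get_best_effort_timestamp(pFrame);
                        ret = avcodec_encode_video2(o_avctx, &enc_pkt, pFrame, &got_packet);
                        if (ret >= 0 && got_packet)
                        {
                            enc_pkt.stream_index = o_video_stream_index;
                            // rescale from the encoder time base to the stream time base
                            av_packet_rescale_ts(&enc_pkt, o_avctx->time_base,
                                ofmt_ctx->streams[o_video_stream_index]->time_base);
                            ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
                            av_packet_unref(&enc_pkt);
                        }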
                        sws_freeContext(img_convert_ctx);
               }

           }
           else // write sound frame
           {
               ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
           }
           if (ret < 0)
           {
               fprintf(stderr, "Error muxing packet\n");
               break;
           }
           // Decrease packet ref counter
           av_packet_unref(&pkt);
       }
       av_write_trailer(ofmt_ctx);
    end:
       avformat_close_input(&ifmt_ctx);
       // close output
       if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
       {
           avio_closep(&ofmt_ctx->pb);
       }
       avformat_free_context(ofmt_ctx);
       if (ret < 0 && ret != AVERROR_EOF)
       {
           char buf_err[AV_ERROR_MAX_STRING_SIZE] = { 0 };
           av_make_error_string(buf_err, AV_ERROR_MAX_STRING_SIZE, ret);
           fprintf(stderr, "Error occurred: %s\n", buf_err);
           return 1;
       }

       avcodec_close(avctx);
       av_free(pFrame);
       av_free(pFrameRGB);

       return 0;
    }