Advanced search

Media (1)

Keyword: - Tags -/ogg

Other articles (47)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Submitting improvements and additional plugins

    10 April 2011

    If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its inclusion in the official distribution will be considered.
    You can use the development mailing list to announce it or to ask for help with writing the plugin. Since MediaSPIP is based on SPIP, it is also possible to use SPIP's SPIP-zone mailing list to (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

On other sites (8671)

  • Java - RTSP save snapshot from Stream Packets

    9 August 2016, by Guerino Rodella

    I'm developing an application that requests snapshots from DVRs and IP cameras. The device I'm working with only offers RTSP for this, so I implemented the necessary RTSP methods to start receiving the stream packets, and I now receive them over the established UDP connection. My question is: how can I save the received data to a JPEG file? Where do the received image bytes begin and end?

    I searched for libraries that implement this kind of service in Java, like Xuggler (which is no longer maintained) and javacpp-presets, which bundles the ffmpeg and opencv libraries but gave me some environment problems. If someone knows a good, easy one that saves snapshots from streams, let me know.

    My code:

    final long timeout = System.currentTimeMillis() + 3000;

    byte[] fullImage = new byte[ 1024 * 1024 ];
    DatagramSocket udpSocket = new DatagramSocket( 8000 );
    int lastByte = 0;

    // Skip the first 2 packets because I think they are HEADERS.
    // Since I don't know what they mean, I just print them in hex.
    for( int i = 0; i < 2; i++ ){

       byte[] buffer = new byte[ 1024 ];
       DatagramPacket dataPacket = new DatagramPacket( buffer, buffer.length );
       udpSocket.receive( dataPacket );

       int dataLength = dataPacket.getLength();
       buffer = Arrays.copyOf( buffer, dataLength );

       System.out.println( "RECEIVED[" + DatatypeConverter.printHexBinary( buffer ) + " L: " + dataLength );

    }

    do{

       byte[] buffer = new byte[ 1024 ];
       DatagramPacket dataPacket = new DatagramPacket( fullImage, fullImage.length );
       udpSocket.receive( dataPacket );

       System.out.println( "RECEIVED: " + new String( fullImage ) );

       for( int i = 0; i < buffer.length; i++ ){
           fullImage[ i + lastByte ] = buffer[ i ];
           lastByte ++;

       }

    } while( System.currentTimeMillis() < timeout );
    // I know this timeout is wrong, I should stop after getting full image bytes

    The output:

    RECEIVED : 80E0000100004650000000006742E01FDA014016C4 L : 21
    RECEIVED : 80E00002000046500000000068CE30A480 L : 17
    RECEIVED : Tons of data from the streaming...
    RECEIVED : Tons of data from the streaming...
    RECEIVED : Tons of data from the streaming...
    [...]

    As you might suppose, the image I'm saving to a file is not readable, because I'm doing it wrong. I think the headers give me some info about the packets the server will send, telling me where the image bytes start and end in the stream, but I don't understand them. Does anyone know how to solve this? Any tips are welcome!
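
    For what it's worth, the packet dumps above look like plain RTP: 0x80 in the first byte is RTP version 2, the payload type is 96, and the payloads of the two "header" packets start with 0x67 and 0x68, which are H.264 SPS and PPS NAL units rather than image data. Below is a small, hedged sketch of how the 12-byte fixed RTP header (RFC 3550) could be stripped before accumulating the payload. It is written in C++ purely to illustrate the byte layout (the question is in Java, but the same arithmetic applies), and it assumes there are no CSRC entries or header extensions, which matches the dump above but is not guaranteed in general.

    // Hedged sketch, not the original code: parse the 12-byte RTP fixed header
    // (RFC 3550) of one UDP datagram and expose the payload that follows it.
    // Assumes no CSRC list and no header extension (first byte 0x80, as above).
    #include <cstddef>
    #include <cstdint>

    struct RtpView {
        bool           marker;       // set on the last packet of a frame / access unit
        uint16_t       sequence;     // detect loss and reordering with this
        uint32_t       timestamp;    // identical for all packets of one frame
        const uint8_t* payload;
        size_t         payloadSize;
    };

    bool parseRtp(const uint8_t* pkt, size_t len, RtpView& out)
    {
        const size_t kFixedHeader = 12;
        if (len <= kFixedHeader || (pkt[0] >> 6) != 2)   // RTP version must be 2
            return false;
        out.marker      = (pkt[1] & 0x80) != 0;
        out.sequence    = static_cast<uint16_t>((pkt[2] << 8) | pkt[3]);
        out.timestamp   = (static_cast<uint32_t>(pkt[4]) << 24) |
                          (static_cast<uint32_t>(pkt[5]) << 16) |
                          (static_cast<uint32_t>(pkt[6]) << 8)  |
                           static_cast<uint32_t>(pkt[7]);
        out.payload     = pkt + kFixedHeader;
        out.payloadSize = len - kFixedHeader;
        return true;
    }

    // The first payload byte & 0x1F is the H.264 NAL unit type
    // (7 = SPS, 8 = PPS, 5 = IDR slice, 28 = FU-A fragment).

    Even with the headers stripped, the payload is H.264, not JPEG: FU-A fragments still have to be reassembled into whole NAL units and prefixed with 00 00 00 01 start codes, and a keyframe has to be decoded (for example with the FFmpeg bindings in javacpp-presets/JavaCV) before a JPEG snapshot can be written.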

  • Save a stream of arrays to video using FFMPEG

    13 December 2022, by Gianluca Iacchini

    I made a simple fluid simulation using CUDA, and I'm trying to save it to a video using FFmpeg, but I get the "Finishing stream 0:0 without any data written to it" warning.

    This is how I send the data:

unsigned char* data = new unsigned char[SCR_WIDTH * SCR_HEIGHT * 4];
uchar4* pColors = new uchar4[SCR_WIDTH * SCR_HEIGHT];

for (int i = 0; i < N_FRAMES; i++)
{
    // Computes a simulation step and sets pColors with the correct values.
    on_frame(pColors, timeStepSize);
    for (int j = 0; j < SCR_WIDTH * SCR_HEIGHT * 4; j += 4)
    {
        data[j] = pColors[j].x;
        data[j+1] = pColors[j].y;
        data[j+2] = pColors[j].z;
        data[j+3] = pColors[j].w;
    }
    std::cout.write(reinterpret_cast<const char*>(data), SCR_WIDTH * SCR_HEIGHT * 4);
}
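
    For comparison, here is a minimal, hedged sketch of the same loop with pColors indexed per pixel (j / 4) instead of per byte, and with stdout flushed after every frame. Whether that is the intended memory layout is an assumption on my part; SCR_WIDTH, SCR_HEIGHT, N_FRAMES, on_frame() and timeStepSize are taken from the question as-is.

// Hedged sketch, not the original code: pack one pixel into 4 output bytes,
// then push the whole frame to stdout so ffmpeg receives it immediately.
// Assumes the question's SCR_WIDTH, SCR_HEIGHT, N_FRAMES and on_frame().
#include <cuda_runtime.h>   // uchar4 (the simulation already uses the CUDA toolkit)
#include <iostream>

void stream_frames(unsigned char* data, uchar4* pColors, float timeStepSize)
{
    for (int i = 0; i < N_FRAMES; i++)
    {
        on_frame(pColors, timeStepSize);              // fills pColors (question's function)
        for (int j = 0; j < SCR_WIDTH * SCR_HEIGHT * 4; j += 4)
        {
            const uchar4& px = pColors[j / 4];        // j is a byte index, j / 4 the pixel index
            data[j]     = px.x;                       // R
            data[j + 1] = px.y;                       // G
            data[j + 2] = px.z;                       // B
            data[j + 3] = px.w;                       // A
        }
        std::cout.write(reinterpret_cast<const char*>(data), SCR_WIDTH * SCR_HEIGHT * 4);
        std::cout.flush();                            // hand the frame to the pipe right away
    }
}

    With the original per-byte indexing, pColors[j] reads far past the end of the pColors array, which can crash the program before anything reaches the pipe; that would be consistent with ffmpeg reporting 0 bytes read.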

    And then I pass it to FFmpeg using the following command:

    ./simulation.o | ffmpeg -y -f rawvideo -pixel_format rgba -video_size 1024x1024 -i - -c:v libx264 -pix_fmt yuv444p -crf 0 video.mp4

    This works fine if I hard-code the values (e.g. if I set data[j] = 255 I get a red screen, as expected), but when I use the pColors variable I get the following message from FFmpeg:

    Finishing stream 0:0 without any data written to it.

    Even though both pColors and data hold the correct values.

    Here is the full report from FFmpeg:

    ffmpeg started on 2022-12-13 at 14:28:34
Report written to "ffmpeg-20221213-142834.log"
Command line:
ffmpeg -y -f rawvideo -report -pixel_format rgba -video_size 128x128 -i - -c:v libx264 -pix_fmt yuv444p -crf 0 video9.mp4
ffmpeg version 3.4.11-0ubuntu0.1 Copyright (c) 2000-2022 the FFmpeg developers
  built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
  configuration: --prefix=/usr --extra-version=0ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chrom
  libavutil      55. 78.100 / 55. 78.100
  libavcodec     57.107.100 / 57.107.100
  libavformat    57. 83.100 / 57. 83.100
  libavdevice    57. 10.100 / 57. 10.100
  libavfilter     6.107.100 /  6.107.100
  libavresample   3.  7.  0 /  3.  7.  0
  libswscale      4.  8.100 /  4.  8.100
  libswresample   2.  9.100 /  2.  9.100
  libpostproc    54.  7.100 / 54.  7.100
Splitting the commandline.
Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'rawvideo'.
Reading option '-report' ... matched as option 'report' (generate a report) with argument '1'.
Reading option '-pixel_format' ... matched as AVOption 'pixel_format' with argument 'rgba'.
Reading option '-video_size' ... matched as AVOption 'video_size' with argument '128x128'.
Reading option '-i' ... matched as input url with argument '-'.
Reading option '-c:v' ... matched as option 'c' (codec name) with argument 'libx264'.
Reading option '-pix_fmt' ... matched as option 'pix_fmt' (set pixel format) with argument 'yuv444p'.
Reading option '-crf' ... matched as AVOption 'crf' with argument '0'.
Reading option 'video9.mp4' ... matched as output url.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option y (overwrite output files) with argument 1.
Applying option report (generate a report) with argument 1.
Successfully parsed a group of options.
Parsing a group of options: input url -.
Applying option f (force format) with argument rawvideo.
Successfully parsed a group of options.
Opening an input file: -.
[rawvideo @ 0x558eba7b0000] Opening 'pipe:' for reading
[pipe @ 0x558eba78a080] Setting default whitelist 'crypto'
[rawvideo @ 0x558eba7b0000] Before avformat_find_stream_info() pos: 0 bytes read:0 seeks:0 nb_streams:1
[rawvideo @ 0x558eba7b0000] After avformat_find_stream_info() pos: 0 bytes read:0 seeks:0 frames:0
Input #0, rawvideo, from 'pipe:':
  Duration: N/A, bitrate: 13107 kb/s
    Stream #0:0, 0, 1/25: Video: rawvideo (RGBA / 0x41424752), rgba, 128x128, 13107 kb/s, 25 tbr, 25 tbn, 25 tbc
Successfully opened the file.
Parsing a group of options: output url video9.mp4.
Applying option c:v (codec name) with argument libx264.
Applying option pix_fmt (set pixel format) with argument yuv444p.
Successfully parsed a group of options.
Opening an output file: video9.mp4.
[file @ 0x558eba78a200] Setting default whitelist 'file,crypto'
Successfully opened the file.
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
No more output streams to write to, finishing.
Finishing stream 0:0 without any data written to it.
detected 2 logical cores
[graph 0 input from stream 0:0 @ 0x558eba7a4a00] Setting 'video_size' to value '128x128'
[graph 0 input from stream 0:0 @ 0x558eba7a4a00] Setting 'pix_fmt' to value '28'
[graph 0 input from stream 0:0 @ 0x558eba7a4a00] Setting 'time_base' to value '1/25'
[graph 0 input from stream 0:0 @ 0x558eba7a4a00] Setting 'pixel_aspect' to value '0/1'
[graph 0 input from stream 0:0 @ 0x558eba7a4a00] Setting 'sws_param' to value 'flags=2'
[graph 0 input from stream 0:0 @ 0x558eba7a4a00] Setting 'frame_rate' to value '25/1'
[graph 0 input from stream 0:0 @ 0x558eba7a4a00] w:128 h:128 pixfmt:rgba tb:1/25 fr:25/1 sar:0/1 sws_param:flags=2
[format @ 0x558eba7a4b40] compat: called with args=[yuv444p]
[format @ 0x558eba7a4b40] Setting 'pix_fmts' to value 'yuv444p'
[auto_scaler_0 @ 0x558eba7a4be0] Setting 'flags' to value 'bicubic'
[auto_scaler_0 @ 0x558eba7a4be0] w:iw h:ih flags:'bicubic' interl:0
[format @ 0x558eba7a4b40] auto-inserting filter 'auto_scaler_0' between the filter 'Parsed_null_0' and the filter 'format'
[AVFilterGraph @ 0x558eba76d500] query_formats: 4 queried, 2 merged, 1 already done, 0 delayed
[auto_scaler_0 @ 0x558eba7a4be0] w:128 h:128 fmt:rgba sar:0/1 -> w:128 h:128 fmt:yuv444p sar:0/1 flags:0x4
[libx264 @ 0x558eba7cf900] using mv_range_thread = 24
[libx264 @ 0x558eba7cf900] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2 AVX512
[libx264 @ 0x558eba7cf900] profile High 4:4:4 Predictive, level 1.1, 4:4:4 8-bit
[libx264 @ 0x558eba7cf900] 264 - core 152 r2854 e9a5903 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x1:0x111 me=hex subme=7 psy=0 mixed_ref=1 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=0 chroma_qp_offset=0 threads=3 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc=cqp mbtree=0 qp=0
Output #0, mp4, to 'video9.mp4':
  Metadata:
    encoder         : Lavf57.83.100
    Stream #0:0, 0, 1/12800: Video: h264 (libx264) (avc1 / 0x31637661), yuv444p, 128x128, q=-1--1, 25 fps, 12800 tbn, 25 tbc
    Metadata:
      encoder         : Lavc57.107.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
frame=    0 fps=0.0 q=0.0 Lsize=       0kB time=00:00:00.00 bitrate=N/A speed=   0x    
video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Input file #0 (pipe:):
  Input stream #0:0 (video): 0 packets read (0 bytes); 0 frames decoded; 
  Total: 0 packets (0 bytes) demuxed
Output file #0 (video9.mp4):
  Output stream #0:0 (video): 0 frames encoded; 0 packets muxed (0 bytes); 
  Total: 0 packets (0 bytes) muxed
0 frames successfully decoded, 0 decoding errors
[AVIOContext @ 0x558eba7b4120] Statistics: 2 seeks, 3 writeouts
[AVIOContext @ 0x558eba7b4000] Statistics: 0 bytes read, 0 seeks

    I've never used FFmpeg before, so I'm having a hard time finding my mistake.

  • What's wrong with how I save a vector of AVFrames as an MP4 video using the h264 encoder?

    8 April 2023, by nokla

    I am trying to encode a vector of AVFrames to an MP4 file using the h264 codec.

    The code runs without errors, but when I try to open the saved video file with either Windows Media Player or Adobe Media Encoder, it says the format is unsupported.

    I went through it with a debugger and everything seemed to work fine.

    This is the function I used to save the video:

void SaveVideo(std::string& output_filename, std::vector<AVFrame> video)
{
    // Initialize FFmpeg
    avformat_network_init();

    // Open the output file context
    AVFormatContext* format_ctx = nullptr;
    int ret = avformat_alloc_output_context2(&format_ctx, nullptr, nullptr, output_filename.c_str());
    if (ret < 0) {
        wxMessageBox("Error creating output context: ");
        wxMessageBox(av_err2str(ret));
        return;
    }

    // Open the output file
    ret = avio_open(&format_ctx->pb, output_filename.c_str(), AVIO_FLAG_WRITE);
    if (ret < 0) {
        std::cerr << "Error opening output file: " << av_err2str(ret) << std::endl;
        avformat_free_context(format_ctx);
        return;
    }

    // Create the video stream
    const AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    if (!codec) {
        std::cerr << "Error finding H.264 encoder" << std::endl;
        avformat_free_context(format_ctx);
        return;
    }

    AVStream* stream = avformat_new_stream(format_ctx, codec);
    if (!stream) {
        std::cerr << "Error creating output stream" << std::endl;
        avformat_free_context(format_ctx);
        return;
    }

    // Set the stream parameters
    stream->codecpar->codec_id = AV_CODEC_ID_H264;
    stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    stream->codecpar->width = video.front().width;
    stream->codecpar->height = video.front().height;
    stream->codecpar->format = AV_PIX_FMT_YUV420P;
    stream->codecpar->bit_rate = 400000;
    AVRational framerate = { 1, 30 };
    stream->time_base = av_inv_q(framerate);

    // Open the codec context
    AVCodecContext* codec_ctx = avcodec_alloc_context3(codec);
    codec_ctx->codec_tag = 0;
    codec_ctx->time_base = stream->time_base;
    codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    if (!codec_ctx) {
        std::cout << "Error allocating codec context" << std::endl;
        avformat_free_context(format_ctx);
        return;
    }

    ret = avcodec_parameters_to_context(codec_ctx, stream->codecpar);
    if (ret < 0) {
        std::cout << "Error setting codec context parameters: " << av_err2str(ret) << std::endl;
        avcodec_free_context(&codec_ctx);
        avformat_free_context(format_ctx);
        return;
    }
    AVDictionary* opt = NULL;
    ret = avcodec_open2(codec_ctx, codec, &opt);
    if (ret < 0) {
        wxMessageBox("Error opening codec: ");
        wxMessageBox(av_err2str(ret));
        avcodec_free_context(&codec_ctx);
        avformat_free_context(format_ctx);
        return;
    }

    // Allocate a buffer for the frame data
    AVFrame* frame = av_frame_alloc();
    if (!frame) {
        std::cerr << "Error allocating frame" << std::endl;
        avcodec_free_context(&codec_ctx);
        avformat_free_context(format_ctx);
        return;
    }

    frame->format = codec_ctx->pix_fmt;
    frame->width = codec_ctx->width;
    frame->height = codec_ctx->height;

    ret = av_frame_get_buffer(frame, 0);
    if (ret < 0) {
        std::cerr << "Error allocating frame buffer: " << av_err2str(ret) << std::endl;
        av_frame_free(&frame);
        avcodec_free_context(&codec_ctx);
        avformat_free_context(format_ctx);
        return;
    }

    // Allocate a buffer for the converted frame data
    AVFrame* converted_frame = av_frame_alloc();
    if (!converted_frame) {
        std::cerr << "Error allocating converted frame" << std::endl;
        av_frame_free(&frame);
        avcodec_free_context(&codec_ctx);
        avformat_free_context(format_ctx);
        return;
    }

    converted_frame->format = AV_PIX_FMT_YUV420P;
    converted_frame->width = codec_ctx->width;
    converted_frame->height = codec_ctx->height;

    ret = av_frame_get_buffer(converted_frame, 0);
    if (ret < 0) {
        std::cerr << "Error allocating converted frame buffer: " << av_err2str(ret) << std::endl;
        av_frame_free(&frame);
        av_frame_free(&converted_frame);
        avcodec_free_context(&codec_ctx);
        avformat_free_context(format_ctx);
        return;
    }

    // Initialize the converter
    SwsContext* converter = sws_getContext(
        codec_ctx->width, codec_ctx->height, codec_ctx->pix_fmt,
        codec_ctx->width, codec_ctx->height, AV_PIX_FMT_YUV420P,
        SWS_BICUBIC, nullptr, nullptr, nullptr
    );
    if (!converter) {
        std::cerr << "Error initializing converter" << std::endl;
        av_frame_free(&frame);
        av_frame_free(&converted_frame);
        avcodec_free_context(&codec_ctx);
        avformat_free_context(format_ctx);
        return;
    }

    // Write the header to the output file
    ret = avformat_write_header(format_ctx, nullptr);
    if (ret < 0) {
        std::cerr << "Error writing header to output file: " << av_err2str(ret) << std::endl;
        av_frame_free(&frame);
        av_frame_free(&converted_frame);
        sws_freeContext(converter);
        avcodec_free_context(&codec_ctx);
        avformat_free_context(format_ctx);
        return;
    }

    // Iterate over the frames and write them to the output file
    int frame_count = 0;
    for (auto& frame: video) {
        {
            // Convert the frame to the output format
            sws_scale(converter,
                srcFrame.data, srcFrame.linesize, 0, srcFrame.height,
                converted_frame->data, converted_frame->linesize
            );

            // Set the frame properties
            converted_frame->pts = av_rescale_q(frame_count, stream->time_base, codec_ctx->time_base);
            frame_count++;
            //converted_frame->time_base.den = codec_ctx->time_base.den;
            //converted_frame->time_base.num = codec_ctx->time_base.num;
            // Encode the frame and write it to the output
            ret = avcodec_send_frame(codec_ctx, converted_frame);
            if (ret < 0) {
                std::cerr << "Error sending frame for encoding: " << av_err2str(ret) << std::endl;
                av_frame_free(&frame);
                av_frame_free(&converted_frame);
                sws_freeContext(converter);
                avcodec_free_context(&codec_ctx);
                avformat_free_context(format_ctx);
                return;
            }
            AVPacket* pkt = av_packet_alloc();
            if (!pkt) {
                std::cerr << "Error allocating packet" << std::endl;
                return;
            }
            while (ret >= 0) {
                ret = avcodec_receive_packet(codec_ctx, pkt);
                if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                    std::string a = av_err2str(ret);
                    break;
                }
                else if (ret < 0) {
                    wxMessageBox("Error during encoding");
                    wxMessageBox(av_err2str(ret));
                    av_packet_unref(pkt);
                    av_frame_free(&frame);
                    av_frame_free(&converted_frame);
                    sws_freeContext(converter);
                    avcodec_free_context(&codec_ctx);
                    avformat_free_context(format_ctx);
                    return;
                }

                // Write the packet to the output file
                av_packet_rescale_ts(pkt, codec_ctx->time_base, stream->time_base);
                pkt->stream_index = stream->index;
                ret = av_interleaved_write_frame(format_ctx, pkt);
                av_packet_unref(pkt);
                if (ret < 0) {
                    std::cerr << "Error writing packet to output file: " << av_err2str(ret) << std::endl;
                    av_frame_free(&frame);
                    av_frame_free(&converted_frame);
                    sws_freeContext(converter);
                    avcodec_free_context(&codec_ctx);
                    avformat_free_context(format_ctx);
                    return;
                }
            }
        }
    }

    // Flush the encoder
    ret = avcodec_send_frame(codec_ctx, nullptr);
    if (ret < 0) {
        std::cerr << "Error flushing encoder: " << av_err2str(ret) << std::endl;
        av_frame_free(&frame);
        av_frame_free(&converted_frame);
        sws_freeContext(converter);
        avcodec_free_context(&codec_ctx);
        avformat_free_context(format_ctx);
        return;
    }

    while (ret >= 0) {
        AVPacket* pkt = av_packet_alloc();
        if (!pkt) {
            std::cerr << "Error allocating packet" << std::endl;
            return;
        }
        ret = avcodec_receive_packet(codec_ctx, pkt);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
            wxMessageBox("Error recieving packet");
            wxMessageBox(av_err2str(ret));
            break;
        }
        else if (ret < 0) {
            std::cerr << "Error during encoding: " << av_err2str(ret) << std::endl;
            av_packet_unref(pkt);
            av_frame_free(&frame);
            av_frame_free(&converted_frame);
            sws_freeContext(converter);
            avcodec_free_context(&codec_ctx);
            avformat_free_context(format_ctx);
            return;
        }

        // Write the packet to the output file
        av_packet_rescale_ts(pkt, codec_ctx->time_base, stream->time_base);
        pkt->stream_index = stream->index;
        ret = av_interleaved_write_frame(format_ctx, pkt);
        av_packet_unref(pkt);
        if (ret < 0) {
            std::cerr << "Error writing packet to output file: " << av_err2str(ret) << std::endl;
            av_frame_free(&frame);
            av_frame_free(&converted_frame);
            sws_freeContext(converter);
            avcodec_free_context(&codec_ctx);
            avformat_free_context(format_ctx);
            return;
        }
    }

    // Write the trailer to the output file
    ret = av_write_trailer(format_ctx);
    if (ret < 0) {
        std::cerr << "Error writing trailer to output file: " << av_err2str(ret) << std::endl;
    }

    // Free all resources
    av_frame_free(&frame);
    av_frame_free(&converted_frame);
    sws_freeContext(converter);
    avcodec_free_context(&codec_ctx);
    avformat_free_context(format_ctx);
}
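
    For reference, here is a minimal, hedged sketch of the encoder setup order commonly shown in FFmpeg's muxing examples (FFmpeg 4.x/5.x assumed, names reused from the question): the codec context is configured and opened first, and the stream parameters are then copied from it with avcodec_parameters_from_context(), so the MP4 muxer receives the encoder's extradata (SPS/PPS). In the function above the copy goes the other way, before the encoder is opened, so stream->codecpar never gets that extradata; this sketch shows one possible cause, not a guaranteed drop-in fix.

// Hedged sketch, not the original code: configure and open the encoder first,
// then copy its parameters (including extradata) into the stream, so the MP4
// muxer can write a valid avcC box. Assumes FFmpeg 4.x/5.x and the same
// codec / format_ctx / stream variables as in the question.
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}

static int setup_encoder(AVFormatContext* format_ctx, const AVCodec* codec,
                         AVStream* stream, int width, int height,
                         AVCodecContext** out_ctx)
{
    AVCodecContext* ctx = avcodec_alloc_context3(codec);
    if (!ctx) return AVERROR(ENOMEM);

    ctx->width     = width;
    ctx->height    = height;
    ctx->pix_fmt   = AV_PIX_FMT_YUV420P;
    ctx->time_base = AVRational{ 1, 30 };             // 30 fps, as in the question
    ctx->framerate = AVRational{ 30, 1 };
    ctx->bit_rate  = 400000;
    if (format_ctx->oformat->flags & AVFMT_GLOBALHEADER)
        ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;    // only when the muxer asks for it

    int ret = avcodec_open2(ctx, codec, nullptr);
    if (ret < 0) { avcodec_free_context(&ctx); return ret; }

    // Copy the *opened* encoder's parameters, extradata included, to the stream.
    ret = avcodec_parameters_from_context(stream->codecpar, ctx);
    if (ret < 0) { avcodec_free_context(&ctx); return ret; }
    stream->time_base = ctx->time_base;               // the muxer may still adjust this

    *out_ctx = ctx;
    return 0;
}

    With this ordering, the pts passed to avcodec_send_frame() is simply the frame index in codec_ctx->time_base units (frame_count when time_base is 1/30), and av_packet_rescale_ts() converts it to the stream time base before muxing, as the function above already does.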

    ** I know this is not the prettiest way to write this code; I just wanted to try something like this.

    ** This is an altered version of the function, as the original one was inside a class. I changed it so you could compile it, but it might have some errors if I forgot to change something.

    Any help would be appreciated.
