Advanced search

Media (1)

Keyword: - Tags -/Christian Nold

Other articles (69)

  • Customizing by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including its exact version; as precise an explanation of the problem as possible; if possible, the steps that led to the problem; and a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (10327)

  • "moov atom not found" when using av_interleaved_write_frame but not avio_write

    9 October 2017, by icStatic

    I am attempting to put together a class that can take arbitrary frames and construct a video from them using the ffmpeg 3.3.3 API. I’ve been struggling to find a good example of this, as the available examples still seem to use deprecated functions, so I’ve attempted to piece it together from the documentation in the headers and by referring to a few GitHub repos that seem to be using the new version.
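
    For reference, the non-deprecated encoding path in this generation of the API is the avcodec_send_frame / avcodec_receive_packet pair. A minimal sketch of that loop, separate from the code below (EncodeAndWrite and its parameters are illustrative names; error handling is reduced to the essentials, and timestamp rescaling and stream_index assignment are omitted for brevity):

     extern "C" {
     #include <libavcodec/avcodec.h>
     #include <libavformat/avformat.h>
     }

     int EncodeAndWrite( AVCodecContext* Codec, AVFormatContext* Format, AVFrame* Frame )
     {
        // Passing nullptr as Frame puts the encoder into flush mode at end of stream.
        int Result = avcodec_send_frame( Codec, Frame );
        if( Result < 0 && Result != AVERROR_EOF )
        {
            return Result;
        }

        while( true )
        {
            AVPacket Packet = {};
            av_init_packet( &Packet );

            Result = avcodec_receive_packet( Codec, &Packet );
            if( Result == AVERROR(EAGAIN) || Result == AVERROR_EOF )
            {
                return 0; // the encoder needs more input, or is fully flushed
            }
            if( Result < 0 )
            {
                return Result; // a genuine error
            }

            // av_interleaved_write_frame takes ownership of the packet,
            // so no av_packet_unref is needed here.
            Result = av_interleaved_write_frame( Format, &Packet );
            if( Result < 0 )
            {
                return Result;
            }
        }
     }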

    If I use av_interleaved_write_frame to write the encoded packets to the output, then ffprobe outputs the following:

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0000000002760120] moov atom not found0
    X:\Diagnostics.mp4: Invalid data found when processing input

    ffplay is unable to play the file generated using this method.

    If I instead swap it out for a call to avio_write, ffprobe outputs:

    Input #0, h264, from 'X:\Diagnostics.mp4':
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: h264 (Main), yuv420p(progressive), 672x380 [SAR 1:1 DAR 168:95], 25 fps, 25 tbr, 1200k tbn, 50 tbc

    ffplay can mostly play this file until it gets towards the end, when it outputs:

    Input #0, h264, from 'X:\Diagnostics.mp4':    0KB sq=    0B f=0/0
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: h264 (Main), yuv420p(progressive), 672x380 [SAR 1:1 DAR 168:95], 25 fps, 25 tbr, 1200k tbn, 50 tbc
    [h264 @ 000000000254ef80] error while decoding MB 31 22, bytestream -65
    [h264 @ 000000000254ef80] concealing 102 DC, 102 AC, 102 MV errors in I frame
       nan M-V:    nan fd=   1 aq=    0KB vq=    0KB sq=    0B f=0/0

    VLC cannot play the files from either method. The second method’s file displays a single black frame and then hides the video output; the first does not display anything. Neither reports a video duration.

    Does anyone have any idea what’s happening here? I assume my solution is close to working, as I’m getting a good chunk of valid frames coming through.
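
    For context: with the mp4 muxer, the moov atom is only emitted by av_write_trailer, while avio_write pushes raw bytes past the muxer altogether, which is consistent with ffprobe identifying the second file as a bare h264 stream. A minimal sketch of the call order the muxer expects (WriteMp4 and its parameters are illustrative names):

     extern "C" {
     #include <libavformat/avformat.h>
     }

     static int WriteMp4( AVFormatContext* FormatContext, AVPacket* Packet )
     {
        int Result = avformat_write_header( FormatContext, nullptr );
        if( Result < 0 )
        {
            return Result;
        }

        // ...one av_interleaved_write_frame call per encoded packet...
        Result = av_interleaved_write_frame( FormatContext, Packet );
        if( Result < 0 )
        {
            return Result;
        }

        // The moov atom (the index ffprobe is looking for) is written here;
        // a file that never reaches the trailer will not contain one.
        return av_write_trailer( FormatContext );
     }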

    Code:

    void main()
    {
       OutputStream Stream( "Output.mp4", 672, 380, 25, true );
       Stream.Initialize();

       int i = 100;
       while( i-- )
       {
           //... Generate a frame

           Stream.WriteFrame( Frame );
       }
       Stream.CloseFile();
    }

    OutputStream::OutputStream( const std::string& Path, unsigned int Width, unsigned int Height, int Framerate, bool IsBGR )
    : Stream()
    , FrameIndex( 0 )
    {
       auto& ID = *m_InternalData;

       ID.Path = Path;

       ID.Width = Width;
       ID.Height= Height;
       ID.Framerate.num = Framerate;
       ID.Framerate.den = 1;

       ID.PixelFormat = IsBGR ? AV_PIX_FMT_BGR24 : AV_PIX_FMT_RGB24;
       ID.CodecID = AV_CODEC_ID_H264;
       ID.CodecTag = 0;

       ID.AspectRatio.num = 1;
       ID.AspectRatio.den = 1;
    }

    CameraStreamError OutputStream::Initialize()
    {
       av_log_set_callback( &InputStream::LogCallback );
       av_register_all();
       avformat_network_init();

       auto& ID = *m_InternalData;

       av_init_packet( &ID.Packet );

       int Result = avformat_alloc_output_context2( &ID.FormatContext, nullptr, nullptr, ID.Path.c_str() );
       if( Result < 0 || !ID.FormatContext )
       {
           STREAM_ERROR( UnknownError );
       }

       AVCodec* Encoder = avcodec_find_encoder( ID.CodecID );

       if( !Encoder )
       {
           STREAM_ERROR( NoH264Support );
       }

       AVStream* OutStream = avformat_new_stream( ID.FormatContext, Encoder );
       if( !OutStream )
       {
           STREAM_ERROR( UnknownError );
       }

       ID.CodecContext = avcodec_alloc_context3( Encoder );
       if( !ID.CodecContext )
       {
           STREAM_ERROR( NoH264Support );
       }

       ID.CodecContext->time_base = av_inv_q(ID.Framerate);

       {
           AVCodecParameters* CodecParams = OutStream->codecpar;

           CodecParams->width = ID.Width;
           CodecParams->height = ID.Height;
           CodecParams->format = AV_PIX_FMT_YUV420P;
           CodecParams->codec_id = ID.CodecID;
           CodecParams->codec_type = AVMEDIA_TYPE_VIDEO;
           CodecParams->profile = FF_PROFILE_H264_MAIN;
           CodecParams->level = 40;

           Result = avcodec_parameters_to_context( ID.CodecContext, CodecParams );
           if( Result < 0 )
           {
               STREAM_ERROR( EncoderCreationError );
           }
       }

       if( ID.IsVideo )
       {
           ID.CodecContext->width = ID.Width;
           ID.CodecContext->height = ID.Height;
           ID.CodecContext->sample_aspect_ratio = ID.AspectRatio;
           ID.CodecContext->time_base = av_inv_q(ID.Framerate);

           if( Encoder->pix_fmts )
           {
               ID.CodecContext->pix_fmt = Encoder->pix_fmts[0];
           }
           else
           {
               ID.CodecContext->pix_fmt = ID.PixelFormat;
           }
       }
       //Snip

       Result = avcodec_open2( ID.CodecContext, Encoder, nullptr );
       if( Result < 0 )
       {
           STREAM_ERROR( EncoderCreationError );
       }

       Result = avcodec_parameters_from_context( OutStream->codecpar, ID.CodecContext );
       if( Result < 0 )
       {
           STREAM_ERROR( EncoderCreationError );
       }

       if( ID.FormatContext->oformat->flags & AVFMT_GLOBALHEADER )
       {
           ID.CodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }

       OutStream->time_base = ID.CodecContext->time_base;
       OutStream->avg_frame_rate= av_inv_q(OutStream->time_base);

       if( !( ID.FormatContext->oformat->flags & AVFMT_NOFILE ) )
       {
           Result = avio_open( &ID.FormatContext->pb, ID.Path.c_str(), AVIO_FLAG_WRITE );
           if( Result < 0 )
           {
               STREAM_ERROR( FileNotWriteable );
           }
       }

       Result = avformat_write_header( ID.FormatContext, nullptr );
       if( Result < 0 )
       {
           STREAM_ERROR( WriteFailed );
       }

       ID.Output = std::make_unique<FFMPEG::Frame>( ID.CodecContext->width, ID.CodecContext->height, ID.CodecContext->pix_fmt );

       ID.ConversionContext = sws_getCachedContext(
           ID.ConversionContext,
           ID.Width,
           ID.Height,
           ID.PixelFormat,
           ID.CodecContext->width,
           ID.CodecContext->height,
           ID.CodecContext->pix_fmt,
           SWS_BICUBIC,
           NULL,
           NULL,
           NULL );

       return CameraStreamError::Success;
    }

    CameraStreamError OutputStream::WriteFrame( FFMPEG::Frame* Frame )
    {
       auto& ID = *m_InternalData;

       ID.Output->Prepare();

       int OutputSliceSize = sws_scale( m_InternalData->ConversionContext, Frame->GetFrame()->data, Frame->GetFrame()->linesize, 0, Frame->GetHeight(), ID.Output->GetFrame()->data, ID.Output->GetFrame()->linesize );

       ID.Output->GetFrame()->pts = ID.CodecContext->frame_number;

       int Result = avcodec_send_frame( GetData().CodecContext, ID.Output->GetFrame() );
       if( Result == AVERROR(EAGAIN) )
       {
           CameraStreamError ResultErr = SendAll();
           if( ResultErr != CameraStreamError::Success )
           {
               return ResultErr;
           }
           Result = avcodec_send_frame( GetData().CodecContext, ID.Output->GetFrame() );
       }

       if( Result == 0 )
       {
           CameraStreamError ResultErr = SendAll();
           if( ResultErr != CameraStreamError::Success )
           {
               return ResultErr;
           }
       }

       FrameIndex++;

       return CameraStreamError::Success;
    }

    CameraStreamError OutputStream::SendAll( void )
    {
       auto& ID = *m_InternalData;

       int Result;
       do
       {
           AVPacket TempPacket = {};
           av_init_packet( &TempPacket );

           Result = avcodec_receive_packet( GetData().CodecContext, &TempPacket );
           if( Result == 0 )
           {
               av_packet_rescale_ts( &TempPacket, ID.CodecContext->time_base, ID.FormatContext->streams[0]->time_base );

               TempPacket.stream_index = ID.FormatContext->streams[0]->index;

               //avio_write( ID.FormatContext->pb, TempPacket.data, TempPacket.size );
               Result = av_interleaved_write_frame( ID.FormatContext, &TempPacket );
               if( Result < 0 )
               {
                   STREAM_ERROR( WriteFailed );
               }

               av_packet_unref( &TempPacket );
           }
           else if( Result != AVERROR(EAGAIN) )
           {
               continue;
           }
           else if( Result != AVERROR_EOF )
           {
               break;
           }
           else if( Result < 0 )
           {
               STREAM_ERROR( WriteFailed );
           }
       } while ( Result == 0);

       return CameraStreamError::Success;
    }

    CameraStreamError OutputStream::CloseFile()
    {
       auto& ID = *m_InternalData;

       while( true )
       {
           //Flush
           int Result = avcodec_send_frame( ID.CodecContext, nullptr );
           if( Result == 0 )
           {
               CameraStreamError StrError = SendAll();
               if( StrError != CameraStreamError::Success )
               {
                   return StrError;
               }
           }
           else if( Result == AVERROR_EOF )
           {
               break;
           }
           else
           {
               STREAM_ERROR( WriteFailed );
           }
       }

       int Result = av_write_trailer( ID.FormatContext );
       if( Result < 0 )
       {
           STREAM_ERROR( WriteFailed );
       }

       if( !(ID.FormatContext->oformat->flags& AVFMT_NOFILE) )
       {
           Result = avio_close( ID.FormatContext->pb );
           if( Result < 0 )
           {
               STREAM_ERROR( WriteFailed );
           }
       }

       return CameraStreamError::Success;
    }

    Note I’ve simplified a few things and inlined a few bits that were elsewhere. I’ve also removed all the shutdown code as anything that happens after the file is closed is irrelevant.

    Full repo here: https://github.com/IanNorris/Witness. If you clone this, the issue is with the ’Diagnostics’ output; the ’Output’ file is fine. There are two hardcoded paths to the X: drive.

  • "moov atom not found" when using av_interleaved_write_frame but not avio_write

    9 octobre 2017, par icStatic

    I am attempting to put together a class that can take arbitrary frames and construct a video from it using the ffmpeg 3.3.3 API. I’ve been struggling to find a good example for this as the examples still seem to be using deprecated functions, so I’ve attempted to patch this using the documentation in the headers and by referring to a few github repos that seem to be using the new version.

    If I use av_interleaved_write_frame to write the encoded packets to the output then ffprobe outputs the following :

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0000000002760120] moov atom not found0
    X:\Diagnostics.mp4: Invalid data found when processing input

    ffplay is unable to play the file generated using this method.

    If I instead swap it out for a call to avio_write, ffprobe instead outputs :

    Input #0, h264, from 'X:\Diagnostics.mp4':
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: h264 (Main), yuv420p(progressive), 672x380 [SAR 1:1 DAR 168:95], 25 fps, 25 tbr, 1200k tbn, 50 tbc

    ffplay can mostly play this file until it gets towards the end, when it outputs :

    Input #0, h264, from 'X:\Diagnostics.mp4':    0KB sq=    0B f=0/0
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: h264 (Main), yuv420p(progressive), 672x380 [SAR 1:1 DAR 168:95], 25 fps, 25 tbr, 1200k tbn, 50 tbc
    [h264 @ 000000000254ef80] error while decoding MB 31 22, bytestream -65
    [h264 @ 000000000254ef80] concealing 102 DC, 102 AC, 102 MV errors in I frame
       nan M-V:    nan fd=   1 aq=    0KB vq=    0KB sq=    0B f=0/0

    VLC cannot play files from either method. The second method’s file displays a single black frame then hides the video output. The first does not display anything. Neither of them give a video duration.

    Does anyone have any ideas what’s happening here ? I assume my solution is close to working as I’m getting a good chunk of valid frames coming through.

    Code :

    void main()
    {
       OutputStream Stream( "Output.mp4", 672, 380, 25, true );
       Stream.Initialize();

       int i = 100;
       while( i-- )
       {
           //... Generate a frame

           Stream.WriteFrame( Frame );
       }
       Stream.CloseFile();
    }

    OutputStream::OutputStream( const std::string& Path, unsigned int Width, unsigned int Height, int Framerate, bool IsBGR )
    : Stream()
    , FrameIndex( 0 )
    {
       auto& ID = *m_InternalData;

       ID.Path = Path;

       ID.Width = Width;
       ID.Height= Height;
       ID.Framerate.num = Framerate;
       ID.Framerate.den = 1;

       ID.PixelFormat = IsBGR ? AV_PIX_FMT_BGR24 : AV_PIX_FMT_RGB24;
       ID.CodecID = AV_CODEC_ID_H264;
       ID.CodecTag = 0;

       ID.AspectRatio.num = 1;
       ID.AspectRatio.den = 1;
    }

    CameraStreamError OutputStream::Initialize()
    {
       av_log_set_callback( &InputStream::LogCallback );
       av_register_all();
       avformat_network_init();

       auto& ID = *m_InternalData;

       av_init_packet( &ID.Packet );

       int Result = avformat_alloc_output_context2( &ID.FormatContext, nullptr, nullptr, ID.Path.c_str() );
       if( Result < 0 || !ID.FormatContext )
       {
           STREAM_ERROR( UnknownError );
       }

       AVCodec* Encoder = avcodec_find_encoder( ID.CodecID );

       if( !Encoder )
       {
           STREAM_ERROR( NoH264Support );
       }

       AVStream* OutStream = avformat_new_stream( ID.FormatContext, Encoder );
       if( !OutStream )
       {
           STREAM_ERROR( UnknownError );
       }

       ID.CodecContext = avcodec_alloc_context3( Encoder );
       if( !ID.CodecContext )
       {
           STREAM_ERROR( NoH264Support );
       }

       ID.CodecContext->time_base = av_inv_q(ID.Framerate);

       {
           AVCodecParameters* CodecParams = OutStream->codecpar;

           CodecParams->width = ID.Width;
           CodecParams->height = ID.Height;
           CodecParams->format = AV_PIX_FMT_YUV420P;
           CodecParams->codec_id = ID.CodecID;
           CodecParams->codec_type = AVMEDIA_TYPE_VIDEO;
           CodecParams->profile = FF_PROFILE_H264_MAIN;
           CodecParams->level = 40;

           Result = avcodec_parameters_to_context( ID.CodecContext, CodecParams );
           if( Result < 0 )
           {
               STREAM_ERROR( EncoderCreationError );
           }
       }

       if( ID.IsVideo )
       {
           ID.CodecContext->width = ID.Width;
           ID.CodecContext->height = ID.Height;
           ID.CodecContext->sample_aspect_ratio = ID.AspectRatio;
           ID.CodecContext->time_base = av_inv_q(ID.Framerate);

           if( Encoder->pix_fmts )
           {
               ID.CodecContext->pix_fmt = Encoder->pix_fmts[0];
           }
           else
           {
               ID.CodecContext->pix_fmt = ID.PixelFormat;
           }
       }
       //Snip

       Result = avcodec_open2( ID.CodecContext, Encoder, nullptr );
       if( Result < 0 )
       {
           STREAM_ERROR( EncoderCreationError );
       }

       Result = avcodec_parameters_from_context( OutStream->codecpar, ID.CodecContext );
       if( Result < 0 )
       {
           STREAM_ERROR( EncoderCreationError );
       }

       if( ID.FormatContext->oformat->flags & AVFMT_GLOBALHEADER )
       {
           ID.CodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }

       OutStream->time_base = ID.CodecContext->time_base;
       OutStream->avg_frame_rate= av_inv_q(OutStream->time_base);

       if( !( ID.FormatContext->oformat->flags & AVFMT_NOFILE ) )
       {
           Result = avio_open( &ID.FormatContext->pb, ID.Path.c_str(), AVIO_FLAG_WRITE );
           if( Result < 0 )
           {
               STREAM_ERROR( FileNotWriteable );
           }
       }

       Result = avformat_write_header( ID.FormatContext, nullptr );
       if( Result < 0 )
       {
           STREAM_ERROR( WriteFailed );
       }

       ID.Output = std::make_unique( ID.CodecContext->width, ID.CodecContext->height, ID.CodecContext->pix_fmt );

       ID.ConversionContext = sws_getCachedContext(
           ID.ConversionContext,
           ID.Width,
           ID.Height,
           ID.PixelFormat,
           ID.CodecContext->width,
           ID.CodecContext->height,
           ID.CodecContext->pix_fmt,
           SWS_BICUBIC,
           NULL,
           NULL,
           NULL );

       return CameraStreamError::Success;
    }

    CameraStreamError OutputStream::WriteFrame( FFMPEG::Frame* Frame )
    {
       auto& ID = *m_InternalData;

       ID.Output->Prepare();

       int OutputSliceSize = sws_scale( m_InternalData->ConversionContext, Frame->GetFrame()->data, Frame->GetFrame()->linesize, 0, Frame->GetHeight(), ID.Output->GetFrame()->data, ID.Output->GetFrame()->linesize );

       ID.Output->GetFrame()->pts = ID.CodecContext->frame_number;

       int Result = avcodec_send_frame( GetData().CodecContext, ID.Output->GetFrame() );
       if( Result == AVERROR(EAGAIN) )
       {
           CameraStreamError ResultErr = SendAll();
           if( ResultErr != CameraStreamError::Success )
           {
               return ResultErr;
           }
           Result = avcodec_send_frame( GetData().CodecContext, ID.Output->GetFrame() );
       }

       if( Result == 0 )
       {
           CameraStreamError ResultErr = SendAll();
           if( ResultErr != CameraStreamError::Success )
           {
               return ResultErr;
           }
       }

       FrameIndex++;

       return CameraStreamError::Success;
    }

    CameraStreamError OutputStream::SendAll( void )
    {
       auto& ID = *m_InternalData;

       int Result;
       do
       {
           AVPacket TempPacket = {};
           av_init_packet( &TempPacket );

           Result = avcodec_receive_packet( GetData().CodecContext, &TempPacket );
           if( Result == 0 )
           {
               av_packet_rescale_ts( &TempPacket, ID.CodecContext->time_base, ID.FormatContext->streams[0]->time_base );

               TempPacket.stream_index = ID.FormatContext->streams[0]->index;

               //avio_write( ID.FormatContext->pb, TempPacket.data, TempPacket.size );
               Result = av_interleaved_write_frame( ID.FormatContext, &TempPacket );
               if( Result < 0 )
               {
                   STREAM_ERROR( WriteFailed );
               }

               av_packet_unref( &TempPacket );
           }
           else if( Result != AVERROR(EAGAIN) )
           {
               continue;
           }
           else if( Result != AVERROR_EOF )
           {
               break;
           }
           else if( Result < 0 )
           {
               STREAM_ERROR( WriteFailed );
           }
       } while ( Result == 0);

       return CameraStreamError::Success;
    }

    CameraStreamError OutputStream::CloseFile()
    {
       auto& ID = *m_InternalData;

       while( true )
       {
           //Flush
           int Result = avcodec_send_frame( ID.CodecContext, nullptr );
           if( Result == 0 )
           {
               CameraStreamError StrError = SendAll();
               if( StrError != CameraStreamError::Success )
               {
                   return StrError;
               }
           }
           else if( Result == AVERROR_EOF )
           {
               break;
           }
           else
           {
               STREAM_ERROR( WriteFailed );
           }
       }

       int Result = av_write_trailer( ID.FormatContext );
       if( Result < 0 )
       {
           STREAM_ERROR( WriteFailed );
       }

       if( !(ID.FormatContext->oformat->flags& AVFMT_NOFILE) )
       {
           Result = avio_close( ID.FormatContext->pb );
           if( Result < 0 )
           {
               STREAM_ERROR( WriteFailed );
           }
       }

       return CameraStreamError::Success;
    }

    Note I’ve simplified a few things and inlined a few bits that were elsewhere. I’ve also removed all the shutdown code as anything that happens after the file is closed is irrelevant.

    Full repo here : https://github.com/IanNorris/Witness If you clone this the issue is with the ’Diagnostics’ output, the Output file is fine. There are two hardcoded paths to X :.

  • The C++11 Thread Timer is not working

    26 August 2017, by Gathros

    I’m trying to make a video player using SDL2 and the FFmpeg API. The video is being decoded and I can display an image on screen. I can also play audio, but I am not doing that here (I know it works; I’ve tried it).

    My problem is that I can’t update the image when it should be updated. I’m able to get the timestamps and work out the delay, then send it to a thread, where it should trigger a window update once the time has elapsed. But all that happens is that the images flash on the screen with no delay. I have even set the delay to 1 second and the images still flash by, after an initial second of blank window.
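
    One common shape for such a timer thread, as a minimal sketch (Quitting, DelayUs and PaceLoop are illustrative names, not taken from the code below): sleeping against an absolute deadline with sleep_until keeps the cadence steady, whereas per-frame relative sleeps accumulate drift.

     #include <atomic>
     #include <chrono>
     #include <cstdint>
     #include <thread>

     std::atomic<bool>         Quitting{ false };
     std::atomic<std::int64_t> DelayUs{ 40000 }; // one frame at 25 fps, in microseconds

     void PaceLoop()
     {
        auto Next = std::chrono::steady_clock::now();
        while( !Quitting )
        {
            // Advance an absolute deadline, then sleep until it passes.
            Next += std::chrono::microseconds( DelayUs.load() );
            std::this_thread::sleep_until( Next );
            // Push the window-update event here (e.g. via SDL_PushEvent).
        }
     }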

    Here is my code:

    extern "C"{
       //FFmpeg libraries
       #include <libavcodec></libavcodec>avcodec.h>
       #include <libavformat></libavformat>avformat.h>
       #include <libswscale></libswscale>swscale.h>

       //SDL2 libraries
       #include <sdl2></sdl2>SDL.h>
    }
    // compatibility with newer API
    #if LIBAVCODEC_VERSION_INT &lt; AV_VERSION_INT(55,28,1)
    #define av_frame_alloc avcodec_alloc_frame
    #define av_frame_free avcodec_free_frame
    #endif

    //C++ libraries
    #include <cstdio>
    #include <chrono>
    #include <thread>
    #include <atomic>
    #include <mutex>
    #include

    typedef struct PacketQueue {
       AVPacketList                *first_pkt, *last_pkt;
       std::mutex                  mutex;
       std::condition_variable     convar;
    } PacketQueue;

    std::atomic<bool>           quitting, decoded;
    std::atomic        delay;
    Uint32                      Update_Window;

    int packet_queue_put(PacketQueue *q, AVPacket *pkt){
       AVPacketList *pkt1;
       if(av_dup_packet(pkt) &lt; 0){
           return -1;
       }
       pkt1 = (AVPacketList*) av_malloc(sizeof(AVPacketList));
       if(!pkt1){
           return -1;
       }
       pkt1->pkt = *pkt;
       pkt1->next = NULL;

       std::lock_guard lock(q->mutex);

       if (!q->last_pkt){
           q->first_pkt = pkt1;
       }else{
           q->last_pkt->next = pkt1;
       }
       q->last_pkt = pkt1;
       q->convar.notify_all();
       return 0;
    }

    static int packet_queue_get(PacketQueue *q, AVPacket *pkt, int block){
       AVPacketList *pkt1;
       int ret;

       std::unique_lock lk(q->mutex);
       while(1){
           if(quitting){
               ret = -1;
               break;
           }

           pkt1 = q->first_pkt;
           if(pkt1){
               q->first_pkt = pkt1->next;
               if(!q->first_pkt){
                   q->last_pkt = NULL;
               }
               *pkt = pkt1->pkt;
               av_free(pkt1);
               ret = 1;
               break;
           }else if(decoded){
               ret = 0;
               quitting = true;
               break;
           }else if(block){
               q->convar.wait_for(lk, std::chrono::microseconds(50));
           }else {
               ret = 0;
               break;
           }
       }
       return ret;
    }

    void UpdateEventQueue(){
       SDL_Event event;
       SDL_zero(event);
       event.type = Update_Window;
       SDL_PushEvent(&amp;event);
    }

    void VideoTimerThreadFunc(){
       UpdateEventQueue();

       while(!quitting){
           if(delay == 0){
               std::this_thread::sleep_for(std::chrono::milliseconds(1));
           }else {
               std::this_thread::sleep_for(std::chrono::microseconds(delay));
               UpdateEventQueue();
           }
       }
    }

    int main(int argc, char *argv[]){
       AVFormatContext*                FormatCtx = nullptr;
       AVCodecContext*                 CodecCtxOrig = nullptr;
       AVCodecContext*                 CodecCtx = nullptr;
       AVCodec*                        Codec = nullptr;
       int                             videoStream;
       AVFrame*                        Frame = nullptr;
       AVPacket                        packet;
       struct SwsContext*              SwsCtx = nullptr;

       PacketQueue                     videoq;
       int                             frameFinished;
       int64_t                         last_pts = 0;
       const AVRational                ms = {1, 1000};

       SDL_Event                       event;
       SDL_Window*                     screen;
       SDL_Renderer*                   renderer;
       SDL_Texture*                    texture;
       std::shared_ptr<uint8>          yPlane, uPlane, vPlane;
       int                             uvPitch;

       if (argc != 2) {
           fprintf(stderr, "Usage: %s <file>\n", argv[0]);
           return -1;
       }

       // Register all formats and codecs
       av_register_all();

       // Initialise SDL2
       if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
           fprintf(stderr, "Couldn't initialise SDL - %s\n", SDL_GetError());
           return -1;
       }

       // Setting things up
       quitting = false;
       decoded = false;
       delay = 0;
       Update_Window = SDL_RegisterEvents(1);
       memset(&amp;videoq, 0, sizeof(PacketQueue));

       // Open video file
       if(avformat_open_input(&amp;FormatCtx, argv[1], NULL, NULL) != 0){
           fprintf(stderr, "Couldn't open file\n");        
           return -1; // Couldn't open file
       }

       // Retrieve stream information
       if(avformat_find_stream_info(FormatCtx, NULL) &lt; 0){
           fprintf(stderr, "Couldn't find stream information\n");

           // Close the video file
           avformat_close_input(&amp;FormatCtx);

           return -1; // Couldn't find stream information
       }

       // Find the video stream
       videoStream = av_find_best_stream(FormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
       if(videoStream &lt; 0){
           fprintf(stderr, "Couldn't find video stream\n");

           // Close the video file
           avformat_close_input(&amp;FormatCtx);

           return -1; // Didn't find a video stream
       }

       // Get a pointer to the codec context for the video stream
       CodecCtxOrig = FormatCtx->streams[videoStream]->codec;

       // Find the decoder for the video stream
       Codec = avcodec_find_decoder(CodecCtxOrig->codec_id);
       if(Codec == NULL){
           fprintf(stderr, "Unsupported codec\n");

           // Close the codec
           avcodec_close(CodecCtxOrig);

           // Close the video file
           avformat_close_input(&amp;FormatCtx);

           return -1; // Codec not found
       }

       // Copy context
       CodecCtx = avcodec_alloc_context3(Codec);
       if(avcodec_copy_context(CodecCtx, CodecCtxOrig) != 0){
           fprintf(stderr, "Couldn't copy codec context");

           // Close the codec
           avcodec_close(CodecCtxOrig);

           // Close the video file
           avformat_close_input(&amp;FormatCtx);

           return -1; // Error copying codec context
       }

       // Open codec
       if(avcodec_open2(CodecCtx, Codec, NULL) &lt; 0){
           fprintf(stderr, "Couldn't open codec\n");

           // Close the codec
           avcodec_close(CodecCtx);
           avcodec_close(CodecCtxOrig);

           // Close the video file
           avformat_close_input(&amp;FormatCtx);
           return -1; // Could not open codec
       }

       // Allocate video frame
       Frame = av_frame_alloc();

       // Make a screen to put our video
       screen = SDL_CreateWindow("Video Player", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, CodecCtx->width, CodecCtx->height, 0);
       if(!screen){
           fprintf(stderr, "SDL: could not create window - exiting\n");
           quitting = true;

           // Clean up SDL2
           SDL_Quit();

           // Free the YUV frame
           av_frame_free(&amp;Frame);

           // Close the codec
           avcodec_close(CodecCtx);
           avcodec_close(CodecCtxOrig);

           // Close the video file
           avformat_close_input(&amp;FormatCtx);

           return -1;
       }

       renderer = SDL_CreateRenderer(screen, -1, 0);
       if(!renderer){
           fprintf(stderr, "SDL: could not create renderer - exiting\n");
           quitting = true;

           // Clean up SDL2
           SDL_DestroyWindow(screen);
           SDL_Quit();

           // Free the YUV frame
           av_frame_free(&amp;Frame);

           // Close the codec
           avcodec_close(CodecCtx);
           avcodec_close(CodecCtxOrig);

           // Close the video file
           avformat_close_input(&amp;FormatCtx);
           return -1;
       }

       // Allocate a place to put our YUV image on that screen
       texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12, SDL_TEXTUREACCESS_STREAMING, CodecCtx->width, CodecCtx->height);
       if(!texture){
           fprintf(stderr, "SDL: could not create texture - exiting\n");
           quitting = true;

           // Clean up SDL2
           SDL_DestroyRenderer(renderer);
           SDL_DestroyWindow(screen);
           SDL_Quit();

           // Free the YUV frame
           av_frame_free(&amp;Frame);

           // Close the codec
           avcodec_close(CodecCtx);
           avcodec_close(CodecCtxOrig);

           // Close the video file
           avformat_close_input(&amp;FormatCtx);
           return -1;
       }

       // Initialise SWS context for software scaling
       SwsCtx = sws_getContext(CodecCtx->width, CodecCtx->height, CodecCtx->pix_fmt,
                   CodecCtx->width, CodecCtx->height, PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);
       if(!SwsCtx){
           fprintf(stderr, "Couldn't create sws context\n");
           quitting = true;

           // Clean up SDL2
           SDL_DestroyTexture(texture);
           SDL_DestroyRenderer(renderer);
           SDL_DestroyWindow(screen);
           SDL_Quit();

           // Free the YUV frame
           av_frame_free(&amp;Frame);

           // Close the codec
           avcodec_close(CodecCtx);
           avcodec_close(CodecCtxOrig);

           // Close the video file
           avformat_close_input(&amp;FormatCtx);
           return -1;
       }

       // set up YV12 pixel array (12 bits per pixel)
       yPlane = std::shared_ptr<uint8>((Uint8 *)::operator new (CodecCtx->width * CodecCtx->height, std::nothrow));
       uPlane = std::shared_ptr<uint8>((Uint8 *)::operator new (CodecCtx->width * CodecCtx->height / 4, std::nothrow));
       vPlane = std::shared_ptr<uint8>((Uint8 *)::operator new (CodecCtx->width * CodecCtx->height / 4, std::nothrow));
       uvPitch = CodecCtx->width / 2;

       if (!yPlane || !uPlane || !vPlane) {
           fprintf(stderr, "Could not allocate pixel buffers - exiting\n");
           quitting = true;

           // Clean up SDL2
           SDL_DestroyTexture(texture);
           SDL_DestroyRenderer(renderer);
           SDL_DestroyWindow(screen);
           SDL_Quit();

           // Free the YUV frame
           av_frame_free(&amp;Frame);

           // Close the codec
           avcodec_close(CodecCtx);
           avcodec_close(CodecCtxOrig);

           // Close the video file
           avformat_close_input(&amp;FormatCtx);
           return -1;
       }

       std::thread VideoTimer (VideoTimerThreadFunc);

       while (!quitting) {
           // Check for more packets
           if(av_read_frame(FormatCtx, &amp;packet) >= 0){
               // Check what stream it belongs to
               if (packet.stream_index == videoStream) {
                   packet_queue_put(&amp;videoq, &amp;packet);
               }else{
                   // Free the packet that was allocated by av_read_frame
                   av_free_packet(&amp;packet);
               }
           }else {
               decoded = true;
           }

           SDL_PollEvent(&amp;event);

           if(event.type == Update_Window){
               // Getting packet
               if(packet_queue_get(&amp;videoq, &amp;packet, 0)){
                   // Decode video frame
                   avcodec_decode_video2(CodecCtx, Frame, &amp;frameFinished, &amp;packet);

                   // Did we get a video frame?
                   if (frameFinished) {
                       AVPicture pict;
                       pict.data[0] = yPlane.get();
                       pict.data[1] = uPlane.get();
                       pict.data[2] = vPlane.get();
                       pict.linesize[0] = CodecCtx->width;
                       pict.linesize[1] = uvPitch;
                       pict.linesize[2] = uvPitch;

                       // Convert the image into YUV format that SDL uses
                       sws_scale(SwsCtx, (uint8_t const * const *) Frame->data, Frame->linesize, 0, CodecCtx->height, pict.data, pict.linesize);

                       SDL_UpdateYUVTexture(texture, NULL, yPlane.get(), CodecCtx->width, uPlane.get(), uvPitch, vPlane.get(), uvPitch);

                       SDL_RenderClear(renderer);
                       SDL_RenderCopy(renderer, texture, NULL, NULL);
                       SDL_RenderPresent(renderer);

                       // Calculating delay
                       delay = av_rescale_q(packet.dts, CodecCtx->time_base, ms) - last_pts;
                       last_pts = av_rescale_q(packet.dts, CodecCtx->time_base, ms);
                   }else{
                       //UpdateEventQueue();
                       delay = 1;
                   }

                   // Free the packet that was allocated by av_read_frame
                   av_free_packet(&amp;packet);

               }else{
                   //UpdateEventQueue();
               }
           }

           switch (event.type) {
               case SDL_QUIT:
                   quitting = true;
                   break;

               default:
                   break;
           }
       }

       VideoTimer.join();

       //SDL2 clean up
       SDL_DestroyTexture(texture);
       SDL_DestroyRenderer(renderer);
       SDL_DestroyWindow(screen);
       SDL_Quit();

       // Free the YUV frame
       av_frame_free(&amp;Frame);

       // Free Sws
       sws_freeContext(SwsCtx);

       // Close the codec
       avcodec_close(CodecCtx);
       avcodec_close(CodecCtxOrig);

       // Close the video file
       avformat_close_input(&amp;FormatCtx);

       return 0;
    }
    </uint8></uint8></uint8></file></uint8></bool></mutex></atomic></thread></chrono></cstdio>