Newest 'x264' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/x264

Articles published on the site

  • Concatenate MOV files without re-encoding on iOS with ffmpeg libs

    2 July 2013, by Developer82

    I would like to concatenate MOV files without re-encoding. I want to do it on iOS (iPhone). All the MOV files are recorded with the same settings, no difference in dimensions or encoding profiles.

    I have succeeded in doing it with command-line ffmpeg: ffmpeg -re -f concat -i files.txt -c copy ... But I am having difficulties using the libraries.
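
    For reference, files.txt here follows the concat demuxer's list format; a minimal example (the clip names are placeholders):

    file 'clip1.mov'
    file 'clip2.mov'
    file 'clip3.mov'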

    I think the demuxing part is OK; I have the H.264+AAC packets. After demuxing I shift the PTS and DTS of each packet so that the values ascend in the joined MOV file. The hard part is the muxing.

    I have built the ffmpeg libs with the x264 lib, so it can be used if necessary, but I am not sure whether I need the x264 codec at all, since I don't want to re-encode the MOV files, just join them.
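
    A minimal sketch of the copy loop this implies, assuming the 2013-era libav* API (in_ctx, out_ctx and the running ts_offset, kept in input time-base units, are placeholder names):

    AVPacket pkt;
    while (av_read_frame(in_ctx, &pkt) >= 0) {
        AVStream *ist = in_ctx->streams[pkt.stream_index];
        AVStream *ost = out_ctx->streams[pkt.stream_index];

        /* shift by the running offset of the chunks already written,
           then rescale from the input to the output time base */
        if (pkt.pts != AV_NOPTS_VALUE)
            pkt.pts = av_rescale_q(pkt.pts + ts_offset, ist->time_base, ost->time_base);
        if (pkt.dts != AV_NOPTS_VALUE)
            pkt.dts = av_rescale_q(pkt.dts + ts_offset, ist->time_base, ost->time_base);
        pkt.duration = av_rescale_q(pkt.duration, ist->time_base, ost->time_base);
        pkt.pos = -1;

        /* av_interleaved_write_frame() takes ownership of the packet */
        if (av_interleaved_write_frame(out_ctx, &pkt) < 0)
            break;
    }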

    Problems I have encountered:

    1. In this case I do not use the x264 codec. At muxing I create the stream with a NULL codec parameter. Writing the header, the packets and the trailer succeeds, and all the function calls return a zero error code. However, although the output can be opened, a black screen is displayed during playback. The FFprobe report is attached. I have also examined the output with the MediaInfo tool and attached that report as well (MediaInfo report - without x264 codec.txt). As you can see, no H.264 profile or pixel info is found, which might be the problem.

    2. In this case I use the x264 codec with the functions avcodec_find_encoder, avformat_new_stream and avcodec_open2. Again: no decode-encode! This time there is much more metadata in the output file, such as the H.264 profile and pixel info (YUV), but the av_interleaved_write_frame call does nothing except return the success code (0). No packet is written to the file. :( I don't know how this can happen. fwrite works, but results in a file that cannot be opened. I have also attached the MediaInfo report of this output (MediaInfo report - with x264 codec.txt).
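
    For what it's worth, remuxing examples usually create the output stream for case 1 by copying the demuxed stream's codec context rather than passing NULL, which would explain the missing profile and pixel info; a hedged sketch (ist is the demuxed input stream, error handling omitted):

    AVStream *ost = avformat_new_stream(out_ctx, NULL);

    /* copies the H.264 profile, pixel format and extradata (SPS/PPS)
       into the new stream - no encoder needed */
    avcodec_copy_context(ost->codec, ist->codec);
    ost->codec->codec_tag = 0;  /* let the MOV muxer pick its own tag */
    if (out_ctx->oformat->flags & AVFMT_GLOBALHEADER)
        ost->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;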

    Questions:

    • How should I process the demuxed packets to feed the muxer?
    • What format-context and codec-context settings need to be made, including AVOption settings?
    • Should I use the x264 codec to do this? I just want to re-mux the chunks into a single joined file.
    • The chunks have their own header/trailer. Should I somehow filter the demuxed packets to skip them?
    • The final goal is creating a network stream (RTP or RTMP) - also with re-muxing and without re-encoding. It works with command line ffmpeg: ffmpeg -re -f concat -i files.txt -vcodec copy -an -f rtp rtp://127.0.0.1:20000 -vn -acodec copy -f rtp rtp://127.0.0.1:30000

    Concatenating to the MOV format is only an intermediate pilot. Would it be better to work directly on the network format, since it is such a different task that there is no benefit in solving the MOV muxing first?

    Any help, advice or suggestion is greatly appreciated. I can share code to make a deeper investigation possible.

    Thanks!

  • Trouble compiling x264 on Mac OS X

    26 June 2013, by Bernt Habermeier

    I'm having trouble compiling x264 (http://www.videolan.org/developers/x264.html) on a Mac with the Command Line Tools from Xcode. The following steps don't work:

    git clone git://git.videolan.org/x264.git
    cd x264
    ./configure
    make
    

    That ends up giving you the following error:

    gcc -Wshadow -O3 -ffast-math -m64  -Wall -I. -I. -falign-loops=16 -mdynamic-no-pic -arch x86_64 -std=gnu99 -mpreferred-stack-boundary=5  -I/usr/local/include    -I/usr/local/include   -fomit-frame-pointer -fno-tree-vectorize   -c -o x264.o x264.c
    In file included from ./extras/cl.h:27,
                     from common/opencl.h:31,
                     from common/common.h:209,
                     from x264.c:33:
    ./extras/cl_platform.h:64:10: warning: #warning This path should never happen outside of internal operating system development. AvailabilityMacros do not function correctly here!
    In file included from common/opencl.h:31,
                     from common/common.h:209,
                     from x264.c:33:
    ./extras/cl.h:1165: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘cl_mem’
    ./extras/cl.h:1175: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘cl_mem’
    ./extras/cl.h:1187: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘cl_int’
    ./extras/cl.h:1191: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘cl_int’
    ./extras/cl.h:1196: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘cl_int’
    ./extras/cl.h:1199: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘cl_int’
    ./extras/cl.h:1202: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘void’
    make: *** [x264.o] Error 1
    

    How do you compile x264 for Mac OS X with the latest Xcode Command Line Tools?
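
    A hedged note: the errors all come from extras/cl.h, so two workarounds that reportedly help are building with clang instead of the outdated Apple gcc, or configuring without the OpenCL lookahead altogether:

    git clone git://git.videolan.org/x264.git
    cd x264
    CC=clang ./configure       # clang's headers cope with extras/cl.h
    # or skip OpenCL support entirely:
    # ./configure --disable-opencl
    make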

  • Which additional FFMPEG codecs are needed for h264

    25 June 2013, by Daniel Gibisch

    I'm using ffmpeg on a Linux server to convert videos to MP4.

    Source: mp4, mov, avi, wmv
    Target: mp4 with H.264 (x264)

    Which additional external codecs are needed in my ffmpeg installation? (Just the H.264 one?)

    Can every source format be encoded to H.264 (x264)?
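
    For reference, the kind of invocation meant here, assuming a build with libx264 and the (then-experimental) native AAC encoder, might look like:

    ffmpeg -i input.wmv -c:v libx264 -preset medium -crf 23 \
           -c:a aac -strict experimental output.mp4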

    Thanks in advance!

  • Compile FFMPEG + x264 - undefined references

    18 June 2013, by Tishu

    I have been trying to find a solution online for a couple of days with no luck. I am using Ubuntu and trying to compile the latest stable FFMPEG version (1.0.1) with x264 support. I made sure I uninstalled any existing x264, then downloaded the latest x264 source and compiled it with the following config:

    ./configure --prefix=$PREFIX \
        --enable-shared \
        --enable-static \
        --disable-gpac \
        --extra-cflags=" -I$ARM_INC -fPIC -DANDROID -fpic -mthumb-interwork -ffunction-sections -funwind-tables -fno-short-enums -D__ARM_ARCH_5__ -D__ARM_ARCH_5T__ -D__ARM_ARCH_5E__ -D__ARM_ARCH_5TE__ -Wno-psabi -march=armv5te -msoft-float -mthumb -Os -fomit-frame-pointer -fno-strict-aliasing -finline-limit=64 -DANDROID -Wa,--noexecstack -MMD -MP " \
        --extra-ldflags=" -nostdlib -Bdynamic -Wl,--no-undefined -Wl,-z,noexecstack -Wl,-z,nocopyreloc -Wl,-soname,/usr/lib/libz.so -Wl,-rpath-link=$ARM_LIB,-dynamic-linker=/system/bin/linker -L$ARM_LIB -lc -lm -ldl -lgcc" \
        --cross-prefix=${ARM_PRE}- \
        --disable-asm \
        --host=arm-linux

    make clean
    make install
    

    All goes well, and I checked the installed version:

    x264 -V
        x264 0.129.x
        built on Dec 27 2012, gcc: 4.6.1
        configuration: --bit-depth=8 --chroma-format=all
        x264 license: GPL version 2 or later
    
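
    A hedged sanity check at this point: inspect the cross-compiled archive with the toolchain's nm to confirm it actually exports the versioned symbol ffmpeg will ask for later (paths and variables mirror the configure above):

    # does the installed libx264.a define the versioned entry point?
    ${ARM_PRE}-nm $PREFIX/lib/libx264.a | grep x264_encoder_open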

    I then try to compile FFMPEG with the following options:

    ./configure --target-os=linux \
        --enable-libx264 \
        --enable-gpl \
        --prefix=$PREFIX \
        --extra-cflags="-I/home/tishu/Workspaces/ffmpeg/ffmpeg/jni/ffmpeg-1.0.1/android/armv7-a/include -I/home/tishu/Workspaces/ffmpeg/ffmpeg/jni/x264 -O3 -fpic -DANDROID -DHAVE_SYS_UIO_H=1 -Dipv6mr_interface=ipv6mr_ifindex -fasm -Wno-psabi -fno-short-enums  -fno-strict-aliasing -finline-limit=300 $OPTIMIZE_CFLAGS " \
        --extra-ldflags="-Wl,-rpath-link=$PLATFORM/usr/lib -L/home/tishu/Workspaces/ffmpeg/ffmpeg/jni/ffmpeg-1.0.1/android/armv7-a/lib -L$PLATFORM/usr/lib -nostdlib -lc -lm -ldl -llog" \
        --enable-cross-compile \
        --extra-libs="-lgcc" \
        --arch=arm \
        --cc=$PREBUILT/bin/arm-linux-androideabi-gcc \
        --cross-prefix=$PREBUILT/bin/arm-linux-androideabi- \
        --nm=$PREBUILT/bin/arm-linux-androideabi-nm \
        --sysroot=$PLATFORM
    

    The configure and make clean/make install work well, but when I try to create the .so file the following command fails:

    /home/tishu/Apps/android-ndk-r8d/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86/bin/arm-linux-androideabi-ld \
        -rpath-link=./android/armv7-a/usr/lib -L/home/tishu/Apps/android-ndk-r8d/platforms/android-14/arch-arm/usr/lib -soname libffmpeg.so -shared -nostdlib  -z,noexecstack -Bsymbolic \
        --whole-archive --no-undefined -o ./android/armv7-a/libffmpeg.so libavcodec/libavcodec.a libavformat/libavformat.a libavutil/libavutil.a libswscale/libswscale.a -lc -lm -lz -ldl -llog  \
        --warn-once \
        --dynamic-linker=/system/bin/linker /home/tishu/Apps/android-ndk-r8d/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86/lib/gcc/arm-linux-androideabi/4.4.3/libgcc.a
    

    This fails with the following output:

    libavcodec/libavcodec.a(libx264.o): In function `X264_frame':
    /home/tishu/Workspaces/ffmpeg/ffmpeg/jni/ffmpeg-1.0.1/libavcodec/libx264.c:159: undefined reference to `x264_picture_init'
    /home/tishu/Workspaces/ffmpeg/ffmpeg/jni/ffmpeg-1.0.1/libavcodec/libx264.c:179: undefined reference to `x264_encoder_reconfig'
    /home/tishu/Workspaces/ffmpeg/ffmpeg/jni/ffmpeg-1.0.1/libavcodec/libx264.c:190: undefined reference to `x264_encoder_encode'
    /home/tishu/Workspaces/ffmpeg/ffmpeg/jni/ffmpeg-1.0.1/libavcodec/libx264.c:196: undefined reference to `x264_encoder_delayed_frames'
    libavcodec/libavcodec.a(libx264.o): In function `encode_nals':
    /home/tishu/Workspaces/ffmpeg/ffmpeg/jni/ffmpeg-1.0.1/libavcodec/libx264.c:101: undefined reference to `x264_bit_depth'
    libavcodec/libavcodec.a(libx264.o): In function `X264_close':
    /home/tishu/Workspaces/ffmpeg/ffmpeg/jni/ffmpeg-1.0.1/libavcodec/libx264.c:231: undefined reference to `x264_encoder_close'
    libavcodec/libavcodec.a(libx264.o): In function `X264_init':
    /home/tishu/Workspaces/ffmpeg/ffmpeg/jni/ffmpeg-1.0.1/libavcodec/libx264.c:284: undefined reference to `x264_param_default'
    /home/tishu/Workspaces/ffmpeg/ffmpeg/jni/ffmpeg-1.0.1/libavcodec/libx264.c:292: undefined reference to `x264_param_default_preset'
    /home/tishu/Workspaces/ffmpeg/ffmpeg/jni/ffmpeg-1.0.1/libavcodec/libx264.c:314: undefined reference to `x264_param_parse'
    /home/tishu/Workspaces/ffmpeg/ffmpeg/jni/ffmpeg-1.0.1/libavcodec/libx264.c:459: undefined reference to `x264_param_apply_fastfirstpass'
    /home/tishu/Workspaces/ffmpeg/ffmpeg/jni/ffmpeg-1.0.1/libavcodec/libx264.c:490: undefined reference to `x264_param_apply_profile'
    /home/tishu/Workspaces/ffmpeg/ffmpeg/jni/ffmpeg-1.0.1/libavcodec/libx264.c:533: undefined reference to `x264_encoder_open_129'
    /home/tishu/Workspaces/ffmpeg/ffmpeg/jni/ffmpeg-1.0.1/libavcodec/libx264.c:544: undefined reference to `x264_encoder_headers'
    

    The x264 version it is looking for (129) is the one installed and compiled successfully with --enable-shared. Obviously everything compiles fine when I do not include libx264.

    Question: How can I specify the include path for the last command? I tried adding the path to $PATH and also adding this as an argument with no luck: -I/home/tishu/Workspaces/ffmpeg/ffmpeg/jni/x264
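
    For what it's worth, undefined references at link time usually indicate a missing library rather than a missing include path, so presumably the ld command above needs the x264 library path and -lx264 added after the ffmpeg archives, roughly:

    ... libswscale/libswscale.a \
        -L/home/tishu/Workspaces/ffmpeg/ffmpeg/jni/x264 -lx264 \
        -lc -lm -lz -ldl -llog ...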

    Thanks

  • FFMPEG: cannot play MPEG4 video encoded from images. Duration and bitrate undefined

    17 June 2013, by KaiK

    I've been trying to put an H264 video stream created from images into an MPEG4 container. I've been able to get the video stream from the images successfully, but when muxing it into the container I must be doing something wrong, because no player is able to play it back except ffplay, which plays the video to the end and then freezes on the last image forever.

    ffplay cannot identify the duration or the bitrate, so I suppose it might be an issue related to dts and pts, but I have searched for how to solve it without success.

    Here's the ffplay output:

    ~$ ffplay testContainer.mp4
    ffplay version git-2012-01-31-c673671 Copyright (c) 2003-2012 the FFmpeg developers
      built on Feb  7 2012 20:32:12 with gcc 4.4.3
      configuration: --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-libfaac --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-x11grab --enable-libvpx --enable-libmp3lame --enable-debug=3
      libavutil      51. 36.100 / 51. 36.100
      libavcodec     54.  0.102 / 54.  0.102
      libavformat    54.  0.100 / 54.  0.100
      libavdevice    53.  4.100 / 53.  4.100
      libavfilter     2. 60.100 /  2. 60.100
      libswscale      2.  1.100 /  2.  1.100
      libswresample   0.  6.100 /  0.  6.100
      libpostproc    52.  0.100 / 52.  0.100
    [h264 @ 0xa4849c0] max_analyze_duration 5000000 reached at 5000000
    [h264 @ 0xa4849c0] Estimating duration from bitrate, this may be inaccurate
    Input #0, h264, from 'testContainer.mp4':
      Duration: N/A, bitrate: N/A
        Stream #0:0: Video: h264 (High), yuv420p, 512x512, 25 fps, 25 tbr, 1200k tbn, 50 tbc
           2.74 A-V:  0.000 fd=   0 aq=    0KB vq=  160KB sq=    0B f=0/0   0/0
    

    Structure

    My code is C++-style, so I have a class that handles all the encoding, and a main that initializes it, passes in some images in a loop, and finally signals the end of the process, as follows:

    int main (int argc, const char * argv[])
    {
    
    MyVideoEncoder* videoEncoder = new MyVideoEncoder(512, 512, 512, 512, "output/testContainer.mp4", 25, 20);
    if(!videoEncoder->initWithCodec(MyVideoEncoder::H264))
    {
        std::cout << "something really bad happened. Exit!!" << std::endl;
        exit(-1);
    }
    
    /* encode 1 second of video */
    for(int i=0;i<228;i++) {
    
        std::stringstream filepath;
        filepath << "input2/image" << i << ".jpg";
    
        videoEncoder->encodeFrameFromJPG(const_cast<char*>(filepath.str().c_str()));
    
    }
    
    videoEncoder->endEncoding();
    
    }
    

    Hints

    I've seen a lot of examples of decoding one video and encoding it into another, but no working example of muxing a video from scratch, so I'm not sure how to handle the pts and dts packet values. That's why I suspect the issue must be in the following method:

    bool MyVideoEncoder::encodeImageAsFrame(){
        bool res = false;
    
    
        pTempFrame->pts = frameCount * frameRate * 90; //90Hz by the standard for PTS-values
        frameCount++;
    
        /* encode the image */
        out_size = avcodec_encode_video(pVideoStream->codec, outbuf, outbuf_size, pTempFrame);
    
    
        if (out_size > 0) {
            AVPacket pkt;
            av_init_packet(&pkt);
            pkt.pts = pkt.dts = 0;
    
            if (pVideoStream->codec->coded_frame->pts != AV_NOPTS_VALUE) {
                pkt.pts = av_rescale_q(pVideoStream->codec->coded_frame->pts,
                        pVideoStream->codec->time_base, pVideoStream->time_base);
                pkt.dts = pTempFrame->pts;
    
            }
            if (pVideoStream->codec->coded_frame->key_frame) {
                pkt.flags |= AV_PKT_FLAG_KEY;
            }
            pkt.stream_index = pVideoStream->index;
            pkt.data = outbuf;
            pkt.size = out_size;
    
            res = (av_interleaved_write_frame(pFormatContext, &pkt) == 0);
        }
    
    
        return res;
    }
    
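
    For comparison, a hedged sketch of the pts handling one would expect instead: count frames in ticks of the codec time_base (1/frameRate) and let av_rescale_q convert to the stream time_base, instead of mixing in a 90 kHz value:

    /* sketch only: assumes B-frames are disabled (max_b_frames = 0),
       so dts can simply equal pts */
    pTempFrame->pts = frameCount++;  /* ticks of codec time_base = 1/frameRate */

    out_size = avcodec_encode_video(pVideoStream->codec, outbuf, outbuf_size, pTempFrame);
    if (out_size > 0) {
        AVPacket pkt;
        av_init_packet(&pkt);
        if (pVideoStream->codec->coded_frame->pts != AV_NOPTS_VALUE)
            pkt.pts = av_rescale_q(pVideoStream->codec->coded_frame->pts,
                                   pVideoStream->codec->time_base,
                                   pVideoStream->time_base);
        pkt.dts = pkt.pts;              /* valid only without B-frame reordering */
        if (pVideoStream->codec->coded_frame->key_frame)
            pkt.flags |= AV_PKT_FLAG_KEY;
        pkt.stream_index = pVideoStream->index;
        pkt.data = outbuf;
        pkt.size = out_size;
        av_interleaved_write_frame(pFormatContext, &pkt);
    }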

    Any help or insight would be appreciated. Thanks in advance!!

    P.S. The rest of the code, where the configuration is done, is the following:

    // MyVideoEncoder.cpp
    
    #include "MyVideoEncoder.h"
    #include "Image.hpp"
    #include <iostream>    // best guess: needed for std::cout
    #include <cstring>     // best guess: needed for memcpy
    #include <algorithm>   // best guess: needed for min
    
    #define MAX_AUDIO_PACKET_SIZE (128 * 1024)
    
    
    
    MyVideoEncoder::MyVideoEncoder(int inwidth, int inheight,
            int outwidth, int outheight, char* fileOutput, int framerate,
            int compFactor) {
        inWidth = inwidth;
        inHeight = inheight;
        outWidth = outwidth;
        outHeight = outheight;
        pathToMovie = fileOutput;
        frameRate = framerate;
        compressionFactor = compFactor;
        frameCount = 0;
    
    }
    
    MyVideoEncoder::~MyVideoEncoder() {
    
    }
    
    bool MyVideoEncoder::initWithCodec(
            MyVideoEncoder::encoderType type) {
        if (!initializeEncoder(type))
            return false;
    
        if (!configureFrames())
            return false;
    
        return true;
    
    }
    
    bool MyVideoEncoder::encodeFrameFromJPG(char* filepath) {
    
        setJPEGImage(filepath);
        return encodeImageAsFrame();
    }
    
    
    
    bool MyVideoEncoder::encodeDelayedFrames(){
        bool res = false;
    
        while(out_size > 0)
        {
            pTempFrame->pts = frameCount * frameRate * 90; //90Hz by the standard for PTS-values
            frameCount++;
    
            out_size = avcodec_encode_video(pVideoStream->codec, outbuf, outbuf_size, NULL);
    
            if (out_size > 0)
            {
                AVPacket pkt;
                av_init_packet(&pkt);
                pkt.pts = pkt.dts = 0;
    
                if (pVideoStream->codec->coded_frame->pts != AV_NOPTS_VALUE) {
                    pkt.pts = av_rescale_q(pVideoStream->codec->coded_frame->pts,
                            pVideoStream->codec->time_base, pVideoStream->time_base);
                    pkt.dts = pTempFrame->pts;
                }
                if (pVideoStream->codec->coded_frame->key_frame) {
                    pkt.flags |= AV_PKT_FLAG_KEY;
                }
                pkt.stream_index = pVideoStream->index;
                pkt.data = outbuf;
                pkt.size = out_size;
    
    
                res = (av_interleaved_write_frame(pFormatContext, &pkt) == 0);
            }
    
        }
    
        return res;
    }
    
    
    
    
    
    
    void MyVideoEncoder::endEncoding() {
        encodeDelayedFrames();
        closeEncoder();
    }
    
    bool MyVideoEncoder::setJPEGImage(char* imgFilename) {
        Image* rgbImage = new Image();
        rgbImage->read_jpeg_image(imgFilename);
    
        bool ret = setImageFromRGBArray(rgbImage->get_data());
    
        delete rgbImage;
    
        return ret;
    }
    
    bool MyVideoEncoder::setImageFromRGBArray(unsigned char* data) {
    
        memcpy(pFrameRGB->data[0], data, 3 * inWidth * inHeight);
    
        int ret = sws_scale(img_convert_ctx, pFrameRGB->data, pFrameRGB->linesize,
                0, inHeight, pTempFrame->data, pTempFrame->linesize);
    
        pFrameRGB->pts++;
        if (ret)
            return true;
        else
            return false;
    }
    
    bool MyVideoEncoder::initializeEncoder(encoderType type) {
    
        av_register_all();
    
        pTempFrame = avcodec_alloc_frame();
        pTempFrame->pts = 0;
        pOutFormat = NULL;
        pFormatContext = NULL;
        pVideoStream = NULL;
        pAudioStream = NULL;
        bool res = false;
    
        // Create format
        switch (type) {
            case MyVideoEncoder::H264:
                pOutFormat = av_guess_format("h264", NULL, NULL);
                break;
            case MyVideoEncoder::MPEG1:
                pOutFormat = av_guess_format("mpeg", NULL, NULL);
                break;
            default:
                pOutFormat = av_guess_format(NULL, pathToMovie.c_str(), NULL);
                break;
        }
    
        if (!pOutFormat) {
            pOutFormat = av_guess_format(NULL, pathToMovie.c_str(), NULL);
            if (!pOutFormat) {
                std::cout << "output format not found" << std::endl;
                return false;
            }
        }
    
    
        // allocate context
        pFormatContext = avformat_alloc_context();
        if(!pFormatContext)
        {
            std::cout << "cannot alloc format context" << std::endl;
            return false;
        }
    
        pFormatContext->oformat = pOutFormat;
    
        memcpy(pFormatContext->filename, pathToMovie.c_str(), min( (const int) pathToMovie.length(), (const int)sizeof(pFormatContext->filename)));
    
    
        //Add video and audio streams
        pVideoStream = AddVideoStream(pFormatContext,
                pOutFormat->video_codec);
    
        // Set the output parameters
        av_dump_format(pFormatContext, 0, pathToMovie.c_str(), 1);
    
        // Open Video stream
        if (pVideoStream) {
            res = openVideo(pFormatContext, pVideoStream);
        }
    
    
        if (res && !(pOutFormat->flags & AVFMT_NOFILE)) {
            if (avio_open(&pFormatContext->pb, pathToMovie.c_str(), AVIO_FLAG_WRITE) < 0) {
                res = false;
                std::cout << "Cannot open output file" << std::endl;
            }
        }
    
        if (res) {
            avformat_write_header(pFormatContext,NULL);
        }
        else{
            freeMemory();
            std::cout << "Cannot init encoder" << std::endl;
        }
    
    
        return res;
    
    }
    
    
    
    AVStream *MyVideoEncoder::AddVideoStream(AVFormatContext *pContext, CodecID codec_id)
    {
      AVCodecContext *pCodecCxt = NULL;
      AVStream *st    = NULL;
    
      st = avformat_new_stream(pContext, NULL);
      if (!st)
      {
          std::cout << "Cannot add new video stream" << std::endl;
          return NULL;
      }
      st->id = 0;
    
      pCodecCxt = st->codec;
      pCodecCxt->codec_id = (CodecID)codec_id;
      pCodecCxt->codec_type = AVMEDIA_TYPE_VIDEO;
      pCodecCxt->frame_number = 0;
    
    
      // Put sample parameters.
      pCodecCxt->bit_rate = outWidth * outHeight * 3 * frameRate/ compressionFactor;
    
      pCodecCxt->width  = outWidth;
      pCodecCxt->height = outHeight;
    
      /* frames per second */
      pCodecCxt->time_base= (AVRational){1,frameRate};
    
      /* pixel format must be YUV */
      pCodecCxt->pix_fmt = PIX_FMT_YUV420P;
    
    
      if (pCodecCxt->codec_id == CODEC_ID_H264)
      {
          av_opt_set(pCodecCxt->priv_data, "preset", "slow", 0);
          av_opt_set(pCodecCxt->priv_data, "vprofile", "baseline", 0);
          pCodecCxt->max_b_frames = 16;
      }
      if (pCodecCxt->codec_id == CODEC_ID_MPEG1VIDEO)
      {
          pCodecCxt->mb_decision = 1;
      }
    
      if(pContext->oformat->flags & AVFMT_GLOBALHEADER)
      {
          pCodecCxt->flags |= CODEC_FLAG_GLOBAL_HEADER;
      }
    
      pCodecCxt->coder_type = 1;  // coder = 1
      pCodecCxt->flags|=CODEC_FLAG_LOOP_FILTER;   // flags=+loop
      pCodecCxt->me_range = 16;   // me_range=16
      pCodecCxt->gop_size = 50;  // g=50
      pCodecCxt->keyint_min = 25; // keyint_min=25
    
    
      return st;
    }
    
    
    bool MyVideoEncoder::openVideo(AVFormatContext *oc, AVStream *pStream)
    {
        AVCodec *pCodec;
        AVCodecContext *pContext;
    
        pContext = pStream->codec;
    
        // Find the video encoder.
        pCodec = avcodec_find_encoder(pContext->codec_id);
        if (!pCodec)
        {
            std::cout << "Cannot found video codec" << std::endl;
            return false;
        }
    
        // Open the codec.
        if (avcodec_open2(pContext, pCodec, NULL) < 0)
        {
            std::cout << "Cannot open video codec" << std::endl;
            return false;
        }
    
    
        return true;
    }
    
    
    
    bool MyVideoEncoder::configureFrames() {
    
        /* alloc image and output buffer */
        outbuf_size = outWidth*outHeight*3;
        outbuf = (uint8_t*) malloc(outbuf_size);
    
        av_image_alloc(pTempFrame->data, pTempFrame->linesize, pVideoStream->codec->width,
                pVideoStream->codec->height, pVideoStream->codec->pix_fmt, 1);
    
        //Alloc RGB temp frame
        pFrameRGB = avcodec_alloc_frame();
        if (pFrameRGB == NULL)
            return false;
        avpicture_alloc((AVPicture *) pFrameRGB, PIX_FMT_RGB24, inWidth, inHeight);
    
        pFrameRGB->pts = 0;
    
        //Set SWS context to convert from RGB images to YUV images
        if (img_convert_ctx == NULL) {
            img_convert_ctx = sws_getContext(inWidth, inHeight, PIX_FMT_RGB24,
                    outWidth, outHeight, pVideoStream->codec->pix_fmt, /*SWS_BICUBIC*/
                    SWS_FAST_BILINEAR, NULL, NULL, NULL);
            if (img_convert_ctx == NULL) {
                fprintf(stderr, "Cannot initialize the conversion context!\n");
                return false;
            }
        }
    
        return true;
    
    }
    
    void MyVideoEncoder::closeEncoder() {
        av_write_frame(pFormatContext, NULL);
        av_write_trailer(pFormatContext);
        freeMemory();
    }
    
    
    void MyVideoEncoder::freeMemory()
    {
      bool res = true;
    
      if (pFormatContext)
      {
        // close video stream
        if (pVideoStream)
        {
          closeVideo(pFormatContext, pVideoStream);
        }
    
        // Free the streams.
        for(size_t i = 0; i < pFormatContext->nb_streams; i++)
        {
          av_freep(&pFormatContext->streams[i]->codec);
          av_freep(&pFormatContext->streams[i]);
        }
    
        if (!(pFormatContext->flags & AVFMT_NOFILE) && pFormatContext->pb)
        {
          avio_close(pFormatContext->pb);
        }
    
        // Free the stream.
        av_free(pFormatContext);
        pFormatContext = NULL;
      }
    }
    
    void MyVideoEncoder::closeVideo(AVFormatContext *pContext, AVStream *pStream)
    {
      avcodec_close(pStream->codec);
      if (pTempFrame)
      {
        if (pTempFrame->data)
        {
          av_free(pTempFrame->data[0]);
          pTempFrame->data[0] = NULL;
        }
        av_free(pTempFrame);
        pTempFrame = NULL;
      }
    
      if (pFrameRGB)
      {
        if (pFrameRGB->data)
        {
          av_free(pFrameRGB->data[0]);
          pFrameRGB->data[0] = NULL;
        }
        av_free(pFrameRGB);
        pFrameRGB = NULL;
      }
    
    }