
Media (2)


Other articles (111)

  • Automatic installation script for MediaSPIP

    25 April 2011

    To work around installation difficulties, mostly due to server-side software dependencies, an "all-in-one" bash installation script was created to ease this step on a server running a compatible Linux distribution.
    You need SSH access to your server and a "root" account to use it, which will allow the dependencies to be installed. Contact your hosting provider if you do not have these.
    The documentation on using the installation script (...)

  • Farm notifications

    1 December 2010

    To manage the farm properly, several things must be notified during specific actions, both to the user and to all administrators of the farm.
    Status-change notifications
    When the status of an instance changes, all administrators of the farm must be notified of the change, as well as the administrator user of the instance.
    When a channel is requested
    Change to the "published" status
    Change to the (...)

  • MediaSPIP initialization (preconfiguration)

    20 February 2010

    When MediaSPIP is installed, it is preconfigured for the most common uses.
    This preconfiguration is performed by a plugin that is enabled by default and cannot be disabled, called MediaSPIP Init.
    This plugin correctly preconfigures each MediaSPIP instance. It must therefore be placed in the plugins-dist/ folder of the site or farm, so that it is installed by default, before the site can be used.
    First, it enables or disables SPIP options that (...)

On other sites (6680)

  • FFMPEG RTMP STREAM RECORDING TIMEOUT

    11 November 2020, by abreski

    [SOLVED] — solution in FINAL EDIT
I am recording an RTMP livestream with ffmpeg and looking for a flag that automatically stops processing the moment new data stops arriving.

    If I start and stop the script (stopping by killing the process on demand), everything is fine: the recording is saved and can be played.
However, when the stream is stopped at the source without a manual stop call, the script keeps running for a while and the resulting file is corrupted (tested with a manual stop call, which works, and with stopping the stream before the recording, simulating a browser/tab close or disconnect, which fails).

    The command I'm running:

    $command = "ffmpeg -i {$rtmpUrl} -c:v copy -c:a copy -t 3600 {$path} >/dev/null  2>/dev/null &";
$res = shell_exec($command)

    I tried adding the -timeout 0 option before and after the input, like this:

    $command = "ffmpeg -timeout 0 -i {$rtmpUrl} -c:v copy -c:a copy -t 3600 {$path} >/dev/null  2>/dev/null &"; 

    and

    $command = "ffmpeg -i {$rtmpUrl} -c:v copy -c:a copy -timeout 0 -t 3600 {$path} >/dev/null  2>/dev/null &";

    but no improvement.

    What am I missing here? Is there any way to automatically stop the script when new data stops arriving from the livestream (meaning the stream has stopped and the recording should stop as well)?

    Note: $rtmpUrl and $path have been checked; everything works fine as long as the script is stopped before the livestream ends.

    Any suggestions are highly appreciated.

    LATER EDIT: I realised the timeout was set in the wrong place and added it first, but the result was still the same, so I'm still looking for suggestions.

    $command = "timout 0 ffmpeg -i {$rtmpUrl} -c:v copy -c:a copy -t 3600 {$path} >/dev/null  2>/dev/null &";

    FINAL EDIT
In case someone finds this thread looking for a solution to a similar case:
solved. timeout was not what I was looking for; instead, using the -re flag fixed it for us.
Now the script stops when no more new frames come in.

    $command = "ffmpeg -re -i {$rtmpUrl} -c:v copy -c:a copy -t 3600 {$path} >/dev/null  2>/dev/null &";

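    A note beyond the original post: -re makes ffmpeg read the input at its native frame rate, and the poster found it also makes the process end when frames stop coming. A more direct knob, if the protocol supports it, is ffmpeg's generic -rw_timeout protocol option (in microseconds), which errors the input out once no data arrives within the window. A minimal C++ sketch of the same launch, with placeholder URL/path (the original values came from the PHP side):

    #include <cstdlib>
    #include <string>

    int main() {
        // Placeholders standing in for the PHP-side $rtmpUrl and $path.
        const std::string rtmpUrl = "rtmp://example.com/live/stream";
        const std::string path    = "/tmp/recording.mp4";

        // -rw_timeout 10000000: give up on the input after ~10 s without data.
        // -re and -t 3600 mirror the flags from the accepted fix above.
        const std::string command =
            "ffmpeg -re -rw_timeout 10000000 -i " + rtmpUrl +
            " -c:v copy -c:a copy -t 3600 " + path +
            " >/dev/null 2>/dev/null &";
        return std::system(command.c_str());
    }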
  • android ffmpeg bad video output

    20 August 2014, by Sujith Manjavana

    I’m following this tutorial to create my first ffmpeg app. I have successfully built the shared libs and compiled the project without any errors. But when I run the app on my Nexus 5, the video output is broken (screenshot omitted).

    Here is the native code

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>
    #include <libavutil/pixfmt.h>

    #include <stdio.h>
    #include <stdlib.h>
    #include <wchar.h>

    #include <jni.h>
    #include <android/log.h>
    #include <android/bitmap.h>
    #include <pthread.h>
    #include <android/native_window.h>
    #include <android/native_window_jni.h>

    #define LOG_TAG "android-ffmpeg-tutorial02"
    #define LOGI(...) __android_log_print(4, LOG_TAG, __VA_ARGS__);
    #define LOGE(...) __android_log_print(6, LOG_TAG, __VA_ARGS__);

    ANativeWindow*      window;
    char                *videoFileName;
    AVFormatContext     *formatCtx = NULL;
    int                 videoStream;
    AVCodecContext      *codecCtx = NULL;
    AVFrame             *decodedFrame = NULL;
    AVFrame             *frameRGBA = NULL;
    jobject             bitmap;
    void*               buffer;
    struct SwsContext   *sws_ctx = NULL;
    int                 width;
    int                 height;
    int                 stop;

    jint naInit(JNIEnv *pEnv, jobject pObj, jstring pFileName) {
       AVCodec         *pCodec = NULL;
       int             i;
       AVDictionary    *optionsDict = NULL;

       videoFileName = (char *)(*pEnv)->GetStringUTFChars(pEnv, pFileName, NULL);
       LOGI("video file name is %s", videoFileName);
       // Register all formats and codecs
       av_register_all();
       // Open video file
       if(avformat_open_input(&formatCtx, videoFileName, NULL, NULL)!=0)
           return -1; // Couldn't open file
       // Retrieve stream information
       if(avformat_find_stream_info(formatCtx, NULL)<0)
           return -1; // Couldn't find stream information
       // Dump information about file onto standard error
       av_dump_format(formatCtx, 0, videoFileName, 0);
       // Find the first video stream
       videoStream=-1;
       for(i=0; i<formatCtx->nb_streams; i++) {
           if(formatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
               videoStream=i;
               break;
           }
       }
       if(videoStream==-1)
           return -1; // Didn't find a video stream
       // Get a pointer to the codec context for the video stream
       codecCtx=formatCtx->streams[videoStream]->codec;
       // Find the decoder for the video stream
       pCodec=avcodec_find_decoder(codecCtx->codec_id);
       if(pCodec==NULL) {
           fprintf(stderr, "Unsupported codec!\n");
           return -1; // Codec not found
       }
       // Open codec
       if(avcodec_open2(codecCtx, pCodec, &optionsDict)<0)
           return -1; // Could not open codec
       // Allocate video frame
       decodedFrame=avcodec_alloc_frame();
       // Allocate an AVFrame structure
       frameRGBA=avcodec_alloc_frame();
       if(frameRGBA==NULL)
           return -1;
       return 0;
    }

    jobject createBitmap(JNIEnv *pEnv, int pWidth, int pHeight) {
       int i;
       //get Bitmap class and createBitmap method ID
       jclass javaBitmapClass = (jclass)(*pEnv)->FindClass(pEnv, "android/graphics/Bitmap");
       jmethodID mid = (*pEnv)->GetStaticMethodID(pEnv, javaBitmapClass, "createBitmap", "(IILandroid/graphics/Bitmap$Config;)Landroid/graphics/Bitmap;");
       //create Bitmap.Config
       //reference: https://forums.oracle.com/thread/1548728
       const wchar_t* configName = L"ARGB_8888";
       int len = wcslen(configName);
       jstring jConfigName;
       if (sizeof(wchar_t) != sizeof(jchar)) {
           //wchar_t is defined as different length than jchar(2 bytes)
           jchar* str = (jchar*)malloc((len+1)*sizeof(jchar));
           for (i = 0; i < len; ++i) {
               str[i] = (jchar)configName[i];
           }
           str[len] = 0;
           jConfigName = (*pEnv)->NewString(pEnv, (const jchar*)str, len);
       } else {
           //wchar_t is defined same length as jchar(2 bytes)
           jConfigName = (*pEnv)->NewString(pEnv, (const jchar*)configName, len);
       }
       jclass bitmapConfigClass = (*pEnv)->FindClass(pEnv, "android/graphics/Bitmap$Config");
       jobject javaBitmapConfig = (*pEnv)->CallStaticObjectMethod(pEnv, bitmapConfigClass,
               (*pEnv)->GetStaticMethodID(pEnv, bitmapConfigClass, "valueOf", "(Ljava/lang/String;)Landroid/graphics/Bitmap$Config;"), jConfigName);
       //create the bitmap
       return (*pEnv)->CallStaticObjectMethod(pEnv, javaBitmapClass, mid, pWidth, pHeight, javaBitmapConfig);
    }

    jintArray naGetVideoRes(JNIEnv *pEnv, jobject pObj) {
       jintArray lRes;
       if (NULL == codecCtx) {
           return NULL;
       }
       lRes = (*pEnv)->NewIntArray(pEnv, 2);
       if (lRes == NULL) {
           LOGI("cannot allocate memory for video size");
           return NULL;
       }
       jint lVideoRes[2];
       lVideoRes[0] = codecCtx->width;
       lVideoRes[1] = codecCtx->height;
       (*pEnv)->SetIntArrayRegion(pEnv, lRes, 0, 2, lVideoRes);
       return lRes;
    }

    void naSetSurface(JNIEnv *pEnv, jobject pObj, jobject pSurface) {
       if (0 != pSurface) {
           // get the native window reference
           window = ANativeWindow_fromSurface(pEnv, pSurface);
           // set format and size of window buffer
           ANativeWindow_setBuffersGeometry(window, 0, 0, WINDOW_FORMAT_RGBA_8888);
       } else {
           // release the native window
           ANativeWindow_release(window);
       }
    }

    jint naSetup(JNIEnv *pEnv, jobject pObj, int pWidth, int pHeight) {
       width = pWidth;
       height = pHeight;
       //create a bitmap as the buffer for frameRGBA
       bitmap = createBitmap(pEnv, pWidth, pHeight);
       if (AndroidBitmap_lockPixels(pEnv, bitmap, &buffer) < 0)
           return -1;
       //get the scaling context
       sws_ctx = sws_getContext (
               codecCtx->width,
               codecCtx->height,
               codecCtx->pix_fmt,
               pWidth,
               pHeight,
               AV_PIX_FMT_RGBA,
               SWS_BILINEAR,
               NULL,
               NULL,
               NULL
       );
       // Assign appropriate parts of bitmap to image planes in pFrameRGBA
       // Note that pFrameRGBA is an AVFrame, but AVFrame is a superset
       // of AVPicture
       avpicture_fill((AVPicture *)frameRGBA, buffer, AV_PIX_FMT_RGBA,
               pWidth, pHeight);
       return 0;
    }

    void finish(JNIEnv *pEnv) {
       //unlock the bitmap
       AndroidBitmap_unlockPixels(pEnv, bitmap);
       av_free(buffer);
       // Free the RGB image
       av_free(frameRGBA);
       // Free the YUV frame
       av_free(decodedFrame);
       // Close the codec
       avcodec_close(codecCtx);
       // Close the video file
       avformat_close_input(&formatCtx);
    }

    void decodeAndRender(JNIEnv *pEnv) {
       ANativeWindow_Buffer    windowBuffer;
       AVPacket                packet;
       int                     i=0;
       int                     frameFinished;
       int                     lineCnt;
       while(av_read_frame(formatCtx, &packet)>=0 && !stop) {
           // Is this a packet from the video stream?
           if(packet.stream_index==videoStream) {
               // Decode video frame
           avcodec_decode_video2(codecCtx, decodedFrame, &frameFinished,
              &packet);
               // Did we get a video frame?
               if(frameFinished) {
                   // Convert the image from its native format to RGBA
                   sws_scale
                   (
                       sws_ctx,
                       (uint8_t const * const *)decodedFrame->data,
                       decodedFrame->linesize,
                       0,
                       codecCtx->height,
                       frameRGBA->data,
                       frameRGBA->linesize
                   );
                   // lock the window buffer
                    if (ANativeWindow_lock(window, &windowBuffer, NULL) < 0) {
                       LOGE("cannot lock window");
                   } else {
                       // draw the frame on buffer
                       LOGI("copy buffer %d:%d:%d", width, height, width*height*4);
                       LOGI("window buffer: %d:%d:%d", windowBuffer.width,
                               windowBuffer.height, windowBuffer.stride);
                       memcpy(windowBuffer.bits, buffer,  width * height * 4);
                       // unlock the window buffer and post it to display
                       ANativeWindow_unlockAndPost(window);
                       // count number of frames
                       ++i;
                   }
               }
           }
           // Free the packet that was allocated by av_read_frame
           av_free_packet(&packet);
       }
       LOGI("total No. of frames decoded and rendered %d", i);
       finish(pEnv);
    }

    /**
    * start the video playback
    */
    void naPlay(JNIEnv *pEnv, jobject pObj) {
       //create a new thread for video decode and render
       pthread_t decodeThread;
       stop = 0;
       pthread_create(&decodeThread, NULL, decodeAndRender, NULL);
    }

    /**
    * stop the video playback
    */
    void naStop(JNIEnv *pEnv, jobject pObj) {
       stop = 1;
    }

    jint JNI_OnLoad(JavaVM* pVm, void* reserved) {
       JNIEnv* env;
       if ((*pVm)->GetEnv(pVm, (void **)&env, JNI_VERSION_1_6) != JNI_OK) {
            return -1;
       }
       JNINativeMethod nm[8];
       nm[0].name = "naInit";
       nm[0].signature = "(Ljava/lang/String;)I";
       nm[0].fnPtr = (void*)naInit;

       nm[1].name = "naSetSurface";
       nm[1].signature = "(Landroid/view/Surface;)V";
       nm[1].fnPtr = (void*)naSetSurface;

       nm[2].name = "naGetVideoRes";
       nm[2].signature = "()[I";
       nm[2].fnPtr = (void*)naGetVideoRes;

       nm[3].name = "naSetup";
       nm[3].signature = "(II)I";
       nm[3].fnPtr = (void*)naSetup;

       nm[4].name = "naPlay";
       nm[4].signature = "()V";
       nm[4].fnPtr = (void*)naPlay;

       nm[5].name = "naStop";
       nm[5].signature = "()V";
       nm[5].fnPtr = (void*)naStop;

       jclass cls = (*env)->FindClass(env, "roman10/tutorial/android_ffmpeg_tutorial02/MainActivity");
       //Register methods with env->RegisterNatives.
       (*env)->RegisterNatives(env, cls, nm, 6);
       return JNI_VERSION_1_6;
    }
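    One likely cause of the bad output, noted as a hedged aside rather than a confirmed fix: decodeAndRender() copies the frame with a single memcpy of width * height * 4 bytes, which silently assumes windowBuffer.stride == width. ANativeWindow buffers are frequently padded, so on devices where the stride is larger (plausibly the Nexus 5, while the Galaxy Tab 2 happens to match), every row after the first lands at the wrong offset. A row-by-row copy that honors the stride (ANativeWindow_Buffer.stride is measured in pixels), using the variables already in the code:

    // Sketch: replaces the flat memcpy(windowBuffer.bits, buffer, ...) above.
    uint8_t* dst = (uint8_t*) windowBuffer.bits;
    const uint8_t* src = (const uint8_t*) buffer;
    const int srcRowBytes = width * 4;                // tightly packed RGBA rows
    const int dstRowBytes = windowBuffer.stride * 4;  // stride is in pixels
    for (int row = 0; row < height; ++row) {
        memcpy(dst + row * dstRowBytes, src + row * srcRowBytes, srcRowBytes);
    }

    It may also help to request a buffer matching the video size via ANativeWindow_setBuffersGeometry(window, width, height, WINDOW_FORMAT_RGBA_8888) instead of passing 0, 0, so the window buffer's dimensions are predictable.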

    Here is the build.sh

    #!/bin/bash
    NDK=$HOME/Desktop/adt/android-ndk-r9
    SYSROOT=$NDK/platforms/android-9/arch-arm/
    TOOLCHAIN=$NDK/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64
    function build_one
    {
    ./configure \
       --prefix=$PREFIX \
       --enable-shared \
       --disable-static \
       --disable-doc \
       --disable-ffmpeg \
       --disable-ffplay \
       --disable-ffprobe \
       --disable-ffserver \
       --disable-avdevice \
       --disable-doc \
       --disable-symver \
       --cross-prefix=$TOOLCHAIN/bin/arm-linux-androideabi- \
       --target-os=linux \
       --arch=arm \
       --enable-cross-compile \
       --sysroot=$SYSROOT \
       --extra-cflags="-Os -fpic $ADDI_CFLAGS" \
       --extra-ldflags="$ADDI_LDFLAGS" \
       $ADDITIONAL_CONFIGURE_FLAG
    make clean
    make
    make install
    }
    CPU=arm
    PREFIX=$(pwd)/android/$CPU
    ADDI_CFLAGS="-marm"
    build_one

    It works on the Galaxy Tab 2. What can I do to make it work on all devices? Please help me.
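
    Two further portability hazards in the posted code, flagged here as points to verify rather than confirmed fixes: decodeAndRender has the signature void (*)(JNIEnv*) while pthread_create expects void* (*)(void*), and the thread argument passed is NULL, so pEnv is invalid inside the worker thread; moreover, a JNIEnv is only valid on the thread that obtained it. A sketch of a conforming entry point (g_vm and decodeThreadEntry are names introduced here; g_vm would be saved in JNI_OnLoad):

    // Sketch: give the decode thread its own JNIEnv by attaching to the VM.
    static JavaVM* g_vm = NULL; // set in JNI_OnLoad: g_vm = pVm;

    static void* decodeThreadEntry(void* arg) {
        (void) arg;
        JNIEnv* env = NULL;
        // C++ spelling; in C this is (*g_vm)->AttachCurrentThread(g_vm, &env, NULL).
        if (g_vm->AttachCurrentThread(&env, NULL) != JNI_OK)
            return NULL;
        decodeAndRender(env);
        g_vm->DetachCurrentThread();
        return NULL;
    }

    // naPlay would then use:
    // pthread_create(&decodeThread, NULL, decodeThreadEntry, NULL);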

  • Encoding/Decoding H264 using libav in C++ [closed]

    20 May, by gbock93

    I want to build an application to:

    • capture frames in YUYV 4:2:2 format
    • encode them to H264
    • send over network
    • decode the received data
    • display the video stream

    To do so I wrote 2 classes, H264Encoder and H264Decoder.


    I post only the .cpp contents; the .h files are trivial:


    H264Encoder.cpp


    #include "H264Encoder.h"

    #include <stdexcept>
    #include <iostream>

    H264Encoder::H264Encoder(unsigned int width_, unsigned int height_, unsigned int fps_):
        m_width(width_),
        m_height(height_),
        m_fps(fps_),
        m_frame_index(0),
        m_context(nullptr),
        m_frame(nullptr),
        m_packet(nullptr),
        m_sws_ctx(nullptr)
    {
        // Find the video codec
        AVCodec* codec;
        codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        if (!codec)
            throw std::runtime_error("[Encoder]: Error: Codec not found");

        // Allocate codec
        m_context = avcodec_alloc_context3(codec);
        if (!m_context)
            throw std::runtime_error("[Encoder]: Error: Could not allocate codec context");

        // Configure codec
        av_opt_set(m_context->priv_data, "preset", "ultrafast", 0);
        av_opt_set(m_context->priv_data, "tune", "zerolatency", 0);
        av_opt_set(m_context->priv_data, "crf", "35", 0); // Range: [0; 51], sane range: [18; 26], lower -> higher compression

        m_context->width        = (int)width_;
        m_context->height       = (int)height_;
        m_context->time_base    = {1, (int)fps_};
        m_context->framerate    = {(int)fps_, 1};
        m_context->codec_id     = AV_CODEC_ID_H264;
        m_context->pix_fmt      = AV_PIX_FMT_YUV420P; // H265|4 codecs take as input only AV_PIX_FMT_YUV420P
        m_context->bit_rate     = 400000;
        m_context->gop_size     = 10;
        m_context->max_b_frames = 1;

        // Open codec
        if (avcodec_open2(m_context, codec, nullptr) < 0)
            throw std::runtime_error("[Encoder]: Error: Could not open codec");

        // Allocate frame and its buffer
        m_frame = av_frame_alloc();
        if (!m_frame)
            throw std::runtime_error("[Encoder]: Error: Could not allocate frame");

        m_frame->format = m_context->pix_fmt;
        m_frame->width  = m_context->width;
        m_frame->height = m_context->height;

        if (av_frame_get_buffer(m_frame, 0) < 0)
            throw std::runtime_error("[Encoder]: Error: Cannot allocate frame buffer");

        // Allocate packet
        m_packet = av_packet_alloc();
        if (!m_packet)
            throw std::runtime_error("[Encoder]: Error: Could not allocate packet");

        // Convert from YUYV422 to YUV420P
        m_sws_ctx = sws_getContext(
            width_, height_, AV_PIX_FMT_YUYV422,
            width_, height_, AV_PIX_FMT_YUV420P,
            SWS_BILINEAR, nullptr, nullptr, nullptr
        );
        if (!m_sws_ctx)
            throw std::runtime_error("[Encoder]: Error: Could not allocate sws context");

        printf("[Encoder]: H264Encoder ready.\n");
    }

    H264Encoder::~H264Encoder()
    {
        sws_freeContext(m_sws_ctx);
        av_packet_free(&m_packet);
        av_frame_free(&m_frame);
        avcodec_free_context(&m_context);

        printf("[Encoder]: H264Encoder destroyed.\n");
    }

    std::vector<uint8_t> H264Encoder::encode(const cv::Mat& img_)
    {
        /*
        - YUYV422 is a packed format. It has 3 components (av_pix_fmt_desc_get((AVPixelFormat)AV_PIX_FMT_YUYV422)->nb_components == 3) but
          data is stored in a single plane (av_pix_fmt_count_planes((AVPixelFormat)AV_PIX_FMT_YUYV422) == 1).
        - YUV420P is a planar format. It has 3 components (av_pix_fmt_desc_get((AVPixelFormat)AV_PIX_FMT_YUV420P)->nb_components == 3) and
          each component is stored in a separate plane (av_pix_fmt_count_planes((AVPixelFormat)AV_PIX_FMT_YUV420P) == 3) with its
          own stride.
        */
        std::cout << "[Encoder]" << std::endl;
        std::cout << "[Encoder]: Encoding img " << img_.cols << "x" << img_.rows << " | element size " << img_.elemSize() << std::endl;
        assert(img_.elemSize() == 2);

        uint8_t* input_data[1] = {(uint8_t*)img_.data};
        int input_linesize[1] = {2 * (int)m_width};

        if (av_frame_make_writable(m_frame) < 0)
            throw std::runtime_error("[Encoder]: Error: Cannot make frame data writable");

        // Convert from YUV422 image to YUV420 frame. Apply scaling if necessary
        sws_scale(
            m_sws_ctx,
            input_data, input_linesize, 0, m_height,
            m_frame->data, m_frame->linesize
        );
        m_frame->pts = m_frame_index;

        int n_planes = av_pix_fmt_count_planes((AVPixelFormat)m_frame->format);
        std::cout << "[Encoder]: Sending Frame " << m_frame_index << " with dimensions " << m_frame->width << "x" << m_frame->height << "x" << n_planes << std::endl;
        for (int i = 0; i < n_planes; i++) { /* (...) loop body lost in extraction */ }

        // Send frame to codec
        int ret = avcodec_send_frame(m_context, m_frame);
        switch (ret)
        {
            case 0:
                m_frame_index = (m_frame_index % m_context->framerate.num) + 1;
                break;
            case AVERROR(EAGAIN):
                throw std::runtime_error("[Encoder]: avcodec_send_frame: EAGAIN");
            case AVERROR_EOF:
                throw std::runtime_error("[Encoder]: avcodec_send_frame: EOF");
            case AVERROR(EINVAL):
                throw std::runtime_error("[Encoder]: avcodec_send_frame: EINVAL");
            case AVERROR(ENOMEM):
                throw std::runtime_error("[Encoder]: avcodec_send_frame: ENOMEM");
            default:
                throw std::runtime_error("[Encoder]: avcodec_send_frame: UNKNOWN");
        }

        // Receive packet from codec
        std::vector<uint8_t> result;
        while (ret >= 0)
        {
            ret = avcodec_receive_packet(m_context, m_packet);

            switch (ret)
            {
            case 0:
                std::cout << "[Encoder]: Received packet from codec of size " << m_packet->size << " bytes " << std::endl;
                result.insert(result.end(), m_packet->data, m_packet->data + m_packet->size);
                av_packet_unref(m_packet);
                break;

            case AVERROR(EAGAIN):
                std::cout << "[Encoder]: avcodec_receive_packet: EAGAIN" << std::endl;
                break;
            case AVERROR_EOF:
                std::cout << "[Encoder]: avcodec_receive_packet: EOF" << std::endl;
                break;
            case AVERROR(EINVAL):
                throw std::runtime_error("[Encoder]: avcodec_receive_packet: EINVAL");
            default:
                throw std::runtime_error("[Encoder]: avcodec_receive_packet: UNKNOWN");
            }
        }

        std::cout << "[Encoder]: Encoding complete" << std::endl;
        return result;
    }


    H264Decoder.cpp


    #include "H264Decoder.h"

    #include <iostream>
    #include <stdexcept>

    H264Decoder::H264Decoder():
        m_context(nullptr),
        m_frame(nullptr),
        m_packet(nullptr)
    {
        // Find the video codec
        AVCodec* codec;
        codec = avcodec_find_decoder(AV_CODEC_ID_H264);
        if (!codec)
            throw std::runtime_error("[Decoder]: Error: Codec not found");

        // Allocate codec
        m_context = avcodec_alloc_context3(codec);
        if (!m_context)
            throw std::runtime_error("[Decoder]: Error: Could not allocate codec context");

        // Open codec
        if (avcodec_open2(m_context, codec, nullptr) < 0)
            throw std::runtime_error("[Decoder]: Error: Could not open codec");

        // Allocate frame
        m_frame = av_frame_alloc();
        if (!m_frame)
            throw std::runtime_error("[Decoder]: Error: Could not allocate frame");

        // Allocate packet
        m_packet = av_packet_alloc();
        if (!m_packet)
            throw std::runtime_error("[Decoder]: Error: Could not allocate packet");

        printf("[Decoder]: H264Decoder ready.\n");
    }

    H264Decoder::~H264Decoder()
    {
        av_packet_free(&m_packet);
        av_frame_free(&m_frame);
        avcodec_free_context(&m_context);

        printf("[Decoder]: H264Decoder destroyed.\n");
    }

    bool H264Decoder::decode(uint8_t* data_, size_t size_, cv::Mat& img_)
    {
        std::cout << "[Decoder]" << std::endl;
        std::cout << "[Decoder]: decoding " << size_ << " bytes of data" << std::endl;

        // Fill packet
        m_packet->data = data_;
        m_packet->size = size_;

        if (size_ == 0)
            return false;

        // Send packet to codec
        int send_result = avcodec_send_packet(m_context, m_packet);

        switch (send_result)
        {
            case 0:
                std::cout << "[Decoder]: Sent packet to codec" << std::endl;
                break;
            case AVERROR(EAGAIN):
                throw std::runtime_error("[Decoder]: avcodec_send_packet: EAGAIN");
            case AVERROR_EOF:
                throw std::runtime_error("[Decoder]: avcodec_send_packet: EOF");
            case AVERROR(EINVAL):
                throw std::runtime_error("[Decoder]: avcodec_send_packet: EINVAL");
            case AVERROR(ENOMEM):
                throw std::runtime_error("[Decoder]: avcodec_send_packet: ENOMEM");
            default:
                throw std::runtime_error("[Decoder]: avcodec_send_packet: UNKNOWN");
        }

        // Receive frame from codec
        int n_planes;
        uint8_t* output_data[1];
        int output_line_size[1];

        int receive_result = avcodec_receive_frame(m_context, m_frame);

        switch (receive_result)
        {
            case 0:
                n_planes = av_pix_fmt_count_planes((AVPixelFormat)m_frame->format);
                std::cout << "[Decoder]: Received Frame with dimensions " << m_frame->width << "x" << m_frame->height << "x" << n_planes << std::endl;
                for (int i = 0; i < n_planes; i++) { /* (...) */ }
                /* (...) the conversion of m_frame into img_ (via sws_scale,
                   filling output_data/output_line_size) and the remaining
                   switch cases were lost in extraction */
                break;
        }

        std::cout << "[Decoder]: Decoding complete" << std::endl;
        return true;
    }


    To test the two classes I put together a main.cpp to grab a frame, encode/decode and display the decoded frame (no network transmission in place):


    main.cpp


    while(...)
    {
        // get frame from custom camera class. Format is YUYV 4:2:2
        camera.getFrame(camera_frame);
        // Construct a cv::Mat to represent the grabbed frame
        cv::Mat camera_frame_yuyv = cv::Mat(camera_frame.height, camera_frame.width, CV_8UC2, camera_frame.data.data());
        // Encode image
        std::vector<uint8_t> encoded_data = encoder.encode(camera_frame_yuyv);
        if (!encoded_data.empty())
        {
            // Decode image
            cv::Mat decoded_frame;
            if (decoder.decode(encoded_data.data(), encoded_data.size(), decoded_frame))
            {
                // Display image
                cv::imshow("Camera", decoded_frame);
                cv::waitKey(1);
            }
        }
    }


    Compiling and executing the code, I get random results across subsequent executions:


    • Sometimes the whole loop runs without problems and I see the decoded image.
    • Sometimes the program crashes at the sws_scale(...) call in the decoder with "Assertion desc failed at src/libswscale/swscale_internal.h:757".
    • Sometimes the loop runs but I see a black image and the message "Slice parameters 0, 720 are invalid" is displayed when executing the sws_scale(...) call in the decoder.

    Why is the behaviour so random? What am I doing wrong with the libav API?

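    A plausible contributor to the randomness, offered as a hedged sketch rather than a verified diagnosis: avcodec_receive_frame legitimately returns AVERROR(EAGAIN) until the decoder has buffered enough input, and the stream's real width/height/pixel format are only known once a frame has actually been produced, so a conversion done unconditionally, or with a conversion context built from assumed parameters, can fail on some runs and succeed on others. A guarded conversion reusing the member names from H264Decoder above (to_bgr is a name introduced here):

    // Sketch: convert only after a frame is actually produced, and derive
    // every sws parameter from the received frame itself.
    int ret = avcodec_receive_frame(m_context, m_frame);
    if (ret == 0) {
        SwsContext* to_bgr = sws_getContext(
            m_frame->width, m_frame->height, (AVPixelFormat)m_frame->format,
            m_frame->width, m_frame->height, AV_PIX_FMT_BGR24,
            SWS_BILINEAR, nullptr, nullptr, nullptr);
        if (!to_bgr)
            return false;
        img_.create(m_frame->height, m_frame->width, CV_8UC3);
        uint8_t* dst[1]   = { img_.data };
        int dst_stride[1] = { (int) img_.step[0] };
        sws_scale(to_bgr, m_frame->data, m_frame->linesize,
                  0, m_frame->height, dst, dst_stride);
        sws_freeContext(to_bgr);
        return true;
    }
    if (ret == AVERROR(EAGAIN))
        return false; // not an error: the decoder simply needs more packets

    Separately, avcodec_send_packet expects whole packets; handing it arbitrary byte ranges only works when they happen to align with packet boundaries, and av_parser_init(AV_CODEC_ID_H264) together with av_parser_parse2 is the stock way to re-split a raw H.264 byte stream.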
    Some resources I found useful: (...)