
Advanced search
Media (2)
-
SPIP - plugins - embed code - Example
2 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
Publishing an image simply
13 April 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (111)
-
Automatic installation script for MediaSPIP
25 April 2011, by
To work around installation difficulties, mainly due to server-side software dependencies, an all-in-one bash installation script was created to make this step easier on a server running a compatible Linux distribution.
You need SSH access to your server and a "root" account in order to use it, so that the dependencies can be installed. Contact your hosting provider if you do not have these.
The documentation on using the installation script (...) -
Farm notifications
1 December 2010, by
To manage the farm properly, several things must be notified during specific actions, both to the user and to all the administrators of the farm.
Status-change notifications
When an instance's status changes, all the farm administrators must be notified of the change, as well as the instance's administrator user.
When a channel is requested
Switching to the "published" status
Switching to (...) -
Initialising MediaSPIP (preconfiguration)
20 February 2010, by
When MediaSPIP is installed, it is preconfigured for the most common uses.
This preconfiguration is performed by a plugin called MediaSPIP Init, which is enabled by default and cannot be disabled.
This plugin correctly preconfigures each MediaSPIP instance. It must therefore be placed in the plugins-dist/ folder of the site or farm so that it is installed by default, before the site can be used.
First, it enables or disables the SPIP options that it (...)
On other sites (6680)
-
FFMPEG RTMP STREAM RECORDING TIMEOUT
11 November 2020, by abreski

[SOLVED] — solution in FINAL EDIT

I am recording an RTMP livestream with ffmpeg and am looking for a flag that will automatically stop processing the moment new data stops arriving.


If I start and stop the script (stopping by killing the process on demand), everything is fine: the recording is saved and can be played.
However, when the stream is stopped at the source without a manual stop call, the script keeps running for a while and the resulting file is corrupted (tested with a manual stop call, which works, and with stopping the stream before the recording ends, simulating a browser/tab close or disconnect, which fails).


The command I'm running:


$command = "ffmpeg -i {$rtmpUrl} -c:v copy -c:a copy -t 3600 {$path} >/dev/null 2>/dev/null &";
$res = shell_exec($command);



I tried adding the -timeout 0 option before and after the input, like this:


$command = "ffmpeg -timeout 0 -i {$rtmpUrl} -c:v copy -c:a copy -t 3600 {$path} >/dev/null 2>/dev/null &"; 



and


$command = "ffmpeg -i {$rtmpUrl} -c:v copy -c:a copy -timeout 0 -t 3600 {$path} >/dev/null 2>/dev/null &";



but no improvement.


What am I missing here? Is there any way to automatically stop the script when new data stops arriving from the livestream (meaning the stream has stopped and the recording should stop as well)?


Note: $rtmpUrl and $path have been checked, and everything works fine as long as the script is stopped before the livestream ends.


Any suggestions are highly appreciated.


LATER EDIT: I realised the timeout was in the wrong place and added it first, but the result was still the same, so I'm still looking for suggestions.


$command = "timout 0 ffmpeg -i {$rtmpUrl} -c:v copy -c:a copy -t 3600 {$path} >/dev/null 2>/dev/null &";



FINAL EDIT
In case someone finds this thread looking for a solution in a similar case:
solved. timeout was not what I was looking for; instead, using the
-re
flag fixed it for us.
Now the script stops when no more new frames come in.

$command = "ffmpeg -re -i {$rtmpUrl} -c:v copy -c:a copy -t 3600 {$path} >/dev/null 2>/dev/null &";



-
android ffmpeg bad video output
20 August 2014, by Sujith Manjavana

I'm following this tutorial to create my first ffmpeg app. I have successfully built the shared libs and compiled the project without any errors. But when I run the app on my Nexus 5, the output is this.
Here is the native code:
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libavutil/pixfmt.h>
#include <jni.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <wchar.h>
#include <android/bitmap.h>
#include <android/log.h>
#include <android/native_window.h>
#include <android/native_window_jni.h>
#define LOG_TAG "android-ffmpeg-tutorial02"
#define LOGI(...) __android_log_print(ANDROID_LOG_INFO, LOG_TAG, __VA_ARGS__)
#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR, LOG_TAG, __VA_ARGS__)
ANativeWindow* window;
char *videoFileName;
AVFormatContext *formatCtx = NULL;
int videoStream;
AVCodecContext *codecCtx = NULL;
AVFrame *decodedFrame = NULL;
AVFrame *frameRGBA = NULL;
jobject bitmap;
void* buffer;
struct SwsContext *sws_ctx = NULL;
int width;
int height;
int stop;
jint naInit(JNIEnv *pEnv, jobject pObj, jstring pFileName) {
AVCodec *pCodec = NULL;
int i;
AVDictionary *optionsDict = NULL;
videoFileName = (char *)(*pEnv)->GetStringUTFChars(pEnv, pFileName, NULL);
LOGI("video file name is %s", videoFileName);
// Register all formats and codecs
av_register_all();
// Open video file
if(avformat_open_input(&formatCtx, videoFileName, NULL, NULL)!=0)
return -1; // Couldn't open file
// Retrieve stream information
if(avformat_find_stream_info(formatCtx, NULL)<0)
return -1; // Couldn't find stream information
// Dump information about file onto standard error
av_dump_format(formatCtx, 0, videoFileName, 0);
// Find the first video stream
videoStream=-1;
for(i=0; i < formatCtx->nb_streams; i++) {
if(formatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
videoStream=i;
break;
}
}
if(videoStream==-1)
return -1; // Didn't find a video stream
// Get a pointer to the codec context for the video stream
codecCtx=formatCtx->streams[videoStream]->codec;
// Find the decoder for the video stream
pCodec=avcodec_find_decoder(codecCtx->codec_id);
if(pCodec==NULL) {
fprintf(stderr, "Unsupported codec!\n");
return -1; // Codec not found
}
// Open codec
if(avcodec_open2(codecCtx, pCodec, &optionsDict)<0)
return -1; // Could not open codec
// Allocate video frame
decodedFrame=avcodec_alloc_frame();
// Allocate an AVFrame structure
frameRGBA=avcodec_alloc_frame();
if(frameRGBA==NULL)
return -1;
return 0;
}
jobject createBitmap(JNIEnv *pEnv, int pWidth, int pHeight) {
int i;
//get Bitmap class and createBitmap method ID
jclass javaBitmapClass = (jclass)(*pEnv)->FindClass(pEnv, "android/graphics/Bitmap");
jmethodID mid = (*pEnv)->GetStaticMethodID(pEnv, javaBitmapClass, "createBitmap", "(IILandroid/graphics/Bitmap$Config;)Landroid/graphics/Bitmap;");
//create Bitmap.Config
//reference: https://forums.oracle.com/thread/1548728
const wchar_t* configName = L"ARGB_8888";
int len = wcslen(configName);
jstring jConfigName;
if (sizeof(wchar_t) != sizeof(jchar)) {
//wchar_t is defined as different length than jchar(2 bytes)
jchar* str = (jchar*)malloc((len+1)*sizeof(jchar));
for (i = 0; i < len; ++i) {
str[i] = (jchar)configName[i];
}
str[len] = 0;
jConfigName = (*pEnv)->NewString(pEnv, (const jchar*)str, len);
} else {
//wchar_t is defined same length as jchar(2 bytes)
jConfigName = (*pEnv)->NewString(pEnv, (const jchar*)configName, len);
}
jclass bitmapConfigClass = (*pEnv)->FindClass(pEnv, "android/graphics/Bitmap$Config");
jobject javaBitmapConfig = (*pEnv)->CallStaticObjectMethod(pEnv, bitmapConfigClass,
(*pEnv)->GetStaticMethodID(pEnv, bitmapConfigClass, "valueOf", "(Ljava/lang/String;)Landroid/graphics/Bitmap$Config;"), jConfigName);
//create the bitmap
return (*pEnv)->CallStaticObjectMethod(pEnv, javaBitmapClass, mid, pWidth, pHeight, javaBitmapConfig);
}
jintArray naGetVideoRes(JNIEnv *pEnv, jobject pObj) {
jintArray lRes;
if (NULL == codecCtx) {
return NULL;
}
lRes = (*pEnv)->NewIntArray(pEnv, 2);
if (lRes == NULL) {
LOGI("cannot allocate memory for video size");
return NULL;
}
jint lVideoRes[2];
lVideoRes[0] = codecCtx->width;
lVideoRes[1] = codecCtx->height;
(*pEnv)->SetIntArrayRegion(pEnv, lRes, 0, 2, lVideoRes);
return lRes;
}
void naSetSurface(JNIEnv *pEnv, jobject pObj, jobject pSurface) {
if (0 != pSurface) {
// get the native window reference
window = ANativeWindow_fromSurface(pEnv, pSurface);
// set format and size of window buffer
ANativeWindow_setBuffersGeometry(window, 0, 0, WINDOW_FORMAT_RGBA_8888);
} else {
// release the native window
ANativeWindow_release(window);
}
}
jint naSetup(JNIEnv *pEnv, jobject pObj, int pWidth, int pHeight) {
width = pWidth;
height = pHeight;
//create a bitmap as the buffer for frameRGBA
bitmap = createBitmap(pEnv, pWidth, pHeight);
if (AndroidBitmap_lockPixels(pEnv, bitmap, &buffer) < 0)
return -1;
//get the scaling context
sws_ctx = sws_getContext (
codecCtx->width,
codecCtx->height,
codecCtx->pix_fmt,
pWidth,
pHeight,
AV_PIX_FMT_RGBA,
SWS_BILINEAR,
NULL,
NULL,
NULL
);
// Assign appropriate parts of bitmap to image planes in pFrameRGBA
// Note that pFrameRGBA is an AVFrame, but AVFrame is a superset
// of AVPicture
avpicture_fill((AVPicture *)frameRGBA, buffer, AV_PIX_FMT_RGBA,
pWidth, pHeight);
return 0;
}
void finish(JNIEnv *pEnv) {
//unlock the bitmap
AndroidBitmap_unlockPixels(pEnv, bitmap);
av_free(buffer);
// Free the RGB image
av_free(frameRGBA);
// Free the YUV frame
av_free(decodedFrame);
// Close the codec
avcodec_close(codecCtx);
// Close the video file
avformat_close_input(&formatCtx);
}
void decodeAndRender(JNIEnv *pEnv) {
ANativeWindow_Buffer windowBuffer;
AVPacket packet;
int i=0;
int frameFinished;
int lineCnt;
while(av_read_frame(formatCtx, &packet)>=0 && !stop) {
// Is this a packet from the video stream?
if(packet.stream_index==videoStream) {
// Decode video frame
avcodec_decode_video2(codecCtx, decodedFrame, &frameFinished,
&packet);
// Did we get a video frame?
if(frameFinished) {
// Convert the image from its native format to RGBA
sws_scale
(
sws_ctx,
(uint8_t const * const *)decodedFrame->data,
decodedFrame->linesize,
0,
codecCtx->height,
frameRGBA->data,
frameRGBA->linesize
);
// lock the window buffer
if (ANativeWindow_lock(window, &windowBuffer, NULL) < 0) {
LOGE("cannot lock window");
} else {
// draw the frame on buffer
LOGI("copy buffer %d:%d:%d", width, height, width*height*4);
LOGI("window buffer: %d:%d:%d", windowBuffer.width,
windowBuffer.height, windowBuffer.stride);
memcpy(windowBuffer.bits, buffer, width * height * 4);
// unlock the window buffer and post it to display
ANativeWindow_unlockAndPost(window);
// count number of frames
++i;
}
}
}
// Free the packet that was allocated by av_read_frame
av_free_packet(&packet);
}
LOGI("total No. of frames decoded and rendered %d", i);
finish(pEnv);
}
/**
* start the video playback
*/
void naPlay(JNIEnv *pEnv, jobject pObj) {
//create a new thread for video decode and render
pthread_t decodeThread;
stop = 0;
pthread_create(&decodeThread, NULL, decodeAndRender, NULL);
}
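/*
 * Note (not part of the original tutorial code): a JNIEnv* is only valid on
 * the thread that owns it, so using pEnv inside the decode thread (it ends up
 * in finish(pEnv)) is unsafe; here pthread_create even passes NULL for it.
 * A common pattern is to cache the JavaVM* in JNI_OnLoad and let the worker
 * thread attach itself; sketch with a hypothetical cachedJvm variable:
 *
 *   JNIEnv *env;
 *   (*cachedJvm)->AttachCurrentThread(cachedJvm, &env, NULL);
 *   // ... decode and render ...
 *   (*cachedJvm)->DetachCurrentThread(cachedJvm);
 */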
/**
* stop the video playback
*/
void naStop(JNIEnv *pEnv, jobject pObj) {
stop = 1;
}
jint JNI_OnLoad(JavaVM* pVm, void* reserved) {
JNIEnv* env;
if ((*pVm)->GetEnv(pVm, (void **)&env, JNI_VERSION_1_6) != JNI_OK) {
return -1;
}
JNINativeMethod nm[8];
nm[0].name = "naInit";
nm[0].signature = "(Ljava/lang/String;)I";
nm[0].fnPtr = (void*)naInit;
nm[1].name = "naSetSurface";
nm[1].signature = "(Landroid/view/Surface;)V";
nm[1].fnPtr = (void*)naSetSurface;
nm[2].name = "naGetVideoRes";
nm[2].signature = "()[I";
nm[2].fnPtr = (void*)naGetVideoRes;
nm[3].name = "naSetup";
nm[3].signature = "(II)I";
nm[3].fnPtr = (void*)naSetup;
nm[4].name = "naPlay";
nm[4].signature = "()V";
nm[4].fnPtr = (void*)naPlay;
nm[5].name = "naStop";
nm[5].signature = "()V";
nm[5].fnPtr = (void*)naStop;
jclass cls = (*env)->FindClass(env, "roman10/tutorial/android_ffmpeg_tutorial02/MainActivity");
//Register methods with env->RegisterNatives.
(*env)->RegisterNatives(env, cls, nm, 6);
return JNI_VERSION_1_6;
}

Here is the build.sh:
#!/bin/bash
NDK=$HOME/Desktop/adt/android-ndk-r9
SYSROOT=$NDK/platforms/android-9/arch-arm/
TOOLCHAIN=$NDK/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64
function build_one
{
./configure \
--prefix=$PREFIX \
--enable-shared \
--disable-static \
--disable-doc \
--disable-ffmpeg \
--disable-ffplay \
--disable-ffprobe \
--disable-ffserver \
--disable-avdevice \
--disable-doc \
--disable-symver \
--cross-prefix=$TOOLCHAIN/bin/arm-linux-androideabi- \
--target-os=linux \
--arch=arm \
--enable-cross-compile \
--sysroot=$SYSROOT \
--extra-cflags="-Os -fpic $ADDI_CFLAGS" \
--extra-ldflags="$ADDI_LDFLAGS" \
$ADDITIONAL_CONFIGURE_FLAG
make clean
make
make install
}
CPU=arm
PREFIX=$(pwd)/android/$CPU
ADDI_CFLAGS="-marm"
build_one

It works on the Galaxy Tab 2. What can I do to make it work on all devices? Please help me.
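One device-dependent detail worth checking, offered as a guess rather than a confirmed fix: the memcpy in decodeAndRender assumes the window buffer stride equals the frame width. On many devices the driver pads each row, so windowBuffer.stride is larger than width and a single memcpy of width * height * 4 bytes shears or corrupts the picture. A row-by-row copy that honours the stride is safer:

/* Sketch: copy the RGBA frame line by line, honouring the window stride.
 * windowBuffer.stride is in pixels; RGBA_8888 uses 4 bytes per pixel. */
uint8_t *dst = (uint8_t *) windowBuffer.bits;
uint8_t *src = (uint8_t *) buffer;
int row;
for (row = 0; row < height; row++) {
 memcpy(dst + row * windowBuffer.stride * 4,
 src + row * width * 4,
 width * 4);
}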
-
Encoding/Decoding H264 using libav in C++ [closed]
20 May, by gbock93

I want to build an application to:


- capture frames in YUYV 4:2:2 format
- encode them to H264
- send over network
- decode the received data
- display the video stream

To do so I wrote 2 classes, H264Encoder and H264Decoder.


I post only the .cpp contents; the .h files are trivial:


H264Encoder.cpp


#include "H264Encoder.h"

#include <stdexcept>
#include <iostream>

H264Encoder::H264Encoder(unsigned int width_, unsigned int height_, unsigned int fps_):
 m_width(width_),
 m_height(height_),
 m_fps(fps_),
 m_frame_index(0),
 m_context(nullptr),
 m_frame(nullptr),
 m_packet(nullptr),
 m_sws_ctx(nullptr)
{
 // Find the video codec
 AVCodec* codec;
 codec = avcodec_find_encoder(AV_CODEC_ID_H264);
 if (!codec)
 throw std::runtime_error("[Encoder]: Error: Codec not found");

 // Allocate codec
 m_context = avcodec_alloc_context3(codec);
 if (!m_context)
 throw std::runtime_error("[Encoder]: Error: Could not allocate codec context");

 // Configure codec
 av_opt_set(m_context->priv_data, "preset", "ultrafast", 0);
 av_opt_set(m_context->priv_data, "tune", "zerolatency", 0);
 av_opt_set(m_context->priv_data, "crf", "35", 0); // Range: [0; 51], sane range: [18; 26], lower -> higher compression

 m_context->width = (int)width_;
 m_context->height = (int)height_;
 m_context->time_base = {1, (int)fps_};
 m_context->framerate = {(int)fps_, 1};
 m_context->codec_id = AV_CODEC_ID_H264;
 m_context->pix_fmt = AV_PIX_FMT_YUV420P; // the H.264 encoder used here takes only AV_PIX_FMT_YUV420P as input
 m_context->bit_rate = 400000;
 m_context->gop_size = 10;
 m_context->max_b_frames = 1;

 // Open codec
 if (avcodec_open2(m_context, codec, nullptr) < 0)
 throw std::runtime_error("[Encoder]: Error: Could not open codec");

 // Allocate frame and its buffer
 m_frame = av_frame_alloc();
 if (!m_frame) 
 throw std::runtime_error("[Encoder]: Error: Could not allocate frame");

 m_frame->format = m_context->pix_fmt;
 m_frame->width = m_context->width;
 m_frame->height = m_context->height;

 if (av_frame_get_buffer(m_frame, 0) < 0)
 throw std::runtime_error("[Encoder]: Error: Cannot allocate frame buffer");
 
 // Allocate packet
 m_packet = av_packet_alloc();
 if (!m_packet) 
 throw std::runtime_error("[Encoder]: Error: Could not allocate packet");

 // Convert from YUYV422 to YUV420P
 m_sws_ctx = sws_getContext(
 width_, height_, AV_PIX_FMT_YUYV422,
 width_, height_, AV_PIX_FMT_YUV420P,
 SWS_BILINEAR, nullptr, nullptr, nullptr
 );
 if (!m_sws_ctx) 
 throw std::runtime_error("[Encoder]: Error: Could not allocate sws context");

 //
 printf("[Encoder]: H264Encoder ready.\n");
}

H264Encoder::~H264Encoder()
{
 sws_freeContext(m_sws_ctx);
 av_packet_free(&m_packet);
 av_frame_free(&m_frame);
 avcodec_free_context(&m_context);

 printf("[Encoder]: H264Encoder destroyed.\n");
}

std::vector<uint8_t> H264Encoder::encode(const cv::Mat& img_)
{
 /*
 - YUYV422 is a packed format. It has 3 components (av_pix_fmt_desc_get((AVPixelFormat)AV_PIX_FMT_YUYV422)->nb_components == 3) but
 data is stored in a single plane (av_pix_fmt_count_planes((AVPixelFormat)AV_PIX_FMT_YUYV422) == 1).
 - YUV420P is a planar format. It has 3 components (av_pix_fmt_desc_get((AVPixelFormat)AV_PIX_FMT_YUV420P)->nb_components == 3) and
 each component is stored in a separate plane (av_pix_fmt_count_planes((AVPixelFormat)AV_PIX_FMT_YUV420P) == 3) with its
 own stride.
 */
 std::cout << "[Encoder]" << std::endl;
 std::cout << "[Encoder]: Encoding img " << img_.cols << "x" << img_.rows << " | element size " << img_.elemSize() << std::endl;
 assert(img_.elemSize() == 2);

 uint8_t* input_data[1] = {(uint8_t*)img_.data};
 int input_linesize[1] = {2 * (int)m_width};
 
 if (av_frame_make_writable(m_frame) < 0)
 throw std::runtime_error("[Encoder]: Error: Cannot make frame data writable");

 // Convert from YUV422 image to YUV420 frame. Apply scaling if necessary
 sws_scale(
 m_sws_ctx,
 input_data, input_linesize, 0, m_height,
 m_frame->data, m_frame->linesize
 );
 m_frame->pts = m_frame_index;

 int n_planes = av_pix_fmt_count_planes((AVPixelFormat)m_frame->format);
 std::cout << "[Encoder]: Sending Frame " << m_frame_index << " with dimensions " << m_frame->width << "x" << m_frame->height << "x" << n_planes << std::endl;
 for (int i = 0; i < n_planes; i++)
 std::cout << "[Encoder]: plane " << i << " linesize " << m_frame->linesize[i] << std::endl;

 // Send frame to codec
 int ret = avcodec_send_frame(m_context, m_frame);

 switch (ret)
 {
 case 0:
 std::cout << "[Encoder]: Sent frame to codec" << std::endl;
 m_frame_index = m_frame->pts / (m_context->time_base.den / m_context->framerate.num) + 1;
 break;
 case AVERROR(EAGAIN):
 throw std::runtime_error("[Encoder]: avcodec_send_frame: EAGAIN");
 case AVERROR_EOF:
 throw std::runtime_error("[Encoder]: avcodec_send_frame: EOF");
 case AVERROR(EINVAL):
 throw std::runtime_error("[Encoder]: avcodec_send_frame: EINVAL");
 case AVERROR(ENOMEM):
 throw std::runtime_error("[Encoder]: avcodec_send_frame: ENOMEM");
 default:
 throw std::runtime_error("[Encoder]: avcodec_send_frame: UNKNOWN");
 }

 // Receive packet from codec
 std::vector<uint8_t> result;
 while(ret >= 0)
 {
 ret = avcodec_receive_packet(m_context, m_packet);

 switch (ret)
 {
 case 0:
 std::cout << "[Encoder]: Received packet from codec of size " << m_packet->size << " bytes " << std::endl;
 result.insert(result.end(), m_packet->data, m_packet->data + m_packet->size);
 av_packet_unref(m_packet);
 break;

 case AVERROR(EAGAIN):
 std::cout << "[Encoder]: avcodec_receive_packet: EAGAIN" << std::endl;
 break;
 case AVERROR_EOF:
 std::cout << "[Encoder]: avcodec_receive_packet: EOF" << std::endl;
 break;
 case AVERROR(EINVAL):
 throw std::runtime_error("[Encoder]: avcodec_receive_packet: EINVAL");
 default:
 throw std::runtime_error("[Encoder]: avcodec_receive_packet: UNKNOWN");
 }
 }

 std::cout << "[Encoder]: Encoding complete" << std::endl;
 return result;
}
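Note that with max_b_frames set, the encoder may buffer frames, so early encode() calls can legitimately return an empty vector (EAGAIN). At end of stream the codec should be drained; a minimal sketch with a hypothetical flush() method that is not part of the class above:

std::vector<uint8_t> H264Encoder::flush()
{
 // Passing nullptr puts the encoder into draining mode.
 std::vector<uint8_t> result;
 if (avcodec_send_frame(m_context, nullptr) < 0)
 return result;

 // Collect everything still buffered; the loop ends on AVERROR_EOF.
 while (avcodec_receive_packet(m_context, m_packet) == 0)
 {
 result.insert(result.end(), m_packet->data, m_packet->data + m_packet->size);
 av_packet_unref(m_packet);
 }
 return result;
}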


H264Decoder.cpp


#include "H264Decoder.h"

#include <iostream>
#include <stdexcept>

H264Decoder::H264Decoder():
 m_context(nullptr),
 m_frame(nullptr),
 m_packet(nullptr)
{
 // Find the video codec
 AVCodec* codec;
 codec = avcodec_find_decoder(AV_CODEC_ID_H264);
 if (!codec)
 throw std::runtime_error("[Decoder]: Error: Codec not found");

 // Allocate codec
 m_context = avcodec_alloc_context3(codec);
 if (!m_context)
 throw std::runtime_error("[Decoder]: Error: Could not allocate codec context");

 // Open codec
 if (avcodec_open2(m_context, codec, nullptr) < 0)
 throw std::runtime_error("[Decoder]: Error: Could not open codec");

 // Allocate frame
 m_frame = av_frame_alloc();
 if (!m_frame)
 throw std::runtime_error("[Decoder]: Error: Could not allocate frame");

 // Allocate packet
 m_packet = av_packet_alloc();
 if (!m_packet) 
 throw std::runtime_error("[Decoder]: Error: Could not allocate packet");

 //
 printf("[Decoder]: H264Decoder ready.\n");
}

H264Decoder::~H264Decoder()
{
 av_packet_free(&m_packet);
 av_frame_free(&m_frame);
 avcodec_free_context(&m_context);

 printf("[Decoder]: H264Decoder destroyed.\n");
}

bool H264Decoder::decode(uint8_t* data_, size_t size_, cv::Mat& img_)
{
 std::cout << "[Decoder]" << std::endl;
 std::cout << "[Decoder]: decoding " << size_ << " bytes of data" << std::endl;

 // Fill packet
 m_packet->data = data_;
 m_packet->size = size_;

 if (size_ == 0)
 return false;

 // Send packet to codec
 int send_result = avcodec_send_packet(m_context, m_packet);

 switch (send_result)
 {
 case 0:
 std::cout << "[Decoder]: Sent packet to codec" << std::endl;
 break;
 case AVERROR(EAGAIN):
 throw std::runtime_error("[Decoder]: avcodec_send_packet: EAGAIN");
 case AVERROR_EOF:
 throw std::runtime_error("[Decoder]: avcodec_send_packet: EOF");
 case AVERROR(EINVAL):
 throw std::runtime_error("[Decoder]: avcodec_send_packet: EINVAL");
 case AVERROR(ENOMEM):
 throw std::runtime_error("[Decoder]: avcodec_send_packet: ENOMEM");
 default:
 throw std::runtime_error("[Decoder]: avcodec_send_packet: UNKNOWN");
 }

 // Receive frame from codec
 int n_planes;
 uint8_t* output_data[1];
 int output_line_size[1];

 int receive_result = avcodec_receive_frame(m_context, m_frame);

 switch (receive_result)
 {
 case 0:
 n_planes = av_pix_fmt_count_planes((AVPixelFormat)m_frame->format);
 std::cout << "[Decoder]: Received Frame with dimensions " << m_frame->width << "x" << m_frame->height << "x" << n_planes << std::endl;
 for (int i = 0; i < n_planes; i++)
 std::cout << "[Decoder]: plane " << i << " linesize " << m_frame->linesize[i] << std::endl;

 // Convert from YUV420P to BGR for the output cv::Mat.
 // (Assumption: this part of the listing was garbled; the question text
 // confirms the decoder calls sws_scale.)
 img_.create(m_frame->height, m_frame->width, CV_8UC3);
 output_data[0] = img_.data;
 output_line_size[0] = 3 * m_frame->width;
 {
 struct SwsContext* sws_ctx = sws_getContext(
 m_frame->width, m_frame->height, (AVPixelFormat)m_frame->format,
 m_frame->width, m_frame->height, AV_PIX_FMT_BGR24,
 SWS_BILINEAR, nullptr, nullptr, nullptr);
 sws_scale(sws_ctx, m_frame->data, m_frame->linesize, 0, m_frame->height,
 output_data, output_line_size);
 sws_freeContext(sws_ctx);
 }
 break;
 case AVERROR(EAGAIN):
 std::cout << "[Decoder]: avcodec_receive_frame: EAGAIN" << std::endl;
 return false;
 case AVERROR_EOF:
 std::cout << "[Decoder]: avcodec_receive_frame: EOF" << std::endl;
 return false;
 case AVERROR(EINVAL):
 throw std::runtime_error("[Decoder]: avcodec_receive_frame: EINVAL");
 default:
 throw std::runtime_error("[Decoder]: avcodec_receive_frame: UNKNOWN");
 }
 std::cout << "[Decoder]: Decoding complete" << std::endl;
 return true;
}
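A note on feeding the decoder: avcodec_send_packet expects complete packets, and in the test below decode() happens to receive exactly one encoder output per call. Once the data really travels over a network in arbitrary chunks, the byte stream usually has to be re-split with libav's parser first. A minimal sketch, assuming a hypothetical m_parser member created with av_parser_init(AV_CODEC_ID_H264):

// Split an arbitrary byte chunk into complete H.264 packets before decoding.
uint8_t* cursor = data_;
int remaining = (int)size_;
while (remaining > 0)
{
 uint8_t* out_data = nullptr;
 int out_size = 0;
 int consumed = av_parser_parse2(m_parser, m_context,
 &out_data, &out_size,
 cursor, remaining,
 AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
 if (consumed < 0)
 break;
 cursor += consumed;
 remaining -= consumed;
 if (out_size > 0)
 {
 m_packet->data = out_data;
 m_packet->size = out_size;
 avcodec_send_packet(m_context, m_packet); // then receive frames as above
 }
}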


To test the two classes I put together a main.cpp to grab a frame, encode/decode it and display the decoded frame (no network transmission in place):


main.cpp


while(...)
{
 // get frame from custom camera class. Format is YUYV 4:2:2
 camera.getFrame(camera_frame);
 // Construct a cv::Mat to represent the grabbed frame
 cv::Mat camera_frame_yuyv = cv::Mat(camera_frame.height, camera_frame.width, CV_8UC2, camera_frame.data.data());
 // Encode image
 std::vector<uint8_t> encoded_data = encoder.encode(camera_frame_yuyv);
 if (!encoded_data.empty())
 {
 // Decode image
 cv::Mat decoded_frame;
 if (decoder.decode(encoded_data.data(), encoded_data.size(), decoded_frame))
 {
 // Display image
 cv::imshow("Camera", decoded_frame);
 cv::waitKey(1);
 }
 }
}



Compiling and executing the code, I get random results between subsequent executions:


- Sometimes the whole loop runs without problems and I see the decoded image.
- Sometimes the program crashes at the sws_scale(...) call in the decoder with "Assertion desc failed at src/libswscale/swscale_internal.h:757".
- Sometimes the loop runs but I see a black image, and the message "Slice parameters 0, 720 are invalid" is printed when the sws_scale(...) call in the decoder executes.





Why is the behaviour so random? What am I doing wrong with the libav API?


Some resources I found useful:


- This article on encoding
- This article on decoding