
Media (91)
-
Richard Stallman and free software
19 October 2011, by
Updated: May 2013
Language: French
Type: Text
-
Stereo master soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Audio
-
Elephants Dream - Cover of the soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Image
-
#7 Ambience
16 October 2011, by
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
Other articles (103)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
MediaSPIP version 0.1 Beta
16 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you wish to use this archive for a "farm mode" installation, you will also need to make other modifications (...)
-
Improving the base version
13 September 2013
A nicer multiple-select field
The Chosen plugin improves the ergonomics of multiple-select form fields. See the following two images for a comparison.
All it takes is activating the Chosen plugin (Site general configuration > Plugin management), then configuring it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)
On other sites (9387)
-
How to use a real image instead of the dummy image in ffmpeg api-example.c
2 March 2013, by Mohan
I am using the video_encode_example function from FFmpeg's api-example.c, which basically creates 25 dummy images and encodes them into a one-second video.
However, I am unable to use real images instead of the dummy ones.
If anyone knows how to do this for Xcode / Objective-C, please post a reply.
Below is the function (one possible way to feed it a real image is sketched after it):
/*
 * Video encoding example
 */
static void video_encode_example(const char *filename)
{
    AVCodec *codec;
    AVCodecContext *c = NULL;
    int i, out_size, size, x, y, outbuf_size;
    FILE *f;
    AVFrame *picture;
    uint8_t *outbuf, *picture_buf;

    printf("Video encoding\n");

    /* find the mpeg1 video encoder */
    codec = avcodec_find_encoder(CODEC_ID_MPEG1VIDEO);
    if (!codec) {
        fprintf(stderr, "codec not found\n");
        exit(1);
    }

    c = avcodec_alloc_context();
    picture = avcodec_alloc_frame();

    /* put sample parameters */
    c->bit_rate = 400000;
    /* resolution must be a multiple of two */
    c->width = 352;
    c->height = 288;
    /* frames per second */
    c->time_base = (AVRational){1, 25};
    c->gop_size = 10; /* emit one intra frame every ten frames */
    c->max_b_frames = 1;
    c->pix_fmt = PIX_FMT_YUV420P;

    /* open it */
    if (avcodec_open(c, codec) < 0) {
        fprintf(stderr, "could not open codec\n");
        exit(1);
    }

    f = fopen(filename, "wb");
    if (!f) {
        fprintf(stderr, "could not open %s\n", filename);
        exit(1);
    }

    /* alloc image and output buffer */
    outbuf_size = 100000;
    outbuf = malloc(outbuf_size);
    size = c->width * c->height;
    picture_buf = malloc((size * 3) / 2); /* size for YUV 420 */

    picture->data[0] = picture_buf;
    picture->data[1] = picture->data[0] + size;
    picture->data[2] = picture->data[1] + size / 4;
    picture->linesize[0] = c->width;
    picture->linesize[1] = c->width / 2;
    picture->linesize[2] = c->width / 2;

    /* encode 1 second of video */
    for (i = 0; i < 25; i++) {
        fflush(stdout);
        /* prepare a dummy image */
        /* Y */
        for (y = 0; y < c->height; y++) {
            for (x = 0; x < c->width; x++) {
                picture->data[0][y * picture->linesize[0] + x] = x + y + i * 3;
            }
        }
        /* Cb and Cr */
        for (y = 0; y < c->height / 2; y++) {
            for (x = 0; x < c->width / 2; x++) {
                picture->data[1][y * picture->linesize[1] + x] = 128 + y + i * 2;
                picture->data[2][y * picture->linesize[2] + x] = 64 + x + i * 5;
            }
        }

        /* encode the image */
        out_size = avcodec_encode_video(c, outbuf, outbuf_size, picture);
        printf("encoding frame %3d (size=%5d)\n", i, out_size);
        fwrite(outbuf, 1, out_size, f);
    }

    /* get the delayed frames */
    for (; out_size; i++) {
        fflush(stdout);
        out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
        printf("write frame %3d (size=%5d)\n", i, out_size);
        fwrite(outbuf, 1, out_size, f);
    }

    /* add sequence end code to have a real mpeg file */
    outbuf[0] = 0x00;
    outbuf[1] = 0x00;
    outbuf[2] = 0x01;
    outbuf[3] = 0xb7;
    fwrite(outbuf, 1, 4, f);
    fclose(f);
    free(picture_buf);
    free(outbuf);

    avcodec_close(c);
    av_free(c);
    av_free(picture);
    printf("\n");
}
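For reference, a hedged sketch of one way to do it, not taken from api-example.c itself: load the picture into a packed RGB24 buffer with whatever image loader you have, then let libswscale convert it into the YUV420P planes the encoder expects, in place of the two dummy-image loops. rgb_data and fill_picture_from_rgb are made-up names, the snippet needs #include <libswscale/swscale.h>, and modern trees spell the constants AV_PIX_FMT_*:

/* Hedged sketch: fill `picture` from one packed RGB24 image of size
 * c->width x c->height, assumed to be loaded elsewhere. */
static void fill_picture_from_rgb(AVCodecContext *c, AVFrame *picture,
                                  const uint8_t *rgb_data)
{
    struct SwsContext *sws = sws_getContext(
        c->width, c->height, PIX_FMT_RGB24,    /* source: packed RGB24 */
        c->width, c->height, PIX_FMT_YUV420P,  /* destination: encoder input */
        SWS_BILINEAR, NULL, NULL, NULL);

    const uint8_t *src[1] = { rgb_data };
    int src_stride[1] = { 3 * c->width };      /* 3 bytes per RGB24 pixel */

    /* writes into picture->data[0..2] using the linesizes set up above */
    sws_scale(sws, src, src_stride, 0, c->height,
              picture->data, picture->linesize);

    sws_freeContext(sws);
}

Called once per frame before avcodec_encode_video(), this replaces the Y/Cb/Cr dummy-pattern loops.
-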
Encoding/Decoding H264 using libav in C++ [closed]
20 May, by gbock93
I want to build an application to:


- capture frames in YUYV 4:2:2 format
- encode them to H264
- send them over the network
- decode the received data
- display the video stream

To do so I wrote two classes, H264Encoder and H264Decoder.

I am posting only the .cpp contents; the .h files are trivial:


H264Encoder.cpp


#include "H264Encoder.h" // header name eaten by the page; presumably the class's own header

#include <stdexcept>
#include <iostream>

H264Encoder::H264Encoder(unsigned int width_, unsigned int height_, unsigned int fps_):
 m_width(width_),
 m_height(height_),
 m_fps(fps_),
 m_frame_index(0),
 m_context(nullptr),
 m_frame(nullptr),
 m_packet(nullptr),
 m_sws_ctx(nullptr)
{
 // Find the video codec
 AVCodec* codec;
 codec = avcodec_find_encoder(AV_CODEC_ID_H264);
 if (!codec)
 throw std::runtime_error("[Encoder]: Error: Codec not found");

 // Allocate codec
 m_context = avcodec_alloc_context3(codec);
 if (!m_context)
 throw std::runtime_error("[Encoder]: Error: Could not allocate codec context");

 // Configure codec
 av_opt_set(m_context->priv_data, "preset", "ultrafast", 0);
 av_opt_set(m_context->priv_data, "tune", "zerolatency", 0);
 av_opt_set(m_context->priv_data, "crf", "35", 0); // Range: [0; 51], sane range: [18; 26], lower -> higher compression

 m_context->width = (int)width_;
 m_context->height = (int)height_;
 m_context->time_base = {1, (int)fps_};
 m_context->framerate = {(int)fps_, 1};
 m_context->codec_id = AV_CODEC_ID_H264;
 m_context->pix_fmt = AV_PIX_FMT_YUV420P; // the H.264/H.265 codecs here take only AV_PIX_FMT_YUV420P as input
 m_context->bit_rate = 400000;
 m_context->gop_size = 10;
 m_context->max_b_frames = 1;

 // Open codec
 if (avcodec_open2(m_context, codec, nullptr) < 0)
 throw std::runtime_error("[Encoder]: Error: Could not open codec");

 // Allocate frame and its buffer
 m_frame = av_frame_alloc();
 if (!m_frame) 
 throw std::runtime_error("[Encoder]: Error: Could not allocate frame");

 m_frame->format = m_context->pix_fmt;
 m_frame->width = m_context->width;
 m_frame->height = m_context->height;

 if (av_frame_get_buffer(m_frame, 0) < 0)
 throw std::runtime_error("[Encoder]: Error: Cannot allocate frame buffer");
 
 // Allocate packet
 m_packet = av_packet_alloc();
 if (!m_packet) 
 throw std::runtime_error("[Encoder]: Error: Could not allocate packet");

 // Convert from YUYV422 to YUV420P
 m_sws_ctx = sws_getContext(
 width_, height_, AV_PIX_FMT_YUYV422,
 width_, height_, AV_PIX_FMT_YUV420P,
 SWS_BILINEAR, nullptr, nullptr, nullptr
 );
 if (!m_sws_ctx) 
 throw std::runtime_error("[Encoder]: Error: Could not allocate sws context");

 //
 printf("[Encoder]: H264Encoder ready.\n");
}

H264Encoder::~H264Encoder()
{
 sws_freeContext(m_sws_ctx);
 av_packet_free(&m_packet);
 av_frame_free(&m_frame);
 avcodec_free_context(&m_context);

 printf("[Encoder]: H264Encoder destroyed.\n");
}

std::vector<uint8_t> H264Encoder::encode(const cv::Mat& img_)
{
 /*
 - YUYV422 is a packed format. It has 3 components (av_pix_fmt_desc_get((AVPixelFormat)AV_PIX_FMT_YUYV422)->nb_components == 3) but
 data is stored in a single plane (av_pix_fmt_count_planes((AVPixelFormat)AV_PIX_FMT_YUYV422) == 1).
 - YUV420P is a planar format. It has 3 components (av_pix_fmt_desc_get((AVPixelFormat)AV_PIX_FMT_YUV420P)->nb_components == 3) and
 each component is stored in a separate plane (av_pix_fmt_count_planes((AVPixelFormat)AV_PIX_FMT_YUV420P) == 3) with its
 own stride.
 */
 std::cout << "[Encoder]" << std::endl;
 std::cout << "[Encoder]: Encoding img " << img_.cols << "x" << img_.rows << " | element size " << img_.elemSize() << std::endl;
 assert(img_.elemSize() == 2);

 uint8_t* input_data[1] = {(uint8_t*)img_.data};
 int input_linesize[1] = {2 * (int)m_width};
 
 if (av_frame_make_writable(m_frame) < 0)
 throw std::runtime_error("[Encoder]: Error: Cannot make frame data writable");

 // Convert from YUV422 image to YUV420 frame. Apply scaling if necessary
 sws_scale(
 m_sws_ctx,
 input_data, input_linesize, 0, m_height,
 m_frame->data, m_frame->linesize
 );
 m_frame->pts = m_frame_index;

 int n_planes = av_pix_fmt_count_planes((AVPixelFormat)m_frame->format);
 std::cout << "[Encoder]: Sending Frame " << m_frame_index << " with dimensions " << m_frame->width << "x" << m_frame->height << "x" << n_planes << std::endl;
 for (int i = 0; i < n_planes; i++)
 std::cout << "[Encoder]: Plane " << i << " | linesize " << m_frame->linesize[i] << std::endl;

 // Send frame to codec. (The page swallowed part of this block; on success the
 // original also updated m_frame_index with an expression involving
 // m_context->framerate.num.)
 int ret = avcodec_send_frame(m_context, m_frame);
 switch (ret)
 {
 case 0:
 std::cout << "[Encoder]: Sent frame to codec" << std::endl;
 m_frame_index++;
 break;
 case AVERROR(EAGAIN):
 throw std::runtime_error("[Encoder]: avcodec_send_frame: EAGAIN");
 case AVERROR_EOF:
 throw std::runtime_error("[Encoder]: avcodec_send_frame: EOF");
 case AVERROR(EINVAL):
 throw std::runtime_error("[Encoder]: avcodec_send_frame: EINVAL");
 case AVERROR(ENOMEM):
 throw std::runtime_error("[Encoder]: avcodec_send_frame: ENOMEM");
 default:
 throw std::runtime_error("[Encoder]: avcodec_send_frame: UNKNOWN");
 }

 // Receive packet from codec
 std::vector<uint8_t> result;
 while(ret >= 0)
 {
 ret = avcodec_receive_packet(m_context, m_packet);

 switch (ret)
 {
 case 0:
 std::cout << "[Encoder]: Received packet from codec of size " << m_packet->size << " bytes " << std::endl;
 result.insert(result.end(), m_packet->data, m_packet->data + m_packet->size);
 av_packet_unref(m_packet);
 break;

 case AVERROR(EAGAIN):
 std::cout << "[Encoder]: avcodec_receive_packet: EAGAIN" << std::endl;
 break;
 case AVERROR_EOF:
 std::cout << "[Encoder]: avcodec_receive_packet: EOF" << std::endl;
 break;
 case AVERROR(EINVAL):
 throw std::runtime_error("[Encoder]: avcodec_receive_packet: EINVAL");
 default:
 throw std::runtime_error("[Encoder]: avcodec_receive_packet: UNKNOWN");
 }
 }

 std::cout << "[Encoder]: Encoding complete" << std::endl;
 return result;
}
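Since the .h files aren't shown, here is a hypothetical H264Encoder.h reconstructed from the .cpp above (the decoder's header would be analogous); the member types are guesses, not the author's actual file:

// Hypothetical H264Encoder.h, reconstructed from the .cpp above.
#pragma once

#include <cstdint>
#include <vector>

#include <opencv2/opencv.hpp>

extern "C" {
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}

class H264Encoder
{
public:
    H264Encoder(unsigned int width_, unsigned int height_, unsigned int fps_);
    ~H264Encoder();

    // Returns the encoded bytes produced for this image; may be empty.
    std::vector<uint8_t> encode(const cv::Mat& img_);

private:
    unsigned int m_width;
    unsigned int m_height;
    unsigned int m_fps;
    int64_t m_frame_index;

    AVCodecContext* m_context;
    AVFrame* m_frame;
    AVPacket* m_packet;
    SwsContext* m_sws_ctx;
};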


H264Decoder.cpp


#include "H264Decoder.h" // header name eaten by the page; presumably the class's own header

#include <iostream>
#include <stdexcept>

H264Decoder::H264Decoder():
 m_context(nullptr),
 m_frame(nullptr),
 m_packet(nullptr)
{
 // Find the video codec
 AVCodec* codec;
 codec = avcodec_find_decoder(AV_CODEC_ID_H264);
 if (!codec)
 throw std::runtime_error("[Decoder]: Error: Codec not found");

 // Allocate codec
 m_context = avcodec_alloc_context3(codec);
 if (!m_context)
 throw std::runtime_error("[Decoder]: Error: Could not allocate codec context");

 // Open codec
 if (avcodec_open2(m_context, codec, nullptr) < 0)
 throw std::runtime_error("[Decoder]: Error: Could not open codec");

 // Allocate frame
 m_frame = av_frame_alloc();
 if (!m_frame)
 throw std::runtime_error("[Decoder]: Error: Could not allocate frame");

 // Allocate packet
 m_packet = av_packet_alloc();
 if (!m_packet) 
 throw std::runtime_error("[Decoder]: Error: Could not allocate packet");

 //
 printf("[Decoder]: H264Decoder ready.\n");
}

H264Decoder::~H264Decoder()
{
 av_packet_free(&m_packet);
 av_frame_free(&m_frame);
 avcodec_free_context(&m_context);

 printf("[Decoder]: H264Decoder destroyed.\n");
}

bool H264Decoder::decode(uint8_t* data_, size_t size_, cv::Mat& img_)
{
 std::cout << "[Decoder]" << std::endl;
 std::cout << "[Decoder]: decoding " << size_ << " bytes of data" << std::endl;

 // Fill packet
 m_packet->data = data_;
 m_packet->size = size_;

 if (size_ == 0)
 return false;

 // Send packet to codec
 int send_result = avcodec_send_packet(m_context, m_packet);

 switch (send_result)
 {
 case 0:
 std::cout << "[Decoder]: Sent packet to codec" << std::endl;
 break;
 case AVERROR(EAGAIN):
 throw std::runtime_error("[Decoder]: avcodec_send_packet: EAGAIN");
 case AVERROR_EOF:
 throw std::runtime_error("[Decoder]: avcodec_send_packet: EOF");
 case AVERROR(EINVAL):
 throw std::runtime_error("[Decoder]: avcodec_send_packet: EINVAL");
 case AVERROR(ENOMEM):
 throw std::runtime_error("[Decoder]: avcodec_send_packet: ENOMEM");
 default:
 throw std::runtime_error("[Decoder]: avcodec_send_packet: UNKNOWN");
 }

 // Receive frame from codec
 int n_planes;
 uint8_t* output_data[1];
 int output_line_size[1];

 int receive_result = avcodec_receive_frame(m_context, m_frame);

 switch (receive_result)
 {
 case 0:
 n_planes = av_pix_fmt_count_planes((AVPixelFormat)m_frame->format);
 std::cout << "[Decoder]: Received Frame with dimensions " << m_frame->width << "x" << m_frame->height << "x" << n_planes << std::endl;
 for (int i = 0; i < n_planes; i++)
 std::cout << "[Decoder]: Plane " << i << " | linesize " << m_frame->linesize[i] << std::endl;
 // (The page swallowed the rest of this case and the remaining error cases.
 // Judging from the symptoms described below, this is where the frame is
 // converted to BGR into img_ via sws_getContext/sws_scale, using output_data
 // and output_line_size.)
 break;
 default:
 return false;
 }

 std::cout << "[Decoder]: Decoding complete" << std::endl;
 return true;
}
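A note on the receive side, since it may relate to the random sws_scale failures reported below: when avcodec_receive_frame() returns AVERROR(EAGAIN), m_frame carries no valid width/height/format, so an SwsContext built from it at that moment gets garbage parameters, which is one way to end up with an "Assertion desc failed" inside libswscale. A minimal sketch of a receive loop that only touches the frame on success; the BGR conversion is my assumption about what the swallowed code does:

// Hedged sketch: drain the decoder, converting only frames actually returned.
// Assumes m_context/m_frame/m_packet and the img_ parameter as in the class above.
bool got_image = false;
if (avcodec_send_packet(m_context, m_packet) < 0)
    throw std::runtime_error("[Decoder]: avcodec_send_packet failed");

while (true)
{
    int ret = avcodec_receive_frame(m_context, m_frame);
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        break;                                // no frame yet: do NOT touch m_frame
    if (ret < 0)
        throw std::runtime_error("[Decoder]: avcodec_receive_frame failed");

    // Width, height and format are only valid here, after a successful receive.
    cv::Mat bgr(m_frame->height, m_frame->width, CV_8UC3);
    uint8_t* dst_data[1] = { bgr.data };
    int dst_linesize[1] = { static_cast<int>(bgr.step) };

    SwsContext* sws = sws_getContext(
        m_frame->width, m_frame->height, (AVPixelFormat)m_frame->format,
        m_frame->width, m_frame->height, AV_PIX_FMT_BGR24,
        SWS_BILINEAR, nullptr, nullptr, nullptr);
    sws_scale(sws, m_frame->data, m_frame->linesize, 0, m_frame->height,
              dst_data, dst_linesize);
    sws_freeContext(sws);

    img_ = bgr;
    got_image = true;
}
return got_image;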


To test the two classes I put together a main.cpp that grabs a frame, encodes/decodes it and displays the decoded frame (no network transmission in place):


main.cpp


while(...)
{
 // get frame from custom camera class. Format is YUYV 4:2:2
 camera.getFrame(camera_frame);
 // Construct a cv::Mat to represent the grabbed frame
 cv::Mat camera_frame_yuyv = cv::Mat(camera_frame.height, camera_frame.width, CV_8UC2, camera_frame.data.data());
 // Encode image
 std::vector<uint8_t> encoded_data = encoder.encode(camera_frame_yuyv);
 if (!encoded_data.empty())
 {
 // Decode image
 cv::Mat decoded_frame;
 if (decoder.decode(encoded_data.data(), encoded_data.size(), decoded_frame))
 {
 // Display image
 cv::imshow("Camera", decoded_frame);
 cv::waitKey(1);
 }
 }
}
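One structural point, independent of the crash: encode() can return several access units concatenated in one buffer (its receive loop appends every packet it gets), while decode() pushes the whole buffer as a single AVPacket. That happens to work in this loopback test, but once the bytes travel over a real network a parser is the usual way to restore packet boundaries. A minimal sketch under that assumption; feed_decoder and dec_ctx are made-up names:

// Hedged sketch: split a raw Annex-B buffer back into individual packets with
// libavcodec's H.264 parser before sending them to the decoder.
void feed_decoder(AVCodecContext* dec_ctx, uint8_t* data, int size)
{
    AVCodecParserContext* parser = av_parser_init(AV_CODEC_ID_H264);
    AVPacket* pkt = av_packet_alloc();

    while (size > 0)
    {
        int consumed = av_parser_parse2(parser, dec_ctx,
                                        &pkt->data, &pkt->size,
                                        data, size,
                                        AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
        data += consumed;
        size -= consumed;

        if (pkt->size > 0)
            avcodec_send_packet(dec_ctx, pkt);  // then drain avcodec_receive_frame as usual
    }

    av_packet_free(&pkt);
    av_parser_uninit(parser);
}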



Compiling and executing the code, I get random results between subsequent executions:


- Sometimes the whole loop runs without problems and I see the decoded image.
- Sometimes the program crashes at the sws_scale(...) call in the decoder with "Assertion desc failed at src/libswscale/swscale_internal.h:757".
- Sometimes the loop runs but I see a black image, and the message "Slice parameters 0, 720 are invalid" is displayed when executing the sws_scale(...) call in the decoder.

Why is the behaviour so random? What am I doing wrong with the libav API?


Some resources I found useful:

- This article on encoding
- This article on decoding

-
Using FFmpeg encode and UDP with a Webcam?
14 March, by Rendres
I'm trying to get frames from a webcam using OpenCV, encode them with FFmpeg and send them using UDP.


I previously did a similar project that, instead of sending the packets with UDP, saved them in a video file.


My code is:


#include <iostream>
// (three more standard #include lines here were swallowed by the page)

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
#include <libavutil/imgutils.h>
#include <libavutil/mathematics.h>
#include <libswscale/swscale.h>
#include <libswresample/swresample.h>
}

#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

#define WIDTH 640
#define HEIGHT 480
#define CODEC_ID AV_CODEC_ID_H264
#define STREAM_PIX_FMT AV_PIX_FMT_YUV420P

static AVFrame *frame, *pFrameBGR;

int main(int argc, char **argv)
{
VideoCapture cap(0);
const char *url = "udp://127.0.0.1:8080";

AVFormatContext *formatContext;
AVStream *stream;
AVCodec *codec;
AVCodecContext *c;
AVDictionary *opts = NULL;

int ret, got_packet;

if (!cap.isOpened())
{
 return -1;
}

av_log_set_level(AV_LOG_TRACE);

av_register_all();
avformat_network_init();

avformat_alloc_output_context2(&formatContext, NULL, "h264", url);
if (!formatContext)
{
 av_log(NULL, AV_LOG_FATAL, "Could not allocate an output context for '%s'.\n", url);
}

codec = avcodec_find_encoder(CODEC_ID);
if (!codec)
{
 av_log(NULL, AV_LOG_ERROR, "Could not find encoder.\n");
}

stream = avformat_new_stream(formatContext, codec);

c = avcodec_alloc_context3(codec);

stream->id = formatContext->nb_streams - 1;
stream->time_base = (AVRational){1, 25};

c->codec_id = CODEC_ID;
c->bit_rate = 400000;
c->width = WIDTH;
c->height = HEIGHT;
c->time_base = stream->time_base;
c->gop_size = 12;
c->pix_fmt = STREAM_PIX_FMT;

if (formatContext->flags & AVFMT_GLOBALHEADER)
 c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

av_dict_set(&opts, "preset", "fast", 0);

av_dict_set(&opts, "tune", "zerolatency", 0);

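// (Note: the opts dictionary filled above is never passed to avcodec_open2
// below, so the preset/tune options are silently ignored.)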
ret = avcodec_open2(c, codec, NULL);
if (ret < 0)
{
 av_log(NULL, AV_LOG_ERROR, "Could not open video codec.\n");
}

pFrameBGR = av_frame_alloc();
if (!pFrameBGR)
{
 av_log(NULL, AV_LOG_ERROR, "Could not allocate video frame.\n");
}

frame = av_frame_alloc();
if (!frame)
{
 av_log(NULL, AV_LOG_ERROR, "Could not allocate video frame.\n");
}

frame->format = c->pix_fmt;
frame->width = c->width;
frame->height = c->height;

ret = avcodec_parameters_from_context(stream->codecpar, c);
if (ret < 0)
{
 av_log(NULL, AV_LOG_ERROR, "Could not open video codec.\n");
}

av_dump_format(formatContext, 0, url, 1);

ret = avformat_write_header(formatContext, NULL);
if (ret != 0)
{
 av_log(NULL, AV_LOG_ERROR, "Failed to connect to '%s'.\n", url);
}

Mat image(Size(WIDTH, HEIGHT), CV_8UC3); // cv::Size is (width, height)
SwsContext *swsctx = sws_getContext(WIDTH, HEIGHT, AV_PIX_FMT_BGR24, WIDTH, HEIGHT, AV_PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);
int frame_pts = 0;

while (1)
{
 cap >> image;

 int numBytesYUV = av_image_get_buffer_size(STREAM_PIX_FMT, WIDTH, HEIGHT, 1);
 uint8_t *bufferYUV = (uint8_t *)av_malloc(numBytesYUV * sizeof(uint8_t));

 avpicture_fill((AVPicture *)pFrameBGR, image.data, AV_PIX_FMT_BGR24, WIDTH, HEIGHT);
 avpicture_fill((AVPicture *)frame, bufferYUV, STREAM_PIX_FMT, WIDTH, HEIGHT);

 sws_scale(swsctx, (uint8_t const *const *)pFrameBGR->data, pFrameBGR->linesize, 0, HEIGHT, frame->data, frame->linesize);

 AVPacket pkt = {0};
 av_init_packet(&pkt);

 frame->pts = frame_pts;

 ret = avcodec_encode_video2(c, &pkt, frame, &got_packet);
 if (ret < 0)
 {
 av_log(NULL, AV_LOG_ERROR, "Error encoding frame\n");
 }

 if (got_packet)
 {
 pkt.pts = av_rescale_q_rnd(pkt.pts, c->time_base, stream->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
 pkt.dts = av_rescale_q_rnd(pkt.dts, c->time_base, stream->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
 pkt.duration = av_rescale_q(pkt.duration, c->time_base, stream->time_base);
 pkt.stream_index = stream->index;

 return av_interleaved_write_frame(formatContext, &pkt);

 cout << "Seguro que si" << endl;
 }
 frame_pts++;
}

avcodec_free_context(&c);
av_frame_free(&frame);
avformat_free_context(formatContext);

return 0;
}



The code compiles, but it crashes with a segmentation fault in av_interleaved_write_frame(). I've tried several implementations and several codecs (in this case I'm using libopenh264, but mpeg2video gives the same segmentation fault). I also tried av_write_frame(), but it returns the same error.
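For what it's worth, several calls in the listing log an error and then keep going (avformat_alloc_output_context2, avcodec_open2, avformat_write_header), and the return value of avformat_new_stream is never checked at all, so a NULL context or stream would only surface later, e.g. inside av_interleaved_write_frame(). A hedged sketch of the kind of guards that make the first failing call visible (av_make_error_string is the C++-friendly spelling of av_err2str):

// Hedged sketch: abort on the first failing libav call instead of logging and
// falling through, so the crash site points at the real failure.
avformat_alloc_output_context2(&formatContext, NULL, "h264", url);
if (!formatContext)
{
    av_log(NULL, AV_LOG_FATAL, "Could not allocate an output context for '%s'.\n", url);
    return 1;
}

stream = avformat_new_stream(formatContext, codec);
if (!stream)
{
    av_log(NULL, AV_LOG_ERROR, "Could not create stream.\n");
    return 1;
}

ret = avformat_write_header(formatContext, NULL);
if (ret < 0)
{
    char err[AV_ERROR_MAX_STRING_SIZE];
    av_log(NULL, AV_LOG_ERROR, "Failed to connect to '%s': %s\n",
           url, av_make_error_string(err, sizeof(err), ret));
    return 1;
}

Separately, the return av_interleaved_write_frame(formatContext, &pkt); inside the loop exits main() on the very first packet, so the cout after it never runs; presumably a leftover from debugging.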


As I said before, I just want to grab frames from a webcam connected via USB, encode them to H264 and send the packets over UDP to another PC.


My console log when I run the executable is:


[100%] Built target display
[OpenH264] this = 0x0x244b4f0, Info:CWelsH264SVCEncoder::SetOption():ENCODER_OPTION_TRACE_CALLBACK callback = 0x7f0c302a87c0.
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:CWelsH264SVCEncoder::InitEncoder(), openh264 codec version = 5a5c4f1
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:iUsageType = 0,iPicWidth= 640;iPicHeight= 480;iTargetBitrate= 400000;iMaxBitrate= 400000;iRCMode= 0;iPaddingFlag= 0;iTemporalLayerNum= 1;iSpatialLayerNum= 1;fFrameRate= 25.000000f;uiIntraPeriod= 12;eSpsPpsIdStrategy = 0;bPrefixNalAddingCtrl = 0;bSimulcastAVC=0;bEnableDenoise= 0;bEnableBackgroundDetection= 1;bEnableSceneChangeDetect = 1;bEnableAdaptiveQuant= 1;bEnableFrameSkip= 0;bEnableLongTermReference= 0;iLtrMarkPeriod= 30, bIsLosslessLink=0;iComplexityMode = 0;iNumRefFrame = 1;iEntropyCodingModeFlag = 0;uiMaxNalSize = 0;iLTRRefNum = 0;iMultipleThreadIdc = 1;iLoopFilterDisableIdc = 0 (offset(alpha/beta): 0,0;iComplexityMode = 0,iMaxQp = 51;iMinQp = 0)
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:sSpatialLayers[0]: .iVideoWidth= 640; .iVideoHeight= 480; .fFrameRate= 25.000000f; .iSpatialBitrate= 400000; .iMaxSpatialBitrate= 400000; .sSliceArgument.uiSliceMode= 1; .sSliceArgument.iSliceNum= 0; .sSliceArgument.uiSliceSizeConstraint= 1500;uiProfileIdc = 66;uiLevelIdc = 41
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Warning:SliceArgumentValidationFixedSliceMode(), unsupported setting with Resolution and uiSliceNum combination under RC on! So uiSliceNum is changed to 6!
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:Setting MaxSpatialBitrate (400000) the same at SpatialBitrate (400000) will make the actual bit rate lower than SpatialBitrate
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Warning:bEnableFrameSkip = 0,bitrate can't be controlled for RC_QUALITY_MODE,RC_BITRATE_MODE and RC_TIMESTAMP_MODE without enabling skip frame.
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Warning:Change QP Range from(0,51) to (12,42)
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:WELS CPU features/capacities (0x4007fe3f) detected: HTT: Y, MMX: Y, MMXEX: Y, SSE: Y, SSE2: Y, SSE3: Y, SSSE3: Y, SSE4.1: Y, SSE4.2: Y, AVX: Y, FMA: Y, X87-FPU: Y, 3DNOW: N, 3DNOWEX: N, ALTIVEC: N, CMOV: Y, MOVBE: Y, AES: Y, NUMBER OF LOGIC PROCESSORS ON CHIP: 8, CPU CACHE LINE SIZE (BYTES): 64
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:WelsInitEncoderExt() exit, overall memory usage: 4542878 bytes
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:WelsInitEncoderExt(), pCtx= 0x0x245a400.
Output #0, h264, to 'udp://192.168.100.39:8080':
Stream #0:0, 0, 1/25: Video: h264 (libopenh264), 1 reference frame, yuv420p, 640x480 (0x0), 0/1, q=2-31, 400 kb/s, 25 tbn
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:RcUpdateIntraComplexity iFrameDqBits = 385808,iQStep= 2016,iIntraCmplx = 777788928
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:[Rc]Layer 0: Frame timestamp = 0, Frame type = 2, encoding_qp = 30, average qp = 30, max qp = 33, min qp = 27, index = 0, iTid = 0, used = 385808, bitsperframe = 16000, target = 64000, remainingbits = -257808, skipbuffersize = 200000
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:WelsEncoderEncodeExt() OutputInfo iLayerNum = 2,iFrameSize = 48252
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:WelsEncoderEncodeExt() OutputInfo iLayerId = 0,iNalType = 0,iNalCount = 2, first Nal Length=18,uiSpatialId = 0,uiTemporalId = 0,iSubSeqId = 0
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:WelsEncoderEncodeExt() OutputInfo iLayerId = 1,iNalType = 1,iNalCount = 6, first Nal Length=6057,uiSpatialId = 0,uiTemporalId = 0,iSubSeqId = 0
[libopenh264 @ 0x244aa00] 6 slices
./scriptBuild.sh: line 20: 10625 Segmentation fault (core dumped) ./display



As you can see, FFmpeg uses libopenh264 and configures it correctly. However, no matter what I do, it always ends with the same segmentation fault...


I've used commands like this:


ffmpeg -s 640x480 -f video4linux2 -i /dev/video0 -r 30 -vcodec libopenh264 -an -f h264 udp://127.0.0.1:8080



And it works perfectly, but I need to process the frames before sending them. That's why I'm trying to use the libraries.


My FFmpeg version is:


ffmpeg version 3.3.6 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
configuration: --disable-yasm --enable-shared --enable-libopenh264 --cc='gcc -fPIC'
libavutil 55. 58.100 / 55. 58.100
libavcodec 57. 89.100 / 57. 89.100
libavformat 57. 71.100 / 57. 71.100
libavdevice 57. 6.100 / 57. 6.100
libavfilter 6. 82.100 / 6. 82.100
libswscale 4. 6.100 / 4. 6.100
libswresample 2. 7.100 / 2. 7.100



I tried to get more information about the error using gdb, but it didn't give me any debugging info.


How can I solve this problem?