
Other articles (63)
-
Customising by adding your logo, banner or background image
5 September 2013
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
MediaSPIP: changing the rights for creating objects and publishing them definitively
11 November 2010
By default, MediaSPIP lets you create 5 types of objects.
Also by default, the rights to create these objects and to publish them definitively are reserved for administrators, but they can of course be configured by the webmasters.
These rights are locked for several reasons: because allowing someone to publish should be the webmaster's decision, not a default of the whole platform; and because having an account can also be used for other things, (...)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.
On other sites (11382)
-
C++, FFmpeg, save AVPacket information into VideoState structure in ffplay.c
28 May 2015, by Yoohoo
I am currently working on a project that tests video streaming. In this project, the video stream is encoded with H.264 before being sent and decoded after being received, using FFmpeg codecs and functions.
I can encode the video stream with:
init_video_encode(AV_CODEC_ID_H264);
where
static void init_video_encode(AVCodecID codec_id){
codec = avcodec_find_encoder(codec_id);
if (!codec) {
fprintf(stderr, "Codec not found\n");
exit(1);
}
c = avcodec_alloc_context3(codec);
if (!c) {
fprintf(stderr, "Could not allocate video codec context\n");
exit(1);
}
/* put sample parameters */
c->bit_rate = 400000;
/* resolution must be a multiple of two */
c->width = 640;
c->height = 480;
/* frames per second */
c->time_base= (AVRational){1,25};
c->gop_size = 10; /* emit one intra frame every ten frames */
c->max_b_frames=max_f;
c->pix_fmt = AV_PIX_FMT_YUV420P;
if(codec_id == AV_CODEC_ID_H264)
av_opt_set(c->priv_data, "preset", "slow", 0);
/* open it */
if (avcodec_open2(c, codec, NULL) < 0) {
fprintf(stderr, "Could not open codec\n");
exit(1);
}
frame = avcodec_alloc_frame();
if (!frame) {
fprintf(stderr, "Could not allocate video frame\n");
exit(1);
}
frame->format = c->pix_fmt;
frame->width = c->width;
frame->height = c->height;
/* the image can be allocated by any means and av_image_alloc() is
* just the most convenient way if av_malloc() is to be used */
ret = av_image_alloc(frame->data, frame->linesize, c->width, c->height,
c->pix_fmt, 32);
/* get the delayed frames */
if (ret < 0) {
fprintf(stderr, "Could not allocate raw picture buffer\n");
exit(1);
}
av_init_packet(&pkt);
//}
pkt.data = NULL; // packet data will be allocated by the encoder
pkt.size = 0;
//cout<<"\nBefore YUV\n";
if(count == 0)
read_yuv420(frame->data[0]);
count ++;
// cout<<"\nAfter YUV\n";
if(count == SUBSITY) {
count = 0;
}
frame->pts = i++;
/* encode the image */
ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
if (ret < 0) {
fprintf(stderr, "Error encoding frame\n");
return -1;
}
//cout<<"\nRecord the Video\n";
if (got_output) {
//printf("Write frame %3d (size=%5d)\n", i, pkt.size);
//cout<<"\nBefore Memcpy\n\n\n";
memcpy(inbufout+totalSize,pkt.data,pkt.size);
//cout<<"\nAfter Memcpy\n\n\n";
totalSize+=pkt.size;
The video encoder works very well: if I write the encoded packets into a .h264 file, it can be played. But on the decoder side, I cannot decode the received packet with:
codec = avcodec_find_decoder(AV_CODEC_ID_H264);
if (!codec) {
fprintf(stderr, "Codec not found\n");
exit(1);
}
c = avcodec_alloc_context3(codec);
if (!c) {
fprintf(stderr, "Could not allocate video codec context\n");
exit(1);
}
if(codec->capabilities&CODEC_CAP_TRUNCATED)
c->flags|= CODEC_FLAG_TRUNCATED;
/* open it */
if (avcodec_open2(c, codec, NULL) < 0) {
fprintf(stderr, "Could not open codec\n");
exit(1);
}
frame = avcodec_alloc_frame();
if (!frame) {
fprintf(stderr, "Could not allocate video frame\n");
exit(1);
}
len = avcodec_decode_video2(avctx, frame, &got_frame, pkt);
if (len < 0) {
fprintf(stderr, "Error while decoding frame %d\n", *frame_count);
return len;
}
The failure is caused by the lack of a parser; I have tried to build one but failed.
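For reference, a minimal sketch of such a parser step using the same pre-3.x API as above (assumptions: c, frame, got_frame and len are the decoder-side variables shown above, while inbuf and inbuf_size stand for the received network buffer and its length, names not taken from the original code):
AVCodecParserContext *parser = av_parser_init(AV_CODEC_ID_H264); /* reassembles the raw byte stream into whole packets */
if (!parser) {
    fprintf(stderr, "Parser not found\n");
    exit(1);
}
uint8_t *data = inbuf;       /* bytes received from the network (assumed name) */
int data_size = inbuf_size;  /* number of valid bytes in the buffer (assumed name) */
while (data_size > 0) {
    uint8_t *out_data = NULL;
    int out_size = 0;
    /* consume input; when a complete packet has been assembled it is returned in out_data/out_size */
    int used = av_parser_parse2(parser, c, &out_data, &out_size,
                                data, data_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
    data += used;
    data_size -= used;
    if (out_size > 0) {
        AVPacket parsed_pkt;
        av_init_packet(&parsed_pkt);
        parsed_pkt.data = out_data;
        parsed_pkt.size = out_size;
        len = avcodec_decode_video2(c, frame, &got_frame, &parsed_pkt);
        if (len < 0)
            fprintf(stderr, "Error while decoding parsed packet\n");
        /* use frame here whenever got_frame is set */
    }
}
av_parser_close(parser);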
I am therefore wondering about using ffplay.c as a header file in my receiver program, so that I can use it as the decoder and player.
I have taken a look at ffplay.c: it actually fetches the file into a VideoState structure and processes it. The fetching part starts at line 3188 of ffplay.c:
VideoState *is;
is = av_mallocz(sizeof(VideoState));
if (!is)
return NULL;
av_strlcpy(is->filename, filename, sizeof(is->filename));
is->iformat = iformat;
is->ytop = 0;
is->xleft = 0;
/* start video display */
if (frame_queue_init(&is->pictq, &is->videoq, VIDEO_PICTURE_QUEUE_SIZE, 1) < 0)
goto fail;
if (frame_queue_init(&is->subpq, &is->subtitleq, SUBPICTURE_QUEUE_SIZE, 0) < 0)
goto fail;
if (frame_queue_init(&is->sampq, &is->audioq, SAMPLE_QUEUE_SIZE, 1) < 0)
goto fail;
packet_queue_init(&is->videoq);
packet_queue_init(&is->audioq);
packet_queue_init(&is->subtitleq);
is->continue_read_thread = SDL_CreateCond();
init_clock(&is->vidclk, &is->videoq.serial);
init_clock(&is->audclk, &is->audioq.serial);
init_clock(&is->extclk, &is->extclk.serial);
is->audio_clock_serial = -1;
is->av_sync_type = av_sync_type;
is->read_tid = SDL_CreateThread(read_thread, is);
if (!is->read_tid) {
fail:
stream_close(is);
return NULL;
}
Now, instead of fetching a file, I want to modify the ffplay.c code so that it fetches the received packets. I can save a received packet into an AVPacket with:
static AVPacket avpkt;
avpkt.data = inbuf;
My question is: how do I put the AVPacket information into the VideoState structure?
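One possible direction, sketched rather than tested: ffplay.c keeps one PacketQueue per stream inside VideoState, and its static helper packet_queue_put() is what read_thread() calls after av_read_frame(). Assuming the video stream has already been opened the way read_thread()/stream_component_open() do it, a received packet could be pushed into the video queue like this (inbuf_size is an assumed name for the length of the received data):
static AVPacket avpkt;
av_init_packet(&avpkt);
avpkt.data = inbuf;                     // received H.264 bytes
avpkt.size = inbuf_size;                // length of the received data (assumed name)
avpkt.stream_index = is->video_stream;  // must match the stream opened by stream_component_open()
packet_queue_put(&is->videoq, &avpkt);  // video_thread() will then pull it from the queue and decode it
Whether this alone is enough depends on the rest of ffplay's state (queue serials, clocks and the file-driven read_thread()), so treat it as a starting point rather than a complete answer.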
-
FFmpeg error: Exactly one scaler algorithm must be chosen
28 May 2015, by Dave_Dev
I am currently working on an FFmpeg project. I am trying to convert an RGB image into a YUV image using this code (I found it on the internet last night):
void Decode::video_encode_example(const char *filename, int codec_id)
{
AVCodec *codec;
AVCodecContext *c= NULL;
int i, ret, x, y, got_output;
FILE *f;
AVFrame *frame;
AVPacket pkt;
uint8_t endcode[] = { 0, 0, 1, 0xb7 };
printf("Encode video file %s\n", filename);
/* find the mpeg1 video encoder */
codec = avcodec_find_encoder((enum AVCodecID)codec_id);
if (!codec) {
fprintf(stderr, "Codec not found\n");
exit(1);
}
c = avcodec_alloc_context3(codec);
if (!c) {
fprintf(stderr, "Could not allocate video codec context\n");
exit(2);
}
/* put sample parameters */
c->bit_rate = 400000;
/* resolution must be a multiple of two */
c->width = 352;
c->height = 288;
/* frames per second */
c->time_base = (AVRational){1,25};
/* emit one intra frame every ten frames
* check frame pict_type before passing frame
* to encoder, if frame->pict_type is AV_PICTURE_TYPE_I
* then gop_size is ignored and the output of encoder
* will always be I frame irrespective to gop_size
*/
c->gop_size = 10;
c->max_b_frames = 1;
c->pix_fmt = AV_PIX_FMT_YUV420P;
if (codec_id == AV_CODEC_ID_H264)
av_opt_set(c->priv_data, "preset", "slow", 0);
/* open it */
if (avcodec_open2(c, codec, NULL) < 0) {
fprintf(stderr, "Could not open codec\n");
exit(3);
}
f = fopen(filename, "wb");
if (!f) {
fprintf(stderr, "Could not open %s\n", filename);
exit(4);
}
frame = avcodec_alloc_frame();// In a more recent version this is av_frame_alloc
if (!frame) {
fprintf(stderr, "Could not allocate video frame\n");
exit(5);
}
frame->format = c->pix_fmt;
frame->width = c->width;
frame->height = c->height;
/* the image can be allocated by any means and av_image_alloc() is
* just the most convenient way if av_malloc() is to be used */
ret = av_image_alloc(frame->data, frame->linesize, c->width, c->height,
c->pix_fmt, 32);
if (ret < 0) {
fprintf(stderr, "Could not allocate raw picture buffer\n");
exit(6);
}
//
// RGB to YUV:
// http://stackoverflow.com/questions/16667687/how-to-convert-rgb-from-yuv420p-for-ffmpeg-encoder
//
// Create some dummy RGB "frame"
uint8_t *rgba32Data = new uint8_t[4*c->width*c->height];
SwsContext * ctx = sws_getContext(c->width, c->height,
AV_PIX_FMT_RGBA, c->width, c->height,
AV_PIX_FMT_YUV420P, 0, 0, 0, 0);
/* encode 1 second of video */
for (i = 0; i < 25; i++) {
av_init_packet(&pkt);
pkt.data = NULL; // packet data will be allocated by the encoder
pkt.size = 0;
fflush(stdout);
/* prepare a dummy image */
/* Y */
// for (y = 0; y < c->height; y++) {
// for (x = 0; x < c->width; x++) {
// frame->data[0][y * frame->linesize[0] + x] = x + y + i * 3;
// }
// }
//
// /* Cb and Cr */
// for (y = 0; y < c->height/2; y++) {
// for (x = 0; x < c->width/2; x++) {
// frame->data[1][y * frame->linesize[1] + x] = 128 + y + i * 2;
// frame->data[2][y * frame->linesize[2] + x] = 64 + x + i * 5;
// }
// }
uint8_t *pos = rgba32Data;
for (y = 0; y < c->height; y++)
{
for (x = 0; x < c->width; x++)
{
pos[0] = i / (float)25 * 255;
pos[1] = 0;
pos[2] = x / (float)(c->width) * 255;
pos[3] = 255;
pos += 4;
}
}
uint8_t * inData[1] = { rgba32Data }; // RGBA32 have one plane
//
// NOTE: In a more general setting, the rows of your input image may
// be padded; that is, the bytes per row may not be 4 * width.
// In such cases, inLineSize should be set to that padded width.
//
int inLinesize[1] = { 4*c->width }; // RGBA stride
sws_scale(ctx, inData, inLinesize, 0, c->height, frame->data, frame->linesize);
frame->pts = i;
/* encode the image */
ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
if (ret < 0) {
fprintf(stderr, "Error encoding frame\n");
exit(7);
}
if (got_output) {
printf("Write frame %3d (size=%5d)\n", i, pkt.size);
fwrite(pkt.data, 1, pkt.size, f);
av_free_packet(&pkt);
}
}
/* get the delayed frames */
for (got_output = 1; got_output; i++) {
fflush(stdout);
ret = avcodec_encode_video2(c, &pkt, NULL, &got_output);
if (ret < 0) {
fprintf(stderr, "Error encoding frame\n");
exit(8);
}
if (got_output) {
printf("Write frame %3d (size=%5d)\n", i, pkt.size);
fwrite(pkt.data, 1, pkt.size, f);
av_free_packet(&pkt);
}
}
/* add sequence end code to have a real mpeg file */
fwrite(endcode, 1, sizeof(endcode), f);
fclose(f);
avcodec_close(c);
av_free(c);
av_freep(&frame->data[0]);
avcodec_free_frame(&frame);// In a more recent version this is av_frame_free
printf("\n");
}
int main()
{
Decode d;
avcodec_register_all();
d.video_encode_example("/home/Dave/Desktop/test.mpg",AV_CODEC_ID_MPEG2VIDEO);
}
When I run this application, my Linux terminal shows the following error:
[swscaler @ 0x1e1dc60] Exactly one scaler algorithm must be chosen
Segmentation fault (core dumped)
I do not know what is actually happening. Could you help me, please?
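A plausible cause, judging only from the code above: sws_getContext() is called with 0 as its flags argument, and libswscale requires exactly one scaling algorithm to be selected there. A minimal sketch of the same call with SWS_BILINEAR chosen (any single SWS_* algorithm flag would do):
// Pick exactly one scaler algorithm instead of passing 0 for the flags argument.
SwsContext *ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_RGBA,
                                 c->width, c->height, AV_PIX_FMT_YUV420P,
                                 SWS_BILINEAR, NULL, NULL, NULL);
if (!ctx) {
    fprintf(stderr, "Could not create scaling context\n");
    exit(9);
}
The segmentation fault most likely follows from sws_scale() then being called with the NULL context that sws_getContext() returned, so checking the return value as above also avoids the crash.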
Best regards
Dave_Dev
-
FFmpeg using Intel quicksync
3 March 2016, by KevinA
I'm trying to use FFmpeg with Intel QuickSync (qsv).
Finding the codec works, but when I go to open the codec I get a -40 error.
I've traced it to:
ret = MFXVideoENCODE_GetVideoParam(q->session, &q->param);
Below is my initialization code:
AVCodec* m_codec = ::avcodec_find_encoder_by_name("h264_qsv");
if (!m_codec){
DBGPRINTF("Could not find encoder");
return E_INVALIDARG;
}
AVCodecContext* m_context = ::avcodec_alloc_context3(m_codec);
if (!m_context){
DBGPRINTF("Could not alloc AV context");
return E_INVALIDARG;
}
mfxIMPL impl = MFX_IMPL_AUTO;
mfxVersion ver = { { 1, 1 } };
MFXInit(impl, &ver, &m_qsvContext->session);
m_qsvContext->iopattern = MFX_IOPATTERN_IN_OPAQUE_MEMORY;
m_qsvContext->opaque_alloc = 1;
m_qsvContext->nb_opaque_surfaces = 16;
m_context->hwaccel_context = m_qsvContext;
m_context->profile = FF_PROFILE_H264_BASELINE;
AVRational fps;
AVRational sar;
fps.num = static_cast<int>(m_targetFPS);
fps.den = 1;
sar.num = m_iHeightOut;
sar.den = m_iWidthOut;
//m_context->bit_rate = 400000;
m_context->width = m_iWidthOut;
m_context->height = m_iHeightOut;
m_context->has_b_frames = 0;
m_context->sample_aspect_ratio = sar;
m_context->time_base = fps;
m_context->gop_size = s_keyFramesMax;
m_context->pix_fmt = AV_PIX_FMT_QSV;
m_context->flags |= AV_CODEC_FLAG_QSCALE;
m_context->flags |= CODEC_FLAG_PASS1;
m_context->thread_count = 1;
m_context->codec_type = AVMEDIA_TYPE_VIDEO;
::av_opt_set(m_context->priv_data, "preset", "fast", 0);
::av_opt_set(m_context->priv_data, "look_ahead", "0", 0);
int ret = avcodec_open2(context, codec, nullptr);
if (ret < 0) {
DBGPRINTF("Could not open codec h264_qsv with code %d", ret);
return ret;
}
I'm obviously missing something, but I'm not sure what. Can someone help point me in the right direction?
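One thing worth checking (an assumption, not a confirmed diagnosis): the context above selects AV_PIX_FMT_QSV and sets AV_CODEC_FLAG_QSCALE without a global_quality value, while bit_rate is commented out, so the encoder may end up with no usable rate-control/input-format combination. A minimal sketch of opening h264_qsv with plain NV12 frames in system memory and an explicit bitrate, letting the encoder create its own MFX session (the numeric values are illustrative only; DBGPRINTF and E_INVALIDARG are borrowed from the code above):
AVCodec* codec = ::avcodec_find_encoder_by_name("h264_qsv");
if (!codec) {
    DBGPRINTF("h264_qsv encoder not found");
    return E_INVALIDARG;
}
AVCodecContext* enc = ::avcodec_alloc_context3(codec);
enc->width = 1280;                        // illustrative resolution
enc->height = 720;
enc->time_base = AVRational{ 1, 30 };     // 30 fps
enc->framerate = AVRational{ 30, 1 };
enc->gop_size = 60;
enc->pix_fmt = AV_PIX_FMT_NV12;           // system-memory input; AV_PIX_FMT_QSV expects a hardware surface setup
enc->bit_rate = 4000000;                  // give the encoder an explicit rate control
::av_opt_set(enc->priv_data, "preset", "fast", 0);
int err = ::avcodec_open2(enc, codec, nullptr);
if (err < 0) {
    char buf[AV_ERROR_MAX_STRING_SIZE] = { 0 };
    ::av_strerror(err, buf, sizeof(buf));
    DBGPRINTF("avcodec_open2 failed: %s", buf);
    return err;
}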