Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
avcodec_open2 error -542398533 : "Generic error in an external library"
15 February 2017, by bot1131357
I am encountering an error when trying to open the codec with avcodec_open2(). I have tried the same code without any problems if I specify avi instead of h264 in the av_guess_format() function. I don't know what to make of it. Has anyone else encountered a similar problem?
The library that I'm using is ffmpeg-20160219-git-98a0053-win32-dev. I would really appreciate it if you could help me out of this confusion.
This is my console output:
Video encoding
[libx264 @ 01383460] broken ffmpeg default settings detected
[libx264 @ 01383460] use an encoding preset (e.g. -vpre medium)
[libx264 @ 01383460] preset usage: -vpre -vpre
[libx264 @ 01383460] speed presets are listed in x264 --help
[libx264 @ 01383460] profile is optional; x264 defaults to high
Cannot open video codec, -542398533

This is the code that I'm working with:
// Video encoding sample
AVCodec *codec = NULL;
AVCodecContext *codecCtx = NULL;
AVFormatContext *pFormatCtx = NULL;
AVOutputFormat *pOutFormat = NULL;
AVStream *pVideoStream = NULL;
AVFrame *picture = NULL;
int i, x, y, ret;

printf("Video encoding\n");

// Register all formats and codecs
av_register_all();

// guess format from file extension
pOutFormat = av_guess_format("h264", NULL, NULL);
if (NULL == pOutFormat) {
    cerr << "Could not guess output format" << endl;
    return -1;
}

// allocate context
pFormatCtx = avformat_alloc_context();
pFormatCtx->oformat = pOutFormat;
memcpy(pFormatCtx->filename, filename,
       min(strlen(filename), sizeof(pFormatCtx->filename)));

// Add stream to pFormatCtx
pVideoStream = avformat_new_stream(pFormatCtx, 0);
if (!pVideoStream) {
    printf("Cannot add new video stream\n");
    return -1;
}

// Set stream's codec context
codecCtx = pVideoStream->codec;
codecCtx->codec_id = (AVCodecID)pOutFormat->video_codec;
codecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
codecCtx->frame_number = 0;
// Put sample parameters.
codecCtx->bit_rate = 2000000;
// Resolution must be a multiple of two.
codecCtx->width = 320;
codecCtx->height = 240;
codecCtx->time_base.den = 10;
codecCtx->time_base.num = 1;
pVideoStream->time_base.den = 10;
pVideoStream->time_base.num = 1;

codecCtx->gop_size = 12; // emit one intra frame every twelve frames at most
codecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
if (codecCtx->codec_id == AV_CODEC_ID_H264) {
    // Just for testing, we also add B frames
    codecCtx->mb_decision = 2;
}

// Some formats want stream headers to be separate.
if (pFormatCtx->oformat->flags & AVFMT_GLOBALHEADER) {
    codecCtx->flags |= CODEC_FLAG_GLOBAL_HEADER;
}

if (codecCtx->codec_id == AV_CODEC_ID_H264)
    av_opt_set(codecCtx->priv_data, "preset", "slow", 0);

// Open the codec.
codec = avcodec_find_encoder(codecCtx->codec_id);
if (codec == NULL) {
    fprintf(stderr, "Codec not found\n");
    return -1;
}
ret = avcodec_open2(codecCtx, codec, NULL); // returns -542398533 here
if (ret < 0) {
    printf("Cannot open video codec, %d\n", ret);
    return -1;
}
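As a side note on interpreting the raw return code: FFmpeg error codes are negated four-character tags (built with the FFERRTAG macro), so they can be decoded by hand even without calling av_strerror(). A minimal Python sketch of that decoding (the averror_to_tag name is just for illustration, not an ffmpeg API):

```python
def averror_to_tag(err):
    """Decode a negative FFmpeg error code into its four-character tag.

    FFmpeg builds these codes as the negated MKTAG of four bytes
    packed little-endian (see libavutil/error.h).
    """
    n = -err
    return bytes([n & 0xFF, (n >> 8) & 0xFF,
                  (n >> 16) & 0xFF, (n >> 24) & 0xFF]).decode('ascii')

print(averror_to_tag(-542398533))  # 'EXT '
```

The tag 'EXT ' corresponds to AVERROR_EXTERNAL ("Generic error in an external library"), which matches the message in the title; the real cause is whatever libx264 logged just before failing, i.e. the "broken ffmpeg default settings detected" lines.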
-
Cannot run transcode.c from ffmpeg examples
15 February 2017, by Kanishka Jaiswal
Whenever I try to run the ffmpeg example transcode.c I get the following error:

[libx264 @ 000000c428ef9260] broken ffmpeg default settings detected
[libx264 @ 000000c428ef9260] use an encoding preset (e.g. -vpre medium)
[libx264 @ 000000c428ef9260] preset usage: -vpre -vpre
[libx264 @ 000000c428ef9260] speed presets are listed in x264 --help
[libx264 @ 000000c428ef9260] profile is optional; x264 defaults to high
Cannot open video encoder for stream #0

I am running it as C++ code in Visual Studio on Windows. I had to make some changes to run it as C++. Link for the code here: http://pastebin.com/qAf7sbsp
The output file I am giving is a duplicate of the input file.
-
encode .wav file using ffmpeg in objective c or c
15 February 2017, by deshu
I have to encode a .wav file and write it into the same file, or another file, using the ffmpeg library. Here is my code for encoding:
- (void)audioencode:(const char *)fileName
{
    AVFrame *frame;
    AVPacket pkt;
    int i, j, k, ret, got_output;
    int buffer_size;
    FILE *f;
    uint16_t *samples;
    const char *format_name = "wav";
    const char *file_url = "/Users/xxxx/Downloads/simple-drum-beat.wav";

    avcodec_register_all();
    av_register_all();

    AVOutputFormat *format = NULL;
    for (AVOutputFormat *formatIter = av_oformat_next(NULL);
         formatIter != NULL;
         formatIter = av_oformat_next(formatIter)) {
        int hasEncoder = NULL != avcodec_find_encoder(formatIter->audio_codec);
        if (0 == strcmp(format_name, formatIter->name)) {
            format = formatIter;
            break;
        }
    }

    AVCodec *codec = avcodec_find_encoder(format->audio_codec);
    NSLog(@"tet test tststs");

    AVCodecContext *c = avcodec_alloc_context3(codec);
    if (!c) {
        fprintf(stderr, "Could not allocate audio codec context\n");
        exit(1);
    }

    c->sample_fmt = AV_SAMPLE_FMT_S16;
    if (!check_sample_fmt(codec, c->sample_fmt)) {
        fprintf(stderr, "Encoder does not support sample format %s",
                av_get_sample_fmt_name(c->sample_fmt));
        exit(1);
    }

    c->bit_rate = 64000; // 705600;
    c->sample_rate = select_sample_rate(codec);
    c->channel_layout = select_channel_layout(codec);
    c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
    c->frame_size = av_get_audio_frame_duration(c, 16);
    int bits_per_sample = av_get_bits_per_sample(c->codec_id);
    int frameSize = av_get_audio_frame_duration(c, 16);

    /* open it */
    if (avcodec_open2(c, codec, NULL) < 0) {
        fprintf(stderr, "Could not open codec\n");
        exit(1);
    }

    f = fopen(fileName, "wb");
    if (!f) {
        fprintf(stderr, "Could not open %s\n", fileName);
        exit(1);
    }

    /* frame containing input raw audio */
    frame = av_frame_alloc();
    if (!frame) {
        fprintf(stderr, "Could not allocate audio frame\n");
        exit(1);
    }
    frame->nb_samples = frameSize /*c->frame_size*/;
    frame->format = c->sample_fmt;
    frame->channel_layout = c->channel_layout;

    buffer_size = av_samples_get_buffer_size(NULL, c->channels,
                                             frameSize /*c->frame_size*/,
                                             c->sample_fmt, 0);
    samples = av_malloc(buffer_size);
    if (!samples) {
        fprintf(stderr, "Could not allocate %d bytes for samples buffer\n",
                buffer_size);
        exit(1);
    }
    /* setup the data pointers in the AVFrame */
    ret = avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt,
                                   (const uint8_t *)samples, buffer_size, 0);
    if (ret < 0) {
        fprintf(stderr, "Could not setup audio frame\n");
        exit(1);
    }

    float t, tincr;
    /* encode a single tone sound */
    t = 0;
    tincr = 2 * M_PI * 440.0 / c->sample_rate;
    for (i = 0; i < 800; i++) {
        av_init_packet(&pkt);
        pkt.data = NULL; // packet data will be allocated by the encoder
        pkt.size = 0;

        for (j = 0; j < frameSize /*c->frame_size*/; j++) {
            samples[2 * j] = (int)(sin(t) * 10000);
            for (k = 1; k < c->channels; k++)
                samples[2 * j + k] = samples[2 * j];
            t += tincr;
        }
        /* encode the samples */
        ret = avcodec_encode_audio2(c, &pkt, frame, &got_output);
        if (ret < 0) {
            fprintf(stderr, "Error encoding audio frame\n");
            exit(1);
        }
        if (got_output) {
            fwrite(pkt.data, 1, pkt.size, f);
            av_free_packet(&pkt);
        }
    }
}
But after encoding, the file size is zero. Please suggest what I'm doing wrong; any help will be appreciated. Thanks in advance.
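For reference, the inner sample-generation loop in the code above can be sketched in Python to show what actually lands in the buffer; note that the samples[2*j + k] indexing only lays frames out correctly when the encoder uses exactly two channels. The make_tone_frame helper name is illustrative, not part of ffmpeg:

```python
import math
import struct

def make_tone_frame(nb_samples, channels, sample_rate, freq=440.0, t=0.0):
    """Interleaved signed 16-bit sine samples, mirroring the question's loop."""
    tincr = 2 * math.pi * freq / sample_rate
    out = []
    for _ in range(nb_samples):
        v = int(math.sin(t) * 10000)
        out.extend([v] * channels)  # duplicate the sample on every channel
        t += tincr
    return struct.pack('<%dh' % len(out), *out), t

data, t_next = make_tone_frame(1024, 2, 44100)
print(len(data))  # 1024 samples * 2 channels * 2 bytes = 4096
```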
-
Documentation on Guardian project ffmpeg android
15 February 2017, by Azhagiri
I got the Guardian Project FFMPEG Android Java library from the following link:
https://github.com/guardianproject/android-ffmpeg-java
Are there any good documents available on using the library in code? It's difficult to use without documentation. Please help me.
-
How to fetch live video frame and its timestamp from ffmpeg to python
15 February 2017, by vijiboy
Searching for an alternative, as OpenCV does not provide timestamps for a live camera stream, which are required in my computer vision algorithm, I found this excellent article: https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
Working through the code on Windows, I still couldn't get the frame timestamps. I recall seeing on an ffmpeg forum somewhere that video filters like showinfo are bypassed when output is redirected. Is this why the following code does not work as expected?
Expected: It should write video frames to disk as well as print timestamp details.
Actual: It writes video frames to disk but does not get the timestamp (showinfo) details.

Here's the code I tried:
import subprocess as sp
import numpy
import cv2

command = ['ffmpeg',
           '-i', 'e:\sample.wmv',
           '-pix_fmt', 'rgb24',
           '-vcodec', 'rawvideo',
           '-vf', 'showinfo',  # video filter - showinfo will provide frame timestamps
           '-an', '-sn',       # -an, -sn disables audio and sub-title processing respectively
           '-f', 'image2pipe', '-']  # we need to output to a pipe
pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.STDOUT)
# TODO someone on ffmpeg forum said video filters (e.g. showinfo) are bypassed
# when stdout is redirected to pipes???
for i in range(10):
    raw_image = pipe.stdout.read(1280*720*3)
    img_info = pipe.stdout.read(244)  # 244 characters is the current output of showinfo video filter
    print "showinfo output", img_info
    image1 = numpy.fromstring(raw_image, dtype='uint8')
    image2 = image1.reshape((720, 1280, 3))
    # write video frame to file just to verify
    videoFrameName = 'Video_Frame{0}.png'.format(i)
    cv2.imwrite(videoFrameName, image2)
    # throw away the data in the pipe's buffer.
    pipe.stdout.flush()
So how can I still get the frame timestamps from ffmpeg into Python code, so that they can be used in my computer vision algorithm?
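One likely culprit: showinfo logs to stderr while the raw frames go to stdout, so stderr=sp.STDOUT merges the log text into the binary frame stream, corrupting both. Keeping the streams separate (stderr=sp.PIPE) and parsing the showinfo lines from stderr should work. A minimal sketch of parsing the n: and pts_time: fields, assuming the standard showinfo log format:

```python
import re

def parse_showinfo(line):
    """Return (frame_number, pts_time) from an ffmpeg showinfo log line, or None."""
    m = re.search(r'\bn:\s*(\d+)\b.*?\bpts_time:\s*([0-9.]+)', line)
    if m is None:
        return None
    return int(m.group(1)), float(m.group(2))

sample = "[Parsed_showinfo_0 @ 0x2b6e2c0] n:   7 pts:   2800 pts_time:0.28    pos:   512 fmt:rgb24"
print(parse_showinfo(sample))  # (7, 0.28)
```

Reading stdout and stderr concurrently needs a thread (or non-blocking reads) to avoid deadlock when either pipe buffer fills, which is probably why the single-stream approach was tempting in the first place.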