Newest 'ffmpeg' Questions - Stack Overflow
-
How to write frames to a video file?
23 December 2013, by Mike Chen
I am currently writing an application that reads frames from a camera, modifies them, and saves them into a video file. I'm planning to do it with ffmpeg. There's hardly any documentation for ffmpeg, and I can't find a way. Does anyone know how to do it?
I need it to be done on Unix, in C or C++. Can anyone provide some instructions?
Thanks.
EDIT:
Sorry, I didn't write clearly. I want developer APIs to write frames to a video file: I open the camera stream, get every single frame, then save the frames into a video file using ffmpeg's public APIs, so the command-line tool doesn't actually help me. I've seen output_example.c under the ffmpeg src folder. It's great that I may be able to copy parts of that code directly without changes, but I'm still looking for an easier way.
Also, I'm thinking of porting my app to the iPhone; as far as I know, only ffmpeg has been ported to the iPhone. GStreamer is based on glib, and it's all GNU stuff; I'm not sure I can get it to work on the iPhone. So ffmpeg is still the best choice for now.
Any comments are appreciated.
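For readers landing here later, a rough sketch of the encode-and-mux loop with libavformat/libavcodec follows. It targets the modern FFmpeg API (avcodec_send_frame/avcodec_receive_packet, which was added years after this question); the codec choice (MPEG-4) is arbitrary and most error handling is omitted for brevity:

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

int write_video(const char *filename, int width, int height, int fps, int nframes)
{
    AVFormatContext *oc = NULL;
    avformat_alloc_output_context2(&oc, NULL, NULL, filename); /* format guessed from name */

    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
    AVStream *st = avformat_new_stream(oc, NULL);
    AVCodecContext *enc = avcodec_alloc_context3(codec);

    enc->width     = width;
    enc->height    = height;
    enc->pix_fmt   = AV_PIX_FMT_YUV420P;
    enc->time_base = (AVRational){1, fps};
    enc->bit_rate  = 800000;
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        enc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

    avcodec_open2(enc, codec, NULL);
    avcodec_parameters_from_context(st->codecpar, enc);
    st->time_base = enc->time_base;

    avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
    avformat_write_header(oc, NULL);

    AVFrame *frame = av_frame_alloc();
    frame->format = enc->pix_fmt;
    frame->width  = width;
    frame->height = height;
    av_frame_get_buffer(frame, 0);

    AVPacket *pkt = av_packet_alloc();

    for (int i = 0; i < nframes; i++) {
        av_frame_make_writable(frame);
        /* ... copy the camera frame into frame->data[0..2] here ... */
        frame->pts = i;

        avcodec_send_frame(enc, frame);
        while (avcodec_receive_packet(enc, pkt) == 0) {
            av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
            pkt->stream_index = st->index;
            av_interleaved_write_frame(oc, pkt);
        }
    }

    /* flush the encoder's delayed frames */
    avcodec_send_frame(enc, NULL);
    while (avcodec_receive_packet(enc, pkt) == 0) {
        av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
        pkt->stream_index = st->index;
        av_interleaved_write_frame(oc, pkt);
    }

    av_write_trailer(oc);
    av_packet_free(&pkt);
    av_frame_free(&frame);
    avcodec_free_context(&enc);
    avio_closep(&oc->pb);
    avformat_free_context(oc);
    return 0;
}

The output_example.c mentioned above lives on as doc/examples/muxing.c in newer FFmpeg trees and shows the same flow with full error handling.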
-
FFMPEG: Mapping YUV data to output buffer of decode function
22 December 2013, by Zax
I am modifying a video decoder's code from FFMPEG. The decoded output is in YUV420 format. I have 3 pointers, each pointing to the Y, U and V data, i.e.:
yPtr -> pointer to the luma (Y) data
uPtr -> pointer to the chroma Cb data
vPtr -> pointer to the chroma Cr data
However, the output pointer to which I need to map my YUV data is of type void *, and there is only one output pointer.
i.e.:
void *data
is the output pointer to which I need to point my yPtr, uPtr and vPtr. How should I do this? One approach I have tried is to create a new buffer whose size is equal to the sum of the Y, U and V data, copy the contents of yPtr, uPtr and vPtr into it, and assign the pointer to this buffer to the
*data
output pointer. However, this approach is not preferred because of the memcpy and other performance drawbacks.
Can anyone please suggest an alternative for this issue? This may not be directly related to FFMPEG, but since I'm modifying the decoder code in FFMPEG's libavcodec, I'm tagging it FFMPEG.
Edit: What I'm trying to do:
My understanding is that if I point the
void *data
pointer of a decoder's decode function at my YUV data and set *got_frame_ptr
to 1, the framework will take care of dumping this data into the YUV file. Is my understanding right? The function prototype of my custom video decoder (or any video decoder in ffmpeg) is shown below:
static int myCustomDec_decode_frame(AVCodecContext *avctx, void *data, int *data_size, uint8_t *buf, int buf_size) {
I'm referring to this post, FFMPEG: Explain parameters of any codecs function pointers, and assuming that I need to point *data to my YUV data pointer, and that ffmpeg will take care of the dumping. Please provide suggestions regarding the same.
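For what it's worth, with this prototype the void *data argument is actually an AVFrame supplied by the caller, so the planes can be handed over without a memcpy by pointing the frame's plane pointers at the decoder's internal buffers. A minimal sketch under that assumption (yPtr/uPtr/vPtr are the pointers described above; lumaStride and chromaStride are hypothetical names for the line sizes of those buffers, which must remain valid until the next decode call):

static int myCustomDec_decode_frame(AVCodecContext *avctx, void *data,
                                    int *data_size, uint8_t *buf, int buf_size)
{
    AVFrame *frame = data;            /* the caller passes an AVFrame here */

    /* ... decode buf into the internal yPtr/uPtr/vPtr buffers ... */

    frame->data[0] = yPtr;  frame->linesize[0] = lumaStride;   /* Y  */
    frame->data[1] = uPtr;  frame->linesize[1] = chromaStride; /* Cb */
    frame->data[2] = vPtr;  frame->linesize[2] = chromaStride; /* Cr */

    *data_size = sizeof(AVFrame);     /* signal that a frame was produced */
    return buf_size;                  /* bytes of input consumed */
}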
-
android hw h264 encoder (x264?) for ffmpeg
22 December 2013, by user2905958
I'm integrating ffmpeg into Android, but found that the h264 encoder is missing. I looked around, and it seems it requires the x264 lib, but I'm not sure whether x264 supports hw acceleration or is just a pure software encoder. And is there any hw h264 encoder available for ffmpeg?
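For reference, x264 is a pure software encoder (it uses SIMD instructions, not a dedicated hardware block). A sketch of how it is typically enabled when building ffmpeg, assuming the x264 library has already been cross-compiled for the Android toolchain:

./configure --enable-gpl --enable-libx264
make

The --enable-gpl flag is needed because x264 is GPL-licensed.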
-
Fundamentally, what exactly is a codec?
22 December 2013, by Bitwize
I really hope I don't get down-voted for this, but it's something I have wondered about for quite a while now.
I have been reading through a series of articles describing what codecs are and what they do, and the difference between them and containers, but where I get confused is what a codec fundamentally is.
Is a codec an executable binary/library that handles the compression/decompression of files for a specific program/API? Or is it a kind of library that programmers use to handle these containers?
Reading various answers around the web, it sounds as though it's treated as both, which is a little confusing. I'm hoping someone here can help clarify.
Thanks!
-
ffmpeg change resolution by condition
22 December 2013, by Alexey Lisikhin
I want to change my video's resolution with ffmpeg:
-s 852x480
How can I do it only when the video's width or height is greater than 852x480?
I want something like this done by ffmpeg itself, not in my programming language:
if video.width > 852: resize width and proportionally resize height
if video.height > 480: resize height and proportionally resize width
if video.width > 852 and video.height > 480: resize both height and width
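One way to express that condition in a single command is with scale-filter expressions; a sketch, assuming a build whose scale filter supports the force_original_aspect_ratio option:

ffmpeg -i input.mp4 -vf "scale=w='min(852,iw)':h='min(480,ih)':force_original_aspect_ratio=decrease" output.mp4

This downscales, preserving aspect ratio, only when the input exceeds 852x480; smaller videos pass through unchanged. With yuv420p output, the resulting dimensions may also need to be rounded to even values.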