Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
Use ffmpeg to encode audio and video and push the stream over RTMP on iOS; could someone please send a demo? Thank you! E-mail: 2603086146@qq.com [closed]
15 November 2012, by struggle
I am looking for an iOS example that uses ffmpeg to encode audio and video and push the stream over RTMP.
-
Where can I find high-quality videos for testing video processing?
15 November 2012, by Workman
Aside from Big Buck Bunny, Sintel, and Elephant's Dream, what other free sources of high-quality video are there?
I'm using these videos internally to test video transcoding options and am not publicly redistributing them. Any suggestions for content that falls into this category?
-
ffmpeg YUV to RGB: distorted color and position
15 November 2012, by user1542140
Sorry that I still cannot post images for my question due to low reputation.
I use the ffmpeg functions to convert the decoded frame from YUV to RGB24, but the colors and the resulting image are seriously distorted. Here is my code snippet; the frame width and height are (176, 144):
len = avcodec_decode_video2(c, picture, &got_picture, &avpkt);
if (got_picture) {
    //...
    AVFrame *pFrameRGB = avcodec_alloc_frame();

    // Determine required buffer size and allocate buffer
    int numBytes = avpicture_get_size(PIX_FMT_RGB24, c->width, c->height);
    uint8_t *buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
    avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24, c->width, c->height);

    struct SwsContext *img_convert_ctx = sws_getContext(c->width, c->height, PIX_FMT_YUV420P,
                                                        c->width, c->height, PIX_FMT_BGR24,
                                                        SWS_BICUBIC, NULL, NULL, NULL);
    sws_scale(img_convert_ctx, picture->data, picture->linesize, 0, picture->height,
              pFrameRGB->data, pFrameRGB->linesize);
    sws_freeContext(img_convert_ctx);

    // Save the frame to disk
    if (++frame <= 5)
        SaveFrame(pFrameRGB, c->width, c->height, frame);
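An editorial note on the snippet above, hedged since the full program is not shown: the buffer is sized and filled for PIX_FMT_RGB24 while sws_getContext converts to PIX_FMT_BGR24, and the source format is hard-coded to PIX_FMT_YUV420P rather than taken from the decoder. That mismatch alone would at least swap the red and blue channels. A minimal sketch of the same calls with the formats kept consistent:

// Sketch only: identical API calls, but the destination format matches the
// buffer that was allocated, and the source format comes from the decoder.
AVFrame *pFrameRGB = avcodec_alloc_frame();
int numBytes = avpicture_get_size(PIX_FMT_RGB24, c->width, c->height);
uint8_t *buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24, c->width, c->height);

struct SwsContext *img_convert_ctx =
    sws_getContext(c->width, c->height, c->pix_fmt,    // the decoder's actual format
                   c->width, c->height, PIX_FMT_RGB24, // same format the buffer was filled for
                   SWS_BICUBIC, NULL, NULL, NULL);
sws_scale(img_convert_ctx, picture->data, picture->linesize, 0, c->height,
          pFrameRGB->data, pFrameRGB->linesize);
sws_freeContext(img_convert_ctx);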
-
GStreamer Tee (Multiple Multiplexers)
14 November 2012, by user1595257
I'm trying to store a video stream (coming from my webcam) into an MKV file and an FLV file. This means I have to split the video and audio pipeline after the H.264 encoding and mux each path with a different muxer.
This is how I imagine it should be working:
                                             |->queue->matroskamux->filesink
v4l2src->videorate->videoscale->x264enc->tee-|
                                             |->queue->flvmux->filesink
Is this assumption correct? Are all the queues in the right places? What would a GStreamer command like this look like? I'm especially having trouble with the concept of "tees": how and where to start them in a command, and how to address the different tee paths. I looked up "tee" in the GStreamer documentation but I'm still having trouble applying it.
Thanks in advance!
EDIT: OK, thanks to mreithub I got it working for video. This is how the command looks for now:
gst-launch-0.10 -v -m v4l2src ! videorate ! videoscale ! ffmpegcolorspace ! x264enc ! tee name=muxtee ! queue2 ! matroskamux name=mkvmux ! filesink location=file1.mkv muxtee. ! queue ! flvmux name=flvmux ! filesink location=file1.flv
Here is my attempt to get audio running:
gst-launch-0.10 -v -m v4l2src ! videorate ! videoscale ! ffmpegcolorspace ! x264enc ! tee name=muxtee ! queue2 ! matroskamux name=mkvmux pulsesrc ! ffenc_aac ! filesink location=file1.mkv muxtee. ! queue ! flvmux name=flvmux pulsesrc ! ffenc_aac ! filesink location=file1.flv
This does not work (the command executes but immediately stops, with no error message). I'm also having trouble determining where to put the audio encoding. In my attempted solution I encode the audio in each tee branch (right?), but I'd like to encode the audio only once and then just mux it into both pipeline paths.
Here's another try: after the audio encoding I split the pipeline using a tee and feed it to both the matroskamux and the flvmux:
gst-launch-0.10 -v -m v4l2src ! videorate ! videoscale ! ffmpegcolorspace ! x264enc ! tee name=muxtee ! queue2 ! matroskamux name=mkvmux ! filesink location=file1.mkv muxtee. ! queue ! flvmux name=flvmux ! filesink location=file1.flv pulsesrc ! ffenc_aac ! tee name=t2 ! queue ! mkvmux. t2. ! queue ! flvmux.
But with this one I'm getting the following error message:
could not link queue1 to flvmux
Thanks!
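For reference, an untested sketch of the layout described above, in the same gst-launch-0.10 syntax (vtee and atee are just names chosen here; the elements are the ones already used in the commands above). Encoded video and encoded audio each get their own tee, and both muxers are addressed by name from the audio branches. Caps negotiation and queue sizing on a live source are not verified, so treat it as a starting point rather than a working command:

gst-launch-0.10 -v v4l2src ! videorate ! videoscale ! ffmpegcolorspace ! x264enc ! tee name=vtee \
  pulsesrc ! audioconvert ! ffenc_aac ! tee name=atee \
  vtee. ! queue ! matroskamux name=mkvmux ! filesink location=file1.mkv \
  vtee. ! queue ! flvmux name=flvmux ! filesink location=file1.flv \
  atee. ! queue ! mkvmux. \
  atee. ! queue ! flvmux.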
-
ffmpeg(-mt) and TBB
14 November 2012, by ronag
I just started using the latest build of ffmpeg, into which ffmpeg-mt has been merged.
However, since my application uses TBB (Intel Threading Building Blocks), the ffmpeg-mt implementation, with its own thread creation and synchronization, does not quite fit: it could potentially block my TBB tasks that execute the decode functions, and it would also thrash the cache unnecessarily.
I was looking around in pthread.c, which seems to implement the interface that ffmpeg uses to enable multithreading.
My question is whether it would be possible to create a tbb.c that implements the same functions, but using TBB tasks instead of explicit threads.
I am not experienced with C, but my guess is that it would not be easy to compile TBB (which is C++) into ffmpeg. So maybe overwriting the ffmpeg function pointers at run time would be the way to go?
I would appreciate any suggestions or comments regarding plugging TBB into the ffmpeg threading API.
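One possible angle, offered as a hedged sketch rather than a verified solution: AVCodecContext has user-overridable execute and execute2 function pointers, which the slice-threading code uses to fan out independent jobs, so they can be redirected at run time without recompiling ffmpeg. Frame-level threading (the part merged from ffmpeg-mt) creates its own workers inside pthread.c and, as far as I can tell, does not go through these callbacks, so it would have to be restricted (e.g. via thread_type) or handled separately. A minimal C++ sketch for the execute hook (tbb_execute is just a name chosen here; error handling omitted):

// Untested sketch: route libavcodec's slice-level execute callback through TBB.
// The signature matches AVCodecContext::execute as declared in avcodec.h.
extern "C" {
#include <libavcodec/avcodec.h>
}
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>

static int tbb_execute(AVCodecContext *ctx,
                       int (*func)(AVCodecContext *c, void *arg),
                       void *arg, int *ret, int count, int size)
{
    // libavcodec passes 'count' jobs laid out 'size' bytes apart in 'arg'.
    tbb::parallel_for(tbb::blocked_range<int>(0, count),
        [&](const tbb::blocked_range<int> &range) {
            for (int i = range.begin(); i != range.end(); ++i) {
                int r = func(ctx, (char *)arg + (size_t)i * size);
                if (ret)
                    ret[i] = r;
            }
        });
    return 0;
}

// After avcodec_open2() the callback can be swapped in:
//   codec_ctx->execute = tbb_execute;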