Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
ffmpeg re-encoding of FLV files from Mac
22 September 2011, by Nick Mitin
I've recorded a video from a webcam on a Mac and now I'm trying to re-encode it, but ffmpeg does not recognize the audio stream:
FFmpeg version git-120610e, Copyright (c) 2000-2010 the FFmpeg developers
  built on Sep 21 2010 15:56:57 with gcc 4.4.1
  configuration: --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libmp3lame --enable-libx264
  libavutil     50.27. 0 / 50.27. 0
  libavcore      0. 9. 0 /  0. 9. 0
  libavcodec    52.89. 0 / 52.89. 0
  libavformat   52.78. 5 / 52.78. 5
  libavdevice   52. 2. 2 / 52. 2. 2
  libavfilter    1.39. 0 /  1.39. 0
  libswscale     0.11. 0 /  0.11. 0
  libpostproc   51. 2. 0 / 51. 2. 0
[flv @ 0x1e79470] Estimating duration from bitrate, this may be inaccurate
Input #0, flv, from '10125174c09241f6536ccbe503ebbc00.flv':
  Duration: 00:00:22.68, start: 0.000000, bitrate: N/A
    Stream #0.0: Video: flv, yuv420p, 640x480, 1k tbr, 1k tbn, 1k tbc
    Stream #0.1: Audio: [0][0][0][0] / 0x0000, 0 channels
Is it possible to make ffmpeg support the audio stream from an FLV recorded on a Mac?
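One commonly suggested workaround (a sketch, not verified against this particular file): FLVs recorded through the Flash plugin usually carry Nellymoser audio (or Speex on newer players), and ffmpeg can be told which audio decoder to use by placing -acodec before the input, for example:

ffmpeg -acodec nellymoser -i 10125174c09241f6536ccbe503ebbc00.flv -vcodec libx264 -acodec libmp3lame out.mp4

If the audio turns out to be Speex instead, a build configured with --enable-libspeex and -acodec libspeex in front of -i would be needed; the output codecs above are only placeholders.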
-
How to capture camera devices on Windows using Libav?
22 September 2011, by Occulta
Is there any way to capture frames from as many camera types as DirectShow does on the Windows platform using Libav? I need to capture camera output without using DirectShow filters, and I want my application to work with many types of camera device.
I have searched the Internet about this capability of libav and found that it can be done using the special input format "vfwcap". Something like this (I'm not sure the code is correct - I wrote it myself):
AVFormatParameters formatParams;
AVInputFormat* pInfmt = NULL;
AVFormatContext* pInFormatCtx = NULL;

av_register_all();
avdevice_register_all();            // vfwcap lives in libavdevice

memset(&formatParams, 0, sizeof(formatParams));
//formatParams.device = NULL;       // this was probably deprecated and then removed
formatParams.channel = 0;
formatParams.standard = "ntsc";     // deprecated too, but still available
formatParams.width = 640;
formatParams.height = 480;
formatParams.time_base.num = 1000;
formatParams.time_base.den = 30000; // so we want 30000/1000 = 30 frames per second
formatParams.prealloced_context = 0;

pInfmt = av_find_input_format("vfwcap");
if (!pInfmt) {
    fprintf(stderr, "Unknown input format\n");
    return -1;
}

// Open the device; for vfwcap the "filename" is the device number ("0").
// (formatParams can probably be NULL for autodetection.)
if (av_open_input_file(&pInFormatCtx, "0", pInfmt, 0, &formatParams) < 0)
    return -1; // Couldn't open device
/* Same as video4linux code */
So another question is: how many devices are supported by Libav? All I have found about capturing camera output with libav on Windows is advice to use DirectShow for this purpose because libav supports too few devices. Maybe the situation has changed by now and it supports enough devices to be used in production applications?
If this isn't possible... well, I hope my question won't be useless and that this piece of code, put together from different sources, will help someone interested in the topic, because there is really very little information about it anywhere on the Internet.
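For reference, the part the snippet ends with ("Same as video4linux code") - actually pulling frames from the opened device - would look roughly like the sketch below against the same 0.6-era API. The stream-info call, dump_format and the decode comment are assumptions about the intended continuation, not tested code.

// Sketch of the capture loop following av_open_input_file() above
// (old libav/ffmpeg 0.6-era API, matching the names used in the question).
AVPacket packet;

if (av_find_stream_info(pInFormatCtx) < 0)
    return -1;                              // couldn't read stream parameters

dump_format(pInFormatCtx, 0, "vfwcap", 0);  // print what the device reports

while (av_read_frame(pInFormatCtx, &packet) >= 0) {
    // packet holds one captured frame; decode it with avcodec_decode_video2()
    // and convert the pixel format with sws_scale() as needed
    av_free_packet(&packet);
}

av_close_input_file(pInFormatCtx);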
-
FFMPeg Windows C# H264
21 September 2011, by Allen Ho
I am trying to use SharpFFMpeg:
http://sourceforge.net/projects/sharpffmpeg/
I found avcodec-52.dll and avformat-52.dll somewhere on the Net...
When I use SharpFFMpeg and make calls like av_init_packet, I get PInvoke errors like this:
PInvokeStackImbalance was detected.
Message: A call to PInvoke function 'WpfApplicationFFMpegTest!FFmpegSharp.Interop.FFmpeg::av_init_packet' has unbalanced the stack. This is likely because the managed PInvoke signature does not match the unmanaged target signature. Check that the calling convention and parameters of the PInvoke signature match the target unmanaged signature.
In a nutshell, I am trying to decode H264 and display the incoming stream from a camera...
Just wondering if anyone has been able to do this successfully in C#?
Thanks
-
How to get correct video dimensions
21 September 2011, by David542
I've been using
ffmpeg -i
and mediainfo (CLI)
to get the dimensions of a video. Unfortunately, neither has been very good at returning the correct dimensions. This is especially true if the video has been modified since its initial export. What is the best way to get the correct dimensions of a video file?
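One option worth trying (a sketch; the exact field names depend on the ffprobe build): ffprobe, which ships with ffmpeg, prints both the stored frame size and the aspect-ratio metadata, which is usually where "modified" files disagree with the raw width and height:

ffprobe -show_streams input.mp4

In the video stream section, the width and height fields give the stored dimensions, while sample_aspect_ratio / display_aspect_ratio indicate how a player is expected to scale them; the displayed size is width x height multiplied by the sample aspect ratio.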
-
How to encode a stream of RGBA values to video?
20 September 2011, by Rob Oplawar
More specifically: I have a sequence of 32-bit unsigned RGBA integers for pixels - e.g. 640 integers per row starting at the left pixel, 480 rows per frame starting at the top row, repeated for n frames. Is there an easy way to feed this to ffmpeg (or some other encoder) without first encoding it to a common image format?
I'm assuming ffmpeg is the best tool for me to use in this case, but I'm open to suggestions (the output video format doesn't matter too much).
I know the documentation would enlighten me if I just knew the right keywords... In case I'm asking the wrong question, here's what I'm trying to do at the highest level:
I have some Actionscript code that draws and animates on the display tree, and I've wrapped it in an AIR application that draws BitmapData frame by frame. AIR has proved to be woefully inefficient at directly encoding this output: the best I've managed is a few frames per second, and I need to render at least 15 fps, preferably more like 100 fps, which I get out of ffmpeg when I feed it PNG images (AIR can take 1+ seconds to encode one 640x480 PNG... appalling). Instead of encoding inside AIR, I can send the raw byte data out to an encoder or to disk as fast as it's rendered.
If you're wondering why I'm using Actionscript to render an animation or why it has to be encoded quickly, don't. Suffice it to say, the frames are computed at execution time (not stored as an animation in a .swf file, for example), I have a very large amount of video to create and limited time to do so, and using something other than Actionscript to produce the frames is not an option.
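A sketch of the usual way to handle the core question above: ffmpeg's rawvideo demuxer can read the RGBA bytes directly, either from a file or from a pipe, so no intermediate image format is needed. The pixel-format name and output settings here are assumptions to adapt:

ffmpeg -f rawvideo -pix_fmt rgba -s 640x480 -r 30 -i - -vcodec libx264 -pix_fmt yuv420p out.mp4

Here the AIR application would write each frame as 640*480*4 bytes (top row first, left to right) to ffmpeg's standard input; replacing the - with a filename works the same way if it's easier to dump the raw bytes to disk first.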