Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
Creating .flv files on the iphone without using ffmpeg
15 March 2014, by inwit
How does one create .flv files on the iPhone?
I am aware of ffmpeg (with its LGPL restrictions). But are there other APIs that have been ported to iOS?
-
wm player acting up on cmd prompt edited video
15 March 2014, by Trevader24135
I'm using ffmpeg to 1) convert a video from mp4 to mp3, which is working, 2) increase the audio volume with -af volume=3, and 3) slightly cut the ends from the song. It works just great, except for QuickTime Player, which reads a 4:43 song as a 10:42 song! Why is this happening? All other video players read it just fine, but this also affects the song on my iPod! Here's my command:
ffmpeg -i movie.mp4 -ss 00:00:03 -t 00:00:08 -async 1 -strict -2 cut.mp4
-
Implementing a chained overlay filter with the Libavfilter library in Android NDK
14 March 2014, by gookman
I am trying to use the overlay filter with multiple input sources, for an Android app. Basically, I want to overlay multiple video sources on top of a static image. I have looked at the sample that comes with ffmpeg and implemented my code based on that, but things don't seem to be working as expected. In the ffmpeg filtering sample there seems to be a single video input. I have to handle multiple video inputs and I am not sure that my solution is the correct one. I have tried to find other examples, but it looks like this is the only one.
Here is my code:
AVFilterContext **inputContexts;
AVFilterContext *outputContext;
AVFilterGraph *graph;

int initFilters(AVFrame *bgFrame, int inputCount, AVCodecContext **codecContexts, char *filters)
{
    int i;
    int returnCode;
    char args[512];
    char name[9];
    AVFilterInOut **graphInputs = NULL;
    AVFilterInOut *graphOutput = NULL;

    AVFilter *bufferSrc = avfilter_get_by_name("buffer");
    AVFilter *bufferSink = avfilter_get_by_name("buffersink");

    graph = avfilter_graph_alloc();
    if (graph == NULL)
        return -1;

    //allocate inputs
    graphInputs = av_calloc(inputCount + 1, sizeof(AVFilterInOut *));
    for (i = 0; i <= inputCount; i++)
    {
        graphInputs[i] = avfilter_inout_alloc();
        if (graphInputs[i] == NULL)
            return -1;
    }

    //allocate input contexts
    inputContexts = av_calloc(inputCount + 1, sizeof(AVFilterContext *));

    //first is the background
    snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=1/1:pixel_aspect=0",
             bgFrame->width, bgFrame->height, bgFrame->format);
    returnCode = avfilter_graph_create_filter(&inputContexts[0], bufferSrc, "background", args, NULL, graph);
    if (returnCode < 0)
        return returnCode;
    graphInputs[0]->filter_ctx = inputContexts[0];
    graphInputs[0]->name = av_strdup("background");
    graphInputs[0]->next = graphInputs[1];

    //allocate the rest
    for (i = 1; i <= inputCount; i++)
    {
        AVCodecContext *codecCtx = codecContexts[i - 1];
        snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
                 codecCtx->width, codecCtx->height, codecCtx->pix_fmt,
                 codecCtx->time_base.num, codecCtx->time_base.den,
                 codecCtx->sample_aspect_ratio.num, codecCtx->sample_aspect_ratio.den);
        snprintf(name, sizeof(name), "video_%d", i);
        returnCode = avfilter_graph_create_filter(&inputContexts[i], bufferSrc, name, args, NULL, graph);
        if (returnCode < 0)
            return returnCode;
        graphInputs[i]->filter_ctx = inputContexts[i];
        graphInputs[i]->name = av_strdup(name);
        graphInputs[i]->pad_idx = 0;
        if (i < inputCount)
            graphInputs[i]->next = graphInputs[i + 1];
        else
            graphInputs[i]->next = NULL;
    }

    //allocate outputs
    graphOutput = avfilter_inout_alloc();
    returnCode = avfilter_graph_create_filter(&outputContext, bufferSink, "out", NULL, NULL, graph);
    if (returnCode < 0)
        return returnCode;
    graphOutput->filter_ctx = outputContext;
    graphOutput->name = av_strdup("out");
    graphOutput->next = NULL;
    graphOutput->pad_idx = 0;

    returnCode = avfilter_graph_parse_ptr(graph, filters, graphInputs, &graphOutput, NULL);
    if (returnCode < 0)
        return returnCode;

    returnCode = avfilter_graph_config(graph, NULL);
    return returnCode;
}
The filters argument of the function is passed on to avfilter_graph_parse_ptr and it can look like this:
[background] scale=512x512 [base]; [video_1] scale=256x256 [tmp_1]; [base][tmp_1] overlay=0:0 [out]
The call breaks after the call to avfilter_graph_config with the warning:
Output pad "default" with type video of the filter instance "background" of buffer not connected to any destination
and the error Invalid argument.
What is it that I am not doing correctly?
-
Convert mp3 to wav on the fly using ffmpeg in Python
14 March 2014, by user3013067
I'm trying to convert an mp3 file to a wav file on the fly in Python using ffmpeg.
I call it using subprocess. How can I get its output and play it as wav on the fly, without saving it to a file first (or while it's still converting)?
This is what I have so far:
I'm using aplay just as an example.
FileLocation = "/home/file.mp3"
subprocess.call(["ffmpeg", "-i", FileLocation, etc etc, "newfail.wav"])
os.system("aplay ... ")  # play it on the fly here
As far as I understand, if I put "-" as the file name, ffmpeg will write to stdout instead, but I don't know how to read stdout...
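A minimal sketch of that idea, assuming ffmpeg and aplay are installed and that /home/file.mp3 is a stand-in path: with "-" as the output name, ffmpeg writes WAV data to stdout, and subprocess.Popen can pipe that stream straight into aplay's stdin so playback starts while conversion is still running.

```python
import os
import shutil
import subprocess

# Placeholder path from the question; replace with a real mp3 file.
file_location = "/home/file.mp3"

# "-" as the output makes ffmpeg write WAV to stdout; "-f wav" is needed
# because there is no file extension to infer the container from.
ffmpeg_cmd = ["ffmpeg", "-i", file_location, "-f", "wav", "-"]
# aplay with "-" reads the audio it plays from stdin.
aplay_cmd = ["aplay", "-"]

def play_on_the_fly():
    """Stream ffmpeg's stdout straight into aplay, no temp file on disk."""
    ffmpeg = subprocess.Popen(ffmpeg_cmd, stdout=subprocess.PIPE)
    aplay = subprocess.Popen(aplay_cmd, stdin=ffmpeg.stdout)
    ffmpeg.stdout.close()  # let aplay see EOF when ffmpeg finishes
    aplay.wait()
    ffmpeg.wait()

# Only attempt playback when both tools and the input file actually exist.
if shutil.which("ffmpeg") and shutil.which("aplay") and os.path.exists(file_location):
    play_on_the_fly()
```

Reading ffmpeg.stdout directly in Python (instead of handing it to aplay) also works if you want the raw WAV bytes in your own code.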
-
ffmpeg - How to use all the CPU of the server?
14 March 2014, by Summit
I am using this command to run ffmpeg:
ffmpeg -i - -isync -threads 16 -vcodec libx264 -acodec aac -ar 22050 -r 25 -s 640x360 -strict experimental -b:a 32k -b:v 100k -f flv "rtmp://" -threads 16 -vcodec libx264 -acodec aac -ar 22050 -r 25 -s 640x360 -strict experimental -b:a 32k -b:v 400k -f flv "rtmp://"
I am running over 20 such ffmpeg processes on my server. Currently ffmpeg doesn't use all of my CPU, and I want it to. This is my CPU usage: http://i.stack.imgur.com/iCfhW.png
My server has 24 CPUs, 16 GB RAM, and a 1 TB HDD. Right now, my streams are not smooth. Please tell me what command will use all of my CPU and make my streams smooth.
Thanks
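One detail worth noting about the command above: it hard-codes -threads 16 per output. ffmpeg also accepts -threads 0, which asks the encoder (libx264 here) to pick a thread count based on the detected CPUs. As a sketch only (the input file and rtmp URL are placeholders, not from the question), a small wrapper can build such a command and compare it against the host's CPU count:

```python
import os
import shlex

# Sketch: "-threads 0" lets libx264 auto-pick a thread count instead of
# the hard-coded 16; input.mp4 and the rtmp URL are hypothetical.
cmd = (
    "ffmpeg -i input.mp4 -threads 0 -vcodec libx264 -acodec aac "
    "-ar 22050 -r 25 -s 640x360 -strict experimental "
    "-b:a 32k -b:v 400k -f flv rtmp://example/stream"
)
args = shlex.split(cmd)

# Report what the host exposes vs. what ffmpeg was asked to use.
threads = args[args.index("-threads") + 1]
print(f"host CPUs: {os.cpu_count()}, ffmpeg -threads: {threads}")
```

With many concurrent processes, total throughput is also bounded by how the OS schedules them, so auto-threading per process is a starting point rather than a guarantee of full utilization.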