Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
How to extract 16-bit PNG frame from lossless x264 video
29 May 2017, by whiskeyspider
I encoded a series of 16-bit grayscale PNGs to a lossless video with the following command:
ffmpeg -i image%04d.png -crf 0 -c:v libx264 -preset veryslow output.mp4
I am now trying to verify that the conversion to video was truly lossless by pulling out the PNGs at the same quality. The command I'm using:
ffmpeg -i output.mp4 image%04d.png
However, this outputs 8-bit PNGs. I've tried various options I've read about, such as
-vcodec png
and -qscale 0
but so far nothing makes it output 16-bit PNGs. How do I extract all frames from the video at the same quality they went in at? Or did I make a mistake in creating the lossless video in the first place?
Edit: I get this error message when trying to use
-pix_fmt gray16be
:
[swscaler @ 0x7fef1a8f0800] deprecated pixel format used, make sure you did set range correctly
Full output:
ffmpeg -i output.mp4 -pix_fmt gray16be image%04d.png
ffmpeg version 3.3.1 Copyright (c) 2000-2017 the FFmpeg developers
  built with Apple LLVM version 8.0.0 (clang-800.0.42.1)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/3.3.1 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-opencl --disable-lzma --enable-vda
  libavutil      55. 58.100 / 55. 58.100
  libavcodec     57. 89.100 / 57. 89.100
  libavformat    57. 71.100 / 57. 71.100
  libavdevice    57.  6.100 / 57.  6.100
  libavfilter     6. 82.100 /  6. 82.100
  libavresample   3.  5.  0 /  3.  5.  0
  libswscale      4.  6.100 /  4.  6.100
  libswresample   2.  7.100 /  2.  7.100
  libpostproc    54.  5.100 / 54.  5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'output.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf57.71.100
  Duration: 00:00:09.76, start: 0.000000, bitrate: 1337 kb/s
    Stream #0:0(und): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuvj444p(pc), 512x512, 1336 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
    Metadata:
      handler_name    : VideoHandler
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> png (native))
Press [q] to stop, [?] for help
[swscaler @ 0x7fef1a8f0800] deprecated pixel format used, make sure you did set range correctly
Output #0, image2, to 'image%04d.png':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf57.71.100
    Stream #0:0(und): Video: png, gray16be, 512x512, q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc (default)
    Metadata:
      handler_name    : VideoHandler
      encoder         : Lavc57.89.100 png
frame=  244 fps=0.0 q=-0.0 Lsize=N/A time=00:00:09.76 bitrate=N/A speed=  21x
video:4038kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
I'm happy to use a non-ffmpeg solution if there is one.
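One hedged observation, not a confirmed answer: the probe output above shows yuvj444p, an 8-bit pixel format, which suggests the 16-bit depth was already lost at encode time (common builds of libx264 are 8-bit only). Assuming the goal is a truly lossless 16-bit round trip, a codec that supports gray16 natively, such as FFV1 in Matroska, might be sketched like this (the lossless.mkv and restored%04d.png names are illustrative):

```shell
# Sketch: encode the 16-bit grayscale PNGs losslessly with FFV1, which
# supports the gray16le pixel format, then extract them back out.
ffmpeg -i image%04d.png -c:v ffv1 -pix_fmt gray16le lossless.mkv
ffmpeg -i lossless.mkv restored%04d.png
```

Comparing per-frame pixel checksums of the originals and the restored files (for instance with ffmpeg's framemd5 muxer) would verify whether the round trip is actually lossless.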
-
How can I extract the first frame from a movie AND overlay a watermark on it at the same time?
29 May 2017, by Eric Vasilik
I want to, with a single ffmpeg command line, extract the first frame from a movie (as a .jpg, sized to fit inside a box of a given size) and overlay a centered PNG on that frame.
I've run into problems using -vf and -filter_complex at the same time.
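A hedged sketch of one way to do this, assuming names not given in the question (input.mp4, watermark.png, and a 640x480 box): -vf and -filter_complex cannot be combined on the same output, but folding the scaling into the complex graph itself sidesteps the clash:

```shell
# Sketch: take frame 0, fit it inside a 640x480 box preserving aspect ratio,
# then center the PNG watermark on it, writing a single JPEG.
ffmpeg -i input.mp4 -i watermark.png \
  -filter_complex "[0:v]select='eq(n,0)',scale=640:480:force_original_aspect_ratio=decrease[bg];[bg][1:v]overlay=(W-w)/2:(H-h)/2" \
  -frames:v 1 first_frame.jpg
```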
-
Huge memory leak when filtering video with libavfilter
29 May 2017, by Captain Jack
I have a relatively simple FFMPEG C program to which a video frame is fed, processed via a filter graph, and sent to a frame renderer.
Here are some code snippets:
/* Filter graph here */
char args[512];
enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_RGB32 };
AVFilterGraph *filter_graph;

avfilter_register_all();

AVFilter *buffersrc = avfilter_get_by_name("buffer");
AVFilter *buffersink = avfilter_get_by_name("ffbuffersink");
AVBufferSinkParams *buffersink_params;
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs = avfilter_inout_alloc();

filter_graph = avfilter_graph_alloc();

snprintf(args, sizeof(args),
         "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
         av->codec_ctx->width, av->codec_ctx->height, av->codec_ctx->pix_fmt,
         av->codec_ctx->time_base.num, av->codec_ctx->time_base.den,
         av->codec_ctx->sample_aspect_ratio.num, av->codec_ctx->sample_aspect_ratio.den);

if(avfilter_graph_create_filter(&av->buffersrc_ctx, buffersrc, "in", args, NULL, filter_graph) < 0)
{
    fprintf(stderr, "Cannot create buffer source\n");
    return(0);
}

/* buffer video sink: to terminate the filter chain. */
buffersink_params = av_buffersink_params_alloc();
buffersink_params->pixel_fmts = pix_fmts;

if(avfilter_graph_create_filter(&av->buffersink_ctx, buffersink, "out", NULL, buffersink_params, filter_graph) < 0)
{
    printf("Cannot create buffer sink\n");
    return(HACKTV_ERROR);
}

/* Endpoints for the filter graph. */
outputs->name       = av_strdup("in");
outputs->filter_ctx = av->buffersrc_ctx;
outputs->pad_idx    = 0;
outputs->next       = NULL;

inputs->name       = av_strdup("out");
inputs->filter_ctx = av->buffersink_ctx;
inputs->pad_idx    = 0;
inputs->next       = NULL;

const char *filter_descr = "vflip";

if (avfilter_graph_parse_ptr(filter_graph, filter_descr, &inputs, &outputs, NULL) < 0)
{
    printf("Cannot parse filter graph\n");
    return(0);
}

if (avfilter_graph_config(filter_graph, NULL) < 0)
{
    printf("Cannot configure filter graph\n");
    return(0);
}

av_free(buffersink_params);
avfilter_inout_free(&inputs);
avfilter_inout_free(&outputs);
The above code is called by these elsewhere:
av->frame_in->pts = av_frame_get_best_effort_timestamp(av->frame_in);

/* push the decoded frame into the filtergraph */
if (av_buffersrc_add_frame(av->buffersrc_ctx, av->frame_in) < 0)
{
    printf("Error while feeding the filtergraph\n");
    break;
}

/* pull filtered pictures from the filtergraph */
if(av_buffersink_get_frame(av->buffersink_ctx, av->frame_out) < 0)
{
    printf("Error while sourcing the filtergraph\n");
    break;
}

/* do stuff with frame */
Now, the code works absolutely fine and the video comes out the way I expect it to (vertically flipped for testing purposes).
The biggest issue I have is a massive memory leak. A high-res video will consume 2 GB in a matter of seconds and crash the program. I traced the leak to this piece of code:
/* push the decoded frame into the filtergraph */
if (av_buffersrc_add_frame(av->buffersrc_ctx, av->frame_in) < 0)
If I bypass the filter by doing
av->frame_out = av->frame_in;
without pushing the frame into it (and obviously not pulling from it), there is no leak and memory usage is stable.

Now, I am very new to C, so be gentle, but it seems like I should be clearing out the buffersrc_ctx somehow, but I have no idea how. I've looked in the official documentation but couldn't find anything.
Can someone advise?
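A hedged guess at the leak, not a verified fix: each successful call to av_buffersink_get_frame() hands back new buffer references in frame_out, and unless those references are released after the frame has been rendered, every frame's pixel data stays allocated. A sketch of the per-frame loop from the question with that release added (av_frame_unref() is the standard call for dropping a frame's references; the surrounding names follow the question's own code):

```c
/* Sketch of the per-frame loop, releasing the sink's references each pass. */
av->frame_in->pts = av_frame_get_best_effort_timestamp(av->frame_in);

/* push the decoded frame into the filtergraph;
   buffersrc takes ownership of frame_in's buffer references here */
if (av_buffersrc_add_frame(av->buffersrc_ctx, av->frame_in) < 0)
{
    printf("Error while feeding the filtergraph\n");
    break;
}

/* pull the filtered picture; this acquires new references in frame_out */
if (av_buffersink_get_frame(av->buffersink_ctx, av->frame_out) < 0)
{
    printf("Error while sourcing the filtergraph\n");
    break;
}

/* do stuff with frame */

/* release the references acquired from the sink, or they accumulate
   frame after frame and memory grows without bound */
av_frame_unref(av->frame_out);
```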
-
When I use the FFmpeg library to compress a video, the IDE shows this error
29 May 2017, by leslie
E/FFmpeg: Exception while trying to run: /data/user/0/com.example/files/ffmpeg -y -i /storage/emulated/0/DCIM/Camera/VID_20170517_112234.mp4 -vf scale=500:-1 qscale:v 8 -acodec copy -vcodec mpeg4 /storage/emulated/0/wobingwoyi/1496060268815.mp4
java.io.IOException: Error running exec(). Command: [/data/user/0/com.wobingwoyi/files/ffmpeg, -y, -i, /storage/emulated/0/DCIM/Camera/VID_20170517_112234.mp4, -vf, scale=500:-1, -qscale:v, 8, -acodec, copy, -vcodec, mpeg4, /storage/emulated/0/wobingwoyi/1496060268815.mp4]
Working Directory: null
Environment: null
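Two hedged observations on the log, not a confirmed diagnosis: java.io.IOException "Error running exec()" is typically raised before ffmpeg even runs, often because the bundled binary is not marked executable; and scale=500:-1 can produce an odd output height, which the mpeg4 encoder rejects, while -2 rounds to an even value. A sketch of both fixes, with paths copied verbatim from the log:

```shell
# Sketch: ensure the extracted ffmpeg binary is executable (a common cause of
# "Error running exec()" on Android), then use an even-height scale, which
# the mpeg4 encoder requires.
chmod 755 /data/user/0/com.example/files/ffmpeg
ffmpeg -y -i /storage/emulated/0/DCIM/Camera/VID_20170517_112234.mp4 \
  -vf scale=500:-2 -qscale:v 8 -acodec copy -vcodec mpeg4 \
  /storage/emulated/0/wobingwoyi/1496060268815.mp4
```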
-
FFMPEG multi thread decoding
29 May 2017, by liran_11
I'm decoding a multi-sliced HEVC video using multiple threads (ffmpeg -threads 4 -thread_type slice), and I was wondering: are there any specific settings I should use while encoding in order to improve my results (FPS-wise)? Maybe some parameters that make slice-per-thread decoding work better.
Thanks.
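A hedged sketch of the encode-side knob in question, assuming libx265 and illustrative filenames: slice-threaded decoding can only parallelize if the encoder actually emits multiple slices per frame, so x265's slices option is the kind of parameter that matters here:

```shell
# Sketch: encode with 4 slices per frame so a 4-thread slice decoder has
# independent work units, then decode with slice threading to benchmark fps.
ffmpeg -i input.mp4 -c:v libx265 -x265-params slices=4 sliced.mp4
ffmpeg -threads 4 -thread_type slice -i sliced.mp4 -f null -
```

Note the trade-off: slices reduce compression efficiency somewhat, and when the slice count is low, frame-level threading (-thread_type frame) is often the faster option.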