Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
FFMPEG GIF with transparency from png image sequence
11 February 2020, by Nick S
I've been trying to use ffmpeg to create a GIF with a transparent background, but whenever the animation moves over the background, the old pixels stay behind. It's a tree with a wind animation; this is how it ends up: https://i.imgur.com/pq4ArBG.png
I first try to create the palette, and then the gif:
ffmpeg -i Tree_%04d.png -vf palettegen=reserve_transparent=1 palette.png
ffmpeg -framerate 30 -i Tree_%04d.png -i palette.png -lavfi paletteuse=alpha_threshold=128 treegif.gif
It seems the previous frames simply stay there, but I can't figure out how to dispose of them.
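One variant worth testing (an untested guess on my part, not a confirmed fix): the symptom suggests the GIF encoder is overlaying each frame on the previous one instead of clearing it, and the encoder's documented -gifflags option controls those inter-frame optimizations. Disabling the transparency-difference optimization would look like:

ffmpeg -framerate 30 -i Tree_%04d.png -i palette.png -lavfi paletteuse=alpha_threshold=128 -gifflags -transdiff treegif.gif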
-
What does ffmpeg think is the difference between an audio frame and audio sample ?
11 February 2020, by Mossmyr
Here's a curious option listed in the ffmpeg man pages:
-aframes number (output)
    Set the number of audio frames to output. This is an obsolete alias for "-frames:a", which you should use instead.
What exactly an 'audio frame' is seems unclear to me. This SO answer says that frame is synonymous with sample, but that can't be what ffmpeg thinks a frame is. Just look at this example, where I resample some audio to 22.05 kHz and trim it to a length of exactly 313 frames:
$ ffmpeg -i input.mp3 -frames:a 313 -ar:a 22.05K output.wav
If 'frame' and 'sample' were synonymous, we would expect an audio duration of about 0.014 seconds (313 samples / 22050 samples per second), but the actual duration is 8 seconds. ffmpeg thinks the frame rate of my input is 39.125.
What's going on here? What does ffmpeg think an audio frame really is? How do I go about finding this frame rate of my input audio?
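One way to see what ffmpeg actually treats as an audio frame is to dump per-frame metadata with ffprobe; each decoded frame reports how many samples it carries (a sketch using the question's input.mp3):

ffprobe -v error -select_streams a:0 -show_entries frame=nb_samples -of csv=p=0 input.mp3

For MP3 the decoder typically returns 1152 samples per frame; that is, a frame is a codec-defined block of samples rather than a single sample, which is why 313 frames last far longer than 0.014 seconds.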
-
Script to cut video by silence part with FFMPEG
11 February 2020, by fricadelle
This is a question that has also been raised here: How to split video or audio by silent parts, and here: How can I split an mp4 video with ffmpeg every time the volume is zero?
So I was able to come up with a straightforward bash script that works on my Mac.
Here it is (the only argument is the name of the video to be cut; it will generate a file start_timestamps.txt with the list of silence starts if that file does not exist, and reuse it otherwise):
#!/bin/bash
INPUT=$1
filename=$(basename -- "$INPUT")
extension="${filename##*.}"
filename="${filename%.*}"
SILENCE_DETECT="silence_detect_logs.txt"
TIMESTAMPS="start_timestamps.txt"

if [ ! -f $TIMESTAMPS ]; then
    echo "Probing start timestamps"
    ffmpeg -i "$INPUT" -af "silencedetect=n=-50dB:d=3" -f null - 2> "$SILENCE_DETECT"
    cat "$SILENCE_DETECT" | grep "silence_start: [0-9.]*" -o | grep -E '[0-9]+(?:\.[0-9]*)?' -o > "$TIMESTAMPS"
fi

PREV=0.0
number=0
cat "$TIMESTAMPS" | (
    while read ts; do
        printf -v fname -- "$filename-%02d.$extension" "$(( ++number ))"
        DURATION=$( bc <<< "$ts - $PREV")
        ffmpeg -y -ss "$PREV" -i "$INPUT" -t "$DURATION" -c copy "$fname"
        PREV=$ts
    done
    printf -v fname -- "$filename-%02d.$extension" "$(( ++number ))"
    ffmpeg -y -ss "$PREV" -i "$INPUT" -c copy "$fname"
)
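Assuming the script is saved as, say, split_by_silence.sh (the file name is mine, not the asker's), it would be invoked as:

chmod +x split_by_silence.sh
./split_by_silence.sh Some_video.mp4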
Unfortunately it does not seem to work:
I have a video that is basically a collection of clips, each clip introduced by a roughly 5-second silence over a static frame with a title on it. So I want to cut the original video so that each chunk is the 5-second introduction plus the video up to the next introduction. Hope that's clear.
Anyway, in my script I first find all silence_start values using ffmpeg's silencedetect filter. I get a start_timestamps.txt that reads:
141.126
350.107
1016.07
etc.
Then, for example, knowing that 1016.07 - 350.107 = 665.963, I would call the following (I don't need to transcode the video again):
ffmpeg -ss 350.107 -i Some_video.mp4 -t 665.963 -c copy "Some_video02.mp4"
The edge cases being the first chunk that has to go from 0 to 141.126 and the last chunk that has to go from last timestamp to end of the video.
Anyway, the start timestamps look legitimate, but my output chunks are completely wrong. Sometimes the video does not even play in QuickTime anymore. I don't even have my static frame with the title in any of the videos...
Hope someone can help. Thanks.
EDIT: OK, as explained in the comments, if I echo $PREV while commenting out the ffmpeg command, I get a perfectly legitimate list of values:
0.0
141.126
350.107
1016.07
etc.
With the ffmpeg command I get:
0.0
141.126
50.107
016.07
etc.
The question bash variable changes in loop with ffmpeg shows why: ffmpeg reads from stdin inside the while read loop and swallows the leading characters of the next timestamp.
I just need to append < /dev/null to the ffmpeg command or add the -nostdin argument, as in the sketch below. Thanks everybody.
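For completeness, the corrected ffmpeg line inside the read loop would look like this (one added flag; the script above is otherwise unchanged):

# -nostdin stops ffmpeg from consuming the loop's stdin,
# which was eating the leading characters of the next timestamp
ffmpeg -nostdin -y -ss "$PREV" -i "$INPUT" -t "$DURATION" -c copy "$fname"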
-
Find frame in video using ffmpeg
11 February 2020, by Tom
I have an image saved as 1.jpg and I want to find frames similar to this image in a video and get the frame numbers or timestamps of those frames. The command below can find similar images, but it outputs results in a hard-to-parse format. How can I fix this command to get just the similar frames and no other information?
ffmpeg.exe -i "1.mkv" -r 1 -loop 1 -i 1.jpg -an -filter_complex "blend=difference:shortest=1,blackframe=99:32" -f null -
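As a post-processing sketch (this assumes the blackframe filter's usual log format, frame:N pblack:P pts:P t:T, which I have not verified against this exact build), the matching frames could be pulled out of stderr with grep:

ffmpeg -i "1.mkv" -r 1 -loop 1 -i 1.jpg -an \
  -filter_complex "blend=difference:shortest=1,blackframe=99:32" \
  -f null - 2>&1 | grep -o "frame:[0-9]* pblack:[0-9]* pts:[0-9]* t:[0-9.]*"

On Windows (the question uses ffmpeg.exe), findstr can play the same role as grep.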
-
How to release resources from ffmpeg
11 February 2020, by Summit
I have built a class that reads an AVI file and displays it.
This is the definition of the class:
typedef struct {
    AVFormatContext *fmt_ctx;
    int stream_idx;
    AVStream *video_stream;
    AVCodecContext *codec_ctx;
    AVCodec *decoder;
    AVPacket *packet;
    AVFrame *av_frame;
    AVFrame *gl_frame;
    struct SwsContext *conv_ctx;
    unsigned int frame_tex;
} AppData;

class ClipPlayer {
private:
    AppData data;
    std::vector<AVFrame*> cache;
public:
    ClipPlayer();
    ClipPlayer(const ClipPlayer& player);
    ClipPlayer& operator=(const ClipPlayer& player);
    ~ClipPlayer();
    void initializeAppData();
    void clearAppData();
    bool readFrame();
    bool initReadFrame();
    void playCache();
    void init();
    void draw();
    void reset();
};
In the init function the AVI file is read and the frames are saved in memory.
void init()
{
    initializeAppData();
    // open video
    if (avformat_open_input(&data.fmt_ctx, stdstrPathOfVideo.c_str(), NULL, NULL) < 0) {
        clearAppData();
        return;
    }
    // find stream info
    if (avformat_find_stream_info(data.fmt_ctx, NULL) < 0) {
        clearAppData();
        return;
    }
    // find the video stream
    for (unsigned int i = 0; i < data.fmt_ctx->nb_streams; ++i) {
        if (data.fmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            data.stream_idx = i;
            break;
        }
    }
    if (data.stream_idx == -1) {
        clearAppData();
        return;
    }
    data.video_stream = data.fmt_ctx->streams[data.stream_idx];
    data.codec_ctx = data.video_stream->codec;
    // find the decoder
    data.decoder = avcodec_find_decoder(data.codec_ctx->codec_id);
    if (data.decoder == NULL) {
        clearAppData();
        return;
    }
    // open the decoder
    if (avcodec_open2(data.codec_ctx, data.decoder, NULL) < 0) {
        clearAppData();
        return;
    }
    // allocate the video frames
    data.av_frame = av_frame_alloc();
    data.gl_frame = av_frame_alloc();
    int size = avpicture_get_size(AV_PIX_FMT_RGBA, data.codec_ctx->width, data.codec_ctx->height);
    uint8_t *internal_buffer = (uint8_t *)av_malloc(size * sizeof(uint8_t));
    avpicture_fill((AVPicture *)data.gl_frame, internal_buffer, AV_PIX_FMT_RGBA,
                   data.codec_ctx->width, data.codec_ctx->height);
    data.packet = (AVPacket *)av_malloc(sizeof(AVPacket));
}
/////////////////////////////////////////////////////////////
bool ClipPlayer::initReadFrame()
{
    do {
        glBindTexture(GL_TEXTURE_2D, data.frame_tex);
        int error = av_read_frame(data.fmt_ctx, data.packet);
        if (error) {
            av_free_packet(data.packet);
            return false;
        }
        if (data.packet->stream_index == data.stream_idx) {
            int frame_finished = 0;
            if (avcodec_decode_video2(data.codec_ctx, data.av_frame, &frame_finished, data.packet) < 0) {
                av_free_packet(data.packet);
                return false;
            }
            if (frame_finished) {
                // lazily create the RGBA conversion context
                if (!data.conv_ctx) {
                    data.conv_ctx = sws_getContext(data.codec_ctx->width, data.codec_ctx->height,
                                                   data.codec_ctx->pix_fmt,
                                                   data.codec_ctx->width, data.codec_ctx->height,
                                                   AV_PIX_FMT_RGBA, SWS_BICUBIC, NULL, NULL, NULL);
                }
                sws_scale(data.conv_ctx, data.av_frame->data, data.av_frame->linesize, 0,
                          data.codec_ctx->height, data.gl_frame->data, data.gl_frame->linesize);
                glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, data.codec_ctx->width, data.codec_ctx->height,
                                GL_RGBA, GL_UNSIGNED_BYTE, data.gl_frame->data[0]);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
                // deep-copy the decoded frame into the cache
                AVFrame *cachedValue = av_frame_alloc();
                cachedValue->format = data.av_frame->format;
                cachedValue->width = data.av_frame->width;
                cachedValue->height = data.av_frame->height;
                cachedValue->channels = data.av_frame->channels;
                cachedValue->channel_layout = data.av_frame->channel_layout;
                cachedValue->nb_samples = data.av_frame->nb_samples;
                av_frame_get_buffer(cachedValue, 32);
                av_frame_copy(cachedValue, data.av_frame);
                av_frame_copy_props(cachedValue, data.av_frame);
                cache.push_back(cachedValue);
            }
        }
    } while (data.packet->stream_index != data.stream_idx);
    return true;
}
////////////////////////////////////////////////////////////////////
In the playCache function, the cached frames are displayed:
void ClipPlayer::playCache()
{
    glActiveTexture(GL_TEXTURE0);
    sws_scale(data.conv_ctx, cache[loop]->data, cache[loop]->linesize, 0,
              data.codec_ctx->height, data.gl_frame->data, data.gl_frame->linesize);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, data.codec_ctx->width, data.codec_ctx->height,
                    GL_RGBA, GL_UNSIGNED_BYTE, data.gl_frame->data[0]);
    glBindTexture(GL_TEXTURE_2D, data.frame_tex);
}
In the destructor I try to free the memory:
~ClipPlayer()
{
    for (auto &frame : cache) {
        av_freep(frame);
    }
}
I am not very proficient with FFmpeg; my question is whether I have freed the memory properly.
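For what it's worth, a fuller teardown might look like the sketch below. This is an assumption on my part, built from the same deprecated-API era the question's code uses (avcodec_decode_video2 and friends), and not verified against this exact class:

~ClipPlayer()
{
    // av_frame_free() releases both the frame's buffers and the
    // AVFrame struct itself, then NULLs the pointer
    for (auto &frame : cache) {
        av_frame_free(&frame);
    }
    cache.clear();

    // gl_frame's pixel buffer was allocated separately with av_malloc
    if (data.gl_frame) {
        av_freep(&data.gl_frame->data[0]);
        av_frame_free(&data.gl_frame);
    }
    av_frame_free(&data.av_frame);
    av_freep(&data.packet);

    if (data.codec_ctx)
        avcodec_close(data.codec_ctx);
    if (data.fmt_ctx)
        avformat_close_input(&data.fmt_ctx);
    if (data.conv_ctx)
        sws_freeContext(data.conv_ctx);
}

By contrast, av_freep(frame) in the original frees only the AVFrame struct, not the buffers attached by av_frame_get_buffer, and it leaves the other contexts open.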