Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • Randomly extract video frames from multiple files

2 July, by PatraoPedro

    Basically I have a folder with hundreds of video files (*.avi), each roughly an hour long. What I would like to achieve is a piece of code that goes through each of those videos, randomly selects two or three frames from each file, and then either stitches them back together or, alternatively, saves the frames to a folder as JPEGs. Initially I thought I could do this with R, but I quickly realised I would need something else, possibly working together with R.

    Is it possible to call FFmpeg from R to do the task above?
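
    One common approach (a minimal sketch, not an authoritative recipe) is to drive the ffmpeg and ffprobe command-line tools from a scripting language; the same two commands can be launched from R with system() or system2(). The videos/ and frames/ folder names below are hypothetical:

    import glob
    import os
    import random
    import subprocess

    def duration_seconds(path):
        # ffprobe prints the container duration as a bare number
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-show_entries", "format=duration",
             "-of", "default=noprint_wrappers=1:nokey=1", path],
            capture_output=True, text=True, check=True)
        return float(out.stdout)

    os.makedirs("frames", exist_ok=True)
    for video in glob.glob("videos/*.avi"):  # hypothetical input folder
        total = duration_seconds(video)
        # pull three frames at random timestamps and save them as JPEGs
        for i, t in enumerate(random.uniform(0, total) for _ in range(3)):
            jpeg = os.path.join("frames", f"{os.path.basename(video)}_{i}.jpg")
            # -ss before -i seeks quickly; -frames:v 1 writes a single frame
            subprocess.run(["ffmpeg", "-y", "-ss", f"{t:.3f}", "-i", video,
                            "-frames:v", "1", jpeg], check=True)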

  • UserWarning: x bytes wanted but 0 bytes read at frame index y (out of a total z frames), at time q sec. Using the last valid frame instead

    1 July, by Wiz

    I have a script which subclips a larger video and then stitches the needed parts together using concatenate and write_videofile in MoviePy. The problem is that every time I run the script I get the video, but the audio is out of sync: it usually starts in sync and then becomes progressively more delayed. I suspect this is because I concatenate around 220 smaller mp4 videos into one big mp4 video, and every smaller clip produces the warning: 'UserWarning: In file test.mp4, 1555200 bytes wanted but 0 bytes read at frame index y (out of a total y+1 frames), at time x sec. Using the last valid frame instead.'

    I use MoviePy v2.

    My code (it doesn't produce any hard errors, but does give the aforementioned UserWarning when writing video_unsilenced.mp4):

    import os
    import moviepy as mpy
    from moviepy import VideoFileClip

    cuts = []
    input_paths = []
    vc = []
    os.makedirs(r"ShortsBot\SUBCLIPS", exist_ok=True)  # don't crash on reruns
    for n in range(len(startsilence) - 1):
        # cut from the end of one silence to just past the start of the next
        w = VideoFileClip(r"ShortsBot\output\cropped_video.mp4").subclipped(
            endsilence[n], startsilence[n + 1] + 0.5)
        w.write_videofile(r"ShortsBot\SUBCLIPS\video" + str(n) + ".mp4")
        a = VideoFileClip(r"ShortsBot\SUBCLIPS\video" + str(n) + ".mp4")
        vc.append(a)

    output_fname = "video_unsilenced.mp4"
    clip = mpy.concatenate_videoclips(clips=vc, method='compose')
    clip.write_videofile(filename=output_fname, fps=30)
    _ = [a.close() for a in vc]
    

    Because MoviePy is shaving off a frame or two from every video clip while writing the audio of the concatenated clip in full (without shaving off the audio that belongs to the missing frames), the video and audio slowly drift out of sync. And the more clips I concatenate, the further out of sync the audio becomes, which basically confirms my suspicion that it is because MoviePy is using the last valid frame while writing the audio normally. My question is: how can I fix this? I've looked for similar questions but haven't found the exact answer I was looking for. Sorry if this is something basic; I'm a beginner Python programmer and would really appreciate some tips or some sort of fix. Thanks everyone!
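
    One mitigation worth trying (a minimal sketch, assuming a constant-frame-rate source and the startsilence/endsilence lists from the script above) is to snap every cut point to an exact frame boundary and skip the intermediate write/re-read round trip, so each subclip's video and audio cover exactly the same span:

    from moviepy import VideoFileClip, concatenate_videoclips

    def snap(t, fps):
        # round a time in seconds to the nearest whole frame
        return round(t * fps) / fps

    src = VideoFileClip(r"ShortsBot\output\cropped_video.mp4")
    fps = src.fps
    clips = [src.subclipped(snap(endsilence[i], fps),
                            snap(startsilence[i + 1] + 0.5, fps))
             for i in range(len(startsilence) - 1)]
    # "chain" skips the compositing path, which same-sized clips don't need
    final = concatenate_videoclips(clips, method="chain")
    final.write_videofile("video_unsilenced.mp4", fps=fps, audio_codec="aac")
    src.close()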

  • ffmpeg create blank screen with text video

    1 July, by kalafun

    I would like to create a video consisting of some text. The video will only be 0.5 seconds long, and the background should just be some solid colour. I am able to create such a video from a photo, but I can't find anywhere how this could be done without a photo, using only text.
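
    For what it's worth, ffmpeg can synthesize the background itself with the lavfi color source and overlay the text with the drawtext filter, so no photo is needed. A minimal sketch as a Python subprocess call (assuming an ffmpeg build with libfreetype; on Windows, drawtext usually also needs an explicit fontfile= path):

    import subprocess

    subprocess.run([
        "ffmpeg", "-y",
        # lavfi 'color' source: solid blue background, 1280x720, 0.5 s, 30 fps
        "-f", "lavfi", "-i", "color=c=blue:size=1280x720:duration=0.5:rate=30",
        # centred white text drawn over the solid colour
        "-vf", "drawtext=text='Hello':fontcolor=white:fontsize=72:"
               "x=(w-text_w)/2:y=(h-text_h)/2",
        "out.mp4",
    ], check=True)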

  • ffmpeg dshow - Getting duplicate/dropped frames, I'm not doing anything [closed]

    30 June, by ENunn

    I'm messing around with recording my capture card with ffmpeg. No matter what I do, I get duplicate and dropped frames.

    I'm not even doing anything on my computer. Is there a fix for this at all? Here's my command:

    ffmpeg -hide_banner -rtbufsize 2G -f dshow -video_size 2560x1440 -framerate 60.0002 -pix_fmt bgr24 -video_pin_name 0 -audio_pin_name 1 -i video="AVerMedia HD Capture GC573 1":audio="AVerMedia HD Capture GC573 1" -async 1 -rtbufsize 1M -f dshow -sample_rate 48000 -i audio="Digital Audio (S/PDIF) (Sound Blaster X-Fi Xtreme Audio)" -map 0 -map 1 -colorspace:v "bt709" -color_primaries:v "bt709" -color_trc:v "bt709" -color_range:v "tv" -c:v hevc_nvenc -pix_fmt yuv444p16le -gpu any -g 30 -rc vbr -cq 16 -qmin 16 -qmax 16 -b:v 0K -b_ref_mode 1 -spatial_aq 1 -temporal_aq 1 -preset p7 -c:a copy -f segment -segment_time 9999999999 -strftime 1 "F:\ffmpeg recordings%%Y-%%m-%%d_%%H-%%M-%%S.mkv"

  • C++/FFmpeg VLC doesn't play 5.1 audio output

    30 June, by widgg

    I'm trying to convert audio streams to AAC in C++. FFplay plays everything fine (now), but VLC still has problems with one particular situation: 5.1(side). FFplay only plays it if I filter 5.1(side) to 5.1. Filtering to stereo or mono works as expected.

    My setup right now is:

    • send packet
    • receive audio AVFrame
    • apply filter
    • resample to produce output AVFrame with 1024 samples (required by AAC)
    • send new audio frame
    • receive audio packet

    Weirdly enough, using FFmpeg's CLI converts my file properly.

    ffmpeg -i  test.mp4
    

    But FFprobe tells me that the audio stream is now 6 channels instead of 5.1(side) or 5.1. I did try to set AAC to 6 channels in both the AVStream and the AVCodecContext: setting it on the AVStream doesn't change anything in FFprobe, and the AVCodecContext for AAC doesn't allow it.

    FFprobe of the audio stream in the source is:

    ac3 (AC-3 / 0x332D4341), 48000 Hz, 5.1(side), fltp, 384 kb/s
    

    FFprobe of the file created with FFmpeg's CLI:

    aac (LC) (mp4a / 0x6134706D), 48000 Hz, 6 channels, fltp, 395 kb/s (default)
    

    FFprobe of my current version:

    aac (mp4a / 0x6134706D), 48000 Hz, 5.1, fltp, 321 kb/s (default)
    

    Update 1

    Here's how the filter is created (error-handling code removed to keep it shorter and more compact):

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavfilter/avfilter.h>
    #include <libavfilter/buffersink.h>
    #include <libavfilter/buffersrc.h>
    #include <libavutil/channel_layout.h>
    #include <libavutil/opt.h>
    }
    #include <format>
    #include <string>

    using namespace std::string_view_literals;

    struct FilterInfo {
        const AVCodecContext *srcContext, *sinkContext;
    };

    struct Filter {
        // filled in by create(); the graph owns both filter contexts
        AVFilterGraph* graph = nullptr;
        AVFilterContext* src = nullptr;
        AVFilterContext* sink = nullptr;
    };
    
    std::string getLayoutName(const AVChannelLayout* layout) {
        char layoutName[256];
        av_channel_layout_describe(layout, layoutName, sizeof(layoutName));
        // av_channel_layout_describe() returns the size including the
        // terminating NUL, so build from the C string rather than (buf, ret)
        // to avoid embedding a stray '\0' in the filter arguments
        return std::string(layoutName);
    }
    
    std::string getFilterArgs(const FilterInfo& info) {
        const auto layout = &info.sinkContext->ch_layout;
        std::string dstLayout = getLayoutName(layout);
    
        std::string chans("0"sv);
        for (int c = 1; c < layout->nb_channels; ++c) {
            chans = std::format("{}|{}"sv, chans, c);
        }
        return std::format("channelmap={}:{}"sv, chans, dstLayout);
    }
    
    AVFilterContext* createSrcFilterContext(
        const AVCodecContext* cctx, AVFilterGraph* graph) {
        //
        std::string layout=getLayoutName(&cctx->ch_layout);
    
        const auto args = std::format("time_base={}/{}:sample_rate={}:sample_fmt={}:channel_layout={}"sv,
            cctx->time_base.num, cctx->time_base.den, cctx->sample_rate,
            av_get_sample_fmt_name(cctx->sample_fmt), layout);
    
        const AVFilter* filt = avfilter_get_by_name("abuffer");
        AVFilterContext* fctx = nullptr;
        avfilter_graph_create_filter(&fctx, filt, "in", args.c_str(), nullptr, graph);
        return fctx;
    }
    
    AVFilterContext* createSinkFilterContext(
        const AVCodecContext* cctx, AVFilterGraph* graph) {
        //
        std::string layout = getLayoutName(&cctx->ch_layout);
    
        const AVFilter* filt = avfilter_get_by_name("abuffersink");
        AVFilterContext* fctx = nullptr;
        avfilter_graph_create_filter(&fctx, filt, "out", nullptr, nullptr, graph);
        av_opt_set(fctx, "ch_layouts", layout.c_str(), AV_OPT_SEARCH_CHILDREN);
    
        const AVSampleFormat sampleFmts[] = {cctx->sample_fmt, AV_SAMPLE_FMT_NONE};
        av_opt_set_int_list(fctx, "sample_fmts", sampleFmts, AV_SAMPLE_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
    
        const int sampleRates[] = {cctx->sample_rate, -1};
        av_opt_set_int_list(fctx, "sample_rates", sampleRates, -1, AV_OPT_SEARCH_CHILDREN);
        return fctx;
    }
    
    Filter create(const FilterInfo& info) {
        std::string filterArgs = getFilterArgs(info);

        Filter filter;
        filter.graph = avfilter_graph_alloc();
        filter.src = createSrcFilterContext(info.srcContext, filter.graph);
        filter.sink = createSinkFilterContext(info.sinkContext, filter.graph);

        AVFilterInOut* inputs = avfilter_inout_alloc();
        AVFilterInOut* outputs = avfilter_inout_alloc();

        // the parsed graph is fed by the "in" (abuffer) filter...
        outputs->name = av_strdup("in");
        outputs->filter_ctx = filter.src;
        outputs->pad_idx = 0;
        outputs->next = nullptr;

        // ...and drained by the "out" (abuffersink) filter
        inputs->name = av_strdup("out");
        inputs->filter_ctx = filter.sink;
        inputs->pad_idx = 0;
        inputs->next = nullptr;

        avfilter_graph_parse_ptr(filter.graph, filterArgs.c_str(), &inputs, &outputs, nullptr);
        avfilter_inout_free(&inputs);
        avfilter_inout_free(&outputs);

        avfilter_graph_config(filter.graph, nullptr);
        return filter;
    }
    

    This is how it gets executed:

    
    AVFrame* receiveFrame(Filter& f) {
        AVFrame* frm = av_frame_alloc();
        if (int ret = av_buffersink_get_frame(f.sink, frm); ret < 0) {
            av_frame_free(&frm); // don't leak the frame on failure
            if (ret == AVERROR(EAGAIN)) {
                return nullptr;
            }
            else {
                // throw error
            }
        }
        return frm;
    }
    
    void filteredFrameToProcess(Filter& f, SomeFrameQueue& queue) {
        while (true) {
            if (auto frm = receiveFrame(f); frm) {
                queue.emplace_back(frm);
            }
            else {
                break;
            }
        }
    }
    
    void filter(Filter& f, SomeFrameQueue& dst, SomeFrameQueue& src) {
        filteredFrameToProcess(f, dst); 
        if (!src.empty()) {
            av_buffersrc_write_frame(f.src, src.front());
            src.pop_front();
        }
        filteredFrameToProcess(f, dst);
    }