Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • What's FFmpeg doing with avcodec_send_packet() ?

    April 4, by Jim

    I'm trying to optimise a piece of software for playing video, which internally uses the FFmpeg libraries for decoding. We've found that on some large (4K, 60fps) video, it sometimes takes longer to decode a frame than that frame should be displayed for; sadly, because of the problem domain, simply buffering/skipping frames is not an option.

    However, it appears that the FFmpeg executable is able to decode the video in question fine, at about 2x speed, so I've been trying to work out what we're doing wrong.

    I've written a very stripped-back decoder program for testing; the source is here (it's about 200 lines). From profiling it, it appears that the one major bottleneck during decoding is the avcodec_send_packet() function, which can take up to 50ms per call. However, measuring the same call in FFmpeg shows strange behaviour:

    [Image: the times taken for each call to avcodec_send_packet(), in milliseconds, when decoding a 4K 25 fps VP9-encoded video.]

    Basically, it seems that when FFmpeg uses this function, avcodec_send_packet() only takes a significant amount of time once every N calls, where N is the number of threads being used for decoding. However, both my test decoder and the actual product use 4 threads for decoding, and this pattern doesn't appear; with frame-based threading, the test decoder behaves like FFmpeg does with only 1 thread. This would seem to indicate that we're not actually using multithreading, yet we still see performance improvements from using more threads.

    FFmpeg's results average out to being about twice as fast overall as our decoders, so clearly we're doing something wrong. I've been reading through FFmpeg's source to try to find any clues, but it's so far eluded me.

    My question is: what's FFmpeg doing here that we're not? Alternatively, how can we increase the performance of our decoder?

    Any help is greatly appreciated.
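    The every-N-calls pattern is what you'd expect from frame-based multithreading: with N frame threads, avcodec_send_packet() usually just hands the packet to an idle worker and returns immediately, only blocking when all workers are still busy. A minimal pure-Python model of that scheduling (an illustration of the idea, not FFmpeg's actual code) reproduces the shape of the observed timings:

    ```python
    import heapq

    def send_packet_waits(num_calls, decode_ms, threads):
        """Model send_packet() against a pool of frame-threads.

        Each call grabs the earliest-free worker and only blocks when
        every worker is still busy.  Returns the per-call wait in ms.
        """
        free_at = [0] * threads       # time at which each worker becomes free
        heapq.heapify(free_at)
        now, waits = 0, []
        for _ in range(num_calls):
            earliest = heapq.heappop(free_at)
            wait = max(0, earliest - now)
            now += wait               # the call blocks until a worker frees up
            heapq.heappush(free_at, now + decode_ms)
            waits.append(wait)
        return waits

    # 4 frame threads, 50 ms per frame: only every 4th call blocks,
    # matching the timings observed from the ffmpeg executable.
    print(send_packet_waits(12, 50, 4))
    # → [0, 0, 0, 0, 50, 0, 0, 0, 50, 0, 0, 0]
    ```

    If the test decoder instead behaves like the 1-thread case, it is worth double-checking that thread_count (and, for codecs that support it, thread_type = FF_THREAD_FRAME) is set on the AVCodecContext *before* avcodec_open2(), and remembering that frame threading is pipelined: the first decoded frames only come out after roughly N packets have gone in.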

  • FFprobe Throws JsonSyntaxException with NumberFormatException on Some Video Files [closed]

    April 3, by Daydreamer067

    I'm using FFprobe through the Java wrapper to analyze video files:

        FFprobe ffprobe = new FFprobe("pathTo/ffprobe.exe");
        FFmpegProbeResult probeResult = ffprobe.probe(fichierTemp.getPath());
    

    This works fine for most files. However, on some video files, I get the following error:

    com.google.gson.JsonSyntaxException: java.lang.NumberFormatException: Expected an int but was 109893054318340751 at line 160 column 37 path $.chapters[0].id
    

    It seems that FFprobe is returning a chapter ID that is a long, while the wrapper expects an int. How can I handle this situation and avoid the exception?

    Is there a way to customize the JSON parsing or configure FFprobe to return int values?
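    The JSON itself is fine: ffprobe emits the chapter id as a plain integer, and it only overflows when the wrapper's model maps that field to a 32-bit int. A quick stdlib sketch (the JSON snippet is a trimmed, hypothetical excerpt of `ffprobe -print_format json -show_chapters` output) shows the value parses cleanly into an arbitrary-precision type:

    ```python
    import json

    # Trimmed, hypothetical excerpt of ffprobe's JSON output
    raw = '{"chapters": [{"id": 109893054318340751, "start": 0}]}'

    chapter_id = json.loads(raw)["chapters"][0]["id"]
    print(chapter_id)            # parses fine as an arbitrary-precision int
    assert chapter_id > 2**31    # too big for a 32-bit int, fits in a 64-bit long
    ```

    So the fix belongs on the wrapper side: if its chapter model declares the id field as int, changing it to long (by patching or forking the wrapper, or by parsing the raw JSON yourself) avoids the exception.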

  • Using HEVC with Alpha to Compose Moviepy Video

    April 3, by James Grace

    I'm using moviepy, PIL and numpy to compile a video from 3 components: a background image that is a PNG with no transparency, an overlay video that is an HEVC with alpha, and a primary clip produced from a collection of PNG images with transparency.

    The video is composed with background + overlay video + main video.

    The problem I'm having is that the overlay video has a black background, so the background image is covered completely. Moviepy is able to import the HEVC video successfully, but it seems as though the alpha channel is lost on import.

    Any ideas?

    Here's my code:

    from PIL import Image
    import moviepy.editor as mpe
    import numpy as np
    
    def CompileVideo():
    
        frames = ["list_of_png_files_with_transparency"]
        fps = 30.0
        clips = [mpe.ImageClip(np.asarray(Image.open(frame))).set_duration(1 / int(fps)) for frame in frames]
        ad_clip = mpe.concatenate_videoclips(clips, method="compose")
        bg_clip = mpe.ImageClip(np.asarray(Image.open("path_to_background_file_no_transparency"))).set_duration(ad_clip.duration)
    
        overlay_clip = mpe.VideoFileClip("path_to_HEVC_with_Alpha.mov")
    
        comp = [bg_clip, overlay_clip, ad_clip]
    
        final = mpe.CompositeVideoClip(comp).set_duration(ad_clip.duration)
        final.write_videofile("output.mp4", fps=fps)
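    The likely culprit is the decoder rather than moviepy: FFmpeg's HEVC decoder (which moviepy reads through) generally ignores the auxiliary alpha layer that Apple-style "HEVC with alpha" files carry, so the frames arrive as opaque RGB. The usual workaround is to re-export the overlay in a format whose alpha survives decoding, for example a PNG sequence or ProRes 4444, and load it with its mask (e.g. `mpe.VideoFileClip("overlay.mov", has_mask=True)`). For reference, this is all the mask changes at composite time; a minimal pure-Python "over" operator:

    ```python
    def over(fg, alpha, bg):
        """Composite one RGB pixel over another using an alpha in [0, 1].

        With alpha = 0 the background shows through; with a fully opaque
        black overlay (alpha = 1 everywhere) the background is hidden --
        which is exactly what happens when the alpha channel is lost.
        """
        return tuple(round(f * alpha + b * (1 - alpha)) for f, b in zip(fg, bg))

    print(over((255, 0, 0), 0.0, (0, 0, 255)))  # → (0, 0, 255): background wins
    print(over((255, 0, 0), 1.0, (0, 0, 255)))  # → (255, 0, 0): overlay wins
    ```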
    
  • FFmpeg av_seek_frame() successful, but decoding always starts from the beginning

    April 3, by Summit

    I'm working on a video player using FFmpeg in C++, where I need to seek to a specific PTS (timestamp) and start decoding from there. However, even after a successful av_seek_frame(), decoding always starts from frame 0 instead of the target frame.

    void decodeLoop() {
          while (!stopThread) {
    
              std::unique_lock lock(mtx);
              cv.wait(lock, [this] { return decoding || stopThread; });
    
              if (stopThread) break;
    
              int64_t reqTimestamp = requestedTimestamp.load();
    
              // Skip redundant seeks
              if (reqTimestamp == lastRequestedTimestamp.load()) {
                  continue;
              }
    
              AVRational stream_tb = fmt_ctx->streams[video_stream_index]->time_base;
              int64_t target_pts = FFMAX(reqTimestamp, 0); // Clamp to non-negative
    
              int seek_flags = AVSEEK_FLAG_BACKWARD;
    
              if (av_seek_frame(fmt_ctx, video_stream_index, target_pts, seek_flags) >= 0) {
                  avcodec_flush_buffers(codec_ctx);  // Clear old frames from the decoder
                  avformat_flush(fmt_ctx);          // **Flush demuxer**
                  qDebug() << "Seek successful to PTS:" << target_pts;
           //      avfilter_graph_free(&filter_graph);  // Reset filter graph
              }
              else {
                  qDebug() << "Seeking failed!";
                  decoding = false;
                  continue;
              }
    
              lock.unlock();
    
              // Keep decoding frames until we reach (or slightly exceed) requestedTimestamp
              bool frameDecoded = false;
              while (av_read_frame(fmt_ctx, pkt) >= 0) {
                  if (pkt->stream_index == video_stream_index) {
                      if (avcodec_send_packet(codec_ctx, pkt) == 0) {
                          while (avcodec_receive_frame(codec_ctx, frame) == 0) {
                              int64_t frame_pts = frame->pts;
    
                              // Skip frames before reaching target PTS
                              if (frame_pts < requestedTimestamp.load()) {
    
                                  qDebug() << "SKIPPING FRAMES " << frame_pts;
                                  av_frame_unref(frame);  // Free unused frame memory
                                  continue;  // Discard early frames
                              }
    
                              qDebug() << "DECODING THE SEEKED FRAMES " << frame_pts;
    
                              if (frame_pts >= requestedTimestamp.load()) {
                                 
                                      // Accept frame only if it's close enough to the target PTS
                                      current_pts.store(frame_pts);
                                      lastRequestedTimestamp.store(requestedTimestamp.load());
                                      convertHWFrameToImage(frame);
                                      emit frameDecodedSignal(outputImage);
                                      frameDecoded = true;
                                      break;  // Stop decoding once we reach the desired frame
                                 
                              }
                          }
                      }
                  }
                  av_packet_unref(pkt);
    
                  if (frameDecoded) break;
              }
    
              decoding = false;
          }
      }
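    One thing that stands out: the code fetches the stream's time_base but never applies it to the seek target. av_seek_frame() with a stream index expects the timestamp in that stream's time_base units; if requestedTimestamp is in some other unit (seconds, milliseconds, or AV_TIME_BASE ticks), the computed target can land at or before the first keyframe, and with AVSEEK_FLAG_BACKWARD the seek then resolves to the start of the file while still returning success. A small sketch of the conversion, shown in Python with Fraction standing in for AVRational (in C this is what av_rescale_q() does):

    ```python
    from fractions import Fraction

    def to_stream_pts(seconds, time_base):
        """Convert a position in seconds into PTS ticks expressed in a
        stream's time_base (Fraction standing in for FFmpeg's AVRational)."""
        return round(seconds / time_base)

    # A 90 kHz MPEG-TS-style time base: 2 s into the file is PTS 180000.
    print(to_stream_pts(2.0, Fraction(1, 90000)))  # → 180000
    ```

    In the C++ code, if requestedTimestamp is in AV_TIME_BASE units the equivalent would be `av_rescale_q(reqTimestamp, AV_TIME_BASE_Q, stream_tb)` before calling av_seek_frame() (an assumption, since the unit of requestedTimestamp isn't shown in the question).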
    
  • React Native alternatives to ffmpeg-kit-react-native for adding text overlays to videos?

    April 3, by Sanjay Kalal

    I’m developing a React Native app where I need to add text overlays to videos. I was using ffmpeg-kit-react-native, but that library is deprecated, so I'm looking for an efficient alternative that can add overlay text to a video and then share the result with the overlay burned in.

    Are there any React Native libraries or native integrations that can efficiently add text overlays to videos without using ffmpeg?