Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • opencv read error :[h264 @ 0x8f915e0] error while decoding MB 53 20, bytestream -7

    May 2, by Alex Luya

    My configuration:

      ubuntu 16.04
      opencv 3.3.1
      gcc version 5.4.0 20160609
      ffmpeg version 3.4.2-1~16.04.york0
    

    and I built opencv with:

    cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D PYTHON_EXECUTABLE=$(which python) -D OPENCV_EXTRA_MODULES_PATH=/home/xxx/opencv_contrib/modules -D WITH_QT=ON -D WITH_OPENGL=ON -D WITH_IPP=ON -D WITH_OPENNI2=ON -D WITH_V4L=ON -D WITH_FFMPEG=ON -D WITH_GSTREAMER=OFF -D WITH_OPENMP=ON -D WITH_VTK=ON -D BUILD_opencv_java=OFF -D BUILD_opencv_python3=OFF -D WITH_CUDA=ON -D ENABLE_FAST_MATH=1 -D WITH_NVCUVID=ON -D CUDA_FAST_MATH=ON -D BUILD_opencv_cnn_3dobj=OFF -D FORCE_VTK=ON  -D WITH_CUBLAS=ON -D CUDA_NVCC_FLAGS="-D_FORCE_INLINES" -D WITH_GDAL=ON -D WITH_XINE=ON -D BUILD_EXAMPLES=OFF -D BUILD_DOCS=ON -D BUILD_PERF_TESTS=OFF -D BUILD_TESTS=OFF  -D BUILD_opencv_dnn=OFF -D BUILD_PROTOBUF=OFF -D opencv_dnn_BUILD_TORCH_IMPORTER=OFF -D opencv_dnn_PERF_CAFFE=OFF -D opencv_dnn_PERF_CLCAFFE=OFF -DBUILD_opencv_dnn_modern=OFF -D CUDA_ARCH_BIN=6.1 ..
    

    and I use this Python code to read and display the stream:

    import cv2
    from com.xxx.cv.core.Image import Image

    capture = cv2.VideoCapture("rtsp://192.168.10.184:554/mpeg4?username=xxx&password=yyy")
    while True:
        grabbed, content = capture.read()
        if grabbed:
            Image(content).show()
            doSomething()
        else:
            print("nothing grabbed..")
    

    Every time, after reading about 50 frames, it gives an error like:

    [h264 @ 0x8f915e0] error while decoding MB 53 20, bytestream -7
    

    then nothing further can be grabbed. The strange thing is that if I either:

    1. comment out doSomething(), or
    2. keep doSomething() but record the stream from the same IP camera, then
       run the code against the recorded video

    the code works fine in both cases. Can anyone tell me how to solve this
    problem? Thanks in advance!
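    One pattern often suggested for RTSP streams that stop decoding mid-session is to reopen the capture once read() starts failing, instead of looping forever on a dead handle. A minimal, dependency-free sketch of that idea follows; `open_capture` is a hypothetical zero-argument factory that, in the setup above, would wrap `cv2.VideoCapture(rtsp_url)`:

```python
def read_with_reconnect(open_capture, max_retries=3):
    """Yield frames from a capture, reopening the source on read failures.

    open_capture: zero-arg callable returning an object with
        .read() -> (grabbed, frame) and .release()
        (e.g. lambda: cv2.VideoCapture(rtsp_url)).
    max_retries: consecutive failed reads tolerated before giving up.
    """
    capture = open_capture()
    retries = 0
    while retries <= max_retries:
        grabbed, frame = capture.read()
        if grabbed:
            retries = 0          # successful read resets the failure count
            yield frame
        else:
            retries += 1         # failed read: drop and reopen the source
            capture.release()
            capture = open_capture()
    capture.release()
```

    In the loop above one would then write `for frame in read_with_reconnect(lambda: cv2.VideoCapture(url)): Image(frame).show(); doSomething()`. This only masks the decoder error rather than fixing its cause (often packet loss on UDP; forcing TCP transport for RTSP is another common suggestion).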

  • ffmpeg.wasm in Angular 19

    May 2, by Yashar Tabrizi

    I am developing an Angular app that records videos. Since the videos that come out usually have variable and "wrong" framerates, I want to re-encode them using FFmpeg, particularly ffmpeg.wasm.

    I have installed the packages @ffmpeg/ffmpeg, @ffmpeg/core and @ffmpeg/util and I have written the following worker ffmpeg.worker.ts to do the initialization and to execute the FFmpeg processing:

    /// 
    import { FFmpeg } from '@ffmpeg/ffmpeg';
    import { toBlobURL } from '@ffmpeg/util';
    
    
    const baseURL = 'https://unpkg.com/@ffmpeg/core@0.12.10/dist/esm';
    
    const ffmpeg = new FFmpeg();
    
    let isLoaded = false;
    
    (async () => {
      await ffmpeg.load({
        coreURL: await toBlobURL(`${baseURL}/ffmpeg-core.js`, "text/javascript"),
        wasmURL: await toBlobURL(`${baseURL}/ffmpeg-core.wasm`, "application/wasm"),
      });
      isLoaded = true;
      self.postMessage({ type: 'ready' });
    })();
    
    self.onmessage = async (e: MessageEvent) => {
      if (!isLoaded) {
        self.postMessage({ type: 'error', error: 'FFmpeg not loaded yet!' });
        return;
      }
    
      if (e.data.byteLength === 0) return;
    
      try {
        await ffmpeg.writeFile('input', new Uint8Array(e.data));
    
        await ffmpeg.exec([
          '-i', 'input',
          '-r', '30',
          '-c:v', 'libx264',
          '-preset', 'ultrafast',
          '-pix_fmt', 'yuv420p',
          '-movflags', 'faststart',
          'out.mp4',
        ]);
    
        const data = await ffmpeg.readFile('out.mp4');
        if (data instanceof Uint8Array) {
          self.postMessage({ type: 'done', file: data.buffer }, [data.buffer]);
        } else {
          self.postMessage({ type: 'error', error: 'Unexpected output from ffmpeg.readFile.' });
        }
    
      } catch (err) {
        self.postMessage({ type: 'error', error: (err as Error).message });
      } finally {
        await ffmpeg.deleteFile('input');
        await ffmpeg.deleteFile('out.mp4');
      }
    }
    

    I have a service called cameraService where I do the recording and where I want to do the re-encoding after the recording has stopped, so I have this method that initializes the FFmpeg process:

      private encoder: Worker | null = null;
    
      private initEncoder() {
        if (this.encoder) return;
        this.encoder = new Worker(
          new URL('../workers/ffmpeg.worker', import.meta.url), // Location of my worker
          { type: 'module' }
        );
    
        this.encoder.onmessage = (e: MessageEvent) => {
          switch (e.data.type) {
            case 'ready':
              console.log('FFmpeg worker ready.');
              break;
    
            case 'done':
              this.reEncodedVideo = new Blob([e.data.file], { type: 'video/mp4' });
              this.videoUrlSubject.next(URL.createObjectURL(this.reEncodedVideo));
              console.log('FFmpeg encoding completed.');
              break;
            case 'error':
              console.error('FFmpeg encoding error:', e.data.error);
              break;
          }
        };
      }
    

    However, the loading of FFmpeg won't work, no matter what I do. Hosting the ffmpeg-core.js and ffmpeg-core.wasm files doesn't help either. I keep getting this message whenever ffmpeg.load() is called:

    The file does not exist at ".../.angular/cache/19.2.0/mover/vite/deps/worker.js?worker_file&type=module" which is in the optimize deps directory. The dependency might be incompatible with the dep optimizer. Try adding it to 'optimizeDeps.exclude'.

    I know this has something to do with Web Workers and their integration with Vite, but has anybody been able to implement ffmpeg.wasm in Angular 19, or is there even any way to achieve this? If not FFmpeg, are there alternatives for re-encoding a video after recording it in Angular 19?

  • FFMPEG repeated non-monotonic DTS error despite re-encoding and multiple fixes [closed]

    May 1, by World of Depth

    I have four MP4 files I'm trying to concatenate. After following the advice in many posts, and many, MANY tries, I've gotten to the point where the concatenated file plays back with video and audio, but I still get the following error while processing the 4th file, and I suspect that if I add a 5th it will break again.

    [aost#0:1/copy @ 0x135714fd0] Non-monotonic DTS; previous: 579108, current: 577078; changing to 579109. This may result in incorrect timestamps in the output file.
    [aost#0:1/copy @ 0x135714fd0] Non-monotonic DTS; previous: 579109, current: 578102; changing to 579110. This may result in incorrect timestamps in the output file.
    

    These are the commands I'm using to generate/prepare the 4 input files:

    ffmpeg -fflags +igndts -i original1.mp4 -i original1sound.wav -vf scale=1080:1544,setsar=1,unsharp=5:5:0.5 -r 30 -b:v 25M -pix_fmt yuv420p -c:v libx264 -c:a aac_at -ac 2 -video_track_timescale 15360 -max_muxing_queue_size 9999 -y input1.mp4
    
    ffmpeg -fflags +igndts -i original2.mp4 -i original2sound.wav -vf hflip,scale=1080:1544,setsar=1,unsharp=5:5:0.5 -r 30 -b:v 25M -pix_fmt yuv420p -c:v libx264 -c:a aac_at -ac 2 -video_track_timescale 15360 -max_muxing_queue_size 9999 -y input2.mp4
    
    ffmpeg -fflags +igndts -ss 0.5 -to 3.5 -i original3.mp4 -vf hflip,pad=1080:1544:0:16 -b:v 25M -pix_fmt yuv420p -c:v libx264 -c:a aac_at -max_muxing_queue_size 9999 -y input3.mp4
    
    ffmpeg -fflags +igndts -ss 0.5 -to 3.5 -i original4.mp4 -vf pad=1080:1544:0:16 -b:v 25M -pix_fmt yuv420p -c:v libx264 -c:a aac_at -b:a 128k -max_muxing_queue_size 9999 -y input4.mp4
    

    ffprobe returns this information about the 4 prepared input files, in order from 1 to 4:

    Stream #0:0[0x1](und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1080x1544 [SAR 1:1 DAR 135:193], 24643 kb/s, 30 fps, 30 tbr, 15360 tbn (default)
    Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
    
    Stream #0:0[0x1](und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1080x1544 [SAR 1:1 DAR 135:193], 25187 kb/s, 30 fps, 30 tbr, 15360 tbn (default)
    Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
    
    Stream #0:0[0x1](und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1080x1544, 21640 kb/s, 30 fps, 30 tbr, 15360 tbn (default)
    Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
    
    Stream #0:0[0x1](und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1080x1544, 21802 kb/s, 30 fps, 30 tbr, 15360 tbn (default)
    Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 129 kb/s (default)
    

    I notice some of the final video bitrates are very slightly different, despite specifying 25M; also input4 has 129k audio despite specifying 128k. Any idea why I'm still getting the DTS error?

    Related bonus question: in the course of troubleshooting this, the command below is what my final version for preparing original files to be concatenated (and preventing DTS errors) looks like. Note that it assumes the highest tbn value among the original files is 15360. Can anything here be omitted, or should anything be added?

    ffmpeg -fflags +igndts -i original.mp4 -r 30 -b:v 25M -pix_fmt yuv420p -c:v libx264 -c:a aac_at -ac 2 -video_track_timescale 15360 -max_muxing_queue_size 9999 -y prepared.mp4
    

    Thank you for any help/advice!
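    Since the concatenation command itself isn't shown in the question, here is a sketch of the usual concat-demuxer approach for reference. It assumes the prepared files are named input1.mp4 through input4.mp4 and that a list file named mylist.txt is acceptable (both names are placeholders); it is a command fragment, not a verified fix for this specific DTS warning:

```shell
# Build the concat list file: one entry per prepared input, in playback order.
cat > mylist.txt <<'EOF'
file 'input1.mp4'
file 'input2.mp4'
file 'input3.mp4'
file 'input4.mp4'
EOF

# Concatenate without re-encoding. -fflags +genpts is an input option
# (so it goes before -i) and regenerates presentation timestamps, which
# often avoids non-monotonic DTS warnings when the inputs were cut at
# slightly different audio/video boundaries.
ffmpeg -fflags +genpts -f concat -safe 0 -i mylist.txt -c copy -y output.mp4
```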

  • get specific metadata with ffprobe

    May 1, by mikem

    I'm having a terrible time getting one single line of metadata from ffprobe.

    I'm running this command:

    ffprobe -show_entries 'stream_tags : format_tags=com.apple.quicktime.creationdate' -loglevel error IMG_9931.MOV

    And I get this output

    [STREAM]
    TAG:creation_time=2022-05-14T20:24:55.000000Z
    TAG:language=und
    TAG:handler_name=Core Media Video
    TAG:encoder=H.264
    [/STREAM]
    [STREAM]
    TAG:creation_time=2022-05-14T20:24:55.000000Z
    TAG:language=und
    TAG:handler_name=Core Media Audio
    [/STREAM]
    [STREAM]
    TAG:creation_time=2022-05-14T20:24:55.000000Z
    TAG:language=und
    TAG:handler_name=Core Media Metadata
    [/STREAM]
    [STREAM]
    TAG:creation_time=2022-05-14T20:24:55.000000Z
    TAG:language=und
    TAG:handler_name=Core Media Metadata
    [/STREAM]
    [STREAM]
    TAG:creation_time=2022-05-14T20:24:55.000000Z
    TAG:language=und
    TAG:handler_name=Core Media Metadata
    [/STREAM]
    [FORMAT]
    TAG:com.apple.quicktime.creationdate=2022-05-14T16:24:55-0400
    [/FORMAT]

    But the only thing I want returned is

    com.apple.quicktime.creationdate=2022-05-14T16:24:55-0400

    I've searched and searched but I can't find any examples of pulling a single specific value of metadata.

    In actuality, I really just want the value of com.apple.quicktime.creationdate, i.e. "2022-05-14T16:24:55-0400".

    I know I can grep and awk my way through it, but it seems like there should be a way to do it with ffprobe alone given all of the options it has. I just can't figure out how.

    How can I do this? Any help would be appreciated.
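    For reference, ffprobe can emit just the value by selecting only the format-level tag (the `stream_tags :` part of the command above is what pulls in all the [STREAM] sections) and turning off the default writer's wrappers and keys. A sketch against the file from the question, untested here:

```shell
# -v error          : suppress the banner and info logging
# format_tags=...   : select only this one format-level tag; with no
#                     stream_tags selected, no [STREAM] sections print
# -of default=noprint_wrappers=1:nokey=1 : drop the [FORMAT]/[/FORMAT]
#                     wrapper and the "key=" prefix, leaving the raw value
ffprobe -v error \
  -show_entries format_tags=com.apple.quicktime.creationdate \
  -of default=noprint_wrappers=1:nokey=1 \
  IMG_9931.MOV
```

    With those flags the output should be the bare value, e.g. 2022-05-14T16:24:55-0400, with no grep/awk needed.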

  • Why can't I seek using PTS while reading an MXF file until av_write_trailer() is called?

    May 1, by Summit

    I'm writing an MXF file using FFmpeg in C++ and then reading it back for real-time playback. However, I'm running into a problem: seeking by PTS (av_seek_frame()) doesn't work properly until I call av_write_trailer() at the end of the writing session.

    Here’s the workflow:

    I'm encoding and writing frames to an MXF file using avformat_write_header() and av_interleaved_write_frame().

    I want to read from the same file while it’s still being written (like a growing file or EVS-style live playback).

    But seeking using av_seek_frame() to a specific PTS fails or behaves incorrectly until I finalize the file with av_write_trailer().

    Here is a simplified version of my writing logic:

    avformat_write_header(formatContext, nullptr);
    // ... loop ...
    avcodec_send_frame(codecContext, frame);
    avcodec_receive_packet(codecContext, &pkt);
    av_interleaved_write_frame(formatContext, &pkt);
    // NO call to av_write_trailer() yet
    

    And this is how I try to seek in the reading logic:

    av_seek_frame(inputFormatContext, videoStreamIndex, targetPts, AVSEEK_FLAG_BACKWARD);