Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • How to get webcam frames one by one but also compressed?

    29 March, by Vorac

    I need to grab frames from the webcam of a laptop, transmit them one by one, and have the receiving side stitch them into a video. I picked ffmpeg-python as the wrapper of choice, and the example from the docs works right away:

    #!/usr/bin/env python
    
    # In this file: reading frames one by one from the webcam.
    
    
    import ffmpeg
    
    width = 640
    height = 480
    
    
    reader = (
        ffmpeg
        .input('/dev/video0', s='{}x{}'.format(width, height))
        .output('pipe:', format='rawvideo', pix_fmt='yuv420p')
        .run_async(pipe_stdout=True)
    )
    
    # This is here only to test the reader.
    writer = (
        ffmpeg
        .input('pipe:', format='rawvideo', pix_fmt='yuv420p', s='{}x{}'.format(width, height))
        .output('/tmp/test.mp4', format='h264', pix_fmt='yuv420p')
        .overwrite_output()
        .run_async(pipe_stdin=True)
    )
    
    
    while True:
        chunk = reader.stdout.read(width * height * 3 // 2)  # one yuv420p frame (1.5 bytes per pixel; read() needs an int)
        if not chunk:
            break
        print(len(chunk))
        writer.stdin.write(chunk)
    

    Now for the compression part.

    My reading of the docs is that the input to the reader perhaps needs to be rawvideo, but nothing else does. I tried replacing rawvideo with h264 in my code, but that resulted in empty frames. I'm considering a third invocation looking like this, but is that really the correct approach?

    encoder = (                                                                     
        ffmpeg                                                                      
        .input('pipe:', format='rawvideo', pix_fmt='yuv420p', s='{}x{}'.format(width, height))
        .output('pipe:', format='h264', pix_fmt='yuv420p')                          
        .run_async(pipe_stdin=True, pipe_stdout=True)
    )

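    One caveat with that approach: the encoder's stdout carries an H.264 elementary stream, so reads return arbitrary-sized chunks rather than whole frames, and feeding raw frames in while reading compressed bytes out of the same process from a single thread can deadlock once a pipe buffer fills. A minimal sketch of wiring the reader into such an encoder (the chunk size and the transmit step are illustrative assumptions, not a definitive implementation):

    import threading

    frame_size = width * height * 3 // 2  # one yuv420p frame

    def feed_encoder():
        # Pump raw webcam frames into the encoder's stdin on a worker thread.
        while True:
            raw = reader.stdout.read(frame_size)
            if len(raw) < frame_size:
                break
            encoder.stdin.write(raw)
        encoder.stdin.close()  # EOF lets the encoder flush its last frames

    threading.Thread(target=feed_encoder, daemon=True).start()

    # Compressed bytes arrive with no frame alignment; send them as opaque
    # chunks and let the receiving side's decoder find the NAL boundaries.
    while True:
        chunk = encoder.stdout.read(4096)
        if not chunk:
            break
        # transmit(chunk)  # hypothetical send function
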
  • Convert a HEIF file to PNG/JPG using ffmpeg

    28 March, by Ajitesh Singh

    The use case is very straightforward. ImageMagick is able to do the conversion, but I want to do it with ffmpeg. Here are all the commands I have tried, and all of them give a "moov atom not found" error.

    ffmpeg -i /Users/ajitesh/Downloads/sample1.heif -c:v png -pix_fmt rgb48 /Users/ajitesh/Downloads/sample.png
    

    Output

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f85aa813200] moov atom not found
    /Users/ajitesh/Downloads/sample1.heif: Invalid data found when processing input
    

    It seems the moov atom is actually not present; I checked by extracting the location of the moov atom with the following command:

    ffmpeg -v trace -i /Users/ajitesh/Downloads/sample1.heif 2>&1 | grep -e type:\'mdat\' -e type:\'moov\'
    

    Output

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f824c00f000] type:'mdat' parent:'root' sz: 2503083 420 2503495
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f824c00f000] type:'mdat' parent:'root' sz: 2503083 420 2503495
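
    For what it's worth, HEIF stills store their image structure in a top-level 'meta' box rather than a 'moov' atom, which would explain why the demuxer finds only 'mdat' in an otherwise valid file. The same trace trick, adapted to look for that box, can confirm this (a hypothetical variant of the command above):

    ffmpeg -v trace -i /Users/ajitesh/Downloads/sample1.heif 2>&1 | grep -e type:\'meta\'
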
    
  • Including FFmpeg.framework Into My iOS App

    28 March, by Alpi

    I'm trying to manually integrate ffmpegkit.framework into my Expo Bare Workflow iOS app (built with React Native + native modules via Xcode), because ffmpegkit is being deprecated and its binaries will be deleted.

    So far:

    • I've downloaded the latest LTS release of FFmpegKit from here.
    • I've created 3 files: FFmpegModule.m, FFmpegModule.swift and SoundBud-Bridging-Header.
    • Added the frameworks to my projectDir/ios manually, which show up in Xcode under projectDir/Frameworks.
    • Added all the frameworks into "Frameworks, Libraries and Embedded Content" and set them to "Embed and Sign".
    • As the Framework Search Path in the project settings, I've set "$(PROJECT_DIR)", recursive.
    • In "Build Phases" I've added all the frameworks under "Embed Frameworks", set the destination to "Frameworks", checked "Code Sign on Copy" for all of them, and unchecked "Copy Only When Installing".
    • Also, under "Link Binary With Libraries" I've added all the frameworks and marked them "Required".

    Here are the errors I'm getting:

    • The framework is not recognized by Swift (No such module 'ffmpegkit')
    • A build cycle error: Cycle inside SoundBud; building could produce unreliable results. Target 'SoundBud' has copy command from '.../Frameworks/ffmpegkit.framework' ...

    Below you can see my Swift file and the ffmpegkit module map. Swift:

    import Foundation
    import ffmpegkit
    import React

    @objc(FFmpegModule)
    class FFmpegModule: NSObject, RCTBridgeModule {

        static func moduleName() -> String {
            return "FFmpegModule"
        }

        // Runs an ffmpeg command string and resolves the promise with its return code.
        @objc
        func runCommand(_ command: String, resolver resolve: @escaping RCTPromiseResolveBlock,
                        rejecter reject: @escaping RCTPromiseRejectBlock) {
            FFmpegKit.executeAsync(command) { session in
                let returnCode = session?.getReturnCode()
                resolve(returnCode?.getValue())
            }
        }

        @objc
        static func requiresMainQueueSetup() -> Bool {
            return false
        }
    }
    

    and the module map:

    framework module ffmpegkit {
    
    header "AbstractSession.h"
    header "ArchDetect.h"
    header "AtomicLong.h"
    header "Chapter.h"
    header "FFmpegKit.h"
    header "FFmpegKitConfig.h"
    header "FFmpegSession.h"
    header "FFmpegSessionCompleteCallback.h"
    header "FFprobeKit.h"
    header "FFprobeSession.h"
    header "FFprobeSessionCompleteCallback.h"
    header "Level.h"
    header "Log.h"
    header "LogCallback.h"
    header "LogRedirectionStrategy.h"
    header "MediaInformation.h"
    header "MediaInformationJsonParser.h"
    header "MediaInformationSession.h"
    header "MediaInformationSessionCompleteCallback.h"
    header "Packages.h"
    header "ReturnCode.h"
    header "Session.h"
    header "SessionState.h"
    header "Statistics.h"
    header "StatisticsCallback.h"
    header "StreamInformation.h"
    header "ffmpegkit_exception.h"
    
    export *
    }
    

    I can provide more info if you need it. I've been trying non-stop for 7 days and it's driving me crazy. I would greatly appreciate any help.
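
    The FFmpegModule.m mentioned above is not shown; with the standard RCT_EXTERN_MODULE pattern it would typically be a small Objective-C bridge along these lines (a sketch assuming the Swift signature above, not the asker's actual file):

    // FFmpegModule.m: exports the Swift class to React Native.
    #import <React/RCTBridgeModule.h>

    @interface RCT_EXTERN_MODULE(FFmpegModule, NSObject)

    // Mirrors the Swift signature runCommand(_:resolver:rejecter:).
    RCT_EXTERN_METHOD(runCommand:(NSString *)command
                      resolver:(RCTPromiseResolveBlock)resolve
                      rejecter:(RCTPromiseRejectBlock)reject)

    @end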

  • How to add info to a video and share it with others in React Native

    28 March, by Sanjay Kalal

    I am looking for a solution in React Native where I can pick a video from the gallery, add info or an overlay to it, and then share the video with the added info. I tried using ffmpeg, but it is not working properly.

    I need a proper solution in which I can pick the video, add info to it, and share it.
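
    One common route, sketched here with ffmpeg's drawtext filter (the file names and overlay text are placeholders, and some ffmpeg builds also need a fontfile= option), is to burn the info into the video and then hand the output file to the platform share sheet:

    ffmpeg -i input.mp4 -vf "drawtext=text='My info':x=10:y=10:fontsize=24:fontcolor=white" -c:a copy output.mp4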

  • Incorrect length of video produced with ffmpeg libraries [closed]

    28 March, by ivan.ukr

    I'm writing a C program that takes a series of PNG images and converts them into a video. The video consists of an initial black screen followed by each of those images, each shown for the same constant amount of time, say 200 ms. I'm using libx264 as the codec and mp4 as the output format. I'm compiling my program with GCC 12 on Ubuntu 22.04 LTS, using the ffmpeg version from the Ubuntu repositories. To achieve the above behavior I've set the time base to 1/5 in both the stream and the codec.

    // assume imageDuration = 200
    AVRational timeBase;
    av_reduce(&timeBase.num, &timeBase.den, imageDuration, 1000, INT_MAX);
    
    const AVCodec *c = avcodec_find_encoder(codecId);
    AVStream *s = avformat_new_stream(fc, c);
    s->time_base = timeBase;
    s->nb_frames = numImages + 1; // initial black screen + images
    s->duration = numImages + 1;
    
    AVCodecContext *cc = avcodec_alloc_context3(c);
    cc->width = width;
    cc->height = height;
    cc->pix_fmt = pixelFormat;
    cc->time_base = timeBase;
    
    // ffmpeg headers suggest: Set to time_base ticks per frame.
    // Default 1, e.g., H.264/MPEG-2 set it to 2.
    cc->ticks_per_frame = 2;
    
    cc->framerate = av_inv_q(timeBase);
    if (fc->oformat->flags & AVFMT_GLOBALHEADER) {
        cc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }
    
    

    Then I'm encoding 11 frames.
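
    (As a sanity check on the time-base arithmetic: with imageDuration = 200, av_reduce reduces 200/1000 to 1/5, so one tick is 200 ms and cc->framerate comes out as 5/1. The standalone snippet below, using only libavutil, confirms this.)

    #include <stdio.h>
    #include <limits.h>
    #include <libavutil/rational.h>

    int main(void) {
        AVRational tb;
        av_reduce(&tb.num, &tb.den, 200, 1000, INT_MAX); /* 200/1000 -> 1/5 */
        AVRational fr = av_inv_q(tb);                    /* 5/1, i.e. 5 fps */
        printf("time_base=%d/%d framerate=%d/%d\n", tb.num, tb.den, fr.num, fr.den);
        return 0;
    }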

    Finally, I get a video with the following characteristics:

    $ ffprobe v.mp4
    
    ....
    
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'v.mp4':
      Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2avc1mp41
        encoder         : Lavf58.76.100
      Duration: 00:00:00.01, start: 0.000000, bitrate: 68414 kb/s
      Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 640x360, 69073 kb/s, 10333.94 fps, 10240 tbr, 10240 tbn, 10 tbc (default)
        Metadata:
          handler_name    : VideoHandler
          vendor_id       : [0][0][0][0]
    
    

    Please pay attention to:

    Duration: 00:00:00.01
    

    and

    10333.94 fps
    

    That is definitely not what I expected, which was a 2.2 s video at 5 fps.

    Note: the content of the video is correct; this can be verified by stepping through the generated file frame by frame in a program like Avidemux. But the video length and frame rate are incorrect.

    Please advise: how can I fix this?
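
    The encoding loop is not shown, but symptoms like this usually come from frame timestamps rather than from the container setup: with a 1/5 time base, each AVFrame's pts has to advance by exactly one tick (200 ms), and each packet has to be rescaled from the codec time base to the stream time base before muxing, because the mp4 muxer rewrites tbn. A sketch of that write-out path, under those assumptions (fc, s and cc as configured above; frameIndex and the function name are illustrative):

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Before sending image i to the encoder: one pts tick == 200 ms. */
    /* frame->pts = frameIndex; */

    static int write_packets(AVFormatContext *fc, AVStream *s,
                             AVCodecContext *cc, AVFrame *frame, AVPacket *pkt)
    {
        int ret = avcodec_send_frame(cc, frame); /* frame == NULL flushes */
        if (ret < 0)
            return ret;
        while ((ret = avcodec_receive_packet(cc, pkt)) >= 0) {
            /* Timestamps come out in cc->time_base (1/5); rescale them to
             * the muxer's stream time base or durations collapse. */
            av_packet_rescale_ts(pkt, cc->time_base, s->time_base);
            pkt->stream_index = s->index;
            ret = av_interleaved_write_frame(fc, pkt);
            if (ret < 0)
                return ret;
        }
        return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
    }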