Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • cutting multiple segments with ffmpeg [duplicate]

    13 June 2018, by B Pulsart


    So there is a 10-minute .mp4, and I have to cut it into 2-second segments every 10 seconds, then concatenate all the 2-second segments together to get a 2-minute film.

    I managed to do it with a Python loop that calls ffmpeg via subprocess, but it's slow and ugly (cut, cut, cut... and then glue).

    Is there a way to do this with a single ffmpeg command line?

    My Python code for cutting the segments:

    import subprocess
    
    ffmpeg = r'C:\ffmpeg\bin\ffmpeg.exe'  # placeholder: path to the ffmpeg binary on disk (Windows)
    film_in = 'film.mp4'                  # placeholder: input file name
    
    t_total = 10  # minutes
    t_sec = t_total * 60
    
    def c_cut(time, film_out):
        # cut a 2-second, video-only segment starting at `time` seconds
        return [ffmpeg, '-i', film_in, '-b:v', '800k',
                '-ss', str(time), '-t', '2', '-an', film_out]
    
    for tps in range(0, t_sec, 10):
        film_out = 'segment_%03d.mp4' % tps  # unique file name per segment
        subprocess.call(c_cut(tps, film_out))
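    A single-invocation alternative can be sketched with ffmpeg's select filter: keep only frames whose timestamp falls in the first 2 seconds of each 10-second window, then rewrite timestamps with setpts so the kept frames play back contiguously. This is an untested sketch; the file names and bitrate are placeholders, not taken from the question.

    ```python
    import subprocess

    ffmpeg = 'ffmpeg'            # assumed to be on PATH
    film_in = 'film.mp4'         # placeholder input name
    film_out = 'condensed.mp4'   # placeholder output name

    # select keeps frames with mod(t,10) < 2; setpts closes the resulting gaps
    vf = "select='lt(mod(t,10),2)',setpts=N/FRAME_RATE/TB"
    cmd = [ffmpeg, '-i', film_in, '-vf', vf, '-an', '-b:v', '800k', film_out]

    print(' '.join(cmd))
    # subprocess.call(cmd)  # uncomment to actually run ffmpeg
    ```

    Since audio is dropped (`-an`) as in the original script, no matching aselect/asetpts chain is needed.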
    
  • No frame decodes after upgrading from ffmpeg 1.1 to 3.3

    13 June 2018, by M.Mahdipour

    I have C++ source code that uses libavcodec to decode H264 frames from an RTSP stream. It was written against ffmpeg 1.1. Now that I have upgraded to ffmpeg 3.3, everything seems to work correctly except that frame decoding no longer works. With the old version I used avcodec_decode_video2. After upgrading, avcodec_decode_video2 always sets got_picture to 0 and returns a value equal to the size of the input packet (which means all the data was consumed), and no frame is ever decoded. I have also replaced avcodec_decode_video2 with avcodec_send_packet and avcodec_receive_frame, but avcodec_send_packet always returns 0 and avcodec_receive_frame always returns -11 (EAGAIN).

    This is the code I use for decoding:

    #include "stdafx.h"
    #include <iostream>
    #include <string>
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>
    using namespace std;
    
    extern "C"{
    #include "libavformat/avformat.h"
    #include "libavcodec/avcodec.h"
    #include "libswscale/swscale.h"
    #include "libavutil/pixfmt.h"
    }
    
    int extraDataSize;
    static const int MaxExtraDataSize = 1024;
    uint8_t extraDataBuffer[MaxExtraDataSize];
    
    void AddExtraData(uint8_t* data, int size)
    {
        auto newSize = extraDataSize + size;
        if (newSize > MaxExtraDataSize){
            throw "extradata exceeds size limit";
        }
        memcpy(extraDataBuffer + extraDataSize, data, size);
        extraDataSize = newSize;
    }
    
    int _tmain(int argc, _TCHAR* argv[])
    {
        std::string strFramesPath("g:\\frames\\");
    
        AVCodec* avCodec;
        AVCodecContext* avCodecContext;
        AVFrame* avFrame;
        AVCodecID codecId = AV_CODEC_ID_H264;
        unsigned char sprops_part_1[9] = { 0x27, 0x42, 0x80, 0x1f, 0xda, 0x02, 0xd0, 0x49, 0x10 };
        unsigned char sprops_part_2[4] = { 0x28, 0xce, 0x3c, 0x80 };
    
        av_register_all();
        avcodec_register_all();
        avCodec = avcodec_find_decoder(codecId);
        avCodecContext = avcodec_alloc_context3(avCodec);
        if (!avCodecContext)
        {
            cout << "avcodec_alloc_context3 failed." << endl;
            return 0;
        }
        uint8_t startCode[] = { 0x00, 0x00, 0x01 };
    
        // sprops
        {
            // sprops 1
            AddExtraData(startCode, sizeof(startCode));
            AddExtraData(sprops_part_1, 9);
            // sprops 2
            AddExtraData(startCode, sizeof(startCode));
            AddExtraData(sprops_part_2, 4);
    
            avCodecContext->extradata = extraDataBuffer;
            avCodecContext->extradata_size = extraDataSize;
        }
    
        AddExtraData(startCode, sizeof(startCode));
        avCodecContext->flags = 0;
        if (avcodec_open2(avCodecContext, avCodec, NULL) < 0)
        {
            cout << "failed to open codec" << endl;
            return 0;
        }
        avFrame = av_frame_alloc();
        if (!avFrame)
        {
            cout << "failed to alloc frame" << endl;
            return 0;
        }
    
        void *buffer = malloc(100 * 1024);  // 100 KB buffer - all frames fit in this buffer
        for (int nFrameIndex = 0; nFrameIndex < 257; nFrameIndex++)
        {
            std::string strFilename = std::string("g:\\frames\\" + std::to_string(nFrameIndex));
            FILE* f = fopen(strFilename.c_str(), "rb");
            fseek(f, 0, SEEK_END);
            long nFileSize = ftell(f);
            fseek(f, 0, SEEK_SET);
            size_t nReadSize = fread(buffer, 1, nFileSize, f);
            // cout << strFilename << endl;
            if (nReadSize != nFileSize)
            {
                cout << "Error reading file data" << endl;
                continue;
            }
        AVPacket avpkt;
        av_init_packet(&avpkt);  // initialize the packet's optional fields
        avpkt.data = (uint8_t*)buffer;
        avpkt.size = (int)nReadSize;
    
            while (avpkt.size > 0)
            {
                int got_frame = 0;
                auto len = avcodec_decode_video2(avCodecContext, avFrame, &got_frame, &avpkt);
                if (len < 0) {
                    //TODO: log error
                    cout << "Error decoding - error code: " << len << endl;
                    break;
                }
                if (got_frame)
                {
                    cout << "* Got 1 Decoded Frame" << endl;
                }
                avpkt.size -= len;
                avpkt.data += len;
            }
        }
    
        getchar();
        return 0;
    }
    

    Test frame data can be downloaded from this link: Frames.zip (~3.7 MB)

    I used Windows builds from Zeranoe FFmpeg. If you copy-paste this code into your IDE, it compiles successfully. With the new libavcodec versions, no frame is decoded. With an old version of libavcodec (20141216-git-92a596f), decoding starts when frame 2 is fed.

    Any ideas?

  • Getting raw h264 packets from a USB camera on a Raspberry Pi

    13 June 2018, by Aninano

    I am trying to receive H264 frames from a USB webcam connected to my Raspberry Pi.

    Using the RPi Camera Module, I can run the following command to get H264 data written to stdout with close to zero latency: raspivid -t 0 -w 640 -h 320 -fps 15 -o -

    Is there an equivalent function to do this with a USB camera? I have two USB cameras I would like to do this with.

    Using ffprobe /dev/videoX I get the following output (shortened to the important details):

    $ ffprobe /dev/video0
    ...
    Input #0, video4linux2,v4l2, from '/dev/video0':
    Duration: N/A, start: 18876.273861, bitrate: 147456 kb/s
    Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 1280x720, 147456 kb/s, 10 fps, 10 tbr, 1000k tbn, 1000k tbc
    
    $ ffprobe /dev/video1
    ...
    Input #0, video4linux2,v4l2, from '/dev/video1':
    Duration: N/A, start: 18980.783228, bitrate: 115200 kb/s
    Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 800x600, 115200 kb/s, 15 fps, 15 tbr, 1000k tbn, 1000k tbc
    
    
    $ ffprobe /dev/video2
    ...
    Input #0, video4linux2,v4l2, from '/dev/video2':
    Duration: N/A, start: 18998.984143, bitrate: N/A
    Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1920x1080, -5 kb/s, 30 fps, 30 tbr, 1000k tbn, 2000k tbc
    

    As far as I can tell, two of them are not H264 and would need to be encoded to H264, so I understand that adds a bit of latency. But the third one (video2) is H264, so I should be able to get data from it? I've tried to just pipe it out with cat, but it reports invalid arguments.

    It looks like using FFmpeg might be the only option here. I'd like to use software that is easily available on every RPi (apt install).
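    For the H264-capable camera, one hedged approach (untested here; it assumes the ffmpeg build has v4l2 input support and that the driver exposes the h264 pixel format that ffprobe reported) is to have ffmpeg read the camera's compressed stream and copy it to stdout without re-encoding:

    ```python
    import subprocess

    # Sketch: stream-copy the camera's native H264 to stdout.
    # '-input_format h264' asks the v4l2 demuxer for the compressed format;
    # '-c:v copy' avoids any transcoding, keeping latency low.
    cmd = [
        'ffmpeg',
        '-f', 'v4l2',
        '-input_format', 'h264',
        '-i', '/dev/video2',
        '-c:v', 'copy',
        '-f', 'h264',
        '-',               # write the raw H264 elementary stream to stdout
    ]

    print(' '.join(cmd))
    # proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)  # uncomment on the Pi
    ```

    Reading proc.stdout then gives roughly what raspivid's `-o -` gives for the camera module.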

    Bonus question regarding H264 packets: When I stream the data from the raspivid command to my decoder, it works perfectly. But if I drop the first 10 packets, it never initializes the decoding process and just shows a black background. Does anyone know what might be in those first packets that I could recreate in my software, so I don't have to restart the stream for every newly connected user?

    EDIT: Bonus Question Answer: After googling around, I see that the first two frames raspivid sends are the SPS and PPS. By ignoring those first two frames, my decoder won't decode properly. If I save them and send them first to every newly connected user, it works perfectly. They appear to be part of the decoder's initialization.

    0x27 = 0 01 00111 = type 7    Sequence parameter set (SPS)
    0x28 = 0 01 01000 = type 8    Picture parameter set (PPS)
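    The NAL unit type is the low five bits of the first header byte, so caching the parameter sets for late joiners can be sketched like this (a minimal illustration; it assumes each packet starts with a NAL unit header byte, which matches the two bytes shown above):

    ```python
    def nal_type(first_byte):
        # H264 NAL header: forbidden_zero_bit (1) | nal_ref_idc (2) | nal_unit_type (5)
        return first_byte & 0x1F

    # Cache SPS/PPS so they can be replayed to newly connected clients.
    cached = {}

    def on_packet(packet):
        t = nal_type(packet[0])
        if t in (7, 8):       # 7 = SPS, 8 = PPS
            cached[t] = packet
        return t

    # The two header bytes from the question:
    assert nal_type(0x27) == 7   # Sequence parameter set
    assert nal_type(0x28) == 8   # Picture parameter set
    ```

    Sending `cached[7]` then `cached[8]` before live data gives each new client what it needs to initialize its decoder.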
    
  • How to cross-compile ffmpeg for arm on Mac using clang

    13 June 2018, by CoXier

    As the title says, I want to cross-compile FFmpeg using clang. Here is part of my configure invocation:

     ./configure --cross-prefix=${TOOLCHAIN}/bin/arm-linux-androideabi-
    

    The variable TOOLCHAIN is the toolchain directory. After configuring, the output is:

     C compiler                toolchains/bin/arm-linux-androideabi-gcc
     C library                 bionic
     host C compiler           gcc
     host C library            
    

    I want to change the compiler to clang, so I export CC=${TOOLCHAIN}/bin/clang. However, configure keeps using gcc. How can I make it use the clang and clang++ compilers?

    Thanks in advance.

  • LiveSmoother: payload size: 32768 exceeds maximum allowed

    13 June 2018, by user3828398

    I'm using ffmpeg with libsrt and writing out a frame with av_interleaved_write_frame. The URL for the SRT output is:

    srt://10.10.56.45:5555?mode=listener&mss=1316&pkt_size=1316&send_buffer_size=1316&ffs=1316
    

    and I'm getting this error:

    SRT.c: LiveSmoother: payload size: 32768 exceeds maximum allowed 1316
    Operation not supported: Incorrect use of Message API (sendmsg/recvmsg)..
    

    The packet I'm trying to write is larger than 1316 bytes; is this the cause of the problem? Shouldn't av_interleaved_write_frame take care of splitting large packets?