Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • Build ffmpeg in ijkplayer

    13 June 2016, by Saty

    I am trying to build ffmpeg inside ijkplayer, following this link for a manual install: https://github.com/Bilibili/ijkplayer. I downloaded the NDK, set up its path and the path for the SDKs, and then started compiling ffmpeg.

    Previously the instructions said to use NDK r9 or later, and I found out from a post that I should switch from compile-ffmpeg.sh to do-compile-ffmpeg.sh. I changed the file for NDK r11, and now it shows the output below telling me to specify the architecture. Where do I specify that architecture?

    ./do-compile-ffmpeg.sh
     ====================
     [*] check env 
     ====================
     build on Darwin x86_64
     ANDROID_SDK=/username/Library/Android/sdk
     ANDROID_NDK=/Users/username/Documents/android-ndk-r11c
     You must specific an architecture 'arm, armv7a, x86, ...'.
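
    If I read the build scripts right, do-compile-ffmpeg.sh expects the target architecture as its first argument (compile-ffmpeg.sh normally passes one in for you), so a hedged guess at the fix is to invoke it with one of the listed values, or go back through the wrapper script:

    ./do-compile-ffmpeg.sh armv7a
    # or build all supported ABIs via the wrapper
    ./compile-ffmpeg.sh all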
    
  • Have a one-liner ffmpeg script to process an audio file, would love to run it on AWS Lambda

    13 June 2016, by Edward Potter

    OK, wow, I got Lambda to work processing my ImageMagick script. I would love to have the same working with ffmpeg. It's a one-liner; I just don't see an easy way to do it. I can now upload anything to an S3 bucket.

    Mounting S3 on EC2 seems possible: do the processing locally, then move the result to S3. But the Lambda option seems far more interesting. I am wondering if anyone has insights (or references, suggestions) on how to implement that. Thanks.

    The script:

    ffmpeg -i $i -b:a 48k -ar 16000 -f mp3 "$i"
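
    As written, the one-liner reads and writes the same file, which ffmpeg cannot do in place, so the sketch below writes to a separate output. It also assumes, purely for illustration, that the audio files have already been copied to local storage (e.g. with aws s3 cp), that they carry a .wav extension, and that the bucket path is a placeholder; on Lambda the usual route is to bundle a static ffmpeg binary with the function and shell out to a command like this:

    # convert every local .wav to a 48 kbit/s, 16 kHz MP3 with a distinct output name
    for i in *.wav; do
        ffmpeg -i "$i" -b:a 48k -ar 16000 -f mp3 "${i%.*}.mp3"
    done
    # push the results back to S3 (bucket name is hypothetical)
    aws s3 cp . s3://my-bucket/processed/ --recursive --exclude "*" --include "*.mp3"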
    
  • RTMP ffmpeg stream after OpenCV processing in C++

    13 June 2016, by Javier Cabrera Arteaga

    I want to capture video from a live stream, process the images with OpenCV, and repackage the result into an RTMP live stream together with the original audio. The first step is done; I have the OpenCV-processed image, but when I send it to the output live stream and open it with a video player (e.g. VLC), nothing is shown. Here is my code. Thanks in advance.

    #include <iostream>
    #include <string>
    #include <vector>
    #include <opencv2/opencv.hpp>
    
    extern "C" {
    //Library containing decoders and encoders for audio/video codecs.
        #include <libavcodec/avcodec.h>
        #include <libavutil/common.h>
        #include <libavutil/avassert.h>
        #include <libavutil/channel_layout.h>
        #include <libavutil/opt.h>
        #include <libavutil/mathematics.h>
        #include <libavutil/timestamp.h>
        #include <libavformat/avformat.h>
        #include <libswscale/swscale.h>
        #include <libswresample/swresample.h>
        #include <libavutil/imgutils.h>
        //Library performing highly optimized image scaling and color space/pixel format conversion operations.
    }
    using namespace std;
    
    struct openCVFrameContext{
        cv::Mat cvFrameRGB;
        bool errorStatus;
        bool isEmpty;
    };
    
    char errorBuffer[80];
    
    class Capture_FFMPEG{
    public:
        Capture_FFMPEG(){init();}
    
        ~Capture_FFMPEG(){close();}
    
        virtual bool openVideoFile(const char* filename);
        virtual openCVFrameContext queryFrame(AVFrame **dstAudio);
    
        int videoStream;
        int audioStream;
        int currentStream;
        int frameFinished;
    
        //Video
        AVFormatContext *pFormatContext;
        AVCodecContext *pCodecContext;
        AVCodec *pVCodec;
        AVFrame *pFrame;
        AVFrame *pFrameBGR;
        //Video
    
        //Audio
        AVCodecContext *pACodecContext;
        AVCodec *pACodec;
        AVFrame *pAFrame;
        //Audio
    
    
        uint8_t *bufferBGR;
        AVPacket pVPacket;
    
        openCVFrameContext cvFrameContext;
        struct SwsContext *pVImgConvertCtx;
    
    protected:
        virtual void init();
        virtual void close();
    
    };
    
    //function to initialize all protected variables
    void Capture_FFMPEG::init(){
        videoStream = -1;
        frameFinished = 0;
        audioStream = -1;
        currentStream = 0;
    }
    
    //Function to destroy all protected variables
    
    void Capture_FFMPEG::close() {
        if(pFrame) av_free(pFrame);
        if(pFrameBGR) av_free(pFrameBGR);
        av_packet_unref(&pVPacket);   // release any data still referenced by the packet
        //if(pVImgConvertCtx) sws_frpeeContext(pVImgConvertCtx);
        if(pFormatContext) avformat_close_input(&pFormatContext);
    //    if(pCodecContext) avcodec_close(pCodecContext);
    }
    
    bool Capture_FFMPEG::openVideoFile(const char *filename) {
        bool statusError = false;
    
        if(avformat_open_input(&pFormatContext, filename,NULL, NULL) != 0){
            cout << "Error opening video file";
            statusError = true;
        }
    
        if(avformat_find_stream_info(pFormatContext, NULL) < 0){
            cout << "Error loading video information";
            statusError = true;
        }
    
        av_dump_format(pFormatContext,0,filename, 0);
    
        videoStream = -1;
    
        audioStream = -1;
        //Getting only video channel
    
        for(int i = 0; i < pFormatContext->nb_streams; i++){
            if(pFormatContext->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
                videoStream = i;
            }
            if(pFormatContext->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
                audioStream = i;
        }
    
        if(videoStream < 0){
            cout << "Error getting video stream index" << endl;
        }
    
        if(audioStream < 0){
            cout << "Error getting audio stream idex" << endl;
        }
    
        // Check video stream is > 0
    
    
        pCodecContext = pFormatContext->streams[videoStream]->codec;
    
        pVCodec = avcodec_find_decoder(pCodecContext->codec_id);
    
        cout << "Open video decoder " << pVCodec->name << endl;
    
        // checking opening codec
    
        if(avcodec_open2(pCodecContext,pVCodec, NULL) < 0){
            cout << "Error opening video codec" << endl;
            statusError = true;
        }
    
        pFrame = av_frame_alloc();
        pFrameBGR = av_frame_alloc();
    
        int numBytes = av_image_get_buffer_size(AV_PIX_FMT_BGR24, pCodecContext->width, pCodecContext->height, 24);
    
        cout << numBytes;
        bufferBGR = (uint8_t *)av_malloc(numBytes* sizeof(uint8_t));
    //    av_image_alloc(pFrameBGR->data, pFrameBGR->linesize, pCodecContext->width, pCodecContext->height,
    //                   AV_PIX_FMT_BGR24,32);
    
        int ret = av_image_fill_arrays(pFrameBGR->data, pFrameBGR->linesize, bufferBGR,AV_PIX_FMT_BGR24,
        pCodecContext->width, pCodecContext->height, 24);
    
        cout << ret << endl;
        if(ret < 0){
            av_strerror(ret, errorBuffer, 80);
            cout << "Could not fill image "<< errorBuffer;
        }
    
        cvFrameContext.cvFrameRGB.create(pCodecContext->height,pCodecContext->width, CV_8UC(3));
    
        //audio
    
        pACodecContext = pFormatContext->streams[audioStream]->codec;
        pACodec = avcodec_find_decoder(pACodecContext->codec_id);
    
    
        avcodec_open2(pACodecContext, pACodec, NULL);
    
        cout << "Audio decoder " << pACodec->name << endl;
        pAFrame = av_frame_alloc();
    
        return statusError;
    }
    
    
    
    openCVFrameContext Capture_FFMPEG::queryFrame(AVFrame **audio_dst) {
    
        if(av_read_frame(pFormatContext, &pVPacket) < 0){
            cout << "Error Could not read frame" << endl;
            return cvFrameContext;
        }
    
        currentStream = pVPacket.stream_index;
    
        if(pVPacket.stream_index == videoStream){
    
            if(avcodec_decode_video2(pCodecContext,pFrame, &frameFinished, &pVPacket) < 0){
                cout << "Error could not decode video" << endl;
            }
    
            if(frameFinished){
    
                if(pVImgConvertCtx == NULL){
                    pVImgConvertCtx = sws_getContext(pCodecContext->width, pCodecContext->height,
                    pCodecContext->pix_fmt, pCodecContext->width, pCodecContext->height, AV_PIX_FMT_BGR24, SWS_BICUBIC,
                            NULL,NULL, NULL);
                }
    
    
    //            int ret = av_frame_make_writable(pFrameBGR);
    //            if(ret < 0) {
    //                av_strerror(ret, errorBuffer, 80);
    //                cout << "Could not write frame" << errorBuffer << endl;
    //            }
    
                sws_scale(pVImgConvertCtx,(const uint8_t* const *) pFrame->data,pFrame->linesize,
                          0, pCodecContext->height, pFrameBGR->data,pFrameBGR->linesize );
    
                //Populate opencv matrix
                for(int y = 0; y < pCodecContext->height; y++){
                    for(int x = 0; x < pCodecContext->width; x++){
                        cvFrameContext.cvFrameRGB.at<cv::Vec3b>(y, x)[0] = pFrameBGR->data[0][y*pFrameBGR->linesize[0] +x*3 + 0];
                        cvFrameContext.cvFrameRGB.at<cv::Vec3b>(y, x)[1] = pFrameBGR->data[0][y*pFrameBGR->linesize[0] +x*3 + 1];
                        cvFrameContext.cvFrameRGB.at<cv::Vec3b>(y, x)[2] = pFrameBGR->data[0][y*pFrameBGR->linesize[0] +x*3 + 2];
                    }
                }
    
            }
    
        }
    
        *audio_dst = NULL;
    
        if(pVPacket.stream_index == audioStream){
    
            int ret = avcodec_decode_audio4(pACodecContext, pAFrame, &frameFinished, &pVPacket);
            if(ret < 0){
                av_strerror(ret, errorBuffer,80);
    
                cout << "Could not decode audio " << errorBuffer << endl;
            }
    
            *audio_dst = pAFrame;
        }
    
    
        return cvFrameContext;
    }
    
    
    int main() {
    
    
        av_register_all();
        avformat_network_init();
    
        Capture_FFMPEG *capture = new Capture_FFMPEG;
    
    
        openCVFrameContext frame;
        frame.errorStatus = false;
    
        string fname = "/var/www/html/stream/test2.ts";
        //string fname = "rtmp://127.0.0.1:1935/live/got.ts";
        //string fname = "/home/javier/PycharmProjects/unmask/output.mpg";
        frame.errorStatus = capture->openVideoFile(fname.c_str());
        //frame.errorStatus = capture->openVideoFile("http://localhost/stream/out1.ts");
    
    //    cv::namedWindow("test",  cv::WINDOW_NORMAL);
    
        AVFormatContext* outfc = NULL;
        AVIOContext * avioCTX;
    
        outfc = avformat_alloc_context();
    
        int ret2 = avformat_alloc_output_context2(&outfc, NULL, "mpegts", "rtmp://127.0.0.1:1935/live/test");
         //int ret2 = avformat_alloc_output_context2(&outfc, NULL, NULL, "/home/javier/Videos/test.mpg");
    
    
        if(ret2 < 0){
            av_strerror(ret2, errorBuffer, 80);
            cout << "Could not open video to encode output " << errorBuffer << endl;
        }
    
        AVCodec* outCodec = avcodec_find_encoder(AV_CODEC_ID_RAWVIDEO);
    
        if(!outCodec){
            cout << "Could not find coder" << endl;
        }
    
        AVStream* str = avformat_new_stream(outfc, outCodec);
    
        avcodec_get_context_defaults3(str->codec, outCodec);
        str->codec->width = capture->pCodecContext->width;
        str->codec->height = capture->pCodecContext->height;
        str->codec->pix_fmt = capture->pCodecContext->pix_fmt;
        str->time_base = capture->pCodecContext->time_base;
        str->codec->time_base = str->time_base;
        str->codec->framerate = capture->pCodecContext->framerate;
        str->codec->bit_rate = capture->pCodecContext->bit_rate;
        str->codec->gop_size = capture->pCodecContext->gop_size;
        str->codec->has_b_frames = capture->pCodecContext->has_b_frames;
    
    
        avcodec_open2(str->codec, outCodec, NULL);
    
        AVCodec* audioCodec = avcodec_find_encoder(outfc->oformat->audio_codec);
        AVStream* audioStream = avformat_new_stream(outfc, audioCodec);
    
        avcodec_get_context_defaults3(audioStream->codec, audioCodec);
        audioStream->codec->sample_fmt = capture->pACodecContext->sample_fmt;
        audioStream->codec->bit_rate = capture->pACodecContext->bit_rate;
        audioStream->codec->sample_rate = capture->pACodecContext->sample_rate;
    
        audioStream->codec->channel_layout = AV_CH_LAYOUT_STEREO;
        audioStream->codec->channels = av_get_channel_layout_nb_channels(audioStream->codec->channel_layout);
        audioStream->time_base = (AVRational){1, audioStream->codec->sample_rate};
    
        avcodec_open2(audioStream->codec, audioCodec, NULL);
    
    
        av_dump_format(outfc,0, "rtmp://127.0.0.1:1935/live/test", true);
        av_dump_format(outfc,1, "rtmp://127.0.0.1:1935/live/test", true);
    
    
        ret2 = avio_open2(&outfc->pb, "rtmp://127.0.0.1:1935/live/test", AVIO_FLAG_WRITE, NULL, NULL);
        cout << ret2 << endl;
        int ret = 0;
    
        SwsContext* swsctx = sws_getCachedContext(
                NULL, capture->pCodecContext->width, capture->pCodecContext->height, AV_PIX_FMT_BGR24,
                str->codec->width, str->codec->height, str->codec->pix_fmt, SWS_BICUBIC, NULL, NULL, NULL);
    
    
        AVFrame* outFrame = av_frame_alloc();
    //    av_frame_get_buffer(outFrame, 32);
        std::vector<uint8_t> framebuf((unsigned long)av_image_get_buffer_size(str->codec->pix_fmt, str->codec->width, str->codec->height, 24));
    
        ret = av_image_fill_arrays(outFrame->data, outFrame->linesize, framebuf.data(), str->codec->pix_fmt, capture->pCodecContext->width,
                       capture->pCodecContext->height, 12);
    
        cout <<  ret << endl;
        if(ret < 0){
            av_strerror(ret, errorBuffer, 80);
            cout << "Could not fill image data empty for frame " << errorBuffer << endl;
        }
    
        outFrame->width = capture->pCodecContext->width;
        outFrame->height = capture->pCodecContext->height;
        outFrame->format = str->codec->pix_fmt;
    
    
    
    //    AVFrame* audioOutFrame = avcodec_alloc_frame();
    
        int r = avformat_write_header(outfc, NULL);
    
        if(r < 0){
            av_strerror(r, errorBuffer, 80);
            cout << "Could not write header "<< errorBuffer << endl;
            exit(1);
        }
    
        cv::Mat gray;
        cv::Mat msk;
        cv::Mat copy;
        cv::Mat zeros;
        cv::Mat inp;
    
        vector<vector<cv::Point> > contours;
        vector<cv::Rect> rectangles;
    
        int got;
        int got_audio;
        int frame_pts = 0;
        int delay = 1;
        int dst_nb_samples;
    
        AVFrame *audioFrame;
    
        while(1) {
    
            frame_pts++;
            cout << frame_pts << endl;
    
            frame = capture->queryFrame(&audioFrame);
    
            if(capture->currentStream == capture->videoStream && capture->frameFinished)
            {
                cv::cvtColor(frame.cvFrameRGB, gray, cv::COLOR_RGB2GRAY);
    
                const int stride[] = {static_cast<int>(frame.cvFrameRGB.step[0])};
                ret = sws_scale(swsctx, &frame.cvFrameRGB.data, stride,
                 0, frame.cvFrameRGB.rows, outFrame->data, outFrame->linesize);
                if(ret < 0){
                    av_strerror(ret, errorBuffer, 80);
                    cout << "Could not scale "<< errorBuffer << endl;
                }
    
                outFrame->pts = capture->pVPacket.pts;
    
                AVPacket outPck = {0};
    
                av_init_packet(&outPck);
    
                ret = avcodec_encode_video2(str->codec, &outPck, outFrame, &got);
    
                if (ret < 0) {
    
                    av_strerror(ret, errorBuffer, 80);
                    cout << "Error encoding frame " << errorBuffer << endl;
                }
    
                av_packet_rescale_ts(&outPck,capture->pCodecContext->time_base, str->time_base);
    
                if (got) {
                    outPck.stream_index = str->index;
    //                av_interleaved_write_frame(outfc, &outPck);
                    av_write_frame(outfc, &outPck);
                }
                av_packet_free_side_data(&outPck);
            }
            else{
                  AVPacket audioPckt = {0};
    //
                av_packet_ref(&audioPckt, &capture->pVPacket);
                audioPckt.stream_index = 1;
                av_write_frame(outfc, &audioPckt);
    //            av_interleaved_write_frame(outfc, &audioPckt);
    
    
                av_packet_free_side_data(&audioPckt);
    //
            }
        }
    
        av_write_trailer(outfc);
    
        delete capture;
    
        return 0;
    }
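
    One observation independent of the capture code: FFmpeg's rtmp:// protocol expects FLV-packaged data, and FLV has no entry for AV_CODEC_ID_RAWVIDEO, so encoding rawvideo and muxing it as mpegts over an RTMP URL is unlikely to give VLC anything it can display; an H.264 encoder plus the flv muxer is the usual combination. A command-line sanity check of the RTMP endpoint, reusing the input path and URL that appear in the code (the codec choices here are only an assumption for the test):

    # restream the test file as H.264/AAC in an FLV container, then open
    # rtmp://127.0.0.1:1935/live/test in VLC to confirm the server side works
    ffmpeg -re -i /var/www/html/stream/test2.ts -c:v libx264 -preset veryfast -c:a aac -f flv rtmp://127.0.0.1:1935/live/test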
    
  • Add different animations for different frames in a video using ffmpeg on Android

    13 June 2016, by Sachin Suthar

    I am trying to apply animations to particular video frames, but I am not seeing the video frame to which I have to apply the animation. I only need the frames to fade in and fade out so they display as a slideshow video. Do you have any idea about this?

    I am using the same command that was given at this URL, but on the particular 5-second video the frame is hidden and I am not getting the slideshow effect.
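
    For reference, a minimal sketch of fading a single clip in and out with ffmpeg's fade filter; the file names are placeholders and the 5-second duration is taken from the question:

    # fade in over the first second, fade out over the last second of a 5-second clip
    ffmpeg -i input.mp4 -vf "fade=t=in:st=0:d=1,fade=t=out:st=4:d=1" -c:a copy output.mp4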

  • FFmpeg command for crossfading between 2 videos and merging a last video without fading is not working

    13 June 2016, by Harsh Bhavsar

    Hi, I am new to FFmpeg. I am using this library.

    I have already joined 2 videos with this command:

    String[] complexCommand = {"ffmpeg","-y","-i","/sdcard/cut_output.mp4",
                        "-i","/sdcard/harsh1.mp4","-strict","experimental",
                        "-filter_complex",
                        "[0:v]scale=640x480,setsar=1:1[v0];[1:v]scale=640x480,setsar=1:1[v1];[v0][0:a][v1][1:a] concat=n=2:v=1:a=1",
                        "-ab","48000","-ac","2","-ar","22050","-s","640x480","-r","30","-vcodec","mpeg4","-b","2097k","/sdcard/merged.mp4"};
    

    Now I need a command that first joins the 2 videos with a crossfade effect and then joins the third video directly, without any fade. I need that command. Help me out please... Thanks.
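
    A sketch of one way to do this with the xfade and acrossfade filters. It assumes an FFmpeg build new enough to include xfade (4.3+), which the bundled Android library may not provide, and that all three clips share the same audio sample rate and channel layout; /sdcard/third.mp4 is a placeholder, and offset must be the first clip's duration minus the fade duration (4 is only an example for a 5-second first clip):

    ffmpeg -y -i /sdcard/cut_output.mp4 -i /sdcard/harsh1.mp4 -i /sdcard/third.mp4 \
      -filter_complex "[0:v]scale=640x480,setsar=1,fps=30,format=yuv420p[v0];[1:v]scale=640x480,setsar=1,fps=30,format=yuv420p[v1];[2:v]scale=640x480,setsar=1,fps=30,format=yuv420p[v2];[v0][v1]xfade=transition=fade:duration=1:offset=4[vx];[0:a][1:a]acrossfade=d=1[ax];[vx][v2]concat=n=2:v=1:a=0[v];[ax][2:a]concat=n=2:v=0:a=1[a]" \
      -map "[v]" -map "[a]" -r 30 -ac 2 -ar 22050 -vcodec mpeg4 -b:v 2097k /sdcard/merged.mp4

    The first two videos are crossfaded over one second, and the third is simply concatenated after the crossfaded pair without any fade.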