Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • Displaying 450 image files from SDCard at 30fps on android

    11 December 2013, by nikhilkerala

    I am trying to develop an app that takes 15 seconds of video, allows the user to apply different filters, shows a preview of the effect, and then lets the user save the processed video to the SD card. I use ffmpeg to split the video into JPEG frames, apply the desired filter to all the frames using GPUImage, then use ffmpeg to encode the frames back into a video. Everything works fine except the part where the user selects the filter. When the user selects a filter, the app is supposed to display a preview of the video with the filter applied. Though the 450 frames get the filter applied fairly quickly, displaying the images sequentially at 30 fps (to make the user feel the video is being played) performs poorly. I tried different approaches, but the maximum frame rate I could attain, even on the fastest devices, is 10 to 12 fps.

    The AnimationDrawable technique doesn't work in this case because it requires all the images to be buffered into memory at once, which here is huge, and the app crashes.

    The code below is the best-performing one so far (10 to 12 fps).

    package com.example.animseqvideo;

    import java.io.File;

    import android.app.Activity;
    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import android.os.Bundle;
    import android.os.Environment;
    import android.os.Handler;
    import android.widget.ImageView;
    
    public class MainActivity extends Activity {
        Handler handler;
        Runnable runnable;
        final int interval = 33; // 30.30 FPS
        ImageView myImage;
        int i=0;
    
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main);
    
            myImage = (ImageView) findViewById(R.id.imageView1);
    
            handler = new Handler();
            runnable = new Runnable(){
                public void run() {
                    // Re-post first (the "SOLUTION EDIT" from the question), so that
                    // file I/O and decoding time is not added on top of the 33 ms period.
                    handler.postDelayed(runnable, interval);

                    i++;
                    if (i > 450) i = 1;

                    File imgFile = new File(Environment.getExternalStorageDirectory().getPath()
                            + "/com.example.animseqvideo/image" + String.format("%03d", i) + ".jpg");
                    if (imgFile.exists()) {
                        Bitmap myBitmap = BitmapFactory.decodeFile(imgFile.getAbsolutePath());
                        myImage.setImageBitmap(myBitmap);
                    }
                }
            };
            handler.postDelayed(runnable, interval);
        }
    }
    

    I understand that the process of getting an image from the SD card, decoding it, and displaying it on the screen involves the SD card's read performance, the device's CPU performance, and its graphics performance. But I am wondering if there is a way I could save a few milliseconds in each iteration. Any suggestion would be of great help at this point.
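    One common way to recover those milliseconds is decode-ahead double buffering: a background thread reads and decodes frame i+1 while frame i is on screen, so the per-tick budget is spent only on drawing. A minimal C sketch of the pattern (the decode step is simulated; all names and sizes here are illustrative, not from the question — on Android the same structure applies with a worker thread decoding into a spare Bitmap):

```c
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define NFRAMES 10   /* simulated frame count (450 in the question) */
#define SLOTS   2    /* double buffer: decode one slot while showing the other */

static char slots[SLOTS][32];
static int  filled[SLOTS];
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

/* Simulated "read from SD card and decode" -- the expensive step. */
static void decode_frame(int i, char *dst) {
    snprintf(dst, sizeof(slots[0]), "frame%03d", i);
}

static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < NFRAMES; i++) {
        int s = i % SLOTS;
        pthread_mutex_lock(&lock);
        while (filled[s])                     /* wait for display to free the slot */
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
        decode_frame(i, slots[s]);            /* decode outside the lock */
        pthread_mutex_lock(&lock);
        filled[s] = 1;
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Display loop; returns the number of frames "shown". */
int run_pipeline(void) {
    pthread_t t;
    int shown = 0;
    memset(filled, 0, sizeof(filled));
    pthread_create(&t, NULL, producer, NULL);
    for (int i = 0; i < NFRAMES; i++) {
        int s = i % SLOTS;
        pthread_mutex_lock(&lock);
        while (!filled[s])                    /* block until the frame is decoded */
            pthread_cond_wait(&cond, &lock);
        /* "display" slots[s] here (setImageBitmap in the Android case) */
        shown++;
        filled[s] = 0;
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&lock);
    }
    pthread_join(t, NULL);
    return shown;
}
```

    The display thread then only ever blocks when decoding genuinely cannot keep up, instead of paying the full read-decode cost inside every 33 ms tick.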

  • ffmpeg/libav Linking issue in Windows

    11 December 2013, by Dídac Pérez

    I have cross-compiled ffmpeg and libav from Linux to Windows (mingw32). So I've got my .a files, ready to be used for linking in my MSVC2010 project. The thing is that I am getting linking errors and I don't understand why:

    1>RTSPCapture.obj : error LNK2019: unresolved external symbol _avformat_free_context referenced in function "public: int __thiscall Imagsa::RTSPCapture::dumpSync(class std::basic_string,class std::allocator > const &,class std::basic_string,class std::allocator > const &,class std::basic_stringstream,class std::allocator > &,double *,class std::vector > &)" (?dumpSync@RTSPCapture@Imagsa@@QAEHABV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@0AAV?$basic_stringstream@DU?$char_traits@D@std@@V?$allocator@D@2@@4@PANAAV?$vector@VMjpegFrame@Imagsa@@V?$allocator@VMjpegFrame@Imagsa@@@std@@@4@@Z)
    1>RTSPCapture.obj : error LNK2019: unresolved external symbol _avio_close referenced in function "public: int __thiscall Imagsa::RTSPCapture::dumpSync(class std::basic_string,class std::allocator > const &,class std::basic_string,class std::allocator > const &,class std::basic_stringstream,class std::allocator > &,double *,class std::vector > &)" (?dumpSync@RTSPCapture@Imagsa@@QAEHABV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@0AAV?$basic_stringstream@DU?$char_traits@D@std@@V?$allocator@D@2@@4@PANAAV?$vector@VMjpegFrame@Imagsa@@V?$allocator@VMjpegFrame@Imagsa@@@std@@@4@@Z)
    1>RTSPCapture.obj : error LNK2019: unresolved external symbol _avcodec_close referenced in function "public: int __thiscall Imagsa::RTSPCapture::dumpSync(class std::basic_string,class std::allocator > const &,class std::basic_string,class std::allocator > const &,class std::basic_stringstream,class std::allocator > &,double *,class std::vector > &)" (?dumpSync@RTSPCapture@Imagsa@@QAEHABV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@0AAV?$basic_stringstream@DU?$char_traits@D@std@@V?$allocator@D@2@@4@PANAAV?$vector@VMjpegFrame@Imagsa@@V?$allocator@VMjpegFrame@Imagsa@@@std@@@4@@Z)
    

    Does anybody know what could be happening?

  • Problems linking to FFMPEG library in Visual Studio 2010

    11 December 2013, by Dídac Pérez

    I have cross-compiled FFMPEG from Debian using the mingw32 toolchain. The result of the compilation is a set of .a files. When I try to use them in my project I get linker errors, specifically the following ones:

    1>RTSPCapture.obj : error LNK2019: unresolved external symbol _avformat_free_context referenced in function ...
    1>RTSPCapture.obj : error LNK2019: unresolved external symbol _avio_close referenced in function ...
    1>RTSPCapture.obj : error LNK2019: unresolved external symbol _avcodec_close referenced in function ...
    (and much more...)
    

    I have already included the header files like this:

    extern "C"
    {
        #include <libavcodec/avcodec.h>
        #include <libavformat/avformat.h>
        #include <libavformat/avio.h>
    }
    

    And I use the .a files like this:

    #pragma comment(lib, "libavcodec.a")
    #pragma comment(lib, "libavformat.a")
    #pragma comment(lib, "libavutil.a")
    

    May I know why I am still getting linker errors? Best regards,

    EDIT: I have realized that this is not possible. So, what should I do to use the FFMPEG library in my MSVC2010 project, taking into account that I can't compile ffmpeg on Windows? It seems to be REALLY difficult...

  • FFMPEG : Displaying a white screen using ffplay via a custom decoder

    11 December 2013, by Zax

    I have created a dummy decoder. In its decode function, I assign the output frame pointer YUV420 data filled with 255 (i.e. a white screen).

    I also have a corresponding probe function for my dummy decoder, which takes a dummy input file and, based on some checks, returns AVPROBE_SCORE_MAX. This probe section works perfectly fine and invokes my custom dummy decoder.

    The AVCodec structure for my dummy decoder is shown below:

    AVCodec ff_dummyDec_decoder = {
        .name           = "dummyDec",
        .type           = AVMEDIA_TYPE_VIDEO,
        .id             = AV_CODEC_ID_MYDEC,
        .priv_data_size = sizeof(MYDECContext),
        .pix_fmts       = (const enum AVPixelFormat[]) {AV_PIX_FMT_YUV420P},
        .init           = dummyDec_decode_init,
        .close          = dummyDec_decode_close,
        .decode         = dummyDec_decode_frame,
    };
    

    Where,

    .init   -> a pointer to a function that performs my decoder-related initializations
    .close  -> a pointer to a function that frees all memory allocated during initialization
    .decode -> a pointer to a function that decodes a frame
    

    The definitions of the above functions are shown below:

    #include <stdio.h>
    #include <stdlib.h>
    #include "avcodec.h"
    
    unsigned char *yPtr=NULL;
    unsigned char *uPtr=NULL;
    unsigned char *vPtr=NULL;
    
    int memFlag=0;//If memFlag is zero then allocate memory for YUV data
    
    int width=416;//Picture width and height that i want to display in ffplay
    int height=240;
    
    static int dummyDec_decode_frame(AVCodecContext *avctx, void *data,
                                 int *got_frame_ptr, AVPacket *avpkt)
    {
        AVFrame *frame=data; //make frame point to the pointer on which output should be mapped
        printf("\nDecode function entered\n");
        frame->width=width;
        frame->height=height;
        frame->format=AV_PIX_FMT_YUV420P;
    
        //initialize frame->linesize[] array
        avpicture_fill((AVPicture*)frame, NULL, frame->format,frame->width,frame->height);
    
        frame->data[0]=yPtr;
        frame->data[1]=uPtr;
        frame->data[2]=vPtr;
    
        *got_frame_ptr = 1;
    
        printf("\nGotFramePtr set to 1\n");
    
        return width*height+(width/2)*(height/2)+(width/2)*(height/2);//returning the amount of bytes being used
    }
    
    static int dummyDec_decode_init(AVCodecContext *avctx)
    {
        printf("\nDummy Decoders init entered\n");
    
        //Allocate memory for YUV data
        yPtr=(unsigned char*)malloc(sizeof(unsigned char*)*width*height);
        uPtr=(unsigned char*)malloc(sizeof(unsigned char*)*width/2*height/2);
        vPtr=(unsigned char*)malloc(sizeof(unsigned char*)*width/2*height/2);
    
        if(yPtr == NULL || uPtr ==NULL ||vPtr==NULL)
            exit(0);
    
        //set allocated memory to 255, i.e. white
        memset(yPtr,255,width*height);
        memset(uPtr,255,width/2*height/2);
        memset(vPtr,255,width/2*height/2);

        return 0;
    }
    
    static int dummyDec_decode_close(AVCodecContext *avctx)
    {
        free(yPtr);
        free(uPtr);
        free(vPtr);
        return 0;
    }
    

    From the dummyDec_decode_frame() function, I'm returning the number of bytes used to display the white colour. Is this right? Secondly, I have no parser for my decoder, because I'm just mapping a YUV buffer containing white data onto the AVFrame structure pointer.
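    As a sanity check on that return value: width*height + 2*(width/2)*(height/2) is the size of the *output* frame, but FFmpeg's decode callback is expected to return the number of bytes *consumed from the input packet* (typically avpkt->size), which may be why ffplay keeps re-entering the decoder. The plane arithmetic itself, sketched standalone (the helper names below are mine, not libav API), also shows a colour detail worth noting:

```c
#include <stdlib.h>
#include <string.h>

/* Total bytes in one YUV 4:2:0 frame: a full-resolution Y plane plus
   two quarter-resolution chroma planes (width/2 x height/2 each). */
size_t yuv420_frame_bytes(int width, int height) {
    size_t luma   = (size_t)width * height;
    size_t chroma = (size_t)(width / 2) * (height / 2);
    return luma + 2 * chroma;   /* = width*height*3/2 for even dimensions */
}

/* Allocate one contiguous "white" YUV420 frame. Note: white is Y=255
   with U=V=128 (neutral chroma); memset-ing all three planes to 255,
   as in the question, actually yields a pink/magenta tint. */
unsigned char *alloc_white_yuv420(int width, int height) {
    size_t luma   = (size_t)width * height;
    size_t chroma = (size_t)(width / 2) * (height / 2);
    unsigned char *buf = malloc(luma + 2 * chroma);
    if (buf == NULL)
        return NULL;
    memset(buf, 255, luma);              /* Y plane: full brightness */
    memset(buf + luma, 128, 2 * chroma); /* U and V planes: neutral  */
    return buf;
}
```

    For 416x240 this gives 149760 bytes per frame; whatever the frame size, the decode callback's return value should still track the input packet, not this number.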

    The command that I use for executing it is:

    ./ffplay -vcodec dummyDec -i input.bin
    

    The output is an infinite loop with the following messages:

    dummyDec probe entered
    Dummy Decoders init entered
    Decode function entered
    GotFramePtr set to 1
    Decode function entered
    GotFramePtr set to 1
    ...(the last two messages keep repeating)
    

    Where am I going wrong? Is it the absence of a parser, or something else? I'm unable to proceed because of this. Please provide your valuable answers. Thanks in advance.

    --Regards

  • cvWriteFrame can't write frames into an mp4 file when using OpenCV's ffmpeg_64.dll

    11 December 2013, by user3074013

    1. When I successfully create a VideoWriter, I try to write some frames into an mp4-format file.

    Then a crash occurs.

    2. Code:

    m_vw = cvCreateVideoWriter(fileName, codec, fps, m_size, bColor);
    ....  // had got a right IplImage here
    if( 0 == cvWriteFrame(m_vw, iplimg) )  // Here it crashes, but only for 64 bits; 32 bits is OK.
        break; // failed to write frame

    3. Some people said an ffmpeg codec plugin may need to be installed, like ffdshow or Xvid. But I installed them and it still doesn't work.

    So how can I write images into an mp4 file on 64 bits? I would appreciate your help. Thanks.
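    One frequent cause of cvWriteFrame crashes is a codec/container mismatch: the codec argument to cvCreateVideoWriter is a packed four-character code, and for .mp4 output it must be one the bundled ffmpeg DLL can actually encode (e.g. 'mp4v'). A sketch of that packing, equivalent to OpenCV's CV_FOURCC macro (the helper name fourcc is mine; whether this is the cause of the 64-bit crash is an assumption worth testing before anything else):

```c
/* Pack four codec characters into the integer that
   cvCreateVideoWriter expects (same layout as CV_FOURCC). */
int fourcc(char c1, char c2, char c3, char c4) {
    return (c1 & 255)
         | ((c2 & 255) << 8)
         | ((c3 & 255) << 16)
         | ((c4 & 255) << 24);
}
```

    Passing fourcc('m','p','4','v') for an .mp4 file, and verifying that frame dimensions exactly match the m_size given at creation, rules out the two most common crash causes before suspecting the 64-bit DLL itself.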