Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • Not able to decode an MP4 file using the latest ffmpeg library : avcodec_decode_video2

    8 February 2017, by suvirai

    I am writing a wrapper around the latest ffmpeg library and feeding it MP4 files from the local filesystem. My problem is that I cannot get any decoded frames when I call avcodec_decode_video2(): its return value is negative, while av_read_frame() returns 0. I searched for this problem but could not find a proper explanation anywhere. Please give me some insight. The pseudo code is pasted below.

        /* decoder setup */
        av_init_packet(avpkt);
        picture   = av_frame_alloc();
        pFrameRGB = av_frame_alloc();

        codec = avcodec_find_decoder(AV_CODEC_ID_H264);
        c     = avcodec_alloc_context3(codec);
        avcodec_open2(c, codec, NULL);

        /* demuxer setup */
        FormatContext = avformat_alloc_context();
        char *pUrl = "./1.MP4";

        iRet = avformat_open_input(&FormatContext, pUrl, NULL, NULL);
        if (FormatContext == NULL)
        {
            printf("could not open the input file\n");
        }

        avformat_find_stream_info(FormatContext, NULL);

        /* read packets and try to decode them */
        while (av_read_frame(FormatContext, avpkt) >= 0)
        {
            len = avcodec_decode_video2(c, picture, &got_picture, avpkt);
            printf("CODEC MANAGER len %d Frame decompressed %d\n", len, got_picture);

            if (len <= 0)
            {
                return ERROR;
            }
        }
    }
    
    
    
            /* (re)allocate the RGB buffer if the frame size has changed */
            if (lastHeight != 0 && lastWidth != 0)
            {
                if (lastWidth != c->width || lastHeight != c->height)
                {
                    av_free(buffer);
                    buffer = NULL;
                    lastWidth  = c->width;
                    lastHeight = c->height;
                }
            }
            else
            {
                lastWidth  = c->width;
                lastHeight = c->height;
            }

            decodeFlag = 1;

            if (!buffer)
            {
                int numBytes;
                v_mutex_lock(globalCodecLock);
                switch (inPixFormat)
                {
                case RGB:
                    /* determine the required buffer size and allocate the buffer */
                    numBytes = avpicture_get_size(AV_PIX_FMT_RGB24, c->width, c->height);
                    buffer   = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
                    avpicture_fill((AVPicture *)pFrameRGB, buffer, AV_PIX_FMT_RGB24, c->width, c->height);

                    if (cntxt)
                        sws_freeContext(cntxt);

                    cntxt = sws_getContext(c->width, c->height, c->pix_fmt,
                                           c->width, c->height, AV_PIX_FMT_RGB24,
                                           SWS_BICUBIC, NULL, NULL, NULL);
                    break;
                }
                v_mutex_unlock(globalCodecLock);

                if (cntxt == NULL)
                {
                    printf("sws_getContext error\n");
                    return ERROR;
                }
            }

            /* convert the decoded frame to RGB and copy it out to the caller */
            {
                sws_scale(cntxt, picture->data, picture->linesize, 0, c->height,
                          pFrameRGB->data, pFrameRGB->linesize);
                if (rgbBuff)
                {
                    if (c->width <= *width && c->height <= *height)
                    {
                        saveFrame(pFrameRGB, c->width, c->height, rgbBuff, inPixFormat);
                        *width  = c->width;
                        *height = c->height;
                        rs = SUCCESS;
                        break;
                    }
                    else
                    {
                        rs = VA_LOWBUFFERSIZE;
                    }
                }
                else
                {
                    rs = VA_LOWBUFFERSIZE;
                }
            }

            if (width)
            {
                *width = c->width;
            }
            if (height)
            {
                *height = c->height;
            }
            if (rs == VA_LOWBUFFERSIZE)
            {
                break;
            }
    

    av_read_frame() returns 0, but avcodec_decode_video2() keeps returning a negative value, and I cannot find any clue as to why.
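
    A likely cause (an assumption, not something confirmed in the question) is that the decoder context is opened before the file and never receives the stream's codec parameters: MP4 carries the H.264 SPS/PPS as out-of-band extradata, so a context opened bare from avcodec_find_decoder(AV_CODEC_ID_H264) has nothing to decode with. A minimal sketch of the usual order (open the input first, then build the codec context from the stream) might look like the following; error handling is trimmed and the helper name open_video_decoder is purely illustrative.

        #include <libavformat/avformat.h>
        #include <libavcodec/avcodec.h>

        /* illustrative helper: open a decoder for the best video stream of an already opened input */
        static AVCodecContext *open_video_decoder(AVFormatContext *fmt, int *video_index)
        {
            AVCodec *codec = NULL;

            /* pick the video stream and the decoder that matches it */
            *video_index = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, &codec, 0);
            if (*video_index < 0)
                return NULL;

            AVCodecContext *ctx = avcodec_alloc_context3(codec);

            /* copy width, height, pix_fmt and, crucially, the extradata (SPS/PPS) from the stream */
            avcodec_parameters_to_context(ctx, fmt->streams[*video_index]->codecpar);

            if (avcodec_open2(ctx, codec, NULL) < 0) {
                avcodec_free_context(&ctx);
                return NULL;
            }
            return ctx;
        }

        int main(void)
        {
            av_register_all();

            AVFormatContext *fmt = NULL;
            if (avformat_open_input(&fmt, "./1.MP4", NULL, NULL) < 0)
                return 1;
            avformat_find_stream_info(fmt, NULL);

            int video_index;
            AVCodecContext *ctx = open_video_decoder(fmt, &video_index);
            if (!ctx)
                return 1;

            AVFrame *frame = av_frame_alloc();
            AVPacket pkt;

            while (av_read_frame(fmt, &pkt) >= 0) {
                int got_picture = 0;
                if (pkt.stream_index == video_index)
                    avcodec_decode_video2(ctx, frame, &got_picture, &pkt);
                /* got_picture can legitimately stay 0 for the first packets: the decoder buffers frames */
                av_packet_unref(&pkt);
            }

            av_frame_free(&frame);
            avcodec_free_context(&ctx);
            avformat_close_input(&fmt);
            return 0;
        }

    On recent releases avcodec_decode_video2() is deprecated in favour of the avcodec_send_packet()/avcodec_receive_frame() pair; the sketch keeps the old call only because the question uses it.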

  • OpenCV 3.2.0 (CMake error in the configuration process): "configure" fails, ffmpeg not downloaded on Windows [duplicate]

    8 February 2017, by O.OZTURK

    I get the following error when trying to configure OpenCV with CMake on Windows:

    CMake Warning at cmake/OpenCVUtils.cmake:1020 (message):
      Download: Local copy of opencv_ffmpeg.dll has invalid MD5 hash:
      d41d8cd98f00b204e9800998ecf8427e (expected: f081abd9d6ca7e425d340ce586f9c090)
    Call Stack (most recent call first):
      3rdparty/ffmpeg/ffmpeg.cmake:10 (ocv_download)
      cmake/OpenCVFindLibsVideo.cmake:219 (include)
      CMakeLists.txt:557 (include)

    Downloading opencv_ffmpeg.dll...
    CMake Error at cmake/OpenCVUtils.cmake:1043 (file):
      file DOWNLOAD HASH mismatch
        for file: [C:/OpenCV-3.2.0/opencv/sources/3rdparty/ffmpeg/downloads/f081abd9d6ca7e425d340ce586f9c090/opencv_ffmpeg.dll]
          expected hash: [f081abd9d6ca7e425d340ce586f9c090]
            actual hash: [d41d8cd98f00b204e9800998ecf8427e]
                 status: [6;"Couldn't resolve host name"]
    Call Stack (most recent call first):
      3rdparty/ffmpeg/ffmpeg.cmake:10 (ocv_download)
      cmake/OpenCVFindLibsVideo.cmake:219 (include)
      CMakeLists.txt:557 (include)

    CMake Error at cmake/OpenCVUtils.cmake:1047 (message):
      Failed to download opencv_ffmpeg.dll.  Status=6;"Couldn't resolve host name"
    Call Stack (most recent call first):
      3rdparty/ffmpeg/ffmpeg.cmake:10 (ocv_download)
      cmake/OpenCVFindLibsVideo.cmake:219 (include)
      CMakeLists.txt:557 (include)

    Configuring incomplete, errors occurred!
    See also "C:/OpenCV-3.2.0/opencv_manual/CMakeFiles/CMakeOutput.log".
    See also "C:/OpenCV-3.2.0/opencv_manual/CMakeFiles/CMakeError.log".
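
    The status line is the real hint here: 6;"Couldn't resolve host name" means CMake could not reach the download server at all, and the "local copy" it complains about is an empty file (d41d8cd98f00b204e9800998ecf8427e is the MD5 of zero bytes). Assuming the goal is just to get past configuration on a machine without working name resolution, one workaround (not a fix for the download itself) is to configure OpenCV without its prebuilt ffmpeg wrapper:

        cmake -D WITH_FFMPEG=OFF <path-to-opencv-sources>

    Re-running the configuration once the machine can resolve the download host (or behind a correctly configured proxy) lets CMake fetch opencv_ffmpeg.dll normally.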

  • How to get an online video's duration without downloading the full video?

    8 February 2017, by David Zhuang

    To get a video's duration and resolution, I've got this function:

    def getvideosize(url, verbose=False):
        try:
            if url.startswith('http:') or url.startswith('https:'):
                ffprobe_command = ['ffprobe', '-icy', '0', '-loglevel', 'repeat+warning' if verbose else 'repeat+error', '-print_format', 'json', '-select_streams', 'v', '-show_streams', '-timeout', '60000000', '-user-agent', BILIGRAB_UA, url]
            else:
                ffprobe_command = ['ffprobe', '-loglevel', 'repeat+warning' if verbose else 'repeat+error', '-print_format', 'json', '-select_streams', 'v', '-show_streams', url]
            logcommand(ffprobe_command)
            ffprobe_process = subprocess.Popen(ffprobe_command, stdout=subprocess.PIPE)
            try:
                ffprobe_output = json.loads(ffprobe_process.communicate()[0].decode('utf-8', 'replace'))
            except KeyboardInterrupt:
                logging.warning('Cancelling getting video size, press Ctrl-C again to terminate.')
                ffprobe_process.terminate()
                return 0, 0
            width, height, widthxheight, duration = 0, 0, 0, 0
            for stream in ffprobe_output.get('streams') or []:
                if stream.get('duration') > duration:
                    duration = stream.get('duration')
                if stream.get('width') * stream.get('height') > widthxheight:
                    width, height = stream.get('width'), stream.get('height')
            if duration == 0:
                duration = 1800
            return [[int(width), int(height)], int(float(duration)) + 1]
        except Exception as e:
            logorraise(e)
            return [[0, 0], 0]
    

    But some online videos come without a duration tag. Can we do something to get their duration?
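
    One thing worth noting: the ffprobe call above only asks for -show_streams, and not every container puts a duration on the individual streams; adding -show_format to the same command also exposes the container-level duration when it is known. Alternatively, since the question is tagged ffmpeg, the libav* C API can be used directly instead of shelling out to ffprobe. The sketch below is a rough illustration of that approach, under the assumption that linking against libavformat is acceptable; for well-formed files it only needs the headers and a small amount of data, not the whole download.

        #include <stdio.h>
        #include <libavformat/avformat.h>
        #include <libavutil/avutil.h>

        /* print the container-level duration of a (possibly remote) media file */
        int main(int argc, char **argv)
        {
            if (argc < 2) {
                fprintf(stderr, "usage: %s <url>\n", argv[0]);
                return 1;
            }

            av_register_all();
            avformat_network_init();   /* required for http/https inputs */

            AVFormatContext *fmt = NULL;
            if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
                return 1;

            /* reads stream headers plus a little data, not the entire file */
            avformat_find_stream_info(fmt, NULL);

            if (fmt->duration != AV_NOPTS_VALUE)
                printf("duration: %.3f s\n", fmt->duration / (double)AV_TIME_BASE);
            else
                printf("no container-level duration available\n");

            avformat_close_input(&fmt);
            avformat_network_deinit();
            return 0;
        }

    If neither the format nor the streams report a duration (live streams, some FLV files), there is simply nothing in the headers to read, and estimating from bitrate and file size is the usual fallback.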

  • FFMPEG - concatenating audio files without gaps

    8 February 2017, by anujprashar

    I am concatenating multiple MP3 files using ffmpeg; the duration of each file is 2-3 seconds. Any ordinary MP3 has a short delay at the beginning and at the end, so the concatenation is not seamless and there are gaps in the output MP3.

    I found in the post "FFmpeg gap in concatenated audio after splitting audio" that I can use the atrim filter to trim the beginning and the end. I built the command below:

    ffmpeg -i vtest1.mp3 -i vtest2.mp3 -i vtest3.mp3 -i vtest4.mp3 -filter_complex "[0:a]atrim=end=1.98[a0]; [1:a]atrim=start=0.5:end=2.4[a1]; [2:a]atrim=start=0.5:end=2.2[a2]; [3:a]atrim=start=0.5:end=2.2[a3]; [a0][a1][a2][a3]concat=n=4:v=0:a=1[out]" -map [out] -acodec libmp3lame output.mp3

    I used trial and error to trim 0.5 s from the start and around 0.5 s from the end. Is the delay at the start the same for every MP3? Is there any way to know the exact delay at the start and end of an MP3, so that I can trim with exact values before concatenation?
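
    For what it is worth, the delay is generally not the same for every MP3: it depends on the encoder, and encoders such as LAME record the exact encoder delay and padding in a Xing/LAME header at the start of the file, which is where ffmpeg reads gapless information from when it is present. As a hedged sketch (it assumes your files carry such a header; if they do not, the reported values will not reflect the real gap), the libavformat API can print what ffmpeg itself thinks each file's start time and duration are, which could then feed the atrim arguments instead of a hard-coded 0.5:

        #include <stdio.h>
        #include <libavformat/avformat.h>
        #include <libavutil/avutil.h>

        /* print what libavformat reports as start time and duration for each input file */
        int main(int argc, char **argv)
        {
            av_register_all();

            for (int i = 1; i < argc; i++) {
                AVFormatContext *fmt = NULL;
                if (avformat_open_input(&fmt, argv[i], NULL, NULL) < 0)
                    continue;
                avformat_find_stream_info(fmt, NULL);

                AVStream *st = fmt->streams[0];   /* a plain MP3 has a single audio stream */
                double start = (st->start_time == AV_NOPTS_VALUE)
                                   ? 0.0 : st->start_time * av_q2d(st->time_base);
                double dur   = (st->duration == AV_NOPTS_VALUE)
                                   ? 0.0 : st->duration * av_q2d(st->time_base);

                printf("%s: start=%.6f s  duration=%.6f s\n", argv[i], start, dur);
                avformat_close_input(&fmt);
            }
            return 0;
        }

    The same numbers are also visible from the command line in ffprobe's format and stream sections, so a script could collect them per file and build the atrim arguments automatically.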

  • MP4 pseudo-streaming implementation, server & iOS side

    8 February 2017, by eddie.xie

    I'm trying to learn how to do pseudo-streaming for MP4 files. I can't think of a good way to do it, but I just found a great example app with a similar implementation (although I don't yet understand how it does it).

    Here's the scenario:

    Alice can send a video to Bob in the app

    Bob can open it immediately and see Alice's video from the beginning, while Alice is still recording it.

    Also, Bob can choose to view the video later, after Alice has finished recording. But Bob should be able to view the video almost instantly, without waiting too long, even when the video file is large.

    Thus, my hunch is, it's using some sort of pseudo streaming for mp4.

    Here are the screenshots of the requests Alice's phone makes while using the example app:

    (screenshot of the app's network requests)

    The screenshots suggest that the example app makes a series of PATCH requests to its server every 0.x seconds, and that the very last request is a PATCH that updates the moov information for the MP4.

    So my question is: how is this implemented (any educated guess is welcome)? Or is there some existing protocol or iOS encoder that already does this that I don't know about?

    Thanks a lot!
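
    One existing mechanism that fits this description (an educated guess, since the example app's internals are not visible) is fragmented MP4: the muxer writes an empty moov up front and then appends self-contained moof/mdat fragments, so fragments can be uploaded as they are produced and played as they arrive. ffmpeg's mp4 muxer can produce such a file, for example (input.mov is just a placeholder name):

        ffmpeg -i input.mov -c copy -movflags frag_keyframe+empty_moov fragmented.mp4

    For the simpler case of a finished file that should start playing before it is fully downloaded, -movflags faststart relocates the moov atom to the front instead; and for fully standardized segment-based streaming, HLS and MPEG-DASH cover the same ground. The PATCH-per-chunk behaviour in the screenshots could also simply be a resumable upload of a regular MP4 whose moov is written last, which would explain the final moov update.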