Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • OpenCV compilation: How to specify the location of FFmpeg library with cmake

    6 June 2017, by Beanocean

    I want to compile OpenCV-2.13.2 with gcc-4.8.2, but the version installed in the system path is gcc-4.4.6, so I installed gcc-4.8.2 in /opt/compiler/gcc-4.8.2. I compiled FFmpeg successfully with gcc-4.8.2, but when I tried to compile OpenCV I ran into some problems.

    In the linking stage, some libraries could not be found by /opt/compiler/gcc-4.8.2/bin/ld. The error message was posted as a screenshot.

    The missing libraries are related to FFmpeg, which I have installed in ~/local/lib. I then checked the file module/core/CMakeFiles/opencv_pref_core.dir/link.txt (also shown in a screenshot).

    The ld just did not search the path where I installed FFmpeg. I tried two methods:

    1. add FFmpeg path to env: export LD_LIBRARY_PATH=~/local/lib:$LD_LIBRARY_PATH;
    2. add -D FFMPEG_INCLUDE_DIRS=~/local/include -D FFMPEG_LIBRARAY_DIRS=~/local/lib to cmake options

    They did not work at all.
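
    Not part of the original question, but for context: LD_LIBRARY_PATH only affects library lookup at run time, not at link time, and OpenCV's CMake scripts normally locate FFmpeg through pkg-config. A sketch of one possible way to point the build at the FFmpeg installed under ~/local, assuming its .pc files landed in ~/local/lib/pkgconfig (paths and options are illustrative, not taken from the post):

    # make pkg-config find the custom FFmpeg before the system one
    export PKG_CONFIG_PATH="$HOME/local/lib/pkgconfig:$PKG_CONFIG_PATH"
    # run cmake with the newer gcc and an extra linker search path
    cmake \
      -D CMAKE_C_COMPILER=/opt/compiler/gcc-4.8.2/bin/gcc \
      -D CMAKE_CXX_COMPILER=/opt/compiler/gcc-4.8.2/bin/g++ \
      -D CMAKE_EXE_LINKER_FLAGS="-L$HOME/local/lib" \
      -D CMAKE_SHARED_LINKER_FLAGS="-L$HOME/local/lib" \
      ..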

  • javacv FFmpegFrameGrabber http Authorization Required

    6 June 2017, by Anastasia Sisordia

    I use the javacv library and the HTTP protocol to connect to a camera. I have the address (http://123.456.78.90/ for example) and a login and password.

    (For example: admin 12345)

    I write

    FFmpegFrameGrabber grabber = new FFmpegFrameGrabber("http://123.456.78.90/");
    

    but I don't know where I need to supply the login and password. If I don't, it throws an exception: HTTP error 401 Authorization Required.

    Can anyone tell me how to send the login/password? I tried http://login:password@123.456.78.90/ and it doesn't work.
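
    Not from the original post, just for reference: with the plain ffmpeg command line the credentials can either be embedded in the URL or sent as an explicit HTTP Basic Authorization header via the http protocol's headers option. A sketch using the example address and credentials above (JavaCV's FFmpegFrameGrabber generally exposes the same protocol options, e.g. through its setOption(key, value) method, but check the version in use):

    # build the Basic auth value and pass it as an explicit request header
    AUTH="Basic $(echo -n 'admin:12345' | base64)"
    ffmpeg -headers "Authorization: ${AUTH}"$'\r\n' -i "http://123.456.78.90/" -f null -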

  • How to export video as .mp4 using openCV?

    6 June 2017, by Riko

    I am trying to export video as .mp4 with OpenCV. I have tried several codecs, but so far without success.

    This is a function that constructs a video from frames:

    def create_movie(self, out_directory, fps, total_frames):
        img1 = cv2.imread("temp/scr0.png")
        height, width, layers =  img1.shape
        codec = cv2.cv.CV_FOURCC('X','V','I','D')
        video = cv2.VideoWriter(out_directory, codec, fps, (width, height))
    
        for i in range(total_frames):
            img_name = "temp/scr" + str(i) + ".png"
            img = cv2.imread(img_name)
            video.write(img)
    
        video.release()
        cv2.destroyAllWindows()
    

    Using different codecs, I usually get the following error message:

    Tag XVID/0x44495658 incompatible with output codec id '13'
    

    Is it possible to do this, and how?
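
    Not from the question itself, but for comparison: the same PNG sequence can be assembled into an H.264 .mp4 directly with the ffmpeg command line, which avoids having to match OpenCV's FourCC to the MP4 container (inside OpenCV, a container-compatible FourCC such as 'mp4v' together with a .mp4 output path is usually what is needed). A sketch, assuming frames named temp/scr0.png, temp/scr1.png, ... as in the code above:

    ffmpeg -framerate 25 -i temp/scr%d.png -c:v libx264 -pix_fmt yuv420p output.mp4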

  • ffmpeg convert AVFrame from yuv420p to argb

    6 June 2017, by Star丶Xing

    I'm trying to decode an H.264 stream buffer on Android using FFmpeg 3.3. I successfully get an AVFrame from the stream; its format is AV_PIX_FMT_YUV420P. In order to display the image on Android, I need to convert it to AV_PIX_FMT_ARGB or AV_PIX_FMT_RGB565. I have tried many times, and in the end I found that I can convert the frame to AV_PIX_FMT_NV21, but not to ARGB or RGB565, using almost the same code apart from a few arguments. Here is what my code looks like:

    // here I got srcFrame from the H.264 stream; its format and size were set correctly by the decoder
    
    AVFrame *dstFrame = av_frame_alloc();
    dstFrame->format = AV_PIX_FMT_ARGB;
    dstFrame->width = srcFrame->width;
    dstFrame->height = srcFrame->height;
    
    struct SwsContext *swsContext = sws_getContext(srcFrame->width, srcFrame->height, srcFormat,
                                                   dstFrame->width, dstFrame->height,
                                                   (enum AVPixelFormat) dstFrame->format,
                                                   SWS_FAST_BILINEAR, NULL, NULL, NULL);
    
    int dstBufferSize = av_image_get_buffer_size((enum AVPixelFormat) dstFrame->format, ctx->width,
                                                 ctx->height, 1);
    uint8_t *dstBuffer = (uint8_t *) av_malloc(
            sizeof(uint8_t) * dstBufferSize);
    
    av_image_fill_arrays(dstFrame->data, dstFrame->linesize, dstBuffer,
                         (enum AVPixelFormat) dstFrame->format, dstFrame->width,
                         dstFrame->height,
                         1);
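    // (comment added for clarity) at this point dstFrame->data[] points into
    // dstBuffer and dstFrame->linesize[] has been filled in for the destination
    // pixel format, so sws_scale below can write the converted image into it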
    
    sws_scale(swsContext, (const uint8_t *const *) srcFrame->data,
              srcFrame->linesize, 0, srcFrame->height,
              dstFrame->data, dstFrame->linesize);
    
  • What's the theory behind audio mixing in ffmpeg?

    6 June 2017, by from_mars

    When I wanted to mix two files into one, I found this toolkit. When I ran the following command, everything worked fine:

    ffmpeg -i 1.wav -i 2.wav -filter_complex amix=inputs=2:duration=longest:dropout_transition=2 out.wav

    But there are some things I want to know.

    1. What is the theory behind audio mixing in ffmpeg?

      Does it just calculate the average of the input files? (See the sketch at the end of this post.)

    2. What does dropout_transition do?

      I found this in the documentation: "dropout_transition: The transition time, in seconds, for volume renormalization when an input stream ends. The default value is 2 seconds." But I don't understand what it means.

      If dropout_transition=0, what would happen?

    Thanks! (Sorry for my poor English.)
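
    A rough sketch of what amix computes, summarized from the filter documentation (my addition, not part of the post): each output sample is a weighted sum of the inputs that are still active, with the weights chosen so the overall level stays roughly constant, i.e. close to an average rather than a plain sum:

        out[n] ≈ (1/N_active) * (in_1[n] + in_2[n] + ... + in_N[n])

    When one input ends, N_active drops and the remaining inputs are renormalized (scaled up) to the new weight; dropout_transition is the number of seconds over which that volume change is smoothed. With dropout_transition=0, the level of the surviving input(s) would jump to the new value immediately instead of ramping.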