Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • Where are OpenCV 2's cvCapture and its subclasses?

    23 November 2016, by NEWBIEEBIEE

    I checked this topic (process video stream from memory buffer), and I would like to do the same as in its first answer. I tried to create a new class that inherits from cvCapture_FFMPEG and to override the "open" function, but I can't find any OpenCV module containing a class named "cvCapture_FFMPEG".

    I'm assuming that "cvCapture_FFMPEG" does not exist anywhere in OpenCV or its API. Am I right? If so, could you tell me the best way to handle an in-memory buffer in OpenCV?

    Please help.


  • FFmpeg compilation on Windows: encoder x264 not found

    23 November 2016, by Bernhard Lutz


    I am trying to compile FFmpeg with several encoders (x264, NVENC). I have already managed to compile FFmpeg with MinGW, and x264 as well, but I do not know how to tell FFmpeg's build where my compiled encoders are.

    I have a folder containing the FFmpeg sources, and inside this directory my compiled x264 encoder sits in a subfolder called x264.

    OS: Windows 10

    Compiler: MinGW
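
    A common way to point FFmpeg's configure script at a locally built x264 is via `--extra-cflags` and `--extra-ldflags`. This is a sketch, not a definitive answer to the post: the `./x264/include` and `./x264/lib` paths are assumptions based on the layout described above (they match what `make install` with a prefix of `./x264` would produce).

    ```shell
    # Run from the FFmpeg source directory, in a MinGW/MSYS shell.
    # Assumes x264 headers are in ./x264/include and libx264.a in ./x264/lib.
    ./configure \
      --enable-gpl \
      --enable-libx264 \
      --extra-cflags="-I$PWD/x264/include" \
      --extra-ldflags="-L$PWD/x264/lib"
    make
    ```

    Depending on the FFmpeg version, configure may instead locate libx264 through pkg-config, in which case setting `PKG_CONFIG_PATH` to the x264 `lib/pkgconfig` directory also works.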

  • How to define an in-memory object for OpenGL to render

    23 November 2016, by Ted Yu

    Currently I'm working on an Android application in which I want to fetch frames from the camera, render them off-screen, and encode them into a video. I took the ideas from grafika; the camera and EGL context work well. However, MediaCodec does not work for some reason, so I turned to ffmpeg. I proceed as follows:

    1. Open the camera in the camera thread
    2. Set up the EGL context in the EGL thread and create an off-screen surface
    3. Set up the ffmpeg encoder/muxer in the encode thread
    4. Generate a texture, create a SurfaceTexture backed by that texture in the EGL thread, and pass the SurfaceTexture to the camera thread for previewing
    5. Every time the SurfaceTexture has a frame available, draw the texture to the off-screen surface in the EGL thread
    6. Read the pixels with glReadPixels into a buffer and pass the buffer to the encode thread
    7. Encode video in the encode thread whenever frames are available

    Everything works fine without glReadPixels, but performance drops once the glReadPixels invocation is uncommented, even when the encode thread is not running.

    grafika achieves high performance by drawing directly onto MediaCodec's input surface (see VideoEncoderCore).

    So I wonder how to do the same thing with ffmpeg. Is it possible to allocate a frame buffer for both rendering and encoding, so that I can get rid of glReadPixels (and its memory copy)?

  • Escape single quote in ffmpeg filename variable in PHP

    23 November 2016, by DOMDocumentVideoSource

    The code works fine, but if the filename contains a single quote, such as "Britney's video.mp4", it does not work.

    $ffmpeg = "/usr/local/bin/ffmpeg"; // the exec() call below used this path, not "/usb/bin/local/ffmpeg"
    $videos = "/videos/*.mp4";
    $output_path = "/videos/thumbnails/";
    
    foreach (glob($videos) as $video_file) {
        $filename  = basename($video_file, ".mp4");
        $thumbnail = $output_path . $filename . '.jpg';
        // Check for the thumbnail file, not the bare basename
        if (!file_exists($thumbnail)) {
            // escapeshellarg() quotes the argument and escapes any embedded
            // single quotes, so "Britney's video.mp4" is passed safely.
            exec($ffmpeg . " -i " . escapeshellarg($video_file)
                . " -an -y -f mjpeg -ss 00:00:30 -vframes 1 "
                . escapeshellarg($thumbnail));
        }
        echo $filename;
    }