Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • RTSP Client for H264 Audio/Video Stream

    1 April 2015, by Jean-Philippe Encausse

    I'm looking for a simple way to get the data of an IP camera's RTSP stream (using H264 audio/video) and to get, on the other side:

    • a frame-by-frame byte[]
    • a stream of the audio

    After much research:

    • EmguCV Capture seems to hang forever (no answer from the forum)
    • There are many (too big) RTSP servers, but few decode H264
    • There are "slow" ffmpeg wrappers
    • There are some managed DirectShow wrappers

    So I don't know where to go, or how to do this.

    It seems iSpyCamera does the job, but it's a big project, not a little library for querying IP cameras.
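
    For reference, a minimal sketch of one way to pull such a stream with the ffmpeg C API of that era (the URL is a placeholder, error handling is abbreviated, and this is an illustration rather than the asker's code):

        // Open an RTSP URL with libavformat and decode its video packets.
        extern "C" {
        #include <libavformat/avformat.h>
        #include <libavcodec/avcodec.h>
        }

        int main() {
            av_register_all();
            avformat_network_init();

            AVFormatContext *fmt = NULL;
            if (avformat_open_input(&fmt, "rtsp://camera/stream", NULL, NULL) < 0)
                return 1;
            if (avformat_find_stream_info(fmt, NULL) < 0)
                return 1;

            // Locate the video stream; an audio stream is found the same way.
            int video = -1;
            for (unsigned i = 0; i < fmt->nb_streams; i++)
                if (fmt->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
                    video = i;
            if (video < 0)
                return 1;

            AVCodecContext *ctx = fmt->streams[video]->codec;
            avcodec_open2(ctx, avcodec_find_decoder(ctx->codec_id), NULL);

            AVFrame *frame = av_frame_alloc();
            AVPacket pkt;
            while (av_read_frame(fmt, &pkt) >= 0) {
                int got = 0;
                if (pkt.stream_index == video)
                    avcodec_decode_video2(ctx, frame, &got, &pkt);
                if (got) {
                    // frame->data / frame->linesize now hold one decoded
                    // picture - the per-frame bytes the question asks for.
                }
                av_free_packet(&pkt);
            }
            av_frame_free(&frame);
            avformat_close_input(&fmt);
            return 0;
        }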

  • SIP video/audio transcoding using ffmpeg

    1 April 2015, by user3803112

    Is it possible to integrate ffmpeg with a SIP server (open sip) and a SIP client (lin_phone)? When SIP phone A tries to video-call SIP phone B, can the call be transrated using ffmpeg? What if I capture the RTP packets using Wireshark, extract the raw RTP stream with the videosnarf tool, transrate it with ffmpeg, and then send it on to SIP phone B? Would SIP phone B be able to identify the stream?

  • Unable to open symbol file. Error (20): Not a directory

    1 April 2015, by grzebyk

    I am using the ffmpeg library on Android to stream a live video feed. I have compiled ffmpeg for Android following roman10's instructions. The application works correctly - it connects to the server, downloads the feed, transcodes it, rescales it and displays it on the device's screen. However, at some random moment the app crashes with Fatal signal 11 (SIGSEGV), code 1. I have used ndk-stack to find the source of the problem. Here is the crash dump:

    ********** Crash dump: **********
    Build fingerprint: 'google/hammerhead/hammerhead:5.0.1/LRX22C/1602158:user/release-keys'
    pid: 25241, tid: 25317, name: AsyncTask #5  >>> com.grzebyk.streamapp <<<
    signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x98e74c9c
    Stack frame #00 pc 00047924  /data/app/com.grzebyk.streamapp-1/lib/arm/libswscale-3.so: Unable to open symbol file /Users/grzebyk/Documents/New_Eclipse_Projects/StreamApp/libs/armeabi/libStreamApp.so/libswscale-3.so. Error (20): Not a directory
    Stack frame #01 pc 00034be8  /data/app/com.grzebyk.streamapp-1/lib/arm/libswscale-3.so (sws_scale+2648): Unable to open symbol file /Users/grzebyk/Documents/New_Eclipse_Projects/StreamApp/libs/armeabi/libStreamApp.so/libswscale-3.so. Error (20): Not a directory
    

    My native code is located in the StreamApp.cpp file. To me it looks like the app is trying to access libswscale-3.so (part of ffmpeg) located inside libStreamApp.so. This seems weird to me…

    All of the ffmpeg .so files are located in /libs/armeabi/lib*.so. Naturally this includes the "missing" libswscale-3.so. The most disturbing thing is the fact that the app works perfectly, then crashes suddenly without needing any specific trigger.

    What can I do to either put libswscale-3.so inside libStreamApp.so or to avoid referencing one .so file from another?
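
    For context, the "Unable to open symbol file" lines above are only ndk-stack failing to locate host-side symbols; the crash itself is the SIGSEGV inside sws_scale. A sketch of the usual guarded sws_scale pattern, which avoids the most common causes of that fault (stale dimensions, undersized destination buffers, buffers freed on another thread while an AsyncTask is still scaling); the names here are illustrative, not from StreamApp.cpp:

        extern "C" {
        #include <libswscale/swscale.h>
        #include <libavutil/frame.h>
        }

        static SwsContext *sws = NULL;

        // Rescale src into caller-owned dst buffers; returns false instead
        // of letting sws_scale run with a bad context or an empty frame.
        bool rescale(AVFrame *src, uint8_t *dstData[4], int dstLinesize[4],
                     int dstW, int dstH) {
            if (!src || src->width <= 0 || src->height <= 0)
                return false;
            sws = sws_getCachedContext(sws,
                                       src->width, src->height,
                                       (AVPixelFormat)src->format,
                                       dstW, dstH, AV_PIX_FMT_RGBA,
                                       SWS_BILINEAR, NULL, NULL, NULL);
            if (!sws)
                return false;
            sws_scale(sws, src->data, src->linesize, 0, src->height,
                      dstData, dstLinesize);
            return true;
        }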

  • Convert opencv mat frame to ffmpeg AVFrame

    1 April 2015, by Yoohoo

    I am currently working on a C++ real-time video transmission project. I am using OpenCV to capture from the webcam, and I want to convert the OpenCV Mat to an ffmpeg AVFrame to do the encoding and write it into a buffer. On the decoder side, I read packets from the buffer, decode them with ffmpeg, then convert the ffmpeg AVFrame back to an OpenCV Mat and play it.

    Now I have finished the OpenCV capturing, and I can encode a v4l2 source with ffmpeg, but what I want to do is replace the v4l2 source with the OpenCV Mat. However, I get an error in the following code (I show only the conversion part):

        Mat opencvin;                    // frame from webcam
        cap.read(opencvin);

        Mat* opencvframe;
        opencvframe = &opencvin;
        AVFrame ffmpegout;
        Size frameSize = opencvframe->size();
        AVCodec *encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
        AVFormatContext* outContainer = avformat_alloc_context();
        AVStream *outStream = avformat_new_stream(outContainer, encoder);
        avcodec_get_context_defaults3(outStream->codec, encoder);

        outStream->codec->pix_fmt = AV_PIX_FMT_YUV420P;
        outStream->codec->width = opencvframe->cols;
        outStream->codec->height = opencvframe->rows;
        // Note: avpicture_fill() only points ffmpegout at the Mat's BGR24
        // bytes; it does not convert them to the YUV420P set on the encoder
        // context above, so the two disagree at this point.
        avpicture_fill((AVPicture*)&ffmpegout, opencvframe->data, AV_PIX_FMT_BGR24, outStream->codec->width, outStream->codec->height);
        ffmpegout.width = frameSize.width;
        ffmpegout.height = frameSize.height;
    

    This is code I borrowed from the Internet; it seems the frame has already been encoded during the conversion, before I use

    static AVCodecContext *c = NULL;

    c = avcodec_alloc_context3(codec);  /* codec found earlier via avcodec_find_encoder() */
    if (!c) {
        fprintf(stderr, "Could not allocate video codec context\n");
        exit(1);
    }
    c->bit_rate = 400000;
    /* resolution must be a multiple of two */
    c->width = 640;
    c->height = 480;
    /* frames per second */
    c->time_base = (AVRational){1,25};
    c->gop_size = 10; /* emit one intra frame every ten frames */
    c->max_b_frames = 15;
    c->pix_fmt = AV_PIX_FMT_YUV420P;

    ret = avcodec_encode_video2(c, &pkt, &ffmpegout, &got_output);
    

    to encode the frame. And I get a core dump if I then continue to encode the converted frame.

    I want to encode the frame after the conversion so that I can keep the data in pkt. How can I get a pure converted ffmpeg frame from the OpenCV frame?

    Or, if the encoding has already happened during the conversion, how can I get the output into pkt?
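
    For reference, the usual way to get a "pure converted" frame is to let sws_scale do the BGR24-to-YUV420P conversion into a separately allocated AVFrame, and only then call avcodec_encode_video2. A minimal sketch, assuming the Mat opencvin above and the same ffmpeg API version as in the question (it needs <libswscale/swscale.h> and <libavutil/imgutils.h>):

        // avpicture_fill() and sws_scale() only rearrange pixels; no
        // encoding happens during this conversion step.
        AVFrame *yuv = av_frame_alloc();
        yuv->width  = opencvin.cols;
        yuv->height = opencvin.rows;
        yuv->format = AV_PIX_FMT_YUV420P;
        av_image_alloc(yuv->data, yuv->linesize,
                       yuv->width, yuv->height, AV_PIX_FMT_YUV420P, 32);

        // Wrap the Mat's BGR pixels without copying them.
        uint8_t *srcData[4] = { opencvin.data, NULL, NULL, NULL };
        int srcLinesize[4]  = { (int)opencvin.step, 0, 0, 0 };

        SwsContext *sws = sws_getContext(opencvin.cols, opencvin.rows,
                                         AV_PIX_FMT_BGR24,
                                         yuv->width, yuv->height,
                                         AV_PIX_FMT_YUV420P,
                                         SWS_BILINEAR, NULL, NULL, NULL);
        sws_scale(sws, srcData, srcLinesize, 0, opencvin.rows,
                  yuv->data, yuv->linesize);

        // yuv now matches c->pix_fmt, so this should fill pkt with the
        // compressed data: avcodec_encode_video2(c, &pkt, yuv, &got_output);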

  • ffmpeg video output is desaturated

    1 April 2015, by Dries

    I'm trying to get the view from my application (which uses OpenGL) written to a file. This is how I grab the OpenGL frame:

    glBindTexture(GL_TEXTURE_2D, renderTexture->getTextureData()->glId);    
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    

    This works fine and has the right colors. (I checked this by rendering the pixels to a single image file).

    Now, since I want this in a video, I'm using ffmpeg for the encoding. This is my command:

    ffmpeg -r 24 -pix_fmt rgba -s 1280x720  -f rawvideo -y -i - -vf vflip -vcodec mpeg1video -q:v 4 -bufsize 500KB -maxrate 5000KB
    

    This also "works" but my video is very desaturated compared with the actual input it gets from OpenGL. How can I solve this? (if possible, can I do this by only changing things in the command?)