Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • How to play a part of the MP4 video stream?

    14 November 2014, by AgentFire

    I have a URL (/ipcam/mpeg4.cgi) which points to my IP camera, connected via Ethernet. Accessing the URL results in an infinite stream of video (possibly with audio) data.

    I would like to store this data into a video file and play it later with a video player (HTML5's video tag is preferred as the player).

    However, the straightforward approach of simply saving the stream data into an .mp4 file didn't work.

    I have looked into the file with a binary editing tool. It turned out there are some multipart HTTP headers interleaved with the data; I manually removed them, and yet no player could play the rest of the file.

    The headers are:

    --myboundary
    Content-Type: image/mpeg4
    Content-Length: 76241
    X-Status: 0
    X-Tag: 1693923
    X-Flags: 0
    X-Alarm: 0
    X-Frametype: I
    X-Framerate: 30
    X-Resolution: 1920*1080
    X-Audio: 1
    X-Time: 2000-02-03 02:46:31
    alarm: 0000
    

    My question should be clear now, and I would welcome any help or suggestions. I suspect I have to create some MP4 headers myself based on the values above; however, I fail to understand format descriptions such as these.

    I have the following video stream settings on my IP camera:

    I could also use the ffmpeg tool, but no matter how I mix the program's arguments, it keeps failing with the same error.
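A hedged sketch of one way to attack this: the "--myboundary" framing above is HTTP multipart-style, so the part headers can be stripped programmatically instead of by hand, using each part's Content-Length to take the payload. This is an illustrative Python parser, not camera-specific code; the boundary string and header names are taken from the question.

```python
import io

def extract_payloads(stream, boundary=b"--myboundary"):
    """Strip multipart part headers and return the raw payloads.

    Assumes every part carries a Content-Length header, as in the
    headers quoted in the question.
    """
    payloads = []
    while True:
        line = stream.readline()
        if not line:
            break  # end of stream
        if line.strip() != boundary:
            continue  # skip anything that is not a part delimiter
        # Read the header block up to the blank line that ends it.
        length = None
        while True:
            header = stream.readline()
            if header in (b"\r\n", b"\n", b""):
                break
            name, _, value = header.partition(b":")
            if name.strip().lower() == b"content-length":
                length = int(value.strip())
        if length is not None:
            payloads.append(stream.read(length))
    return payloads

# Two fake parts in the camera's framing, for demonstration.
raw = (b"--myboundary\r\n"
       b"Content-Type: image/mpeg4\r\n"
       b"Content-Length: 5\r\n"
       b"\r\n"
       b"AAAAA\r\n"
       b"--myboundary\r\n"
       b"Content-Length: 3\r\n"
       b"\r\n"
       b"BBB")
parts = extract_payloads(io.BytesIO(raw))
```

The concatenated payloads still need a container; something like `ffmpeg -f m4v -i raw_payloads.m4v -c copy out.mp4` might then wrap them, though the right input format flag depends on what the camera actually emits.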

  • ffmpeg - add logo in a specific time interval

    14 November 2014, by Kåre Brøgger Sørensen

    Is there a way, in a one-liner, to avoid having to use ffmpeg to compute the duration of the whole video before I render?

    I would like something like:

    ffmpeg -i ny_video_no_logo.mp4 -i ~/Pictures/home_logo_roed_new.png -filter_complex "[0:v][1:v]overlay=main_w-overlay_w-10:main_h-overlay_h-10:enable='between(t,5.50,FULL_LENGTH - 10)'" -codec:a copy out.mp4

    Is there a simple way, like the "main_w" variable, to get the video length inserted? I am using a non-existent variable FULL_LENGTH in the above.

    Or do I have to run ffmpeg -i first and extract the time from its output?
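As far as I know there is no duration variable available inside the overlay enable expression, so the usual workaround is the two-step one: query the duration first (ffprobe, which ships with FFmpeg, avoids parsing `ffmpeg -i` output) and substitute it into the command. A sketch in Python; `logo_filter` is an illustrative helper, not an ffmpeg feature, and it mirrors the command line in the question:

```python
import subprocess

def logo_filter(duration, start=5.5, tail=10.0):
    # Fill in the overlay end time from a known duration.
    end = duration - tail
    return ("[0:v][1:v]overlay=main_w-overlay_w-10:main_h-overlay_h-10:"
            f"enable='between(t,{start},{end})'")

def video_duration(path):
    # Container duration in seconds; assumes ffprobe is on PATH.
    out = subprocess.check_output(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path])
    return float(out)

# Two-step equivalent of the wished-for one-liner:
# filt = logo_filter(video_duration("ny_video_no_logo.mp4"))
# subprocess.run(["ffmpeg", "-i", "ny_video_no_logo.mp4",
#                 "-i", "home_logo_roed_new.png",
#                 "-filter_complex", filt, "-codec:a", "copy", "out.mp4"])
```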

  • How to add metadata into a video frame and how to read it?

    14 November 2014, by Asanga Ranasinghe

    I am using AForge.FFMPEG to create a video from a sequence of images. I want to attach some metadata to each frame while making the video. Is there a possible way to do that? If so, I also want to know how to read it back after creating the video. Please help me with this.

    Code I used to create the video (using AForge.FFMPEG):

    VideoFileWriter writer = new VideoFileWriter();
    writer.Open(String.Format("D:\\test.avi", DateTime.Now), 640, 480, 10, VideoCodec.MPEG4);
    while (Condition)
    {
        // I want a method here to add metadata to the frame
        writer.WriteVideoFrame(current); // adding frames to the video in a loop
    }
    writer.Close();
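As far as I can tell, AForge's VideoFileWriter exposes no per-frame metadata API, so one workaround, sketched here in Python rather than C# and not an AForge feature, is a sidecar file keyed by frame index, written from the same loop that writes the frames and read back next to the video afterwards. The file name and structure are illustrative.

```python
import json

def write_sidecar(path, frame_metadata):
    # frame_metadata maps frame index -> JSON-serializable data.
    with open(path, "w") as f:
        json.dump({str(k): v for k, v in frame_metadata.items()}, f)

def read_sidecar(path):
    # Restore integer frame indices on the way back in.
    with open(path) as f:
        return {int(k): v for k, v in json.load(f).items()}

# In practice this file would sit next to D:\test.avi and grow one
# entry per WriteVideoFrame call.
meta = {0: {"alarm": 0}, 1: {"alarm": 1}}
write_sidecar("test_meta.json", meta)
```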
    
  • FFMPEG: Set output res and bitrate depending upon input video

    13 November 2014, by Jegschemesch

    I'm processing user videos with differing aspect ratios. It seems FFMPEG only lets you specify a fixed output resolution, but I want the output resolution to be appropriate for the input. Similarly, I'd like FFMPEG to set the output bitrate intelligently based on the input video: it obviously shouldn't be any higher than the input's.

    I can get the properties of a video with:

    ffmpeg -i example.flv
    

    But this requires some ugly parsing of the output, so I'm wondering if FFMPEG or some other tool has a more direct facility.

    Basically, I have the YouTube problem: crap comes in, reasonably uniform quality should come out.
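For the direct facility: ffprobe (bundled with FFmpeg) prints single properties without the ugly parsing, e.g. `ffprobe -v error -select_streams v:0 -show_entries stream=width,height,bit_rate -of csv=p=0 example.flv`. Given those numbers, choosing an output size that preserves the input aspect ratio and a bitrate that never exceeds the input can be sketched as follows; the 1280x720 box, the even-pixel rounding, and the 2 Mb/s ceiling are illustrative choices, not FFmpeg defaults:

```python
def fit_resolution(in_w, in_h, max_w=1280, max_h=720):
    # Scale down (never up) to fit the bounding box while preserving
    # aspect ratio; round to even dimensions, which most encoders need.
    scale = min(max_w / in_w, max_h / in_h, 1.0)
    return round(in_w * scale) // 2 * 2, round(in_h * scale) // 2 * 2

def fit_bitrate(in_bitrate, ceiling=2_000_000):
    # Never exceed the input bitrate (inflating crap gains nothing),
    # and cap everything at a service-wide ceiling.
    return min(in_bitrate, ceiling)
```

The resulting pair feeds straight into `-vf scale=W:H` and `-b:v`.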

  • Process AVFrame using opencv mat causing encoding error

    13 November 2014, by user2789801

    I'm trying to decode a video file using ffmpeg, grab the AVFrame object, convert it to an OpenCV Mat, do some processing, then convert it back to an AVFrame and encode it back to a video file.

    Well, the program runs, but it produces a bad result.

    I keep getting errors like "top block unavailable for requested intra mode at 7 19", "error while decoding MB 7 19, bytestream 358", "concealing 294 DC, 294 AC, 294 MV errors in P frame", etc.

    And the resulting video has glitches all over it.

    I'm guessing it's because of my AVFrame-to-Mat and Mat-to-AVFrame methods, so here they are:

    //unspecified function
    temp_rgb_frame = avcodec_alloc_frame(); 
    int numBytes = avpicture_get_size(PIX_FMT_RGB24, width, height); 
    uint8_t * frame2_buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t)); 
    avpicture_fill((AVPicture*)temp_rgb_frame, frame2_buffer, PIX_FMT_RGB24, width, height);
    
    void CoreProcessor::Mat2AVFrame(cv::Mat **input, AVFrame *output)
    {
        //create a AVPicture frame from the opencv Mat input image
        avpicture_fill((AVPicture *)temp_rgb_frame,
            (uint8_t *)(*input)->data,
            AV_PIX_FMT_RGB24,
            (*input)->cols,
            (*input)->rows);
    
        //convert the frame to the color space and pixel format specified in the sws context
    
        sws_scale(
            rgb_to_yuv_context, 
            temp_rgb_frame->data,
            temp_rgb_frame->linesize,
            0, height, 
            ((AVPicture *)output)->data, 
            ((AVPicture *)output)->linesize);
    
        (*input)->release();
    
    }
    
    void CoreProcessor::AVFrame2Mat(AVFrame *pFrame, cv::Mat **mat)
    {
        sws_scale(
            yuv_to_rgb_context, 
            ((AVPicture*)pFrame)->data, 
            ((AVPicture*)pFrame)->linesize, 
            0, height, 
            ((AVPicture *)temp_rgb_frame)->data,
            ((AVPicture *)temp_rgb_frame)->linesize);
    
        // Wraps the shared RGB buffer without copying; this assumes rows are
        // tightly packed (linesize == 3 * width) and that temp_rgb_frame
        // stays valid while the Mat is in use.
        *mat = new cv::Mat(pFrame->height, pFrame->width, CV_8UC3, temp_rgb_frame->data[0]);
    }
    
    void CoreProcessor::process_frame(AVFrame *pFrame)
    {
        cv::Mat *mat = NULL;
        AVFrame2Mat(pFrame, &mat);
        Mat2AVFrame(&mat, pFrame);
    }
    

    Am I doing something wrong with the memory? If I remove the processing part and just decode and then re-encode the frames, the result is correct.