Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • ffmpeg av_interleaved_write_frame(): Broken pipe under Windows

    7 April 2016, by Allen

    I am using ffmpeg to convert an original media file to rawvideo YUV format, output the YUV to a pipe, and then my command-line tool reads the raw YUV from that pipe as input and does some processing.

    e.g.:

    D:\huang_xuezhong\build_win32_VDNAGen>ffmpeg -i test.mkv -c:v rawvideo -s 320x240 -f rawvideo - | my_tool -o output
    

    Every time I run the command, ffmpeg dumps this av_interleaved_write_frame(): Broken pipe error message:

    Output #0, rawvideo, to 'pipe:':
      Metadata:
      encoder         : Lavf56.4.101
      Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 320x240 [SAR 120:91 DAR 160:91], q=2-31, 200 kb/s, 24 fps, 24 tbn, 24 tbc (default)
      Metadata:
          encoder         : Lavc56.1.100 rawvideo
      Stream mapping:
          Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
    Press [q] to stop, [?] for help
    av_interleaved_write_frame(): Broken pipe
    frame=    1 fps=0.0 q=0.0 Lsize=     112kB time=00:00:00.04 bitrate=22118.2kbits/s
    video:112kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000000%
    Conversion failed!
    

    In my source code, the tool takes stdin as the input file. Each iteration it reads up to a frame's worth of data; if the amount read so far is less than a full frame, it keeps reading until a complete frame has been fetched, and then uses the frame content to generate something.

    int do_work (int jpg_width, int jpg_height)
    {
        int ret = 0;

        FILE *yuv_fp = NULL;
        unsigned char *yuv_buf = NULL;
        int frame_size = 0;
        int count = 0;
        int try_cnt = 0;
        int last_pos = 0;   /* bytes of the current frame already read */

        /* one I420 frame is width * height * 3 / 2 bytes */
        frame_size = jpg_width * jpg_height * 3 / 2;
        va_log (vfp_log, "a frame size:%d\n", frame_size);

        yuv_fp = stdin;

        yuv_buf = (unsigned char *) aligned_malloc_int(
                sizeof(char) * (jpg_width + 1) * (jpg_height + 1) * 3, 128);

        if (!yuv_buf) {
            fprintf (stderr, "malloc yuv buf error\n");
            goto end;
        }

        memset (yuv_buf, 0, frame_size);
        while (1) {

            try_cnt++;
            va_log (vfp_log, "try_cnt is %d\n", try_cnt);

            //MAX_TRY_TIMES = 10
            if (try_cnt > MAX_TRY_TIMES) {
                va_log (vfp_log, "try time out\n");
                break;
            }

            /* keep reading until a full frame has been accumulated */
            count = fread (yuv_buf + last_pos, 1, frame_size - last_pos, yuv_fp);
            if (last_pos + count < frame_size) {
                va_log (vfp_log, "already read yuv: %d, this time:%d\n", last_pos + count, count);
                last_pos += count;
                continue;
            }

            // do my work here

            memset (yuv_buf, 0, frame_size);
            last_pos = 0;
            try_cnt = 0;
        }

    end:
        if (yuv_buf) {
            aligned_free_int (yuv_buf);
        }

        return ret;
    }
    

    my log:

    2016/04/05 15:20:38: a frame size:115200
    2016/04/05 15:20:38: try_cnt is 1
    2016/04/05 15:20:38: already read yuv: 49365, this time:49365
    2016/04/05 15:20:38: try_cnt is 2
    2016/04/05 15:20:38: already read yuv: 49365, this time:0
    2016/04/05 15:20:38: try_cnt is 3
    2016/04/05 15:20:38: already read yuv: 49365, this time:0
    2016/04/05 15:20:38: try_cnt is 4
    2016/04/05 15:20:38: already read yuv: 49365, this time:0
    2016/04/05 15:20:38: try_cnt is 5
    2016/04/05 15:20:38: already read yuv: 49365, this time:0
    2016/04/05 15:20:38: try_cnt is 6
    2016/04/05 15:20:38: already read yuv: 49365, this time:0
    2016/04/05 15:20:38: try_cnt is 7
    2016/04/05 15:20:38: already read yuv: 49365, this time:0
    2016/04/05 15:20:38: try_cnt is 8
    2016/04/05 15:20:38: already read yuv: 49365, this time:0
    2016/04/05 15:20:38: try_cnt is 9
    2016/04/05 15:20:38: already read yuv: 49365, this time:0
    2016/04/05 15:20:38: try_cnt is 10
    2016/04/05 15:20:38: already read yuv: 49365, this time:0
    2016/04/05 15:20:38: try_cnt is 11
    2016/04/05 15:20:38: try time out

    my question:

    When piping is used, does ffmpeg write content to the pipe buffer as soon as it has some, or does it buffer a certain amount and then flush it to the pipe? Maybe there is some internal logic I have misunderstood; could anyone explain it or fix my code?

    PS: this command runs OK under Linux.
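
    One possible Windows-specific culprit, noted here as an assumption rather than a confirmed diagnosis: stdin is opened in text mode by default on Windows, so the C runtime may translate CR/LF pairs or treat a 0x1A byte as end-of-file, which would explain a partial read followed by zero-byte reads. A minimal sketch of forcing binary mode before the fread loop, using standard MSVC/MinGW CRT calls (the helper name is illustrative):

        #include <stdio.h>
        #ifdef _WIN32
        #include <io.h>      /* _setmode, _fileno */
        #include <fcntl.h>   /* _O_BINARY */
        #endif

        /* Sketch only: switch stdin to binary mode so raw YUV bytes pass
         * through untouched instead of being treated as text. */
        static void make_stdin_binary (void)
        {
        #ifdef _WIN32
            _setmode (_fileno (stdin), _O_BINARY);
        #endif
        }

    Calling such a helper once before the read loop in do_work would at least rule text-mode translation in or out.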

  • FFMPEG Audio decode and draw waveform

    7 April 2016, by Haris

    I am trying to decode audio and draw its waveform using ffmpeg. The input audio data is AV_SAMPLE_FMT_S16P; I am basically following the tutorial here, and the audio plays fine with libao. Now I need to plot the waveform from the decoded data; currently I am writing the left and right channels to separate CSV files and plotting them in Excel. But the waveform is quite different from the one Audacity shows for the same audio clip. When I analyzed the values written to the CSV, most of them are close to the maximum of uint16_t (65535); there are some lower values, but the majority are high peaks.

    Here is the source code,

        const char* input_filename="/home/user/Music/Clip.mp3";
        av_register_all();
        AVFormatContext* container=avformat_alloc_context();
        if(avformat_open_input(&container,input_filename,NULL,NULL)<0){
            endApp("Could not open file");
        }
    
        if(avformat_find_stream_info(container, NULL)<0){
            endApp("Could not find file info");
        }
    
        av_dump_format(container,0,input_filename,false);
    
        int stream_id=-1;
        int i;
        for(i=0;i<container->nb_streams;i++){
            if(container->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO){
                stream_id=i;
                break;
            }
        }
        if(stream_id==-1){
            endApp("Could not find Audio Stream");
        }
    
        AVDictionary *metadata=container->metadata;
    
        AVCodecContext *ctx=container->streams[stream_id]->codec;
        AVCodec *codec=avcodec_find_decoder(ctx->codec_id);
    
        if(codec==NULL){
            endApp("cannot find codec!");
        }
    
        if(avcodec_open2(ctx,codec,NULL)<0){
            endApp("Codec cannot be found");
        }
    
    
    
        AVPacket packet;
        av_init_packet(&packet);
    
        //AVFrame *frame=avcodec_alloc_frame();
        AVFrame *frame=av_frame_alloc();
    
        int buffer_size=AVCODEC_MAX_AUDIO_FRAME_SIZE+ FF_INPUT_BUFFER_PADDING_SIZE;
    
        // MSVC can't do variable size allocations on stack, ohgodwhy
        uint8_t *buffer = new uint8_t[buffer_size];
        packet.data=buffer;
        packet.size =buffer_size;
    
        int frameFinished=0;
    
        int plane_size;
    
        ofstream fileCh1,fileCh2;
        fileCh1.open ("ch1.csv");
        fileCh2.open ("ch2.csv");
    
        AVSampleFormat sfmt=ctx->sample_fmt;
    
        while(av_read_frame(container,&packet)>=0)
        {
    
            if(packet.stream_index==stream_id){
                int len=avcodec_decode_audio4(ctx,frame,&frameFinished,&packet);
                int data_size = av_samples_get_buffer_size(&plane_size, ctx->channels,
                                                    frame->nb_samples,
                                                    ctx->sample_fmt, 1);
    
    
                if(frameFinished){
                    int write_p=0;
                    // QTime t;
                    switch (sfmt){
    
                        case AV_SAMPLE_FMT_S16P:
    
                            for (int nb=0;nb<plane_size/sizeof(uint16_t);nb++){
                                for (int ch = 0; ch < ctx->channels; ch++) {
                                    if(ch==0)
                                        fileCh1 <<((uint16_t *) frame->extended_data[ch])[nb]<<"\n";
                                    else if(ch==1)
                                        fileCh2 <<((uint16_t *) frame->extended_data[ch])[nb]<<"\n";
                                }
                            }
    
                            break;
    
                    }
                } else {
                    DBG("frame failed");
                }
            }
    
    
            av_free_packet(&packet);
        }
        fileCh1.close();
        fileCh2.close();
        avcodec_close(ctx);
        avformat_close_input(&container);
        delete[] buffer;
        return 0;
    

    Edit:

    I have attached the waveform image drawn using OpenCV; here I scaled the sample values to the 0-255 range and took the value 127 as 0 (the Y axis). Then for each sample I draw a line from (x, 127) to (x, sample value), where x = 1, 2, 3, ...

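    One detail worth flagging, as an assumption about the mismatch rather than a confirmed fix: AV_SAMPLE_FMT_S16P is signed 16-bit planar, so casting the planes to uint16_t maps every negative sample near the top of the 0-65535 range, which matches the CSV values clustering around 65535. A minimal sketch of reading a plane as int16_t and mapping each sample to the 0-255 drawing range described above (the helper name is illustrative):

        #include <stdint.h>

        /* Sketch: treat one S16P plane (e.g. frame->extended_data[ch]) as signed
         * samples and map each one to 0-255 for drawing, midpoint = silence. */
        static int sample_to_pixel (const uint8_t *plane, int index)
        {
            const int16_t *samples = (const int16_t *) plane;
            int s = samples[index];            /* -32768 .. 32767 */
            return 128 + (s * 128) / 32768;    /* 0 .. 255 */
        }

    Whether the values are then drawn with OpenCV or written to CSV, the signed interpretation should bring the plot much closer to what Audacity shows.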

  • "Undefined reference to `avcodec_alloc_frame" Error when compile and install Opencv on linux

    7 April 2016, by Kathy Lee

    I am compiling and installing OpenCV using the steps in http://docs.opencv.org/2.4/doc/tutorials/introduction/linux_install/linux_install.html

    But during the make step it hits an error when it reaches 45%. The error message is:

    ...
    [ 43%] Built target pch_Generate_opencv_video
    [ 44%] Built target opencv_video
    [ 44%] Built target opencv_perf_video_pch_dephelp
    [ 45%] Built target pch_Generate_opencv_perf_video  
    Linking CXX executable ../../bin/opencv_perf_video
    ../../lib/libopencv_videoio.so.3.1.0: undefined reference to `avcodec_alloc_frame'
    ../../lib/libopencv_videoio.so.3.1.0: undefined reference to `avcodec_encode_video'
    collect2: error: ld returned 1 exit status
    make[2]: *** [bin/opencv_perf_video] Error 1
    make[1]: *** [modules/video/CMakeFiles/opencv_perf_video.dir/all] Error 2
    make: *** [all] Error 2
    

    I downloaded and installed the latest version of ffmpeg from https://www.ffmpeg.org/.

    Does anyone know how I can fix these errors?

    Thank you.
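
    For context, offered as an assumption about the cause rather than a verified diagnosis: avcodec_alloc_frame() and avcodec_encode_video() were removed from recent FFmpeg releases, while OpenCV 3.1's videoio still references the old names, which would produce exactly these unresolved symbols. Building OpenCV against an older FFmpeg that still ships those symbols is the usual workaround; compat shims of the following shape are also common (the 55.28.1 version threshold is an assumption, not taken from the question):

        #include <libavcodec/avcodec.h>

        /* Sketch: pick whichever frame allocator the installed libavcodec provides.
         * Old API: avcodec_alloc_frame();  new API: av_frame_alloc(). */
        static AVFrame *alloc_frame_compat (void)
        {
        #if LIBAVCODEC_VERSION_INT < AV_VERSION_INT(55, 28, 1)
            return avcodec_alloc_frame ();
        #else
            return av_frame_alloc ();
        #endif
        }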

  • How to use ffmpegwrapper for Python

    6 April 2016, by Roast-Beef

    So I have this simple code using ffmpegwrapper, and it doesn't seem to do anything... so what's the problem? Thanks.

    from ffmpegwrapper import FFmpeg, Input, Output, VideoCodec, VideoFilter
    
    input_video = Input('bob.mp4')
    output_video = Output('bob2.webm')
    FFmpeg('ffmpeg', input_video, output_video)
    

    (this is the full code btw)

    I'll take other alternatives... just not the subprocess approach.

  • Video delay when using filters in ffmpeg

    6 April 2016, by Marcin

    I am trying to make a video of a sporting event using a Raspberry Pi. The drawtext filter seems like a good option for writing the score on the video.

    I have a problem with video synchronization/delay: I can see the score change a few seconds before the moment the point is actually scored.

    For example, 30 seconds after starting to record I change the score and wave my hand at the camera. I can see the new text value immediately, but the hand wave only appears at least 10 seconds later.

    ffmpeg -threads 2 -f v4l2 -s 1280x720 -input_format h264 -i /dev/video0 \
      -filter_complex "[0:v]
            drawtext=reload=1:box=0:borderw=1:fontsize=36:fontcolor=White:fontfile=font/FreeSans.ttf:x=w/2-text_w/2-70:y=15:textfile=data/0.txt,
            drawtext=reload=1:box=0:borderw=1:fontsize=36:fontcolor=White:fontfile=font/FreeSans.ttf:x=w/2ïtext_w/2:y=15:textfile=data/1.txt" \
        -copyinkf -codec copy \
        -deinterlace -vcodec libx264 -crf 30 -pix_fmt yuv420p -preset ultrafast -qp 0 -r 30 -q 30 -minrate 800k -maxrate 800k \
        -tune zerolatency \
        -acodec aac -ab 128k -g 50 -strict experimental -f flv r.flv -y
    

    Another strange thing: I record video for 60 seconds and quit the process by pressing the "q" key, yet the resulting video is only 42 seconds long. Why?

    See screenshot: speed of recording video?