Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • FFMpeg encoded video will only play in FFPlay

    8 November 2013, by mohM

    I've been debugging my program for a couple of weeks now with the output video only showing a blank screen (was testing with VLC, WMP and WMPClassic). I happened to try using FFPlay and lo and behold the video works perfectly. I've read that this is usually caused by an incorrect pixel format, and that switching to PIX_FMT_YUV420P will make it work universally...but I'm already using that pixel format in the encoding process. Is there anything else that is causing this?

    AVCodec* codec;
    AVCodecContext* c = NULL;
    uint8_t* outbuf;
    int i, out_size, outbuf_size;
    
    avcodec_register_all();
    
    printf("Video encoding\n");
    
    // Find the mpeg1 video encoder
    codec = avcodec_find_encoder(CODEC_ID_H264);
    if (!codec) {
        fprintf(stderr, "Codec not found\n");
        exit(1);
    }
    else printf("H264 codec found\n");
    
    c = avcodec_alloc_context3(codec);
    
    c->bit_rate = 400000;
    c->width = 1920;                                        // resolution must be a multiple of two (1280x720), (1920x1080), (720x480)
    c->height = 1200;
    c->time_base.num = 1;                                   // framerate numerator
    c->time_base.den = 25;                                  // framerate denominator
    c->gop_size = 10;                                       // emit one intra frame every ten frames
    c->max_b_frames = 1;                                    // maximum number of b-frames between non b-frames
    //c->keyint_min = 1;                                        // minimum GOP size
    //c->i_quant_factor = (float)0.71;                      // qscale factor between P and I frames
    //c->b_frame_strategy = 20;
    //c->qcompress = (float)0.6;
    //c->qmin = 20;                                         // minimum quantizer
    //c->qmax = 51;                                         // maximum quantizer
    //c->max_qdiff = 4;                                     // maximum quantizer difference between frames
    //c->refs = 4;                                          // number of reference frames
    //c->trellis = 1;                                           // trellis RD Quantization
    c->pix_fmt = PIX_FMT_YUV420P;
    c->codec_id = CODEC_ID_H264;
    //c->codec_type = AVMEDIA_TYPE_VIDEO;
    
    // Open the encoder
    if (avcodec_open2(c, codec,NULL) < 0) {
        fprintf(stderr, "Could not open codec\n");
        exit(1);
    }
    else printf("H264 codec opened\n");
    
    outbuf_size = 100000 + c->width*c->height*(32>>3);//*(32>>3);           // alloc image and output buffer
    outbuf = static_cast<uint8_t*>(malloc(outbuf_size));
    printf("Setting buffer size to: %d\n",outbuf_size);
    
    FILE* f = fopen("example.mpg","wb");
    if(!f) printf("x  -  Cannot open video file for writing\n");
    else printf("Opened video file for writing\n");
    
    // encode 5 seconds of video
    for(i=0;i<25*5;i++) {                                   // loop bound inferred: 5 seconds at 25 fps
        fflush(stdout);
        // ... (the code that fills pPixels with the captured RGB32 frame is missing here)
        int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
        uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes*sizeof(uint8_t));
    
        AVFrame* inpic = avcodec_alloc_frame();
        AVFrame* outpic = avcodec_alloc_frame();
    
        outpic->pts = (int64_t)((float)i * (1000.0/((float)(c->time_base.den))) * 90);
        avpicture_fill((AVPicture*)inpic, (uint8_t*)pPixels, PIX_FMT_RGB32, c->width, c->height);                   // Fill picture with image
        avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);
        av_image_alloc(outpic->data, outpic->linesize, c->width, c->height, c->pix_fmt, 1); 
    
        inpic->data[0] += inpic->linesize[0]*(screenHeight-1);                                                      // Flipping frame
        inpic->linesize[0] = -inpic->linesize[0];                                                                   // Flipping frame
    
        struct SwsContext* fooContext = sws_getContext(screenWidth, screenHeight, PIX_FMT_RGB32, c->width, c->height, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);
        sws_scale(fooContext, inpic->data, inpic->linesize, 0, c->height, outpic->data, outpic->linesize);
    
        // encode the image
        out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
        printf("Encoding frame %3d (size=%5d)\n", i, out_size);
        fwrite(outbuf, 1, out_size, f);
        delete [] pPixels;
        av_free(outbuffer);     
        av_free(inpic);
        av_free(outpic);
    }
    
    // get the delayed frames
    for(; out_size; i++) {
        fflush(stdout);
    
        out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
        printf("Writing frame %3d (size=%5d)\n", i, out_size);
        fwrite(outbuf, 1, out_size, f);
    }
    
    // add sequence end code to have a real mpeg file
    outbuf[0] = 0x00;
    outbuf[1] = 0x00;
    outbuf[2] = 0x01;
    outbuf[3] = 0xb7;
    fwrite(outbuf, 1, 4, f);
    fclose(f);
    
    avcodec_close(c);
    free(outbuf);
    av_free(c);
    printf("Closed codec and Freed\n");
    
  • ffmpeg - how do I remove a sweep line seen

    8 November 2013, by eduard

    I am streaming video over the network (RTP), using x264 on the sending side and ffmpeg's h264 decoder on the receiving side. Everything is fine as long as there is no packet loss. When packets are lost, the picture is only repaired when a key frame arrives; instead of an immediate refresh it takes about 1-1.5 seconds and looks like a sweep line that 'cleans' the errors.

    Is there a way to make key frames fix errors immediately?
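
    For context, the sweep-line behaviour described above looks a lot like x264's Periodic Intra Refresh, where a column of intra-coded blocks rolls across the picture instead of a full IDR key frame being sent. Below is a hedged sender-side sketch (assuming the stream is configured through x264_param_t; the preset/tune strings and the 30-frame keyframe interval are illustrative, not taken from the question):

    #include <x264.h>

    /* Sketch only: set up x264 so that losses are repaired by full IDR key
       frames rather than by the rolling intra-refresh column. */
    static x264_t* open_encoder(int width, int height)
    {
        x264_param_t param;
        x264_param_default_preset(&param, "veryfast", "zerolatency");

        param.i_width  = width;
        param.i_height = height;
        param.b_intra_refresh  = 0;    /* 1 = rolling "sweep line" refresh, 0 = ordinary IDR frames */
        param.i_keyint_max     = 30;   /* emit an IDR at least every 30 frames so errors heal quickly */
        param.b_repeat_headers = 1;    /* resend SPS/PPS with each keyframe, helpful over lossy RTP */

        return x264_encoder_open(&param);
    }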

  • Make thumbs from videos with ffmpeg and php [on hold]

    8 November 2013, by Milan Milosevic

    I'm trying to extract small thumbnails from a video every 15 seconds.

    Here is what I'm trying now:

    ffmpeg -i movie.mp4 -r 1/15 -s 120x90 %03d.jpg
    

    But I get an error from the command line:

    [mjpeg @ 0x9e695c0] bitrate tolerance too small for bitrate
    [mjpeg @ 0x9da9a60] ff_frame_thread_encoder_init failed
    Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
    

    What's wrong here, and how do I get a thumbnail every 15 seconds and save them as 0.jpg, 1.jpg, 2.jpg, 3.jpg, 4.jpg, 5.jpg, etc.?

  • FFmpeg 1.0 causing audio playback issues

    7 November 2013, by Jona

    I have an audio streamer based on ffplay. It works great using ffmpeg 0.11 but when I use ffmpeg 1.0 or the latest 1.2 release the audio seems to be decoded or played weirdly.

    Essentially, MP3 streams sound like chipmunks, and with AAC streams I hear tons of static and can barely make out the actual audio, which also sounds slowed down.

    Any ideas what changes in ffmpeg could have caused these types of issues?

    A similar issue was posted here, but there was no actual answer about what is going on. Supposedly this code reproduces the same issue.

    UPDATE 1:
    I have done a step-by-step copy from ffplay and still no luck! :/ The channel count and sampling rate look correct, so something internal must be returning the decoded data in a format I don't expect?
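
    One change between ffmpeg 0.11 and the 1.x releases that commonly produces exactly this kind of chipmunk/static symptom is that the native MP3 and AAC decoders started returning planar sample formats (AV_SAMPLE_FMT_S16P / AV_SAMPLE_FMT_FLTP), so playback code that still assumes interleaved S16 misreads the buffers. A hedged sketch of normalising whatever the decoder returns to interleaved S16 with libswresample (the helper name and its use are illustrative, not taken from the asker's player):

    #include <libavcodec/avcodec.h>
    #include <libswresample/swresample.h>
    #include <libavutil/samplefmt.h>

    /* Sketch only: convert one decoded AVFrame (possibly planar) into an
       interleaved signed 16-bit buffer. A real player would keep one
       SwrContext around instead of rebuilding it per frame, and should fall
       back to a default layout if dec->channel_layout is 0. */
    static int frame_to_s16(AVCodecContext* dec, AVFrame* frame,
                            uint8_t** out, int* out_linesize)
    {
        SwrContext* swr = swr_alloc_set_opts(NULL,
                dec->channel_layout, AV_SAMPLE_FMT_S16, dec->sample_rate,   /* desired output */
                dec->channel_layout, dec->sample_fmt,   dec->sample_rate,   /* whatever the decoder produced */
                0, NULL);
        if (!swr || swr_init(swr) < 0)
            return -1;

        uint8_t* buf[1] = { NULL };
        av_samples_alloc(buf, out_linesize, dec->channels,
                         frame->nb_samples, AV_SAMPLE_FMT_S16, 0);

        int n = swr_convert(swr, buf, frame->nb_samples,
                            (const uint8_t**)frame->extended_data, frame->nb_samples);

        *out = buf[0];
        swr_free(&swr);
        return n;                                                           /* samples per channel written to *out */
    }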

  • Lossless video codec squashing dim pixels using avconv

    7 November 2013, by Noah

    I am using avconv to convert a raw grayscale AVI video to huffyuv in an MKV container. I've read that huffyuv is "mathematically lossless", which is precisely what I want. avprobe on the input file gives

    Input #0, avi, from 'myvid.avi':
      Duration: 00:00:32.94, start: 0.000000, bitrate: 129167 kb/s
        Stream #0.0: Video: rawvideo, pal8, 328x246, 200 fps, 0.08 tbr, 200 tbn, 200 tbc
    

    The movie has high-intensity (approx. 150-250 in 8 bits) and low-intensity (1-9) elements that I would like to preserve. However, if I run

    avconv -y -an -i myvid.avi -r 200 -c:v huffyuv av_test.mkv
    

    I get an av_test.mkv where the low-intensity details have vanished. In fact I was able to plot pixel intensities for the two videos: [plot: squashing of dim pixels]

    So avconv is deciding I don't need those critical dim pixels. I could just add, say, 15 to all pixel values, but then I would saturate my bright pixels and there's no guarantee the cutoff value is the same for all videos. I do some downstream processing on the output where I really need pixel values to not change when I convert video formats. Any insights as to how to get avconv or huffyuv to actually save my video without loss?