Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • ffmpeg : access h264.h

    16 February 2012, by Drew C

    I installed ffmpeg because it is required by a C++ project I am working on that deals with h264 decoding. I can see that the h264.h file is in the ffmpeg project files, but the line #include <libavcodec/h264.h> in my project results in "error: libavcodec/h264.h: No such file or directory" at compile time. How can I ensure the h264 library is installed and visible to my project?

  • ffmpeg : generate very small size video out of pictures

    15 February 2012, by jfas

    Using JPEG pictures, I'd like to generate a video with the smallest possible file size. The ffmpeg documentation says one can use a frame rate of 1 fps. Is it possible to go below that, to get an even smaller video file (e.g. one new picture displayed every 2-3 seconds)?

  • What parameters are required to use x264 via ffmpeg ?

    15 February 2012, by nightWatcher

    I have an AVI file which I have decoded into raw form, and now I want to encode it in .h264 format. I am using libavcodec.dll and libavformat.dll. The problem is that when I try to open the codec via avcodec_open(AVCodecContext, AVCodec), it does not open. Am I missing some parameters that I need to specify for libx264? Any help will be deeply appreciated. Thanks

  • Modifying motion vectors in ffmpeg H.264 decoder

    14 February 2012, by qontranami

    For research purposes, I am trying to modify H.264 motion vectors (MVs) for each P- and B-frame prior to motion compensation during the decoding process. I am using FFmpeg for this purpose. An example of a modification is replacing each MV with its original spatial neighbors and then using the resultant MVs for motion compensation, rather than the original ones. Please direct me appropriately.

    So far, I have been able to do a simple modification of MVs in the file libavcodec/h264_cavlc.c. In the function ff_h264_decode_mb_cavlc(), modifying the mx and my variables, for instance by increasing their values, modifies the MVs used during decoding.

    For example, as shown below, the mx and my values are increased by 50, thus lengthening the MVs used in the decoder.

    mx += get_se_golomb(&s->gb)+50;
    my += get_se_golomb(&s->gb)+50;
    

    However, I don't know how to access the neighbors of mx and my for the spatial-mean analysis I mentioned in the first paragraph. I believe the key to doing so lies in manipulating the mv_cache array.

    Another experiment that I performed was in the file libavcodec/error_resilience.c. Based on the guess_mv() function, I created a new function, mean_mv(), that is executed in ff_er_frame_end() within the first if-statement. That if-statement returns from ff_er_frame_end() early when, among other conditions, the error count is zero (s->error_count == 0), so I inserted my mean_mv() call at this point so that it is always executed when the error count is zero. This experiment somewhat yielded the results I wanted: I could start seeing artifacts in the top portions of the video, but they were restricted to the upper-right corner. I'm guessing that my inserted function is not running to completion, perhaps in order to meet playback deadlines.

    Below is the modified if-statement. The only addition is my function, mean_mv(s).

    if(!s->error_recognition || s->error_count==0 || s->avctx->lowres ||
           s->avctx->hwaccel ||
           s->avctx->codec->capabilities&CODEC_CAP_HWACCEL_VDPAU ||
           s->picture_structure != PICT_FRAME || // we dont support ER of field pictures yet, though it should not crash if enabled
           s->error_count==3*s->mb_width*(s->avctx->skip_top + s->avctx->skip_bottom)) {
            //av_log(s->avctx, AV_LOG_DEBUG, "ff_er_frame_end in er.c\n"); //KG
            if(s->pict_type==AV_PICTURE_TYPE_P)
                mean_mv(s);
            return;
    }
    

    And here's the mean_mv() function I created based on guess_mv().

    static void mean_mv(MpegEncContext *s){
        //uint8_t fixed[s->mb_stride * s->mb_height];
        //const int mb_stride = s->mb_stride;
        const int mb_width = s->mb_width;
        const int mb_height= s->mb_height;
        int mb_x, mb_y, mot_step, mot_stride;
    
        //av_log(s->avctx, AV_LOG_DEBUG, "mean_mv\n"); //KG
    
        set_mv_strides(s, &mot_step, &mot_stride);
    
        for(mb_y=0; mb_y<mb_height; mb_y++){
            for(mb_x=0; mb_x<mb_width; mb_x++){
                const int mb_xy= mb_x + mb_y*s->mb_stride;
                const int mot_index= (mb_x + mb_y*mot_stride) * mot_step;
                int mv_predictor[4][2]={{0}};
                int ref[4]={0};
                int pred_count=0;
                int m, n;
    
                if(IS_INTRA(s->current_picture.f.mb_type[mb_xy])) continue;
                //if(!(s->error_status_table[mb_xy]&MV_ERROR)){
                //if (1){
                if(mb_x>0){
                    mv_predictor[pred_count][0]= s->current_picture.f.motion_val[0][mot_index - mot_step][0];
                    mv_predictor[pred_count][1]= s->current_picture.f.motion_val[0][mot_index - mot_step][1];
                    ref         [pred_count]   = s->current_picture.f.ref_index[0][4*(mb_xy-1)];
                    pred_count++;
                }
    
                if(mb_x+1<mb_width){
                    mv_predictor[pred_count][0]= s->current_picture.f.motion_val[0][mot_index + mot_step][0];
                    mv_predictor[pred_count][1]= s->current_picture.f.motion_val[0][mot_index + mot_step][1];
                    ref         [pred_count]   = s->current_picture.f.ref_index[0][4*(mb_xy+1)];
                    pred_count++;
                }
    
                if(mb_y>0){
                    mv_predictor[pred_count][0]= s->current_picture.f.motion_val[0][mot_index - mot_stride*mot_step][0];
                    mv_predictor[pred_count][1]= s->current_picture.f.motion_val[0][mot_index - mot_stride*mot_step][1];
                    ref         [pred_count]   = s->current_picture.f.ref_index[0][4*(mb_xy-s->mb_stride)];
                    pred_count++;
                }
    
                if(mb_y+1<mb_height){
                    mv_predictor[pred_count][0]= s->current_picture.f.motion_val[0][mot_index + mot_stride*mot_step][0];
                    mv_predictor[pred_count][1]= s->current_picture.f.motion_val[0][mot_index + mot_stride*mot_step][1];
                    ref         [pred_count]   = s->current_picture.f.ref_index[0][4*(mb_xy+s->mb_stride)];
                    pred_count++;
                }
    
                if(pred_count==0) continue;
    
                if(pred_count>=1){
                    int sum_x=0, sum_y=0, sum_r=0;
                    int k;
    
                    for(k=0; k<pred_count; k++){
                        sum_x+= mv_predictor[k][0]; // Sum all the MVx from MVs avail. for EC
                        sum_y+= mv_predictor[k][1]; // Sum all the MVy from MVs avail. for EC
                        sum_r+= ref[k];
                        // if(k && ref[k] != ref[k-1])
                        // goto skip_mean_and_median;
                    }
    
                    mv_predictor[pred_count][0] = sum_x/k;
                    mv_predictor[pred_count][1] = sum_y/k;
                    ref         [pred_count]    = sum_r/k;
                }
    
                s->mv[0][0][0] = mv_predictor[pred_count][0];
                s->mv[0][0][1] = mv_predictor[pred_count][1];
    
                for(m=0; m<mot_step; m++){
                    for(n=0; n<mot_step; n++){
                        s->current_picture.f.motion_val[0][mot_index + m + n * mot_stride][0] = s->mv[0][0][0];
                        s->current_picture.f.motion_val[0][mot_index + m + n * mot_stride][1] = s->mv[0][0][1];
                    }
                }
    
                decode_mb(s, ref[pred_count]);
    
                //}
            }
        }
    }
    

    I would really appreciate some assistance on how to go about this properly.

  • Is it possible to play an output video file from an encoder as it's being encoded ?

    14 February 2012, by lvreiny

    I have a video file, and I need to encode it as H264/AVC and feed it to a client via HTTP. What I need is for a player on the client side to be able to play back the video while it is being encoded.

    AFAIK, to enable a player to play the video as it is downloading, the "moov atom" has to be placed at the beginning of the video file. However, encoders (e.g. ffmpeg) always write the "moov atom" at the end of the file, after encoding completes.

    Is there any way the encoder can put the "moov atom" at the beginning of its output? Or can the video be played without the moov atom present?

    Thanks in advance

    LR