Newest 'libx264' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/libx264

Articles published on the site

  • ffmpeg libx264 neon optimization breaks execution

    27 January 2014, by nmxprime

    Hi, I am using the libx264 source obtained from x264-snapshot-20140122-2245 and compiling it with the script below:

    NDK=~/Android/android-ndk-r7c
    PLATFORM=$NDK/platforms/android-9/arch-arm/
    PREBUILT=$NDK/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86
    function build_one
    {
    ./configure --prefix=$PREFIX \
    --sysroot=$PLATFORM \
    --disable-avs \
    --disable-lavf \
    --disable-ffms \
    --disable-gpac \
    --cross-prefix=$PREBUILT/bin/arm-linux-androideabi- \
    --host=arm-linux \
    --enable-static \
    --libdir=$PLATFORM/usr/lib \
    --includedir=$PLATFORM/usr/include \
    --extra-cflags="-march=armv7-a -mfloat-abi=softfp -mfpu=neon -mvectorize-with-neon-quad" \
    --extra-ldflags="-Wl,--fix-cortex-a8" \
    --enable-debug
    

    The config log is:

    platform:      ARM
    system:        LINUX
    cli:           yes
    libx264:       internal
    shared:        no
    static:        yes
    asm:           yes
    interlaced:    yes
    avs:           no
    lavf:          no
    ffms:          no
    mp4:           no
    gpl:           yes
    thread:        no
    opencl:        yes
    filters:       crop select_every
    debug:         yes
    gprof:         no
    strip:         no
    PIC:           no
    bit depth:     8
    chroma format: all
    
    You can run 'make' or 'make fprofiled' now.
    

    I expect the above configuration to compile libx264 with NEON optimizations enabled.

    Doubts:

    Why is thread reported as no, even though I did not specify --disable-thread?

    What is cli, and what is its significance here? Also, what is the significance of opencl; does libx264 actually use OpenCL features?


    Then I built ffmpeg 1.2.5 with the following script:

    ./configure --target-os=linux \
    --prefix=$PREFIX \
    --enable-cross-compile \
    --extra-libs="-lgcc" \
    --arch=arm \
    --cc=$PREBUILT/bin/arm-linux-androideabi-gcc \
    --cross-prefix=$PREBUILT/bin/arm-linux-androideabi- \
    --nm=$PREBUILT/bin/arm-linux-androideabi-nm \
    --sysroot=$PLATFORM \
    --extra-cflags=" -O3 -fpic -DANDROID -DHAVE_SYS_UIO_H=1 -Dipv6mr_interface=ipv6mr_ifindex -fasm -Wno-psabi -fno-short-enums -fno-strict-aliasing -finline-limit=300 $OPTIMIZE_CFLAGS " \
    --disable-shared \
    --enable-static \
    --extra-ldflags="-Wl,-rpath-link=$PLATFORM/usr/lib -L$PLATFORM/usr/lib -nostdlib -lc -lm -ldl -llog -lx264 $EXTRA_LD_FLAG" \
    --disable-ffplay \
    --disable-everything \
    --enable-avformat \
    --enable-avcodec \
    --enable-libx264 \
    --enable-gpl \
    --enable-encoder=libx264 \
    --enable-encoder=libx264rgb \
    --enable-decoder=h264 \
    --disable-network \
    --disable-avfilter \
    --disable-avdevice \
    --enable-debug=3 \
    $ADDITIONAL_CONFIGURE_FLAG
    

    where

    ADDITIONAL_CONFIGURE_FLAG = --enable-debug=3
    OPTIMIZE_CFLAGS="-mfloat-abi=softfp -mfpu=neon -marm -march=$CPU -mvectorize-with-neon-quad"
    

    The configure log shows NEON as supported.

    When I run the following code (called in a while loop):

    ret = avcodec_encode_video2(c, &pkt, picture, &got_output);//avcodec_encode_video(c, finalout, outbuf_size, picture);
    
    fprintf(stderr,"ret = %d, got-out = %d \n",ret,got_output);
    if (ret < 0) 
            fprintf(stderr, "error encoding frame\n");
    
        if (got_output) 
            fprintf(stderr,"encoding frame %3d (size=%5d): (ret=%d)\n", 1, pkt.size,ret);
    

    it runs for two or three iterations (during which if (got_output) is never true), and then I get SIGSEGV. I tried addr2line and ndk-stack, but to no avail (even though I enabled debug info, ndk-stack cannot find the routine information).

    I instrumented libx264's encoder.c with some fprintf statements. Here is a snippet of the code (it is from x264_encoder_encode(); the unmatched closing brace below closes an enclosing if( fenc ) block that is omitted here):

       if( h->frames.i_input <= h->frames.i_delay + 1 - h->i_thread_frames )
        {
            /* Nothing yet to encode, waiting for filling of buffers */
            pic_out->i_type = X264_TYPE_AUTO;
    fprintf(stderr,"EditLog:: Returns as waiting for filling \n"); //edit
            return 0;
        }
    }
    
    else
    {
        /* signal kills for lookahead thread */
        x264_pthread_mutex_lock( &h->lookahead->ifbuf.mutex );
        h->lookahead->b_exit_thread = 1;
        x264_pthread_cond_broadcast( &h->lookahead->ifbuf.cv_fill );
        x264_pthread_mutex_unlock( &h->lookahead->ifbuf.mutex );
    }
    fprintf(stderr,"After wait for fill \n");
    fprintf(stderr,"h: %p \n",h); //edit
    fprintf(stderr,"h->i_frame = %p \n",&h->i_frame); //edit
    h->i_frame++;
    fprintf(stderr,"after i_frame++");
    

    In the log I never see after i_frame++, so this is (maybe) where the SIGSEGV occurs.

    Please help me solve this. The same code works without NEON optimization!
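
    To isolate whether libx264's hand-written NEON assembly is the culprit, one option is to bypass ffmpeg and open libx264 directly with runtime CPU detection overridden. A minimal sketch, using nothing beyond the public x264.h API (open_encoder_no_asm is a hypothetical helper, and param.cpu = 0 disables all asm paths, not only NEON):

    #include <x264.h>

    x264_t *open_encoder_no_asm(int width, int height)
    {
        x264_param_t param;
        x264_param_default_preset(&param, "veryfast", NULL);
        param.i_width  = width;
        param.i_height = height;
        param.cpu      = 0;   /* 0 = use only the C code paths at runtime */
        return x264_encoder_open(&param);
    }

    If the crash disappears with param.cpu = 0 but returns otherwise, the asm paths (or the -mvectorize-with-neon-quad autovectorization flag) are the place to look.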

  • Android h264 decode non-existing PPS 0 referenced

    22 January 2014, by nmxprime

    In Android JNI, I am using ffmpeg with libx264 and the code below to encode and decode raw RGB data. I should use swscale to convert RGB565 to YUV420P, as required by H.264, but I am not clear about this conversion (a conversion sketch follows the encodeframe() listing below). Please help me find where I am going wrong, with regard to the log I get.

    Code for Encoding

    codecinit(): called once (JNI wrapper function)

    int Java_com_my_package_codecinit (JNIEnv *env, jobject thiz) {
    avcodec_register_all();
    codec = avcodec_find_encoder(AV_CODEC_ID_H264);//AV_CODEC_ID_MPEG1VIDEO);
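    // note: codec is dereferenced on the next line, before the NULL check below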
    if(codec->id == AV_CODEC_ID_H264)
        __android_log_write(ANDROID_LOG_ERROR, "set","h264_encoder");
    
    if (!codec) {
        fprintf(stderr, "codec not found\n");
        __android_log_write(ANDROID_LOG_ERROR, "codec", "not found");
    
    }
        __android_log_write(ANDROID_LOG_ERROR, "codec", "alloc-contest3");
    c= avcodec_alloc_context3(codec);
    if(c == NULL)
        __android_log_write(ANDROID_LOG_ERROR, "avcodec","context-null");
    
    picture= av_frame_alloc();
    
    if(picture == NULL)
        __android_log_write(ANDROID_LOG_ERROR, "picture","context-null");
    
    c->bit_rate = 400000;
    c->height = 800;
    c->time_base= (AVRational){1,25};
    c->gop_size = 10; 
    c->max_b_frames=1;
    c->pix_fmt = AV_PIX_FMT_YUV420P;
    outbuf_size = 768000;
    c->width = 480;
    
    size = (c->width * c->height);
    
    if (avcodec_open2(c, codec,NULL) < 0) {
    
    __android_log_write(ANDROID_LOG_ERROR, "codec", "could not open");
    
    
    }
    
    ret = av_image_alloc(picture->data, picture->linesize, c->width, c->height,
                         c->pix_fmt, 32);
    if (ret < 0) {
            __android_log_write(ANDROID_LOG_ERROR, "image","alloc-failed");
        fprintf(stderr, "could not alloc raw picture buffer\n");
    
    }
    
    picture->format = c->pix_fmt;
    picture->width  = c->width;
    picture->height = c->height;
    return 0;
    
    }
    

    encodeframe(): called in a while loop

    int Java_com_my_package_encodeframe (JNIEnv *env, jobject thiz,jbyteArray buffer) {
    jbyte *temp= (*env)->GetByteArrayElements(env, buffer, 0);
    Output = (char *)temp;
    const uint8_t * const inData[1] = { Output }; 
    const int inLinesize[1] = { 2*c->width };
    
    //swscale should implement here
    
        av_init_packet(&pkt);
        pkt.data = NULL;    // packet data will be allocated by the encoder
        pkt.size = 0;
    
        fflush(stdout);
    picture->data[0] = Output;
    ret = avcodec_encode_video2(c, &pkt, picture,&got_output);
    
        fprintf(stderr,"ret = %d, got-out = %d \n",ret,got_output);
         if (ret < 0) {
                    __android_log_write(ANDROID_LOG_ERROR, "error","encoding");
            if(got_output > 0)
            __android_log_write(ANDROID_LOG_ERROR, "got_output","is non-zero");
    
        }
    
        if (got_output) {
            fprintf(stderr,"encoding frame %3d (size=%5d): (ret=%d)\n", 1, pkt.size,ret);
            fprintf(stderr,"before caling decode");
            decode_inline(&pkt); //function that decodes right after the encode
            fprintf(stderr,"after caling decode");
    
    
            av_free_packet(&pkt);
        }
    
    
    fprintf(stderr,"y val: %d \n",y);
    
    
    (*env)->ReleaseByteArrayElements(env, buffer, Output, 0);
    return ((ret));
    }
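
    A minimal sketch of the conversion marked //swscale should implement here above, assuming the Java side really delivers RGB565 (a guess based on the 2*c->width line size; rgb2yuv is a hypothetical local):

    struct SwsContext *rgb2yuv = sws_getContext(
            c->width, c->height, AV_PIX_FMT_RGB565,   /* source: packed 16-bit RGB */
            c->width, c->height, AV_PIX_FMT_YUV420P,  /* destination: planar YUV for H.264 */
            SWS_BILINEAR, NULL, NULL, NULL);

    const uint8_t *srcData[1]     = { (const uint8_t *)Output };
    const int      srcLinesize[1] = { 2 * c->width };

    /* fills picture->data / picture->linesize allocated by av_image_alloc() */
    sws_scale(rgb2yuv, srcData, srcLinesize, 0, c->height,
              picture->data, picture->linesize);
    sws_freeContext(rgb2yuv);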
    

    decode_inline() function

    decode_inline(AVPacket *avpkt){
    AVCodec *codec;
    AVCodecContext *c = NULL;
    int frame, got_picture, len = -1,temp=0;
    
    AVFrame *rawFrame, *rgbFrame;
    uint8_t inbuf[INBUF_SIZE + FF_INPUT_BUFFER_PADDING_SIZE];
    char buf[1024];
    char rawBuf[768000],rgbBuf[768000];
    
    struct SwsContext *sws_ctx;
    
    memset(inbuf + INBUF_SIZE, 0, FF_INPUT_BUFFER_PADDING_SIZE);
    avcodec_register_all();
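    /* note: 'codec' is still uninitialized at this point; avcodec_find_decoder() only runs below */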
    
    c= avcodec_alloc_context3(codec);
    if(c == NULL)
        __android_log_write(ANDROID_LOG_ERROR, "avcodec","context-null");
    
    codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (!codec) {
        fprintf(stderr, "codec not found\n");
        fprintf(stderr, "codec = %p \n", codec);
        }
    c->pix_fmt = AV_PIX_FMT_YUV420P;
    c->width = 480;
    c->height = 800;
    
    rawFrame = av_frame_alloc();
    rgbFrame = av_frame_alloc();
    
    if (avcodec_open2(c, codec, NULL) < 0) {
        fprintf(stderr, "could not open codec\n");
        exit(1);
        }
    sws_ctx = sws_getContext(c->width, c->height,/*PIX_FMT_RGB565BE*/
                PIX_FMT_YUV420P, c->width, c->height, AV_PIX_FMT_RGB565/*PIX_FMT_YUV420P*/,
                SWS_BILINEAR, NULL, NULL, NULL);
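    /* this context converts decoded YUV420P to RGB565 for display; the encoder-side RGB565 -> YUV420P conversion is a separate step */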
    
    
    frame = 0;
    
    unsigned short *decodedpixels = &rawBuf;
    rawFrame->data[0] = &rawBuf;
    rgbFrame->data[0] = &rgbBuf;
    
    fprintf(stderr,"size of avpkt %d \n",avpkt->size);
    temp = avpkt->size;
    while (temp > 0) {
            len = avcodec_decode_video2(c, rawFrame, &got_picture, avpkt);
    
            if (len < 0) {
                fprintf(stderr, "Error while decoding frame %d\n", frame);
                exit(1);
                }
            temp -= len;
            avpkt->data += len;
    
            if (got_picture) {
                printf("saving frame %3d\n", frame);
                fflush(stdout);
            //TODO  
            //memcpy(decodedpixels,rawFrame->data[0],rawFrame->linesize[0]); 
            //  decodedpixels +=rawFrame->linesize[0];
    
                frame++;
                }
    
            }
    
    avcodec_close(c);
    av_free(c);
    //free(rawBuf);
    //free(rgbBuf);
    av_frame_free(&rawFrame);
    av_frame_free(&rgbFrame);
    

    }

    The log I get:

    For the decode_inline() function:


    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] Invalid mix of idr and non-idr slices
    01-02 14:50:50.160: I/stderr(3407): Error while decoding frame 0
    

    Edit: changing the GOP value:

    If I change c->gop_size = 3;, it emits one I-frame every three frames, as expected. The non-existing PPS 0 referenced message is then absent on every third call, but all the other calls still produce it.
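
    That pattern would be consistent with decode_inline() building a brand-new decoder for every packet: only packets that carry SPS/PPS in-band (the I-frames) can initialize a fresh decoder, while the other packets reference a PPS that the new instance has never seen. A minimal sketch of keeping one decoder alive across calls (dec_ctx and ensure_decoder are hypothetical names):

    static AVCodecContext *dec_ctx = NULL;   /* survives across encodeframe() calls */

    static int ensure_decoder(void)
    {
        AVCodec *dec;
        if (dec_ctx)
            return 0;                        /* already open, reuse it */
        dec = avcodec_find_decoder(AV_CODEC_ID_H264);
        if (!dec)
            return -1;
        dec_ctx = avcodec_alloc_context3(dec);
        if (!dec_ctx || avcodec_open2(dec_ctx, dec, NULL) < 0)
            return -1;
        return 0;
    }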

  • Creating a video from images using ffmpeg libav and libx264?

    11 January 2014, by marikaner

    I am trying to create a video from images using the ffmpeg library. The images have a size of 1920x1080 and are supposed to be encoded with H.264 in a .mkv container. I have come across various problems, thinking I was getting closer to a solution, but this one I am really stuck on. With the settings I use, the first X frames (around 40, depending on what and how many images I use for the video) of my video are not encoded: avcodec_encode_video2 does not return any error (the return value is 0), but got_picture_ptr is 0. The result is a video that actually looks as expected, but the first seconds are weirdly jumpy.
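
    For what it is worth, roughly 40 buffered frames matches libx264's default lookahead plus B-frame delay, and those frames are not lost: they come out during the flush loop in the cleanup below. A minimal sketch of reducing that delay through the libx264 wrapper's private options (not necessarily the right quality trade-off here):

    AVDictionary *opts = NULL;
    av_dict_set(&opts, "preset", "ultrafast", 0);   // shallow lookahead
    av_dict_set(&opts, "tune", "zerolatency", 0);   // no encoder frame delay
    avcodec_open2(m_codecContext, codec, &opts);
    av_dict_free(&opts);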

    So this is how I create the video file:

    // m_codecContext is an instance variable of type AVCodecContext *
    // m_formatCtx is an instance variable of type AVFormatContext *
    
    // outputFileName is a valid filename ending with .mkv
    AVOutputFormat *oformat = av_guess_format(NULL, outputFileName, NULL);
    if (oformat == NULL)
    {
        oformat = av_guess_format("mpeg", NULL, NULL);
    }
    
    // oformat->video_codec is AV_CODEC_ID_H264
    AVCodec *codec = avcodec_find_encoder(oformat->video_codec);
    
    m_codecContext = avcodec_alloc_context3(codec);
    m_codecContext->codec_id = oformat->video_codec;
    m_codecContext->codec_type = AVMEDIA_TYPE_VIDEO;
    m_codecContext->gop_size = 30;
    m_codecContext->bit_rate = width * height * 4;
    m_codecContext->width = width;
    m_codecContext->height = height;
    m_codecContext->time_base = (AVRational){1,frameRate};
    m_codecContext->max_b_frames = 1;
    m_codecContext->pix_fmt = AV_PIX_FMT_YUV420P;
    
    m_formatCtx = avformat_alloc_context();
    m_formatCtx->oformat = oformat;
    m_formatCtx->video_codec_id = oformat->video_codec;
    
    snprintf(m_formatCtx->filename, sizeof(m_formatCtx->filename), "%s", outputFileName);
    
    AVStream *videoStream = avformat_new_stream(m_formatCtx, codec);
    if(!videoStream)
    {
       printf("Could not allocate stream\n");
    }
    videoStream->codec = m_codecContext;
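    // note: this replaces the codec context that avformat_new_stream() already allocated for videoStream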
    
    if(m_formatCtx->oformat->flags & AVFMT_GLOBALHEADER)
    {
       m_codecContext->flags |= CODEC_FLAG_GLOBAL_HEADER;
    }
    
    if (avcodec_open2(m_codecContext, codec, NULL) < 0)
    {
        printf("Could not open codec\n");
    }
    avio_open(&m_formatCtx->pb, outputFileName.toStdString().c_str(), AVIO_FLAG_WRITE);
    avformat_write_header(m_formatCtx, NULL);
    

    this is how the frames are added:

    void VideoCreator::writeImageToVideo(const QSharedPointer<QImage> &img, int frameIndex)
    {
        AVFrame *frame = avcodec_alloc_frame();
    
        /* alloc image and output buffer */
    
        int size = m_codecContext->width * m_codecContext->height;
        int numBytes = avpicture_get_size(m_codecContext->pix_fmt, m_codecContext->width, m_codecContext->height);
    
        uint8_t *outbuf = (uint8_t *)malloc(numBytes);
        uint8_t *picture_buf = (uint8_t *)av_malloc(numBytes);
    
        int ret = av_image_fill_arrays(frame->data, frame->linesize, picture_buf, m_codecContext->pix_fmt, m_codecContext->width, m_codecContext->height, 1);
    
        frame->data[0] = picture_buf;
        frame->data[1] = frame->data[0] + size;
        frame->data[2] = frame->data[1] + size/4;
        frame->linesize[0] = m_codecContext->width;
        frame->linesize[1] = m_codecContext->width/2;
        frame->linesize[2] = m_codecContext->width/2;
    
        fflush(stdout);
    
    
        for (int y = 0; y < m_codecContext->height; y++)
        {
            for (int x = 0; x < m_codecContext->width; x++)
            {
                unsigned char b = img->bits()[(y * m_codecContext->width + x) * 4 + 0];
                unsigned char g = img->bits()[(y * m_codecContext->width + x) * 4 + 1];
                unsigned char r = img->bits()[(y * m_codecContext->width + x) * 4 + 2];
    
                unsigned char Y = (0.257 * r) + (0.504 * g) + (0.098 * b) + 16;
    
                frame->data[0][y * frame->linesize[0] + x] = Y;
    
                if (y % 2 == 0 && x % 2 == 0)
                {
                    unsigned char V = (0.439 * r) - (0.368 * g) - (0.071 * b) + 128;
                    unsigned char U = -(0.148 * r) - (0.291 * g) + (0.439 * b) + 128;
    
                    frame->data[1][y/2 * frame->linesize[1] + x/2] = U;
                    frame->data[2][y/2 * frame->linesize[2] + x/2] = V;
                }
            }
        }
    
        int pts = frameIndex;//(1.0 / 30.0) * 90.0 * frameIndex;
    
        frame->pts = pts;//av_rescale_q(m_codecContext->coded_frame->pts, m_codecContext->time_base, formatCtx->streams[0]->time_base); //(1.0 / 30.0) * 90.0 * frameIndex;
    
        int got_packet_ptr;
        AVPacket packet;
        av_init_packet(&packet);
        packet.data = outbuf;
        packet.size = numBytes;
        packet.stream_index = formatCtx->streams[0]->index;
        packet.flags |= AV_PKT_FLAG_KEY;
        packet.pts = packet.dts = pts;
        m_codecContext->coded_frame->pts = pts;
    
        ret = avcodec_encode_video2(m_codecContext, &packet, frame, &got_packet_ptr);
        if (got_packet_ptr != 0)
        {
            m_codecContext->coded_frame->pts = pts;  // Set the time stamp
    
            if (m_codecContext->coded_frame->pts != (0x8000000000000000LL))
            {
                pts = av_rescale_q(m_codecContext->coded_frame->pts, m_codecContext->time_base, formatCtx->streams[0]->time_base);
            }
            packet.pts = pts;
            if(m_codecContext->coded_frame->key_frame)
            {
               packet.flags |= AV_PKT_FLAG_KEY;
            }
    
            std::cout << "pts: " << packet.pts << ", dts: "  << packet.dts << std::endl;
    
            av_interleaved_write_frame(formatCtx, &packet);
            av_free_packet(&packet);
        }
    
        free(picture_buf);
        free(outbuf);
        av_free(frame);
        printf("\n");
    }
    

    and this is the cleanup:

    int numBytes = avpicture_get_size(m_codecContext->pix_fmt, m_codecContext->width, m_codecContext->height);
    int got_packet_ptr = 1;
    
    int ret;
    //        for(; got_packet_ptr != 0; i++)
    while (got_packet_ptr)
    {
        uint8_t *outbuf = (uint8_t *)malloc(numBytes);
    
        AVPacket packet;
        av_init_packet(&packet);
        packet.data = outbuf;
        packet.size = numBytes;
    
        ret = avcodec_encode_video2(m_codecContext, &packet, NULL, &got_packet_ptr);
        if (got_packet_ptr)
        {
            av_interleaved_write_frame(m_formatCtx, &packet);
        }
    
        av_free_packet(&packet);
        free(outbuf);
    }
    
    av_write_trailer(formatCtx);
    
    avcodec_close(m_codecContext);
    av_free(m_codecContext);
    printf("\n");
    

    I assume it might be tied to the PTS and DTS values, but I have tried EVERYTHING. The frame index seems to make the most sense. The images are correct, I can save them to files without any problems. I am running out of ideas. I would be incredibly thankful if there was someone out there who knew better than me...
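
    For illustration only (assuming the videoStream created above, and not necessarily the fix): the conventional pattern is to keep frame->pts in codec time_base units and to rescale the packet timestamps into the stream time_base just before muxing:

    frame->pts = frameIndex;   /* in m_codecContext->time_base units (1/frameRate) */

    ret = avcodec_encode_video2(m_codecContext, &packet, frame, &got_packet_ptr);
    if (ret == 0 && got_packet_ptr)
    {
        if (packet.pts != AV_NOPTS_VALUE)
            packet.pts = av_rescale_q(packet.pts, m_codecContext->time_base,
                                      videoStream->time_base);
        if (packet.dts != AV_NOPTS_VALUE)
            packet.dts = av_rescale_q(packet.dts, m_codecContext->time_base,
                                      videoStream->time_base);
        av_interleaved_write_frame(m_formatCtx, &packet);
    }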

    Cheers, marikaner

    UPDATE:

    If it is of any help, this is the output at the end of the video encoding:

    [libx264 @ 0x7fffc00028a0] frame I:19    Avg QP:14.24  size:312420
    [libx264 @ 0x7fffc00028a0] frame P:280   Avg QP:19.16  size:148867
    [libx264 @ 0x7fffc00028a0] frame B:181   Avg QP:21.31  size: 40540
    [libx264 @ 0x7fffc00028a0] consecutive B-frames: 24.6% 75.4%
    [libx264 @ 0x7fffc00028a0] mb I  I16..4: 30.9% 45.5% 23.7%
    [libx264 @ 0x7fffc00028a0] mb P  I16..4:  4.7%  9.1%  4.5%  P16..4: 23.5% 16.6% 12.6%  0.0%  0.0%    skip:28.9%
    [libx264 @ 0x7fffc00028a0] mb B  I16..4:  0.6%  0.5%  0.3%  B16..8: 26.7% 11.0%  5.5%  direct: 3.9%  skip:51.5%  L0:39.4% L1:45.0% BI:15.6%
    [libx264 @ 0x7fffc00028a0] final ratefactor: 19.21
    [libx264 @ 0x7fffc00028a0] 8x8 transform intra:48.2% inter:47.3%
    [libx264 @ 0x7fffc00028a0] coded y,uvDC,uvAC intra: 54.9% 53.1% 30.4% inter: 25.4% 13.5% 4.2%
    [libx264 @ 0x7fffc00028a0] i16 v,h,dc,p: 41% 29% 11% 19%
    [libx264 @ 0x7fffc00028a0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 16% 26% 31%  3%  4%  3%  7%  3%  6%
    [libx264 @ 0x7fffc00028a0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 30% 26% 14%  4%  5%  4%  7%  4%  7%
    [libx264 @ 0x7fffc00028a0] i8c dc,h,v,p: 58% 26% 13%  3%
    [libx264 @ 0x7fffc00028a0] Weighted P-Frames: Y:17.1% UV:3.6%
    [libx264 @ 0x7fffc00028a0] ref P L0: 63.1% 21.4% 11.4%  4.1%  0.1%    
    [libx264 @ 0x7fffc00028a0] ref B L0: 85.7% 14.3%
    [libx264 @ 0x7fffc00028a0] kb/s:27478.30
    
  • How to reduce latency when streaming x264

    10 January 2014, by tobsen

    I would like to produce a zero-latency live video stream and play it in VLC player with as little latency as possible.

    These are the settings I currently use:

    x264_param_default_preset( &m_Params, "veryfast", "zerolatency" );
    
    m_Params.i_threads              =   2;
    m_Params.b_sliced_threads       =   true;
    m_Params.i_width                =   m_SourceWidth;
    m_Params.i_height               =   m_SourceHeight;
    
    m_Params.b_intra_refresh        =   1;
    
    m_Params.b_vfr_input            =   true;
    m_Params.i_timebase_num         =   1;
    m_Params.i_timebase_den         =   1000;
    
    m_Params.i_fps_num              =   1;
    m_Params.i_fps_den              =   60;
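    /* note: x264 interprets this as fps = i_fps_num / i_fps_den = 1/60 fps; for 60 fps the two values would be swapped */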
    
    m_Params.rc.i_vbv_max_bitrate   =   512;
    m_Params.rc.i_vbv_buffer_size   =   256;
    m_Params.rc.f_vbv_buffer_init   =   1.1f;
    
    m_Params.rc.i_rc_method         =   X264_RC_CRF;
    m_Params.rc.f_rf_constant       =   24;
    m_Params.rc.f_rf_constant_max   =   35;
    
    m_Params.b_annexb               =   0;
    m_Params.b_repeat_headers       =   0;
    m_Params.b_aud                  =   0;
    
    x264_param_apply_profile( &m_Params, "high" );
    

    Using those settings, I have the following issues:

    • VLC shows lots of missing frames (see screenshot; "verloren" means "lost"). I am not sure whether this is an issue.
    • If I set a value below 200 ms for the network stream delay in VLC, VLC renders a few frames and then stops decoding/rendering frames.
    • If I set a value of 200 ms or more for the network stream delay in VLC, everything looks good so far, but the latency is, obviously, 200 ms, which is too high.

    Question: Which settings (x264lib and VLC) should I use in order to encode and stream with as little latency as possible?

    [Screenshot: VLC codec statistics showing lost ("verloren") frames]
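
    On the playback side, VLC's input caching can also be pinned from the command line. A sketch (50 is an arbitrary millisecond value, and the stream address is a placeholder):

    vlc --network-caching=50 <stream-address>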

  • Encoding for fastest decoding with ffmpeg

    10 January 2014, by nbubis

    Encoding with ffmpeg and libx264, are there presets or flags that will optimize decoding speed?

    Right now it seems that videos transcoded to similar file sizes are decoded at very different speeds by Qtkit, and I was wondering whether there are encoding options that maximize decoding speed.
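
    One encode-side candidate is x264's fastdecode tune, which gives up some compression efficiency to make decoding cheaper (it disables CABAC, the in-loop deblocking filter, and weighted prediction). A hedged CLI sketch with assumed file names:

    ffmpeg -i input.mov -c:v libx264 -preset fast -tune fastdecode output.mp4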