Newest 'x264' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/x264

Articles published on the site

  • How to configure X264 build before running make on OS X

    18 October 2013, by user1884325

    I'm trying to build an x264 executable which will run on any Mac OS X version (> 10.6) without any external dependencies. I downloaded the latest stable snapshot, x264-snapshot-20131017-2245-stable.

    I haven't been able to find much documentation on the build procedure for x264 on OS X.

    When I just run ./configure --disable-opencl and then make, I get several output files:

    x264 libx264.a .depend x264.o

    I assume that the executable can run without the other files?

    Anyway, I tested the x264 binary in a terminal and at first it seemed to work, but when I try to start x264 as a quiet background process like this:

    ./x264 --quiet --profile baseline --level 3.2 --preset ultrafast --bframes 0 --force-cfr --no-mbtree --sync-lookahead 0 --rc-lookahead 0 --intra-refresh --input-csp rgb --fps 15 --input-res 1280x720 --bitrate 1100 --vbv-maxrate 1100 --vbv-bufsize 600 --threads 1 -o - -

    nothing happens at all. I would expect to see x264 running in Activity Monitor, but it's not there.

    Long story short: how should the x264 build be configured before running make? I do not want to use OpenCL (--disable-opencl).
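
    For reference, this is the kind of configure invocation I have been experimenting with for a self-contained command-line binary. I am not certain every one of these flags exists in all snapshots (check ./configure --help first), and the 10.6 deployment target is my own assumption, so treat this as a sketch rather than a known-good recipe:

    # Sketch: build x264 as a standalone CLI binary on OS X, without OpenCL
    # and without the optional external input/output libraries.
    # The deployment-target flags are an assumption for the ">10.6" requirement.
    ./configure --disable-opencl \
                --disable-lavf --disable-ffms --disable-swscale \
                --extra-cflags="-mmacosx-version-min=10.6" \
                --extra-ldflags="-mmacosx-version-min=10.6"
    make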

  • How to create an H264 video for specific Libav decoding

    16 October 2013, by James491

    I'm creating a program that decodes and displays videos using Libav. I have complete control over the videos the program uses because I make them beforehand. I am able to seek and decode the videos as needed, but only after adjusting to a few quirks of x264-encoded videos. For instance, I have to add 8-10 extra frames at the end of my videos to be able to decode and seek to the original last frame.

    Is it possible to further customize the encoding of an H.264 video using ffmpeg or another application, or perhaps through parameters I can initialize in AVFormatContext or AVCodecContext, in order to solve something like decoding the last frames?

    Also, are there any specifics I can add to the video file or implement in decoding to decrease seek time, since all seek commands and positions are predetermined? My current seeking function is just composed of an avformat_seek_file(...) followed by avcodec_flush_buffers(...), and I am already seeking to key frames. I am willing to read and learn if all you have is a recommended page or article to investigate.
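
    In case it helps, here is a minimal sketch of that seek routine; fmtCtx, codecCtx, and videoStream are placeholders for my own context variables, and the target timestamp is assumed to be in the stream's time base:

    #include <stdint.h>
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Seek to the keyframe at or before targetTs, then drop any decoder
     * state left over from the old position. */
    static int seek_to(AVFormatContext *fmtCtx, AVCodecContext *codecCtx,
                       int videoStream, int64_t targetTs)
    {
        int ret = avformat_seek_file(fmtCtx, videoStream,
                                     INT64_MIN, targetTs, targetTs, 0);
        if (ret >= 0)
            avcodec_flush_buffers(codecCtx);
        return ret;
    }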

  • x264 & ARM on Windows Phone

    16 October 2013, by ssk

    I am trying to build x264 (http://www.videolan.org/developers/x264.html) for ARM on Windows Phone 8.

    I used this project to build it on Windows using Visual Studio: http://winx264.codeplex.com/documentation

    I found these steps to build it on Windows using MSYS: http://www.ayobamiadewole.com/Blog/How-to-build-x264-or-libx264.dll-in-Windows and http://software.intel.com/en-us/articles/building-x264-with-intel-compiler-for-windows

    I found that ARM support was added to x264 recently. The build instructions above rely on the Intel compiler and the YASM assembler, and I am not sure whether either is supported on Windows Phone 8.

    1) Has anyone built a Windows Phone 8 project using the Intel compiler? Is it supported?

    2) I am not familiar with assemblers. Is the YASM assembler supported on Windows Phone 8?

  • Streaming output of x264_encoder_encode

    11 octobre 2013, par user2660369

    How can I stream the output of x264_encoder_encode over UDP?

    This is my Init_x264 function:

    x264_param_t param;
    x264_param_default_preset(&param, "veryfast", "zerolatency");
    param.i_threads = 1;
    param.i_width = width;
    param.i_height = height;
    param.i_fps_num = 30;
    param.i_fps_den = 1;

    // Keyframe interval of one second, with gradual intra refresh
    // instead of full IDR frames
    param.i_keyint_max = 30;
    param.b_intra_refresh = 1;

    // CRF rate control
    param.rc.i_rc_method = X264_RC_CRF;
    param.rc.f_rf_constant = 25;
    param.rc.f_rf_constant_max = 35;

    // Annex-B start codes; carry SPS/PPS in the stream itself
    param.b_annexb = 1;
    param.b_repeat_headers = 1;

    param.i_log_level = X264_LOG_DEBUG;

    x264_param_apply_profile(&param, "baseline");

    encoder = x264_encoder_open(&param);

    picIn = new x264_picture_t;
    picOut = new x264_picture_t;
    x264_picture_alloc(picIn, X264_CSP_I420, width, height);
    x264_encoder_parameters(encoder, &param);
    

    Now, instead of saving the output of x264_encoder_encode to disk (using fwrite), I tried to just send it over UDP to my destination. When I try to play the stream with avplay, it fails:

    [h264 @ 0x7f83f0012e80] non-existing PPS 0 referenced
    [h264 @ 0x7f83f0012e80] decode_slice_header error
    [h264 @ 0x7f83f0012e80] no frame!
    [h264 @ 0x7f83f0012e80] non-existing PPS 0 referenced
    [h264 @ 0x7f83f0012e80] decode_slice_header error
    [h264 @ 0x7f83f0012e80] no frame!
    

    With VLC I got different error messages, mostly about missing headers.
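
    For context, this is roughly what my send path looks like; sock and dest are a UDP socket and destination address I set up elsewhere, so those names are placeholders for my own setup:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <x264.h>

    // Encode one picture and push the whole access unit onto the UDP socket.
    // With b_annexb = 1 the NAL payloads are contiguous and already carry
    // start codes, so nals[0].p_payload covers all frame_size bytes.
    void encode_and_send(x264_t *encoder, x264_picture_t *picIn,
                         x264_picture_t *picOut, int sock,
                         const struct sockaddr_in *dest)
    {
        x264_nal_t *nals;
        int num_nals;
        int frame_size = x264_encoder_encode(encoder, &nals, &num_nals, picIn, picOut);
        if (frame_size > 0)
            sendto(sock, nals[0].p_payload, frame_size, 0,
                   (const struct sockaddr *)dest, sizeof(*dest));
    }

    I am not sure whether a whole encoded frame is guaranteed to fit in a single datagram, so that may also be part of the problem.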

  • How to optimize ffmpeg w/ x264 for multiple bitrate output files

    10 October 2013, by Jonesy

    The goal is to create multiple output files that differ only in bitrate from a single source file. The documented solutions for this worked, but had inefficiencies. The most efficient solution I found was not documented anywhere that I could see. I am posting it here for review and asking whether others know of additional optimizations that can be made.

    Source file        MPEG-2 Video (Letterboxed) 1920x1080 @ >10Mbps
                       MPEG-1 Audio @ 384Kbps
    Destination files  H264 Video 720x400 @ multiple bitrates
                       AAC Audio @ 128Kbps
    Machine            Multi-core Processor
    

    The video quality at each bitrate is important, so we are running in 2-pass mode with the 'medium' preset:

    VIDEO_OPTIONS_P2 = -vcodec libx264 -preset medium -profile:v main -g 72 -keyint_min 24 -vf scale=720:-1,crop=720:400

    The first approach was to encode them all in parallel processes:

    ffmpeg -y -i $INPUT_FILE $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto -f mp4 out-250.mp4 &
    ffmpeg -y -i $INPUT_FILE $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto -f mp4 out-500.mp4 &
    ffmpeg -y -i $INPUT_FILE $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto -f mp4 out-700.mp4 &
    

    The obvious inefficiency is that the source file is read, decoded, scaled, and cropped identically for each process. How can we do this once and then feed the result to the encoders?

    The hope was that generating all the encodes in a single ffmpeg command would optimize out the duplicate steps:

    ffmpeg -y -i $INPUT_FILE \
    $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto -f mp4 out-250.mp4 \
    $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto -f mp4 out-500.mp4 \
    $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto -f mp4 out-700.mp4
    

    However, the encoding time was nearly identical to the previous multi-process approach. This leads me to believe that all the steps are again being performed in duplicate.

    To force ffmpeg to read, decode, and scale only once, I put those steps in one ffmpeg process and piped the result into another ffmpeg process that performed the encoding. This improved the overall processing time by 15%-20%.

    INPUT_STREAM="ffmpeg -i $INPUT_FILE -vf scale=720:-1,crop=720:400 -threads auto -f yuv4mpegpipe -"
    
    $INPUT_STREAM | ffmpeg -y -f yuv4mpegpipe -i - \
    $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto out-250.mp4 \
    $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto out-500.mp4 \
    $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto out-700.mp4
    

    Does anyone see potential problems with doing it this way, or know of a better method?