Newest 'x264' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/x264

Articles published on the site

  • How to stitch (concat) two transport streams with two different resolutions and I-frame slice formats without losing resolution and slice information

    2 October 2019, by ART

    I have been trying to test a use case with a stream captured from a multimedia device, and that didn't work. I have then been trying to create this specific transport stream for about two days now without success, so I'm requesting some help.

    I need to create a transport stream with two different resolutions and two different slicing formats.

    I divided the task into the following steps; I need help with the last two.

    Step 1: Download a sample video with resolution 1920x1080.
    I downloaded the Big Buck Bunny MP4.

    Step 2: Create a transport stream with the following properties:
    resolution: 1920x720, H264 I-frame slices per frame: 1.
    I used the following ffmpeg commands to do that.

    #Rename file to input.mp4
    $ mv bbb_sunflower_1080p_30fps_normal.mp4 input.mp4
    #Extract transport stream
    $ ffmpeg -i input.mp4 -c copy first.ts
    

    first.ts has a resolution of 1920x720 and one H264 I slice per frame.

    Step 3: Create another transport stream with a smaller resolution using the following commands

    #Get mp4 with lower resolution.
    $ ffmpeg -i input.mp4 -s 640x480 temp.mp4
    #Extract transport stream from mp4
    $ ffmpeg -i temp.mp4 -c copy low_r.ts
    

    Step 4: Edit (and re-encode?) low_r.ts to have two H264 I-frame slices. I used the following command to achieve this.

    $ x264 --slices 4 low_r.ts -o second.ts
    

    However, when I play this second.ts in VLC using the following command, it doesn't play:

    $ vlc ./second.ts 
    

    When I analyze the transport stream with the Elecard StreamEye software, I see that it has 4 H264 I slices only twice; apart from that there are lots of H264 P slices and H264 B slices. I need help here to figure out why second.ts doesn't play and why the slicing is not correct.
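
    Would re-encoding with ffmpeg's libx264 instead, and letting ffmpeg write the transport stream, be a better way to do this step? Something like the following untested sketch is what I have in mind (the slice count and settings are placeholders, and audio is dropped for simplicity):

    #Untested sketch: re-encode with libx264 asking for 2 slices per frame,
    #and let ffmpeg mux the result into an MPEG-TS container.
    $ ffmpeg -i low_r.ts -an -c:v libx264 -x264-params slices=2 -f mpegts second.ts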

    Step 5: Combine both transport streams without losing the resolution and slicing information. I don't know the command for this and need help here. I tried ffmpeg, but it combines the two streams with different resolutions into one file with a single resolution.
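
    Is the concat demuxer with stream copy the right tool here, or will it also force a single resolution? This untested sketch is what I had in mind (I'm not sure whether players cope with the resolution change mid-stream):

    #Untested sketch: list the segments, then concatenate them with stream copy
    #so that neither segment is re-encoded.
    $ printf "file 'first.ts'\nfile 'second.ts'\n" > list.txt
    $ ffmpeg -f concat -i list.txt -c copy combined.ts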

    Any suggestions or pointers would help me proceed. Please also let me know if any of the above steps are wrong.

  • How to write a Live555 FramedSource to allow me to stream H.264 live

    28 September 2019, by Garviel

    I've been trying to write a class that derives from FramedSource in Live555 that will allow me to stream live data from my D3D9 application to an MP4 or similar.

    What I do each frame is grab the backbuffer into system memory as a texture, convert it from RGB -> YUV420P, encode it using x264, and then ideally pass the NAL packets on to Live555. I made a class called H264FramedSource that derives from FramedSource, basically by copying the DeviceSource file. Instead of the input being an input file, I've made it a NAL packet which I update each frame.

    I'm quite new to codecs and streaming, so I could be doing everything completely wrong. In each doGetNextFrame(), should I be grabbing the NAL packet and doing something like

    memcpy(fTo, nal->p_payload, nal->i_payload)
    

    I assume that the payload is my frame data in bytes? If anybody has an example of a class derived from FramedSource that is at least close to what I'm trying to do, I would love to see it; this is all new to me and it's a little tricky to figure out what's happening. Live555's documentation is pretty much the code itself, which doesn't exactly make it easy to follow.
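
    For reference, this is the kind of delivery routine I have in mind, modeled on DeviceSource::deliverFrame(); it is untested, and fCurrentNal is an assumed member that points at the most recent x264_nal_t:

    // Untested sketch of deliverFrame() for H264FramedSource, modeled on DeviceSource.
    // fCurrentNal is an assumed member pointing at the latest x264_nal_t.
    void H264FramedSource::deliverFrame()
    {
        if (!isCurrentlyAwaitingData()) return; // the sink hasn't asked for data yet

        unsigned newFrameSize = fCurrentNal->i_payload;
        if (newFrameSize > fMaxSize) {          // truncate if the sink's buffer is too small
            fFrameSize = fMaxSize;
            fNumTruncatedBytes = newFrameSize - fMaxSize;
        } else {
            fFrameSize = newFrameSize;
            fNumTruncatedBytes = 0;
        }
        gettimeofday(&fPresentationTime, NULL); // should really come from the capture clock
        memcpy(fTo, fCurrentNal->p_payload, fFrameSize);

        // Tell the downstream object that new data is available.
        FramedSource::afterGetting(this);
    }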

  • Encoding multiple videos using AviSynth and x264 from a bat file

    24 September 2019, by FoxyFish

    At the moment I am encoding my videos (VHS restoration) using AviSynth and x264 by dragging an .avs file onto a bat file.

    This is working great, but my problem (well, not really a problem, more of an efficiency issue) is that I have to keep manually dragging my .avs files onto the bat file to start the process. Would it be possible to have the bat file convert a whole directory of videos (maybe 20 or so) one after the other in an automated way?

    The .avs file is always the same for each video except for the AviSource() line.

    I know I can loop the .bat file over the number of videos present, but how do I load the .avs file with a varying AviSource() line?

    How could I achieve this, or is it not possible?
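
    This is roughly what I'm imagining, but I don't know if it's the right approach (untested; the folder, extension, and encoder settings are placeholders, and it assumes my x264 build can read .avs files directly):

    @echo off
    rem Untested sketch: write a one-line .avs per video, then encode it with x264.
    for %%F in (D:\captures\*.avi) do (
        rem The caret before the closing parenthesis stops it ending the for-block.
        echo AviSource("%%~fF"^)> temp.avs
        x264 --preset slow --crf 18 --output "%%~dpnF.mkv" temp.avs
    )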

    Thanks.

  • How to use ffmpeg / x264 2-Pass encoding for multiple bitrate output files

    10 September 2019, by Jonesy

    While performing a 2-pass encode to multiple output files, I was receiving the error

    ratecontrol_init: can't open stats file 1 ffmpeg2pass-2.log
    

    My setup is to do a single first pass and then multiple second-pass encodes to output files with different target bitrates, all using the same first-pass results.

    ffmpeg -y -i $INPUT_FILE -an -vcodec libx264 -pass 1 -b:v 700k -f rawvideo /dev/null
    
    ffmpeg -y -i $INPUT_FILE -i out-aud.mp4 \
    $AUDIO_OPTIONS_P2 -vcodec libx264 -pass 2 -b:v 250k -f mp4 out-250.mp4 \
    $AUDIO_OPTIONS_P2 -vcodec libx264 -pass 2 -b:v 500k -f mp4 out-500.mp4 \
    $AUDIO_OPTIONS_P2 -vcodec libx264 -pass 2 -b:v 700k -f mp4 out-700.mp4
    

    This sequence resulted in the error listed above. What I discovered through code inspection is that ffmpeg/x264 looks for a different set of first-pass files for each second-pass encoding path. The first encoding path uses the set of files originally created

    ffmpeg2pass-0.log
    ffmpeg2pass-0.log.mbtree
    

    The second encoding path requires first-pass files with the names

    ffmpeg2pass-2.log
    ffmpeg2pass-2.log.mbtree
    

    The third encoding path requires first-pass files with names starting with ffmpeg2pass-4*, and so on.

    My solution was to create soft links to the originally created set of files, under the new names required for each pass, before running the second-pass command.

    ln -s ffmpeg2pass-0.log ffmpeg2pass-2.log
    ln -s ffmpeg2pass-0.log.mbtree ffmpeg2pass-2.log.mbtree
    ln -s ffmpeg2pass-0.log ffmpeg2pass-4.log
    ln -s ffmpeg2pass-0.log.mbtree ffmpeg2pass-4.log.mbtree
    

    This seems to work, as it produces the output encodes that I needed. However, I don't know if this method is legitimate. Am I getting sub-optimal encoding results by using the first-pass output for one bitrate (700k) as the input to second-pass encodings for other bitrates?
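
    Or would it be cleaner to skip the links entirely and run one second-pass invocation per bitrate, pointing them all at the same stats prefix with -passlogfile? Something like this untested sketch (audio options omitted for brevity):

    ffmpeg -y -i $INPUT_FILE -an -vcodec libx264 -pass 1 -passlogfile mystats \
        -b:v 700k -f rawvideo /dev/null

    # Untested: each second pass runs separately and re-uses the same stats prefix.
    for rate in 250k 500k 700k; do
        ffmpeg -y -i $INPUT_FILE -an -vcodec libx264 -pass 2 -passlogfile mystats \
            -b:v $rate -f mp4 out-$rate.mp4
    done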

  • x264-encoded frames into an MP4 container with the ffmpeg API

    10 September 2019, by PJD

    I'm struggling to understand what is and what is not needed to get my already-encoded x264 frames into a video container file using ffmpeg's libavformat API.

    My current program gets the x264 frames like this:

    while( x264_encoder_delayed_frames( h ) )
    {
        printf("Writing delayed frame %u\n", delayed_frame_counter++);
        i_frame_size = x264_encoder_encode( h, &nal, &i_nal, NULL, &pic_out );
        if( i_frame_size < 0 )
        {
            printf("Failed to encode a delayed x264 frame.\n");
            return ERROR;
        }
        else if( i_frame_size )
        {
            if( !fwrite(nal->p_payload, i_frame_size, 1, video_file_ptr) )
            {
                printf("Failed to write a delayed x264 frame.\n");
                return ERROR;
            }
        }
    }
    

    If I use the ffmpeg binary's CLI, I can put these frames into a container using:

    ffmpeg -i "raw_frames.h264" -c:v copy -f mp4 "video.mp4"
    

    I would like to implement this step in my program using the libavformat API, though. I'm a little stuck on the concepts and the order in which each ffmpeg function needs to be called.

    So far I have written:

            mAVOutputFormat = av_guess_format("gen_vid.mp4", NULL, NULL);
            printf("Guessed format\n");
    
            int ret = avformat_alloc_output_context2(&mAVFormatContext, NULL, NULL, "gen_vid.mp4");
            printf("Created context = %d\n", ret);
            printf("Format = %s\n", mAVFormatContext->oformat->name);
    
            mAVStream = avformat_new_stream(mAVFormatContext, 0);
            if (!mAVStream) {
                printf("Failed allocating output stream\n");
            } else {
                printf("Allocated stream.\n");
            }
    
            mAVCodecParameters = mAVStream->codecpar;
            if (mAVCodecParameters->codec_type != AVMEDIA_TYPE_AUDIO &&
                mAVCodecParameters->codec_type != AVMEDIA_TYPE_VIDEO &&
                mAVCodecParameters->codec_type != AVMEDIA_TYPE_SUBTITLE) {
                printf("Invalid codec?\n");
            }
    
            if (!(mAVFormatContext->oformat->flags & AVFMT_NOFILE)) {
                ret = avio_open(&mAVFormatContext->pb, "gen_vid.mp4", AVIO_FLAG_WRITE);
                if (ret < 0) {
                  printf("Could not open output file '%s'", "gen_vid.mp4");
                }
              }
    
            ret = avformat_write_header(mAVFormatContext, NULL);
            if (ret < 0) {
              printf("Error occurred when opening output file\n");
            }
    

    This will print out:

    Guessed format
    Created context = 0
    Format = mp4
    Allocated stream.
    Invalid codec?
    [mp4 @ 0x55ffcea2a2c0] Could not find tag for codec none in stream #0, codec not currently supported in container
    Error occurred when opening output file
    

    How can I make sure the codec type is set correctly for my video? Next, I need to somehow point my mAVStream at my x264 frames; advice would be great.
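
    For the codec type, is filling in the stream's codecpar directly before avformat_write_header() what's expected? This untested sketch is what I have in mind (the dimensions are placeholders):

    // Untested sketch: describe the already-encoded H264 stream to the muxer by
    // filling in codecpar directly; no encoder is opened just for remuxing.
    mAVCodecParameters = mAVStream->codecpar;
    mAVCodecParameters->codec_type = AVMEDIA_TYPE_VIDEO;
    mAVCodecParameters->codec_id   = AV_CODEC_ID_H264;
    mAVCodecParameters->width      = 1920;   // placeholder dimensions
    mAVCodecParameters->height     = 1080;
    // The MP4 muxer will also want the SPS/PPS in codecpar->extradata.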

    Update 1: I've tried to set the H264 codec so that the codec's metadata is available. I now seem to hit two new issues: 1) it cannot find the device and therefore cannot configure the encoder; 2) I get the "dimensions not set" error.

    mAVOutputFormat = av_guess_format("gen_vid.mp4", NULL, NULL);
    printf("Guessed format\n");
    
    // MUST allocate the media file format context. 
    int ret = avformat_alloc_output_context2(&mAVFormatContext, NULL, NULL, "gen_vid.mp4");
    printf("Created context = %d\n", ret);
    printf("Format = %s\n", mAVFormatContext->oformat->name);
    
    // Even though we already have encoded the H264 frames using x264,
    // we still need the codec's meta-data.
    const AVCodec *mAVCodec;
    mAVCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
    if (!mAVCodec) {
        fprintf(stderr, "Codec '%s' not found\n", "H264");
        exit(1);
    }
    mAVCodecContext = avcodec_alloc_context3(mAVCodec);
    if (!mAVCodecContext) {
        fprintf(stderr, "Could not allocate video codec context\n");
        exit(1);
    }
    printf("Codec context allocated with defaults.\n");
    /* put sample parameters */
    mAVCodecContext->bit_rate = 400000;
    mAVCodecContext->width = width;
    mAVCodecContext->height = height;
    mAVCodecContext->time_base = (AVRational){1, 30};
    mAVCodecContext->framerate = (AVRational){30, 1};
    mAVCodecContext->gop_size = 10;
    mAVCodecContext->level = 31;
    mAVCodecContext->max_b_frames = 1;
    mAVCodecContext->pix_fmt = AV_PIX_FMT_NV12;
    
    av_opt_set(mAVCodecContext->priv_data, "preset", "slow", 0);
    printf("Set codec parameters.\n");
    
    // Initialize the AVCodecContext to use the given AVCodec.
    avcodec_open2(mAVCodecContext, mAVCodec, NULL);            
    
    // Add a new stream to a media file. Must be called before
    // calling avformat_write_header().
    mAVStream = avformat_new_stream(mAVFormatContext, mAVCodec);
    if (!mAVStream) {
        printf("Failed allocating output stream\n");
    } else {
        printf("Allocated stream.\n");
    }
    
    // TODO How should codecpar be set?
    mAVCodecParameters = mAVStream->codecpar;
    if (mAVCodecParameters->codec_type != AVMEDIA_TYPE_AUDIO &&
        mAVCodecParameters->codec_type != AVMEDIA_TYPE_VIDEO &&
        mAVCodecParameters->codec_type != AVMEDIA_TYPE_SUBTITLE) {
        printf("Invalid codec?\n");
    }
    
    if (!(mAVFormatContext->oformat->flags & AVFMT_NOFILE)) {
        ret = avio_open(&mAVFormatContext->pb, "gen_vid.mp4", AVIO_FLAG_WRITE);
        if (ret < 0) {
          printf("Could not open output file '%s'", "gen_vid.mp4");
        }
      }
    printf("Called avio_open()\n");
    
    // MUST write a header.
    ret = avformat_write_header(mAVFormatContext, NULL);
    if (ret < 0) {
      printf("Error occurred when opening output file (writing header).\n");
    }
    

    Now I am getting this output:

    Guessed format
    Created context = 0
    Format = mp4
    Codec context allocated with defaults.
    Set codec parameters.
    [h264_v4l2m2m @ 0x556460344b40] Could not find a valid device
    [h264_v4l2m2m @ 0x556460344b40] can't configure encoder
    Allocated stream.
    Invalid codec?
    Called avio_open()
    [mp4 @ 0x5564603442c0] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
    [mp4 @ 0x5564603442c0] dimensions not set
    Error occurred when opening output file (writing header).
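
    From the deprecation warning it looks like the stream's codecpar is never filled from my codec context, and I suspect avcodec_find_encoder() picking h264_v4l2m2m rather than libx264 explains the "Could not find a valid device" lines (perhaps avcodec_find_encoder_by_name("libx264") would help there). Would copying the parameters across after avcodec_open2(), something like this untested snippet, be the missing step?

    // Untested sketch: copy the configured codec context's parameters into the
    // stream so the muxer sees the codec id, dimensions and (later) extradata.
    ret = avcodec_parameters_from_context(mAVStream->codecpar, mAVCodecContext);
    if (ret < 0) {
        printf("Failed to copy codec parameters to the output stream.\n");
    }
    mAVStream->time_base = mAVCodecContext->time_base;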