Newest 'x264' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/x264

Articles published on the site

  • Decoding H.264 individual nal units

    24 September 2018, by madprogrammer2015

    I am currently sending individual NAL units across a network. These NAL units are generated by x264. Is it possible to feed these NAL units individually into avcodec_decode_video2?

    Or do I have to concatenate the NAL units that make up a single frame before decoding? If that's the case, how is that done?

    I have also read that I should first receive the SPS and PPS packets, then wait for at least one more packet, and only then attempt to decode. Is this correct?

    Any advice that can be offered would be greatly appreciated.
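
    For what it's worth, here is a minimal sketch of a common approach, assuming an FFmpeg 3.x-era build (where avcodec_decode_video2 still exists) and NAL units that arrive with their Annex-B start codes. The H.264 parser performs the "group NAL units into a frame" step, and SPS/PPS can simply be pushed through the same path before the first slice:

    #include <libavcodec/avcodec.h>

    static AVCodecContext *ctx;
    static AVCodecParserContext *parser;
    static AVFrame *frame;

    static void init_decoder(void)
    {
        avcodec_register_all();            /* needed on pre-4.0 builds */
        AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
        ctx    = avcodec_alloc_context3(codec);
        parser = av_parser_init(AV_CODEC_ID_H264);
        frame  = av_frame_alloc();
        avcodec_open2(ctx, codec, NULL);
    }

    /* Call once per received NAL unit (Annex-B start code included). */
    static void on_nal_unit(const uint8_t *nal, int nal_size)
    {
        while (nal_size > 0) {
            uint8_t *out;
            int out_size;
            /* The parser buffers input and emits data only once a whole
               access unit (all NAL units of one frame) is assembled. */
            int used = av_parser_parse2(parser, ctx, &out, &out_size,
                                        nal, nal_size,
                                        AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
            nal      += used;
            nal_size -= used;

            if (out_size > 0) {
                AVPacket pkt;
                int got_frame = 0;
                av_init_packet(&pkt);
                pkt.data = out;
                pkt.size = out_size;
                avcodec_decode_video2(ctx, frame, &got_frame, &pkt);
                if (got_frame) {
                    /* frame->data[] now holds a decoded picture */
                }
            }
        }
    }

    So the NAL units belonging to one frame do have to be grouped before decoding, but the parser can do that grouping for you.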

  • Export dynamic metadata using x265

    28 August 2018, by nam

    I am working with ffmpeg and x265 for video encoding. From the x265 release notes:

    HDR10+ supported. Dynamic metadata may be either supplied as a bitstream via the userSEI field of x265_picture, or as a json file that can be parsed by x265 and inserted into the bitstream; use --dhdr10-info to specify json file name, and --dhdr10-opt to enable optimization of inserting tone-map information only at IDR frames, or when the tone map information changes.

    But I don't know how to export the dynamic metadata from a video sequence as a userSEI payload or a json file. I hope to get a solution from you.
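
    For reference, a rough sketch of the userSEI route, based on the x265_sei_payload and x265_sei structures declared in x265.h (field and enum names should be verified against the x265 version in use). Note that x265 only carries the metadata into the bitstream; the ST 2094-40 payload bytes themselves have to be produced by an external HDR10+ analysis tool:

    #include <x265.h>

    /* Attach externally produced HDR10+ metadata (ITU-T T.35 /
       ST 2094-40 bytes) to one picture as a user SEI payload. */
    void attach_hdr10plus_sei(x265_picture *pic,
                              uint8_t *t35_bytes, int t35_size)
    {
        static x265_sei_payload sei;
        sei.payloadType = USER_DATA_REGISTERED_ITU_T_T35;  /* SEI type 4 */
        sei.payloadSize = t35_size;
        sei.payload     = t35_bytes;

        pic->userSEI.numPayloads = 1;
        pic->userSEI.payloads    = &sei;
    }

    The json route quoted above (--dhdr10-info) carries the same metadata in file form instead.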

  • FFmpeg/x264 “subq” or “subme” Settings

    26 August 2018, by RRN

    I am using FFmpeg 4. FFmpeg's x264 wrapper has an option called “subq” or “subme”, and the documentation says the default is ‘6’. But when I omit this option and then check the output video with “MediaInfo”, the value reported is ‘0’ instead of ‘6’. The documentation doesn't mention ‘0’; does anybody know why?
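
    One way to remove the ambiguity, at least for testing, is to set the value explicitly, so that whatever MediaInfo reads back from x264's settings SEI is known in advance. A sketch through FFmpeg's libx264 wrapper, assuming an already-allocated encoder context (the "x264-params" private option passes settings straight through to x264):

    #include <libavutil/opt.h>
    #include <libavcodec/avcodec.h>

    /* Force sub-pixel ME explicitly before avcodec_open2(); this is the
       API equivalent of -x264-params subme=6 on the command line. */
    void force_subme(AVCodecContext *enc_ctx)
    {
        av_opt_set(enc_ctx->priv_data, "x264-params", "subme=6", 0);
    }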

  • How can I quantitatively measure gstreamer H264 latency between source and display?

    10 August 2018, by KevinM

    I have a project where we are using gstreamer, x264, etc., to multicast a video stream over a local network to multiple receivers (dedicated computers attached to monitors). We're using gstreamer on both the video source (camera) systems and the display monitors.

    We're using RTP, payload 96, and libx264 to encode the video stream (no audio).

    But now I need to quantify the latency between (as close as possible to) frame acquisition and display.

    Does anyone have suggestions that use the existing software?

    Ideally I'd like to be able to run the testing software for a few hours to generate enough statistics to quantify the system. That means I can't do one-off tests like pointing the source camera at the receiving display monitor while it shows a high-resolution timestamp, and manually calculating the difference...

    I do realise that using a pure software-only solution, I will not be able to quantify the video acquisition delay (i.e. CCD to framebuffer).

    I can arrange that the system clocks on the source and display systems are synchronised to a high accuracy (using PTP), so I will be able to trust the system clocks (else I will use some software to track the difference between the system clocks and remove this from the test results).

    In case it helps, the project applications are written in C++, so I can use C event callbacks if they're available. I'm considering embedding the system time in a custom header (e.g. frame xyz, encoded at time TTT) and using the same information on the receiver to calculate a difference; a sketch of this idea follows.
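
    As a sketch of that idea (illustrative only: the element name "enc" and the use of g_get_real_time() as the wall clock are assumptions, not part of the project), a buffer probe on the encoder's src pad on the sender, plus an identical probe on the video sink's sink pad on the receiver, logs each buffer's PTS against the PTP-synchronised system clock; matching the two logs by PTS then yields a per-frame latency distribution over hours of running:

    #include <gst/gst.h>

    /* Log every buffer's PTS against the wall clock as it passes a pad. */
    static GstPadProbeReturn
    stamp_probe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
    {
        GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER(info);
        g_print("%s pts=%" GST_TIME_FORMAT " wall=%" G_GINT64_FORMAT "\n",
                (const char *)user_data,
                GST_TIME_ARGS(GST_BUFFER_PTS(buf)),
                g_get_real_time());     /* microseconds since the epoch */
        return GST_PAD_PROBE_OK;
    }

    void add_stamp_probe(GstElement *pipeline, const char *tag)
    {
        /* "enc" is a placeholder for the x264enc element's name. */
        GstElement *enc = gst_bin_get_by_name(GST_BIN(pipeline), "enc");
        GstPad *pad = gst_element_get_static_pad(enc, "src");
        gst_pad_add_probe(pad, GST_PAD_PROBE_TYPE_BUFFER,
                          stamp_probe, (gpointer)tag, NULL);
        gst_object_unref(pad);
        gst_object_unref(enc);
    }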

  • What is the cause of this green and yellow ffmpeg artifact?

    17 July 2018, by Suever

    I'm trying to convert a series of image frames into a video with ffmpeg. For some of the image series, I get a strange yellow/green artifact, and I'm not sure which setting in the conversion is causing it or the best way to fix it.

    The command I'm using for the conversion is

    ffmpeg -f concat -safe 0 -i inputs.txt -c:v libx264 -pix_fmt yuv420p -r 10 -vf "scale=1024:-2" -movflags +faststart video.mp4
    

    A reproducible example with three image files and an associated inputs.txt file listing the files and their durations can be downloaded here (Dropbox link).
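
    For readers without the archive: a concat-demuxer inputs.txt has the shape below (the file names and durations here are placeholders, not the actual test data):

    file 'frame_001.png'
    duration 2.0
    file 'frame_002.png'
    duration 2.0
    file 'frame_003.png'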

    Here's the resulting artifact:

    [screenshot of the artifact]

    And the GIF of the output:

    [animated GIF of the output]

    What settings could be causing this artifact, and what can I do to reduce or remove it from the video? This happens on both versions of ffmpeg that I have tried:

    ffmpeg version 3.4.1 Copyright (c) 2000-2017 the FFmpeg developers
      built with Apple LLVM version 9.0.0 (clang-900.0.39.2)
      configuration: --prefix=/usr/local/Cellar/ffmpeg/3.4.1 --enable-shared --enable-pthreads --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-gpl --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-opencl --enable-videotoolbox --disable-lzma
    
    ffmpeg version 3.1-tessus Copyright (c) 2000-2016 the FFmpeg developers
      built with Apple LLVM version 6.0 (clang-600.0.57) (based on LLVM 3.5svn)
      configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg --as=yasm --extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl --enable-libass --enable-libbluray --enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopus --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzmq --enable-version3 --disable-ffplay --disable-indev=qtkit --disable-indev=x11grab_xcb
    

    Any insight is appreciated.