
Media (91)

Other articles (44)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP site to find out.

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that resulted in the problem; and a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

On other sites (5687)

  • I'm trying to hide information in an H264 video. When I stitch the video back up, split it into frames again and try to read it, the information is lost

    18 May 2024, by Wer Wer

    I'm trying to create a video steganography Python script. The algorithm for hiding will be:

    1. convert any video codec into h264 lossless
    2. save the audio of the video and split the h264 video into frames
    3. hide my txt secret into frame0 using the LSB replacement method
    4. stitch the video back up and put the audio back in


    ...and when I want to recover the text, I'll:

    1. save the audio of the video and split the encoded h264 video into frames
    2. retrieve my hidden text from frame0 and print the text


    So, this is what I can do:

    1. split the video
    2. hide the text in frame0
    3. retrieve the text from frame0
    4. stitch the video


    But after stitching the video, when I tried to retrieve the text by splitting that encoded video again, the text appeared to be lost, because I got this error:

    UnicodeEncodeError: 'charmap' codec can't encode character '\x82' in position 21: character maps to <undefined>
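
    As an aside, that exception is raised by print() when the recovered characters can't be mapped to the console's charmap encoding; printing the repr() instead sidesteps the display problem so whatever was actually decoded can be inspected. A minimal tweak, reusing the decode() helper from the script below:

    # Sketch: escape unprintable characters instead of raising UnicodeEncodeError
    decoded_message = decode(encoded_image_path)
    print(f"[+] Decoded message: {decoded_message!r}")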

    I'm not sure whether my LSB replacement payload was lost, which would explain why I can't retrieve my frame0 information, or whether the H264 conversion command I used produced a lossy H264 video instead of a lossless one (which I doubt, because I specified -qp 0). This was the command I used to convert my video:

    ffmpeg -i video.mp4 -t 12 -c:v libx264 -preset veryslow -qp 0 output.mp4
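
    One way to test the lossless question directly is to round-trip a single frame and compare pixels. This is a minimal sketch (the file names are placeholders); note it uses libx264rgb rather than libx264, since plain libx264 converts RGB input to YUV, and that colorspace conversion alone can disturb low-order bits even at -qp 0:

    import subprocess

    import cv2
    import numpy as np


    def roundtrip_is_exact(png_path, tmp_mp4="roundtrip.mp4", tmp_png="roundtrip.png"):
        # Encode one frame losslessly in the RGB domain to avoid YUV rounding.
        subprocess.run(
            ["ffmpeg", "-y", "-i", png_path, "-c:v", "libx264rgb", "-qp", "0", tmp_mp4],
            check=True,
        )
        # Decode the first frame back out to PNG.
        subprocess.run(
            ["ffmpeg", "-y", "-i", tmp_mp4, "-frames:v", "1", tmp_png], check=True
        )
        before = cv2.imread(png_path)
        after = cv2.imread(tmp_png)
        # True only if every pixel survived the encode/decode round trip.
        return np.array_equal(before, after)

    If this returns False for a frame, the encode itself is altering pixel values, independent of the steganography code.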


    This is my code:

    import json
    import os
    import magic
    import ffmpeg
    import cv2
    import numpy as np

    import subprocess

    # Path to the file you want to check
    here = os.path.dirname(os.path.abspath(__file__))
    file_path = os.path.join(here, "output.mp4")
    raw_video = cv2.VideoCapture(file_path)
    audio_output_path = os.path.join(here, "audio.aac")
    final_video_file = os.path.join(here, "output.mp4")

    # create a folder to save the frames
    frames_directory = os.path.join(here, "data1")
    try:
        if not os.path.exists(frames_directory):
            os.makedirs(frames_directory)
    except OSError:
        print("Error: Creating directory of data")

    file_path_txt = os.path.join(here, "hiddentext.txt")
    # Read the content of the file
    with open(file_path_txt, "r") as f:
        file_content = f.read()
    # txt_binary_representation = "".join(format(byte, "08b") for byte in file_content)
    # print(file_content)

    """
    use this cmd to convert any video to h264 lossless. original vid in 10 bit depth format
    ffmpeg -i video.mp4 -c:v libx264 -preset veryslow -qp 0 output.mp4

    use this cmd to convert any video to h264 lossless. original vid in 8 bit depth format
    ffmpeg -i video.mp4 -c:v libx264 -preset veryslow -crf 0 output.mp4

    i used this command to only get first 12 sec of video because the h264 vid is too large
    ffmpeg -i video.mp4 -t 12 -c:v libx264 -preset veryslow -qp 0 output.mp4

    check for multiple values to ensure its h264 lossless:
    1. CRF = 0
    2. qp = 0
    3. High 4:4:4 Predictive
    """


    # region --codec checking. ensure video is h264 lossless--
    def check_h264_lossless(file_path):
        try:
            # Use ffprobe to get detailed codec information, including tags
            result = subprocess.run(
                [
                    "ffprobe",
                    "-v",
                    "error",
                    "-show_entries",
                    "stream=codec_name,codec_long_name,profile,level,bit_rate,avg_frame_rate,nb_frames,tags",
                    "-of",
                    "json",
                    file_path,
                ],
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
                text=True,
            )
            # Parse ffprobe's JSON output
            metadata = json.loads(result.stdout)
            print(json.dumps(metadata, indent=4))

            # Check if the CRF value is available in the tags
            for stream in metadata.get("streams", []):
                if stream.get("codec_name") == "h264":
                    tags = stream.get("tags", {})
                    crf_value = tags.get("crf")
                    encoder = tags.get("encoder")
                    print(f"CRF value: {crf_value}")
                    print(f"Encoder: {encoder}")
            return metadata
        except Exception as e:
            return f"An error occurred: {e}"


    # endregion


    # region --splitting video into frames--
    def extract_audio(input_video_path, audio_output_path):
        if os.path.exists(audio_output_path):
            print(f"Audio file {audio_output_path} already exists. Skipping extraction.")
            return
        command = [
            "ffmpeg",
            "-i",
            input_video_path,
            "-q:a",
            "0",
            "-map",
            "a",
            audio_output_path,
        ]
        try:
            subprocess.run(command, check=True)
            print(f"Audio successfully extracted to {audio_output_path}")
        except subprocess.CalledProcessError as e:
            print(f"An error occurred: {e}")


    def split_into_frames():
        extract_audio(file_path, audio_output_path)
        currentframe = 0
        print("Splitting...")
        while True:
            ret, frame = raw_video.read()
            if ret:
                name = os.path.join(here, "data1", f"frame{currentframe}.png")
                # print("Creating..." + name)
                cv2.imwrite(name, frame)
                currentframe += 1
            else:
                print("Complete")
                break


    # endregion


    # region --merge all back into h264 lossless--
    # output_video_file = "output1111.mp4"


    def stitch_frames_to_video(frames_dir, output_video_path, framerate=60):
        command = [
            "ffmpeg",
            "-y",
            "-framerate",
            str(framerate),
            "-i",
            os.path.join(frames_dir, "frame%d.png"),
            "-c:v",
            "libx264",
            "-preset",
            "veryslow",
            "-qp",
            "0",
            output_video_path,
        ]

        try:
            subprocess.run(command, check=True)
            print(f"Video successfully created at {output_video_path}")
        except subprocess.CalledProcessError as e:
            print(f"An error occurred: {e}")


    def add_audio_to_video(video_path, audio_path, final_output_path):
        command = [
            "ffmpeg",
            "-i",
            video_path,
            "-i",
            audio_path,
            "-c:v",
            "copy",
            "-c:a",
            "aac",
            "-strict",
            "experimental",
            final_output_path,
        ]
        try:
            subprocess.run(command, check=True)
            print(f"Final video with audio created at {final_output_path}")
        except subprocess.CalledProcessError as e:
            print(f"An error occurred: {e}")


    # endregion


    def to_bin(data):
        if isinstance(data, str):
            return "".join([format(ord(i), "08b") for i in data])
        elif isinstance(data, bytes) or isinstance(data, np.ndarray):
            return [format(i, "08b") for i in data]
        elif isinstance(data, int) or isinstance(data, np.uint8):
            return format(data, "08b")
        else:
            raise TypeError("Type not supported")


    def encode(image_name, secret_data):
        image = cv2.imread(image_name)
        n_bytes = image.shape[0] * image.shape[1] * 3 // 8
        print("[*] Maximum bytes to encode:", n_bytes)
        secret_data += "====="
        if len(secret_data) > n_bytes:
            raise ValueError("[!] Insufficient bytes, need bigger image or less data")
        print("[*] Encoding Data")

        data_index = 0
        binary_secret_data = to_bin(secret_data)
        data_len = len(binary_secret_data)
        for row in image:
            for pixel in row:
                r, g, b = to_bin(pixel)
                if data_index < data_len:
                    pixel[0] = int(r[:-1] + binary_secret_data[data_index], 2)
                    data_index += 1
                if data_index < data_len:
                    pixel[1] = int(g[:-1] + binary_secret_data[data_index], 2)
                    data_index += 1
                if data_index < data_len:
                    pixel[2] = int(b[:-1] + binary_secret_data[data_index], 2)
                    data_index += 1
                if data_index >= data_len:
                    break
        return image


    def decode(image_name):
        print("[+] Decoding")
        image = cv2.imread(image_name)
        binary_data = ""
        for row in image:
            for pixel in row:
                r, g, b = to_bin(pixel)
                binary_data += r[-1]
                binary_data += g[-1]
                binary_data += b[-1]
        all_bytes = [binary_data[i : i + 8] for i in range(0, len(binary_data), 8)]
        decoded_data = ""
        for byte in all_bytes:
            decoded_data += chr(int(byte, 2))
            if decoded_data[-5:] == "=====":
                break
        return decoded_data[:-5]


    frame0_path = os.path.join(here, "data1", "frame0.png")
    encoded_image_path = os.path.join(here, "data1", "frame0.png")


    def encoding_function():
        split_into_frames()

        encoded_image = encode(frame0_path, file_content)
        cv2.imwrite(encoded_image_path, encoded_image)

        stitch_frames_to_video(frames_directory, file_path)
        add_audio_to_video(file_path, audio_output_path, final_video_file)


    def decoding_function():
        split_into_frames()
        decoded_message = decode(encoded_image_path)
        print(f"[+] Decoded message: {decoded_message}")


    # encoding_function()
    decoding_function()

    So I tried to put my decoding function into my encoding function like this:

    def encoding_function():
        split_into_frames()

        encoded_image = encode(frame0_path, file_content)
        cv2.imwrite(encoded_image_path, encoded_image)

        # immediately get frame0 and decode without stitching to check if the data is there
        decoded_message = decode(encoded_image_path)
        print(f"[+] Decoded message: {decoded_message}")

        stitch_frames_to_video(frames_directory, file_path)
        add_audio_to_video(file_path, audio_output_path, final_video_file)

    This returns my secret text from frame0. But splitting the video again after stitching does not return my hidden text; it was lost.

    def decoding_function():
        split_into_frames()
        # this function runs after encoding_function(); the secret text is lost,
        # resulting in the charmap codec can't encode error
        decoded_message = decode(encoded_image_path)
        print(f"[+] Decoded message: {decoded_message}")

    EDIT:
    So I ran the encoding function first, copied frame0.png out and placed it somewhere else. Then I ran the decoding function and got another frame0.png.

    I ran both frame0.png files through this Python function:

    frame0_data1_path = os.path.join(here, "data1", "frame0.png")
    frame0_data2_path = os.path.join(here, "data2", "frame0.png")
    frame0_data1 = cv2.imread(frame0_data1_path)
    frame0_data2 = cv2.imread(frame0_data2_path)

    if frame0_data1 is None:
        print(f"Error: Could not load image from {frame0_data1_path}")
    elif frame0_data2 is None:
        print(f"Error: Could not load image from {frame0_data2_path}")
    else:
        if np.array_equal(frame0_data1, frame0_data2):
            print("The frames are identical.")
        else:
            print("The frames are different.")

    ...and apparently the two are different. This means my frame0 pixel data got changed when I stitched the frames back into a video after encoding. Is there a way to keep it from changing? Or will h264, or any video codec, always change the frames a little when you stitch them back up?
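
    To quantify how different the two frames are, the comparison above could be extended along these lines (a sketch reusing the same data1/data2 paths):

    import os

    import cv2
    import numpy as np

    here = os.path.dirname(os.path.abspath(__file__))
    a = cv2.imread(os.path.join(here, "data1", "frame0.png"))
    b = cv2.imread(os.path.join(here, "data2", "frame0.png"))

    # Signed difference, so the magnitude of each change is visible.
    diff = a.astype(np.int16) - b.astype(np.int16)
    print("changed pixels:", np.count_nonzero(diff.any(axis=2)))
    print("max abs difference:", np.abs(diff).max())

    # LSB replacement stores the payload in the lowest bit, so count LSB flips.
    print("LSB flips:", np.count_nonzero((a & 1) != (b & 1)))

    If the maximum absolute difference is small but spread across many pixels, that points to rounding introduced somewhere in the encode/decode pipeline rather than to a bug in the hiding code.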


  • Using FFmpeg to stitch together H.264 videos and variably-spaced JPEG pictures; dealing with ffmpeg warnings

    19 October 2022, by LB2

    Context


    I have a process flow that may output either H.264 Annex B streams, variably-spaced JPEGs, or a mixture of the two. By variably-spaced I mean that the elapsed time between any two adjacent JPEGs may be (and is likely to be) different from that between any other two adjacent JPEGs. So examples of possible inputs are:

    1. stream1.h264
    2. {Set of JPEGs}
    3. stream1.h264 + stream2.h264
    4. stream1.h264 + {Set of JPEGs}
    5. stream1.h264 + {Set of JPEGs} + stream2.h264
    6. stream1.h264 + {Set of JPEGs} + stream2.h264 + {Set of JPEGs} + ...
    7. stream1.h264 + stream2.h264 + {Set of JPEGs} + ...

    The output needs to be a single stitched (i.e. concatenated) output in an MPEG-4 container.

    Requirements: no re-encoding or transcoding of the existing video compression (a one-time conversion of JPEG sets to a video format is okay).

    Solution Prototype


    To prototype the solution, I found that ffmpeg has a concat demuxer that lets me specify an ordered sequence of inputs which ffmpeg then concatenates together, but all inputs must be of the same format. So, to meet that requirement, I:

    1. Convert every JPEG set to an .mp4 using concat (and using the duration # directive to specify the time-spacing between each JPEG)
    2. Convert every .h264 to .mp4 using -c copy to avoid transcoding.
    3. Stitch all generated interim .mp4 files into the single final .mp4 using -f concat and -c copy.

    Here's the bash script, in parts, that performs the above:


    1. Ignore the curl comment; that's from originally generating 100 JPEG images with numbers, which are simply saved locally. What the loop does is generate a concat input file with file sequence#.jpeg directives and a duration # directive, where each successive JPEG delay is incremented by 0.1 seconds (0.1 between the first and second, 0.2 between the 2nd and 3rd, 0.3 between the 3rd and 4th, and so on). Then it runs the ffmpeg command to convert the set of JPEGs to an interim .mp4 file.


      echo "ffconcat version 1.0" >ffconcat-jpeg.txt&#xA;echo >>ffconcat-jpeg.txt&#xA;&#xA;for i in {1..100}&#xA;do&#xA;    echo "file $i.jpeg" >>ffconcat-jpeg.txt&#xA;    d=$(echo "$i" | awk &#x27;{printf "%f", $1 / 10}&#x27;)&#xA;    # d=$(echo "scale=2; $i/10" | bc)&#xA;    echo "duration $d" >>ffconcat-jpeg.txt&#xA;    echo "" >>ffconcat-jpeg.txt&#xA;    # curl -o "$i.jpeg" "https://math.tools/equation/get_equaimages?equation=$i&amp;fontsize=256"&#xA;done&#xA;&#xA;ffmpeg \&#xA;    -hide_banner \&#xA;    -vsync vfr \&#xA;    -f concat \&#xA;    -i ffconcat-jpeg.txt \&#xA;    -r 30 \&#xA;    -video_track_timescale 90000 \&#xA;    video-jpeg.mp4&#xA;

    2. Convert the two streams from .h264 to .mp4 via copy (no transcoding).


    ffmpeg \
        -hide_banner \
        -i low-motion-video.h264 \
        -c copy \
        -vsync vfr \
        -video_track_timescale 90000 \
        low-motion-video.mp4

    ffmpeg \
        -hide_banner \
        -i full-video.h264 \
        -c copy \
        -video_track_timescale 90000 \
        -vsync vfr \
        full-video.mp4

    3. Stitch everything together by generating another concat directive file.


      echo "ffconcat version 1.0" >ffconcat-h264.txt&#xA;echo >>ffconcat-h264.txt&#xA;echo "file low-motion-video.mp4" >>ffconcat-h264.txt&#xA;echo >>ffconcat-h264.txt&#xA;echo "file full-video.mp4" >>ffconcat-h264.txt&#xA;echo >>ffconcat-h264.txt&#xA;echo "file video-jpeg.mp4" >>ffconcat-h264.txt&#xA;echo >>ffconcat-h264.txt&#xA;&#xA;ffmpeg \&#xA;    -hide_banner \&#xA;    -f concat \&#xA;    -i ffconcat-h264.txt \&#xA;    -pix_fmt yuv420p \&#xA;    -c copy \&#xA;    -video_track_timescale 90000 \&#xA;    -vsync vfr \&#xA;    video-out.mp4&#xA;&#xA;


    Problem (and attempted troubleshooting)


    The above does produce a reasonable output: it plays the first video, then the second video with no timing/rate issues AFAICT, then the JPEGs with the time between each JPEG "frame" growing successively, as expected.


    But the conversion process produces warnings that concern me (for compatibility with players, or for other real-world streams that may trigger issues my prototyping content doesn't make obvious). Initial attempts generated hundreds of warnings, but with some added arguments I reduced them to just a handful; this handful is stubborn, though, and nothing I have tried helps.


    The first conversion of JPEGs to .mp4 goes fine with the following output:


    Input #0, concat, from 'ffconcat-jpeg.txt':
      Duration: 00:08:25.00, start: 0.000000, bitrate: 0 kb/s
      Stream #0:0: Video: png, pal8(pc), 176x341 [SAR 3780:3780 DAR 16:31], 25 fps, 25 tbr, 25 tbn, 25 tbc
    Stream mapping:
      Stream #0:0 -> #0:0 (png (native) -> h264 (libx264))
    Press [q] to stop, [?] for help
    [libx264 @ 0x7fe418008e00] using SAR=1/1
    [libx264 @ 0x7fe418008e00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2
    [libx264 @ 0x7fe418008e00] profile High 4:4:4 Predictive, level 1.3, 4:4:4, 8-bit
    [libx264 @ 0x7fe418008e00] 264 - core 163 r3060 5db6aa6 - H.264/MPEG-4 AVC codec - Copyleft 2003-2021 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=11 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to 'video-jpeg.mp4':
      Metadata:
        encoder         : Lavf58.76.100
      Stream #0:0: Video: h264 (avc1 / 0x31637661), yuv444p(tv, progressive), 176x341 [SAR 1:1 DAR 16:31], q=2-31, 30 fps, 90k tbn
        Metadata:
          encoder         : Lavc58.134.100 libx264
        Side data:
          cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
    frame=  100 fps=0.0 q=-1.0 Lsize=     157kB time=00:07:55.33 bitrate=   2.7kbits/s speed=2.41e+03x
    video:155kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.800846%
    [libx264 @ 0x7fe418008e00] frame I:1     Avg QP:20.88  size:   574
    [libx264 @ 0x7fe418008e00] frame P:43    Avg QP:14.96  size:  2005
    [libx264 @ 0x7fe418008e00] frame B:56    Avg QP:21.45  size:  1266
    [libx264 @ 0x7fe418008e00] consecutive B-frames: 14.0% 24.0% 30.0% 32.0%
    [libx264 @ 0x7fe418008e00] mb I  I16..4: 36.4% 55.8%  7.9%
    [libx264 @ 0x7fe418008e00] mb P  I16..4:  5.1%  7.5% 11.2%  P16..4:  5.6%  8.1%  4.5%  0.0%  0.0%    skip:57.9%
    [libx264 @ 0x7fe418008e00] mb B  I16..4:  2.4%  0.9%  3.9%  B16..8: 16.2%  8.8%  4.6%  direct: 1.2%  skip:62.0%  L0:56.6% L1:38.7% BI: 4.7%
    [libx264 @ 0x7fe418008e00] 8x8 transform intra:28.3% inter:3.7%
    [libx264 @ 0x7fe418008e00] coded y,u,v intra: 26.5% 0.0% 0.0% inter: 9.8% 0.0% 0.0%
    [libx264 @ 0x7fe418008e00] i16 v,h,dc,p: 82% 13%  4%  0%
    [libx264 @ 0x7fe418008e00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20%  8% 71%  1%  0%  0%  0%  0%  0%
    [libx264 @ 0x7fe418008e00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 41% 11% 29%  4%  2%  3%  1%  7%  1%
    [libx264 @ 0x7fe418008e00] Weighted P-Frames: Y:0.0% UV:0.0%
    [libx264 @ 0x7fe418008e00] ref P L0: 44.1%  4.2% 28.4% 23.3%
    [libx264 @ 0x7fe418008e00] ref B L0: 56.2% 32.1% 11.6%
    [libx264 @ 0x7fe418008e00] ref B L1: 92.4%  7.6%
    [libx264 @ 0x7fe418008e00] kb/s:2.50


    The conversion of individual streams from .h264 to .mp4 generates two types of warnings each. One is [mp4 @ 0x7faee3040400] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly, and the other is [mp4 @ 0x7faee3040400] pts has no value.


    Some posts on SO (I can't find my original finds on that now) suggested that it's safe to ignore and comes from H.264 being an elementary stream that supposedly doesn't contain timestamps. It surprises me a bit, since I produce that stream using the NVENC API and clearly supply timing information for each frame via the PIC_PARAMS structure: NV_STRUCT(PIC_PARAMS, pp); ...; pp.inputTimeStamp = _frameIndex++ * (H264_CLOCK_RATE / _params.frameRate);, where #define H264_CLOCK_RATE 9000 and _params.frameRate = 30.
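
    For concreteness, the quoted timestamp arithmetic works out to 300 clock ticks per frame; here is a sketch of the same expression (in Python, mirroring the C snippet above):

    # Mirrors pp.inputTimeStamp = _frameIndex++ * (H264_CLOCK_RATE / _params.frameRate)
    H264_CLOCK_RATE = 9000
    frame_rate = 30

    for frame_index in range(4):
        # 9000 / 30 = 300 ticks per frame: prints 0, 300, 600, 900
        print(frame_index * (H264_CLOCK_RATE // frame_rate))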


    Input #0, h264, from 'low-motion-video.h264':
      Duration: N/A, bitrate: N/A
      Stream #0:0: Video: h264 (High), yuv420p(progressive), 1440x3040 [SAR 1:1 DAR 9:19], 30 fps, 30 tbr, 1200k tbn, 60 tbc
    Output #0, mp4, to 'low-motion-video.mp4':
      Metadata:
        encoder         : Lavf58.76.100
      Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1440x3040 [SAR 1:1 DAR 9:19], q=2-31, 30 fps, 30 tbr, 90k tbn, 1200k tbc
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    [mp4 @ 0x7faee3040400] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
    [mp4 @ 0x7faee3040400] pts has no value
    [mp4 @ 0x7faee3040400] pts has no value0kB time=-00:00:00.03 bitrate=N/A speed=N/A
        Last message repeated 17985 times
    frame=17987 fps=0.0 q=-1.0 Lsize=   79332kB time=00:09:59.50 bitrate=1084.0kbits/s speed=1.59e+03x
    video:79250kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.103804%
    Input #0, h264, from 'full-video.h264':
      Duration: N/A, bitrate: N/A
      Stream #0:0: Video: h264 (High), yuv420p(progressive), 1440x3040 [SAR 1:1 DAR 9:19], 30 fps, 30 tbr, 1200k tbn, 60 tbc
    Output #0, mp4, to 'full-video.mp4':
      Metadata:
        encoder         : Lavf58.76.100
      Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1440x3040 [SAR 1:1 DAR 9:19], q=2-31, 30 fps, 30 tbr, 90k tbn, 1200k tbc
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    [mp4 @ 0x7f9381864600] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
    [mp4 @ 0x7f9381864600] pts has no value
    [mp4 @ 0x7f9381864600] pts has no value0kB time=-00:00:00.03 bitrate=N/A speed=N/A
        Last message repeated 17981 times
    frame=17983 fps=0.0 q=-1.0 Lsize=   52976kB time=00:09:59.36 bitrate= 724.1kbits/s speed=1.33e+03x
    video:52893kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.156232%


    But the most worrisome error for me is from stitching together all interim .mp4 files into one:


    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ff2010e00] Auto-inserting h264_mp4toannexb bitstream filter
    Input #0, concat, from 'ffconcat-h264.txt':
      Duration: N/A, bitrate: 1082 kb/s
      Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1440x3040 [SAR 1:1 DAR 9:19], 1082 kb/s, 30 fps, 30 tbr, 90k tbn, 60 tbc
        Metadata:
          handler_name    : VideoHandler
          vendor_id       : [0][0][0][0]
    Output #0, mp4, to 'video-out.mp4':
      Metadata:
        encoder         : Lavf58.76.100
      Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1440x3040 [SAR 1:1 DAR 9:19], q=2-31, 1082 kb/s, 30 fps, 30 tbr, 90k tbn, 90k tbc
        Metadata:
          handler_name    : VideoHandler
          vendor_id       : [0][0][0][0]
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9fe1009c00] Auto-inserting h264_mp4toannexb bitstream filter
    [mp4 @ 0x7f9ff2023400] Non-monotonous DTS in output stream 0:0; previous: 53954460, current: 53954460; changing to 53954461. This may result in incorrect timestamps in the output file.
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9fd1008a00] Auto-inserting h264_mp4toannexb bitstream filter
    [mp4 @ 0x7f9ff2023400] Non-monotonous DTS in output stream 0:0; previous: 107900521, current: 107874150; changing to 107900522. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x7f9ff2023400] Non-monotonous DTS in output stream 0:0; previous: 107900522, current: 107886150; changing to 107900523. This may result in incorrect timestamps in the output file.
    frame=36070 fps=0.0 q=-1.0 Lsize=  132464kB time=00:27:54.26 bitrate= 648.1kbits/s speed=6.54e+03x
    video:132296kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.126409%


    I'm not sure how to deal with those non-monotonous DTS errors, and no matter what I try, nothing budges. I analyzed the interim .mp4 files using ffprobe -show_frames and found that the last frame of each interim .mp4 has no DTS, while all previous frames do. E.g.:


    ...
    [FRAME]
    media_type=video
    stream_index=0
    key_frame=0
    pkt_pts=53942461
    pkt_pts_time=599.360678
    pkt_dts=53942461
    pkt_dts_time=599.360678
    best_effort_timestamp=53942461
    best_effort_timestamp_time=599.360678
    pkt_duration=3600
    pkt_duration_time=0.040000
    pkt_pos=54161377
    pkt_size=1034
    width=1440
    height=3040
    pix_fmt=yuv420p
    sample_aspect_ratio=1:1
    pict_type=B
    coded_picture_number=17982
    display_picture_number=0
    interlaced_frame=0
    top_field_first=0
    repeat_pict=0
    color_range=unknown
    color_space=unknown
    color_primaries=unknown
    color_transfer=unknown
    chroma_location=left
    [/FRAME]
    [FRAME]
    media_type=video
    stream_index=0
    key_frame=0
    pkt_pts=53927461
    pkt_pts_time=599.194011
    pkt_dts=N/A
    pkt_dts_time=N/A
    best_effort_timestamp=53927461
    ...


    My guess is that as the concat demuxer reads the input (or elsewhere in ffmpeg's conversion pipeline), it sees no DTS set for the last frame and produces a virtual value equal to the last one seen. Further down the pipeline it consumes this input, sees that the DTS value is repeated, issues a warning, and offsets it by an increment of one, which may be a somewhat nonsensical/unrealistic timing value.
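
    One way to check that guess at the container level is to dump the trailing packets' timestamps (a sketch, assuming ffprobe is on the PATH; the file name refers to one of the interim outputs above):

    import json
    import subprocess


    def tail_packet_timestamps(path, count=3):
        # Dump every video packet as JSON and inspect the last few.
        result = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-show_packets", "-of", "json", path],
            stdout=subprocess.PIPE, text=True, check=True,
        )
        packets = json.loads(result.stdout)["packets"]
        for pkt in packets[-count:]:
            # dts is absent when the muxer never set it for that packet.
            print("pts:", pkt.get("pts"), "dts:", pkt.get("dts", "N/A"))


    tail_packet_timestamps("low-motion-video.mp4")

    If the final packet really lacks a dts, the repeated-then-incremented values in the concat step follow directly from the substitution described above.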


    I tried using -fflags +genpts as suggested in this SO answer, but that doesn't change anything.


    Per other posts suggesting the issue lies with incompatible tbn and tbc values and possible timebase problems, I tried adding -time_base 1:90000, -enc_time_base 1:90000, and -copytb 1, and nothing budges. The -video_track_timescale 90000 is there because it helped reduce those DTS warnings from hundreds down to 3, but it doesn't eliminate them all.


    Question


    What is missing, and how can I get ffmpeg to perform these conversions without warnings, so I can be sure it produces proper, well-formed output?


  • Revision 208aa6158b: Remove get_nonrd_var_based_fixed_partition function

    9 April 2015, by Jingning Han

    Changed Paths:
     Modify /vp9/encoder/vp9_encodeframe.c



    Remove get_nonrd_var_based_fixed_partition function

    This function has been replaced by other approaches and is not
    in use now.

    Change-Id: I387f45b5607d202539e482468ccc70e6c0f9341f