Other articles (63)

  • Personalize by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Write a news item

    21 June 2013

    Present changes to your MédiaSPIP, or news about your projects, in the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the news-item creation form.
    News-item creation form: for a document of type news item, the default fields are: publication date (customize the publication date) (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.

On other sites (11190)

  • FFMPEG changes pixel values when reading and saving png without modification

    25 January 2023, by walrus

    This is a toy problem that came out of trying to pin down a bug in a video pipeline I'm working on. The idea is to take a frame from a YUV420 video, modify it as an RGB24 image, and reinsert it. To do this I convert YUV420 -> YUV444 -> RGB -> YUV444 -> YUV420. Doing this without any modification should reproduce the same frame; however, I noticed slight color shifts.
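    As a side note on why exact round trips are hard: chroma subsampling aside, each RGB<->YUV hop quantizes back to 8-bit integers, which alone can perturb values by ±1. Here is a minimal numpy sketch of that effect — it assumes BT.601 full-range coefficients, which may not match what the actual pipeline uses, and it is an illustration, not the asker's code:

```python
import numpy as np

# Round-trip one RGB pixel through YUV with uint8 rounding at each step.
# Coefficients are BT.601 full-range (an assumption; the real pipeline
# may use BT.709 and/or limited range).
def rgb_to_yuv(rgb):
    r, g, b = rgb.astype(np.float64)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    v = 0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return np.clip(np.round([y, u, v]), 0, 255).astype(np.uint8)

def yuv_to_rgb(yuv):
    y, u, v = yuv.astype(np.float64)
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    return np.clip(np.round([r, g, b]), 0, 255).astype(np.uint8)

px = np.array([226, 103, 125], dtype=np.uint8)  # a pixel from the toy image
roundtrip = yuv_to_rgb(rgb_to_yuv(px))
print(px, "->", roundtrip)  # differs slightly (blue channel off by one)
```

    So even a "do nothing" conversion chain can drift by a unit or two per channel purely from integer rounding.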

    


    I tried to isolate the problem with a toy 3x3 RGB32 PNG image. The function read_and_save_image reads the image, saves it as a new file, and returns the pixel array it read. I run the function three times in succession, using each run's output file as the next run's input, which demonstrates a perplexing fact: passing the image through the function once yields a file with different pixel values, yet passing it through a second time changes nothing. More confusing still, the pixel arrays returned by the three runs are all identical.

    


    tl;dr: How can I load and save the toy image below with ffmpeg, as a new file, such that the pixel values of the new and original files are identical?

    


    Here is the original image, followed by the result of one and two passes through the function. Note that the pixel values displayed when reading these images in Preview have changed ever so slightly; this becomes noticeable in a video.
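    One observation worth keeping in mind when comparing the files themselves: different PNG files are not, on their own, proof of different pixels. PNG is lossless but not canonical, so the same pixel buffer can be stored as different byte streams (different compressor settings, filters, or ancillary chunks). A toy illustration with zlib, the DEFLATE compressor PNG uses:

```python
import zlib

# Byte-identical "pixels" can yield different files depending on compressor
# settings: level 0 stores the data uncompressed, level 9 compresses hard.
pixels = b"\x00" * 1000  # stand-in for a highly compressible image buffer

stored = zlib.compress(pixels, level=0)
packed = zlib.compress(pixels, level=9)

print(len(stored), len(packed))           # very different "file" sizes
print(stored == packed)                   # False: different byte streams
print(zlib.decompress(stored) == pixels,
      zlib.decompress(packed) == pixels)  # True True: identical pixels
```

    So a changed file size (like the 4105-byte vs 128-byte files reported by mediainfo below) does not by itself mean the pixels changed; what matters is what a decoder reads back.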

    


    Test image (very small): 3x3 test image file

    Here are the pixel values read (note that after being loaded and saved there is a change):

    original test image

    test image after one pass

    test image after two passes

    Edit: here is an RGB24 frame extracted from a video I am using to test my pipeline. I had the same issue: pixel values changed after loading and saving with ffmpeg.

    frame from the video I was testing the pipeline on

    Here is a screenshot showing how the image is noticeably darker after ffmpeg (same pixels, top-right corner of the image):

    zoomed in top right corner

    Here is the code of the toy problem:

    import os
    import ffmpeg
    import numpy as np


    def read_and_save_image(in_file, out_file, width, height, pix_fmt='rgb32'):
        input_data, _ = (
            ffmpeg
            .input(in_file)
            .output('pipe:', format='rawvideo', pix_fmt=pix_fmt)
            .run(capture_stdout=True)
        )

        frame = np.frombuffer(input_data, np.uint8)
        print(in_file, '\n', frame.reshape((height, width, -1)))

        save_data = (
            ffmpeg
            .input('pipe:', format='rawvideo', pix_fmt=pix_fmt, s='{}x{}'.format(width, height))
            .output(out_file, pix_fmt=pix_fmt)
            .overwrite_output()
            .run_async(pipe_stdin=True)
        )

        save_data.stdin.write(frame.tobytes())
        save_data.stdin.close()
        #save_data.wait()

        return frame

    try:
        test_img = "test_image.png"
        test_img_1 = "test_image_1.png"
        test_img_2 = "test_image_2.png"
        test_img_3 = "test_image_3.png"

        width, height, pix_fmt = 3, 3, 'rgb32'
        #width, height, pix_fmt = video_stream['width'], video_stream['height'], 'rgb24'
        test_img_pxls = read_and_save_image(test_img, test_img_1, width, height, pix_fmt)
        test_img_1_pxls = read_and_save_image(test_img_1, test_img_2, width, height, pix_fmt)
        test_img_2_pxls = read_and_save_image(test_img_2, test_img_3, width, height, pix_fmt)

        print(np.array_equiv(test_img_pxls, test_img_1_pxls))
        print(np.array_equiv(test_img_1_pxls, test_img_2_pxls))

    except ffmpeg.Error as e:
        print('stdout:', e.stdout.decode('utf8'))
        print('stderr:', e.stderr.decode('utf8'))
        raise e


    !mediainfo --Output=JSON --Full $test_img
    !mediainfo --Output=JSON --Full $test_img_1
    !mediainfo --Output=JSON --Full $test_img_2

    Here is the console output of the program, showing that the pixel arrays read by ffmpeg are identical even though the files differ:

    test_image.png 
     [[[253 218 249 255]
      [252 213 248 255]
      [251 200 244 255]]

     [[253 227 250 255]
      [249 209 236 255]
      [243 169 206 255]]

     [[253 235 251 255]
      [245 195 211 255]
      [226 103 125 255]]]
    test_image_1.png 
     [[[253 218 249 255]
      [252 213 248 255]
      [251 200 244 255]]

     [[253 227 250 255]
      [249 209 236 255]
      [243 169 206 255]]

     [[253 235 251 255]
      [245 195 211 255]
      [226 103 125 255]]]
    test_image_2.png 
     [[[253 218 249 255]
      [252 213 248 255]
      [251 200 244 255]]

     [[253 227 250 255]
      [249 209 236 255]
      [243 169 206 255]]

     [[253 235 251 255]
      [245 195 211 255]
      [226 103 125 255]]]
    True
    True
    {
    "media": {
    "@ref": "test_image.png",
    "track": [
    {
    "@type": "General",
    "ImageCount": "1",
    "FileExtension": "png",
    "Format": "PNG",
    "FileSize": "4105",
    "StreamSize": "0",
    "File_Modified_Date": "UTC 2023-01-19 13:49:00",
    "File_Modified_Date_Local": "2023-01-19 13:49:00"
    },
    {
    "@type": "Image",
    "Format": "PNG",
    "Format_Compression": "LZ77",
    "Width": "3",
    "Height": "3",
    "BitDepth": "32",
    "Compression_Mode": "Lossless",
    "StreamSize": "4105"
    }
    ]
    }
    }

    {
    "media": {
    "@ref": "test_image_1.png",
    "track": [
    {
    "@type": "General",
    "ImageCount": "1",
    "FileExtension": "png",
    "Format": "PNG",
    "FileSize": "128",
    "StreamSize": "0",
    "File_Modified_Date": "UTC 2023-01-24 15:31:58",
    "File_Modified_Date_Local": "2023-01-24 15:31:58"
    },
    {
    "@type": "Image",
    "Format": "PNG",
    "Format_Compression": "LZ77",
    "Width": "3",
    "Height": "3",
    "BitDepth": "32",
    "Compression_Mode": "Lossless",
    "StreamSize": "128"
    }
    ]
    }
    }

    {
    "media": {
    "@ref": "test_image_2.png",
    "track": [
    {
    "@type": "General",
    "ImageCount": "1",
    "FileExtension": "png",
    "Format": "PNG",
    "FileSize": "128",
    "StreamSize": "0",
    "File_Modified_Date": "UTC 2023-01-24 15:31:59",
    "File_Modified_Date_Local": "2023-01-24 15:31:59"
    },
    {
    "@type": "Image",
    "Format": "PNG",
    "Format_Compression": "LZ77",
    "Width": "3",
    "Height": "3",
    "BitDepth": "32",
    "Compression_Mode": "Lossless",
    "StreamSize": "128"
    }
    ]
    }
    }


  • `ffmpeg -f concat` doesn't work when all input streams appear to have the same spec

    9 March 2023, by Roy

    My ffmpeg command:

    ffmpeg -safe 0 -f concat -i list.txt -c copy out.mp4
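    For reference, the list.txt consumed by the concat demuxer is presumably something like the following (the file names a.mp4 and b.mp4 are assumptions based on the input dumps):

```
file 'a.mp4'
file 'b.mp4'
```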


    My 1st input file:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'D:\Applications\ffmpeg_6.0_full\a.mp4':
      Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2avc1mp41
        encoder         : Lavf60.3.100
      Duration: 00:00:04.97, start: 0.000000, bitrate: 40 kb/s
      Stream #0:0[0x1](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 2 kb/s (default)
        Metadata:
          handler_name    : SoundHandler
          vendor_id       : [0][0][0][0]
      Stream #0:1[0x2](und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 27 kb/s, 30 fps, 30 tbr, 30k tbn (default)
        Metadata:
          handler_name    : VideoHandler
          vendor_id       : [0][0][0][0]
          encoder         : Lavc60.3.100 libx264


    My 2nd input file:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'D:\Applications\ffmpeg_6.0_full\b.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp41isom
        creation_time   : 2023-03-08T06:47:13.000000Z
        artist          : Microsoft Game DVR
        title           : PUBG: BATTLEGROUNDS
      Duration: 00:10:00.16, start: 0.000000, bitrate: 20885 kb/s
      Stream #0:0[0x1](und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 20739 kb/s, 30 fps, 30 tbr, 30k tbn (default)
        Metadata:
          creation_time   : 2023-03-08T06:47:13.000000Z
          handler_name    : VideoHandler
          vendor_id       : [0][0][0][0]
          encoder         : AVC Coding
      Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 131 kb/s (default)
        Metadata:
          creation_time   : 2023-03-08T06:47:13.000000Z
          handler_name    : SoundHandler
          vendor_id       : [0][0][0][0]


    The above command prints a number of warnings:

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0000025239902d40] Auto-inserting h264_mp4toannexb bitstream filter
    [mp4 @ 00000252396fe5c0] Non-monotonous DTS in output stream 0:1; previous: 218112, current: 150024; changing to 218113. This may result in incorrect timestamps in the output file.
    ...
    a lot of them
    ...
    frame=25992 fps=21754 q=-1.0 Lsize= 1519621kB time=00:14:49.39 bitrate=13996.8kbits/s speed= 744x
    video:9649kB audio:1519216kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown


    The resulting video plays the first part correctly; after that, players either skip straight to the end (MPC-HC) or render nothing while the timer advances normally (VLC).


    My understanding of concat is that it requires all inputs to have the same spec, which I think mine satisfy (all the "Stream #0:0", etc., lines match). I only see the following differences, which I assumed would be okay:

    1. The metadata differ, both for the whole input (e.g. "major_brand") and for each stream (e.g. "encoder"). I assumed metadata would not affect the processing.

    2. The order of the video/audio streams differs between the two inputs: the 1st input file has audio then video; the 2nd has video then audio. I assumed ffmpeg recognizes this and will not concatenate a video stream onto an audio stream.
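    That second assumption is the one most worth checking: as far as I know, the concat demuxer matches streams across files by index, not by type, so with these stream orders a.mp4's audio (#0:0) would be concatenated with b.mp4's video (#0:0). The summary line (video:9649kB, audio:1519216kB, for a ~20 Mb/s 10-minute video) is at least consistent with the video bytes being muxed as "audio". A hedged sketch of a fix is to remux the 2nd file so its stream order matches the 1st before concatenating; this only builds the ffmpeg command line (file names assumed, run it via subprocess if desired):

```python
# Build an ffmpeg command that remuxes a file so its streams come out
# audio-first, matching the 1st input's order. Construction only; file
# names here are assumptions, not taken from the original post.
def reorder_cmd(src: str, dst: str) -> list[str]:
    return [
        "ffmpeg", "-i", src,
        "-map", "0:a",   # select the audio stream first ...
        "-map", "0:v",   # ... then the video stream
        "-c", "copy",    # no re-encoding
        dst,
    ]

print(" ".join(reorder_cmd("b.mp4", "b_reordered.mp4")))
```

    After remuxing, concatenating a.mp4 with b_reordered.mp4 should pair audio with audio and video with video; this is a diagnosis sketch, not a confirmed answer.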


    The full output of the command can be found in this pastebin: https://pastebin.com/Z5q97Uyg

