
Other articles (105)

  • MediaSPIP Init and Diogène: MediaSPIP publication types

    11 November 2010, by

    When a MediaSPIP site is installed, the MediaSPIP Init plugin performs a number of operations, the main one being to create four main sections in the site and five form templates for Diogène.
    These four main sections (also called sectors) are: Medias; Sites; Editos; Actualités.
    For each of these sections, a specific form template of the same name is created. For the "Medias" section, a second "catégorie" template is created, which makes it possible to add (...)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded as Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed to retrieve the data needed for search engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Writing a news item

    21 June 2013, by

    Present changes to your MediaSPIP, or news about your projects, in your MediaSPIP's news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the news item creation form.
    News item creation form: for a document of the news type, the fields proposed by default are: publication date (customize the publication date) (...)

On other sites (9431)

  • understanding HEVC NAL SEI termination and byte alignment parsing with ffmpeg

    3 March, by rodeomacon

    The NAL SEI timecode message I am currently writing to file is 00 00 01 4E 01 88 06 XX XX XX XX XX 10 80 (the termination portion being 10 80, payloadSize set to 0x06, and the XX bytes encoding the frames/seconds/minutes/hours).

    My goal is to read the timecode with ffmpeg -i video.h265 -c:v copy -bsf:v trace_headers -f null - and ffprobe -show_frames video.mov with no errors.

    The three leftmost 0 bits of the final 0x10 byte are the conclusion of the time_offset_length data (equal to 0). Following this, I intend to have an rbsp_stop_one_bit followed by four rbsp_alignment_zero_bits to achieve byte alignment.

    With this termination configuration (no trailing 0x80 byte and payloadSize set to 0x05, i.e. 00 00 01 4E 01 88 05 XX XX XX XX XX 10), ffmpeg reports Invalid value at time_offset_length[i]: bitstream ended.

    With the trailing 0x80 byte added and the payloadSize changed to 0x06 to match, ffmpeg does not throw a warning but instead indicates there are extra, unused bits:

    [trace_headers @ 0000015aff793a80] Prefix Supplemental Enhancement Information
[trace_headers @ 0000015aff793a80] 0           forbidden_zero_bit                                          0 = 0
[trace_headers @ 0000015aff793a80] 1           nal_unit_type                                          100111 = 39
[trace_headers @ 0000015aff793a80] 7           nuh_layer_id                                           000000 = 0
[trace_headers @ 0000015aff793a80] 13          nuh_temporal_id_plus1                                     001 = 1
[trace_headers @ 0000015aff793a80] 16          last_payload_type_byte                               10001000 = 136
[trace_headers @ 0000015aff793a80] 24          last_payload_size_byte                               00000110 = 6
[trace_headers @ 0000015aff793a80] Time Code
[trace_headers @ 0000015aff793a80] 32          num_clock_ts                                               01 = 1
[trace_headers @ 0000015aff793a80] 34          clock_timestamp_flag[0]                                     1 = 1
[trace_headers @ 0000015aff793a80] 35          units_field_based_flag[0]                                   0 = 0
[trace_headers @ 0000015aff793a80] 36          counting_type[0]                                        00000 = 0
[trace_headers @ 0000015aff793a80] 41          full_timestamp_flag[0]                                      1 = 1
[trace_headers @ 0000015aff793a80] 42          discontinuity_flag[0]                                       0 = 0
[trace_headers @ 0000015aff793a80] 43          cnt_dropped_flag[0]                                         0 = 0
[trace_headers @ 0000015aff793a80] 44          n_frames[0]                                         000110101 = 53
[trace_headers @ 0000015aff793a80] 53          seconds_value[0]                                       010010 = 18
[trace_headers @ 0000015aff793a80] 59          minutes_value[0]                                       010100 = 20
[trace_headers @ 0000015aff793a80] 65          hours_value[0]                                          01010 = 10
[trace_headers @ 0000015aff793a80] 70          time_offset_length[0]                                   00000 = 0
[trace_headers @ 0000015aff793a80] 75          bit_equal_to_one                                            1 = 1
[trace_headers @ 0000015aff793a80] 76          bit_equal_to_zero                                           0 = 0
[trace_headers @ 0000015aff793a80] 77          bit_equal_to_zero                                           0 = 0
[trace_headers @ 0000015aff793a80] 78          bit_equal_to_zero                                           0 = 0
[trace_headers @ 0000015aff793a80] 79          bit_equal_to_zero                                           0 = 0
[trace_headers @ 0000015aff793a80] 80          rbsp_stop_one_bit                                           1 = 1
[trace_headers @ 0000015aff793a80] 81          rbsp_alignment_zero_bit                                     0 = 0
[trace_headers @ 0000015aff793a80] 82          rbsp_alignment_zero_bit                                     0 = 0
[trace_headers @ 0000015aff793a80] 83          rbsp_alignment_zero_bit                                     0 = 0
[trace_headers @ 0000015aff793a80] 84          rbsp_alignment_zero_bit                                     0 = 0
[trace_headers @ 0000015aff793a80] 85          rbsp_alignment_zero_bit                                     0 = 0
[trace_headers @ 0000015aff793a80] 86          rbsp_alignment_zero_bit                                     0 = 0
[trace_headers @ 0000015aff793a80] 87          rbsp_alignment_zero_bit                                     0 = 0

    Without the bit_equal_to_one, ffmpeg gives a generic error, Failed to read unit 0 (type 39), after reading the time_offset_length correctly.

    What is the meaning of bit_equal_to_one and bit_equal_to_zero in this context, and is this the intended SEI termination method? Why are those bits not parsed as the alignment bits?

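    For reference, this layout matches H.265: an SEI payload that does not end on a byte boundary is closed inside sei_payload with a single 1 bit followed by 0 bits up to the byte boundary (printed by trace_headers as bit_equal_to_one / bit_equal_to_zero), and the whole NAL unit is then closed with rbsp_trailing_bits (rbsp_stop_one_bit plus rbsp_alignment_zero_bits), which is the trailing 0x80. A minimal, self-contained sketch of that termination logic (in Java for illustration; BitWriter is my name, not code from ffmpeg or the question):

        import java.io.ByteArrayOutputStream;

        // Minimal MSB-first bit writer illustrating H.265 SEI termination:
        // sei_payload byte alignment followed by rbsp_trailing_bits.
        class BitWriter {
            private final ByteArrayOutputStream out = new ByteArrayOutputStream();
            private int cur = 0;    // bits accumulated for the current byte
            private int nBits = 0;  // how many of those bits are filled

            void writeBit(int b) {
                cur = (cur << 1) | (b & 1);
                if (++nBits == 8) { out.write(cur); cur = 0; nBits = 0; }
            }

            boolean isByteAligned() { return nBits == 0; }

            // Close the SEI payload: one 1 bit, then 0 bits to the byte boundary
            // (what trace_headers prints as bit_equal_to_one / bit_equal_to_zero).
            void finishSeiPayload() {
                if (!isByteAligned()) {
                    writeBit(1);
                    while (!isByteAligned()) writeBit(0);
                }
            }

            // Close the NAL unit: rbsp_stop_one_bit plus rbsp_alignment_zero_bits.
            // If the payload ended byte-aligned, this emits exactly one 0x80 byte.
            void finishRbsp() {
                writeBit(1);
                while (!isByteAligned()) writeBit(0);
            }

            byte[] toBytes() { return out.toByteArray(); }
        }

    In the question's working trace, bits 75-79 are the payload alignment and bits 80-87 are the rbsp_trailing_bits, so both terminations are required: 0x10 carries the payload alignment and 0x80 carries the RBSP stop bit.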

  • Different code (.java file) for different platforms?

    2 March 2016, by AR792

    I have code where image data is passed from a bitmap to an FFmpeg frame recorder and converted to a video, but I need to make small changes when running it on an LG G3 (armv7) versus an Asus Zenfone 5 (x86).

    The following class variables cause the issue (declared in the MainActivity class):

    inputWidth = 1024;

    inputHeight = 650;

    The following is the method where the issue occurs:

    byte[] getNV21(int inputWidth, int inputHeight, Bitmap bitmap) {

       int[] argb = new int[inputWidth * inputHeight];

       bitmap.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);

       byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
       encodeYUV420SP(yuv, argb, inputWidth, inputHeight);

       return yuv;
    }

    void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
       final int frameSize = width * height;

       int yIndex = 0;
       int uvIndex = frameSize;

       int a, R, G, B, Y, U, V;
       int index = 0;
       for (int j = 0; j < height; j++) {
           for (int i = 0; i < width; i++) {

               a = (argb[index] & 0xff000000) >>> 24; // alpha; not used (>>> avoids sign extension)
               R = (argb[index] & 0xff0000) >> 16;
               G = (argb[index] & 0xff00) >> 8;
               B = (argb[index] & 0xff);

               // well known RGB to YUV algorithm
               Y = ( (  66 * R + 129 * G +  25 * B + 128) >> 8) +  16;
               U = ( ( -38 * R -  74 * G + 112 * B + 128) >> 8) + 128;
               V = ( ( 112 * R -  94 * G -  18 * B + 128) >> 8) + 128;

               // NV21 has a plane of Y and interleaved planes of VU each sampled by a factor of 2
               //    meaning for every 4 Y pixels there are 1 V and 1 U.  Note the sampling is every other
               //    pixel AND every other scanline.
               yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
               if (j % 2 == 0 && index % 2 == 0) {
                   yuv420sp[uvIndex++] = (byte)((V<0) ? 0 : ((V > 255) ? 255 : V));
                   yuv420sp[uvIndex++] = (byte)((U<0) ? 0 : ((U > 255) ? 255 : U));
               }

               index ++;
           }
       }
    }

    Working code:

    LG G3: I can use the above variables anywhere in the code to get the required output.
    Bitmap size returned = 2734200

    Asus Zenfone 5: Except when creating the bitmap, I have to use bitmap.getHeight() and bitmap.getWidth() everywhere else to get the required output.

    Surprisingly, here the bitmap size returned = 725760 (so it is not being set according to the requested bitmap parameters?)

    Incorrect code:

    LG G3: If I use bitmap.getHeight() and bitmap.getWidth(), I get java.lang.ArrayIndexOutOfBoundsException: length = 102354, index = 102354 in the getNV21 method.

    Asus Zenfone 5: If I use inputWidth and inputHeight, I get
    java.lang.IllegalArgumentException: x + width must be <= bitmap.width() in the getNV21 method.

    How can I generalize the above code for both phones?
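
    One way to generalize, not from the original post, is to stop trusting the class constants and derive the dimensions from the bitmap itself, scaling first when a device hands back a bitmap of a different size. A minimal sketch, assuming the same inputWidth/inputHeight fields and the encodeYUV420SP method above:

        import android.graphics.Bitmap;

        // Sketch: always convert using the bitmap's real dimensions, after
        // normalizing it to the size the recorder expects. This works whether
        // or not the device honored the requested bitmap size.
        byte[] getNV21(Bitmap bitmap) {
            Bitmap scaled = bitmap;
            if (bitmap.getWidth() != inputWidth || bitmap.getHeight() != inputHeight) {
                // Some devices return a differently sized bitmap, so rescale.
                scaled = Bitmap.createScaledBitmap(bitmap, inputWidth, inputHeight, true);
            }

            int width = scaled.getWidth();    // now consistent on both phones
            int height = scaled.getHeight();

            int[] argb = new int[width * height];
            scaled.getPixels(argb, 0, width, 0, 0, width, height);

            byte[] yuv = new byte[width * height * 3 / 2];
            encodeYUV420SP(yuv, argb, width, height);
            return yuv;
        }

    Sizing the pixel buffer, the getPixels call, and the YUV buffer from the same values removes both the ArrayIndexOutOfBoundsException and the IllegalArgumentException, since there is no longer a second source of truth to disagree with the bitmap.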

  • What is the optimal way to synchronize frames in ffmpeg C/C++?

    16 September 2022, by Turgut

    I made a program that reads n videos as input, draws those videos to a GLFW window, and finally encodes it all as a single output video. The problem is that the frame rate of each video can be different; it depends on the user's input.

    For example: the user can supply two videos with frame rates of 30 and 59 FPS, and want an output at 23.797 FPS. The problem is that those videos are not in sync with each other, so in the output we can see that the input videos play either faster or slower.

    The duration of each video also depends on the input. Continuing the previous example, the first input might be 30 seconds and the second 13 seconds, while the output is 50 seconds.

    I mostly read the frames like a sequence of moving PNGs rather than a solid video, since there are no I-frames or B-frames; it is just data I get from the GLFW window.

    As an example, let's say we give one video as input with an FPS of 30 and a duration of 30 seconds, and our output has an FPS of 23.797 and a duration of 30 seconds. I have two functions, skip_frame and wait_frame, which respectively either read a frame twice (so we skip a frame) or don't read a frame on that iteration. These functions are used depending on the situation, i.e. whether output < input or output > input.

    Here is roughly what my code looks like:

    while (current_time < output_duration) {
        for (auto input_video : all_inputs) {
            for (int i = 0; i < amount_to_read_from_input(); i++) {
                frame = input_video.read_frame();
            }
        }

        GLFW_window.draw_to_screen(frame);

        encoder.encode_one_video_frame(GLFW_window.read_window());
    }

    Basically, skip_frame and wait_frame are both used inside amount_to_read_from_input(), which returns 2 or 0 respectively.

    So far I have tried multiplying duration by FPS for both input and output, then taking the difference. From our previous example we get 900 - 714 = 186. Then I divide the output frame count by that result, like so: 714 / 186 = 3.8, meaning I have to skip a frame every 3.8 iterations. (I skip a frame every 3 iterations and save the residual 0.8 for the next iteration.)

    But it's still a second or two behind (for example, it ends at 29 seconds for a 30-second output), and the audio is out of sync. FFmpeg handles my audio, so there are no errors on that part.

    I have also seen this question, but I don't think I can use ffmpeg's functions here, since I'm reading from a GLFW window and it comes down to my algorithm.

    The problem is: what is the math here?

    What can I do to make sure these frames are stabilized on almost every input/output combination?
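
    A drift-free alternative to the skip-every-N-with-residual scheme is to anchor everything to absolute time: output frame k is displayed at t = k / output_fps, and the input frame that should be visible at that instant is floor(t * input_fps). Reading forward to that index on each iteration means rounding error can never accumulate. A sketch of that mapping (in Java for illustration; the names are mine, not from the original code):

        // Timestamp-anchored frame stepping: compute which input frame the
        // current output frame should show, and read forward to it.
        final class FrameStepper {
            private final double inputFps;
            private final double outputFps;
            private long inputFramesRead = 0;  // frames consumed so far

            FrameStepper(double inputFps, double outputFps) {
                this.inputFps = inputFps;
                this.outputFps = outputFps;
            }

            // Number of frames to read before encoding output frame k:
            // 0 repeats the last frame, 1 steps normally, 2+ skips ahead.
            int framesToRead(long outputFrameIndex) {
                double outputTime = outputFrameIndex / outputFps;
                long target = (long) Math.floor(outputTime * inputFps);
                int n = (int) Math.max(0, target + 1 - inputFramesRead);
                inputFramesRead += n;
                return n;
            }
        }

    With one FrameStepper per input, amount_to_read_from_input() can simply return framesToRead(k). Each input then stays locked to the output clock for any FPS ratio, and the end-of-video error is bounded by one frame instead of growing with the saved residuals, which also keeps the video aligned with the audio that FFmpeg muxes independently.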