Advanced search

Media (91)

Other articles (79)

  • Improving the base version

    13 September 2013

    A nicer multiple select
    The Chosen plugin improves the usability of multiple-select fields. Compare the two images below.
    All you need to do is enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)

  • Helping to translate it

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. Just subscribe to the translators' mailing list to ask for more information.
    MediaSPIP is currently only available in French and (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or later. If needed, contact your MediaSPIP administrator to find out.

On other sites (10005)

  • I want to convert the decoded h264 packet data to an es data file

    26 July 2017, by user8335183

    I want to convert the decoded h264 packet data to an ES data file (a dump), but I do not know how to go about it.
    What should I do?
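If the packets are still compressed H.264 (demuxed from the container, not decoded to raw frames), one common approach is to remux them into a raw Annex B elementary stream with ffmpeg's `h264_mp4toannexb` bitstream filter. A minimal sketch, assuming the source is an MP4 file named `input.mp4` (both file names are placeholders):

```python
import subprocess

def build_es_dump_cmd(src: str, dst: str) -> list[str]:
    # Copy the H.264 stream without re-encoding, converting the
    # length-prefixed MP4 NAL units to Annex B start codes, which is
    # the usual layout of a raw .h264 elementary-stream file.
    return [
        "ffmpeg", "-y",
        "-i", src,
        "-c:v", "copy",                 # no decode/encode, just remux
        "-bsf:v", "h264_mp4toannexb",   # MP4 NALUs -> Annex B start codes
        "-f", "h264",                   # raw H.264 ES output format
        dst,
    ]

if __name__ == "__main__":
    cmd = build_es_dump_cmd("input.mp4", "dump.h264")
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
```

If the source already uses Annex B framing (e.g. MPEG-TS), the bitstream filter can simply be dropped.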

  • FFmpeg massive data loss when writing large data and swapping segments

    25 April 2023, by Bohdan Petrenko

    I have an ffmpeg process running that continuously writes audio data to two 30-second segments (for testing; I actually plan to use 5-minute segments). The problem is that when I write audio data longer than the two segments combined (60 seconds), 8-17 seconds of audio are lost. Here is how I run FFmpeg and write data:

    


    _ffmpeg = Process.Start(new ProcessStartInfo
    {
        FileName = "ffmpeg",
        Arguments =
            $"-y -f s16le -ar 48000 -ac {Channels} -i pipe:0 -c:a libmp3lame -f segment -segment_time {BufferDuration} -segment_format mp3 -segment_wrap 2 -reset_timestamps 1 -segment_list \"{_segmentListPath}\" \"{segmentName}\"",
        UseShellExecute = false,
        RedirectStandardInput = true
    })!;
    // Channels is usually 1, BufferDuration is 30


    


    And here is how I write data :

    


    public async Task WriteSilenceAsync(int amount)
    {
        // _size is 48000 * 1 * 2 * 30 * 2 = 5760000 (one minute of audio)
        if (amount > _size) amount = _size;

        var silence = _silenceBuffer.AsMemory(0, amount);
        await _ffmpeg.StandardInput.BaseStream.WriteAsync(silence);
    }
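As a quick sanity check of the sizes involved (assuming s16le at 48 kHz mono, as in the command line above), the byte counts work out as follows:

```python
# Bytes-per-second and segment sizes for the pipeline above:
# s16le = 2 bytes per sample, 48000 Hz, 1 channel.
SAMPLE_RATE = 48000
CHANNELS = 1
BYTES_PER_SAMPLE = 2  # s16le

bytes_per_second = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE  # 96 kB/s
segment_bytes = bytes_per_second * 30       # one 30-second segment
wrap_window_bytes = segment_bytes * 2       # the two-segment wrap window

print(bytes_per_second, segment_bytes, wrap_window_bytes)
```

This matches the `_size` value in the C# code: with `-segment_wrap 2`, anything pushed past 5,760,000 bytes in one burst starts overwriting the oldest segment.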


    


    I have tried changing the ffmpeg parameters and the way I write data, but I haven't found a solution.

    


    I'm sure the problem is caused by the ffmpeg segments, because if I disable segmenting and write the audio to a single file, there is no data loss or audio mismatch. I'm also sure that the amount of silence to add in the WriteSilenceAsync() method is calculated correctly. I'm not sure whether the problem also appears with data longer than 30 seconds but shorter than 1 minute, but I think it doesn't.

    


    I don't know how to solve this problem and would be glad to see any suggestions or solutions.

    


  • delphi firemonkey + FFmpeg Fill image/Tbitmap with data of AVFRAME->pixelformat->YUV420P

    9 February 2020, by coban

    I have managed to create a simple video player using the SDL2 + FFmpeg libraries with Delphi VCL. It's about the same as ffplay.exe, but not a console app.
    I've noticed that FFmpeg (I might be wrong) converts/scales (sws_scale) from any source pixel format to YUV420P faster than to any other destination format.

    What I want to achieve is some kind of (video) surface over which I can place other components, for example a TProgressBar. SDL has a function, SDL_CreateWindowFrom, which can turn a TPanel into a video surface/window on which any component can be placed, but this function is only available on Windows.

    Maybe I am looking in the wrong direction to achieve what I want; if so, any hint is welcome.
    I was thinking of drawing the data retrieved in pixel format yuv420p onto the TBitmap of a TImage. That way I won't need the SDL2 library, and I will be able to put any other component above the TImage, or above another component that might be faster.

    It seems I need to convert the YUV420P data into BGRA format, because TBitmap does not appear to support any YUV format; worse, the FireMonkey TBitmap is always in BGRA format, and changing it to another format is not possible.

    In the first case, I need a function to convert yuv420 to BGRA. Can anyone help with this? Is there a component/package/function for this that I could use? Or is it perhaps possible to use the yuv420p format directly, without converting?
    I tried to convert some SDL2 functions from the SDL2 source (C/C++) to Delphi, but it's too complicated for me, especially with my knowledge of C/C++. SDL2 has methods/functions implemented for converting RGB <-> YUV. (Why did I ever start Delphi programming? My mistake.)

    By the way, I already tried TMediaPlayer: it draws the video (picture) above everything, and nothing other than the video is visible.


    I've made an attempt; what I don't understand is where to get, or what exactly, "y_stride, uv_stride and rgb_stride" are.
    Some variable declarations and/or assignments may be incorrect and I need to debug the values, but first I need to know what to pass for the variables above.


       procedure STD_FUNCTION_NAME(width, height:Cardinal;Y, U, V:PByte; Y_stride, UV_stride:Cardinal;
                             RGB:PByte;     RGB_stride:Cardinal;yuv_type:YCbCrType;
                           YUV_FORMAT,RGB_FORMAT:Word);
    var param:PYUV2RGBParam;
     y_pixel_stride,
     uv_pixel_stride,
     uv_x_sample_interval,
     uv_y_sample_interval:Word;

     x, ys:Cardinal;
     y_ptr1,y_ptr2,u_ptr,v_ptr:PByte;
     rgb_ptr1,rgb_ptr2:PByte;

 u_tmp,v_tmp,r_tmp,
 g_tmp,b_tmp:Integer; // must be signed: (u - 128) and the factor products can be negative
 y_tmp:Integer;
    begin
    param := @(YUV2RGB[integer( yuv_type)]);
    if YUV_FORMAT = YUV_FORMAT_420
    then begin
     y_pixel_stride      := 1;
     uv_pixel_stride     := 1;
     uv_x_sample_interval:= 2;
     uv_y_sample_interval:= 2;
    end;
    if YUV_FORMAT = YUV_FORMAT_422
    then begin
     y_pixel_stride        := 2;
     uv_pixel_stride       := 4;
     uv_x_sample_interval  := 2;
     uv_y_sample_interval  := 1;
    end;
    if YUV_FORMAT = YUV_FORMAT_NV12
    then begin
     y_pixel_stride        := 1;
     uv_pixel_stride       := 2;
     uv_x_sample_interval  := 2;
     uv_y_sample_interval  := 2;
    end;


    //for(y=0; y<(height-(uv_y_sample_interval-1)); y+=uv_y_sample_interval)
    ys := 0;
    while ys < height-(uv_y_sample_interval-1) do
    begin
       y_ptr1  := Y+ys*Y_stride;
     y_ptr2  := Y+(ys+1)*Y_stride;
     u_ptr   := U+(ys div uv_y_sample_interval)*UV_stride;
     v_ptr   := V+(ys div uv_y_sample_interval)*UV_stride;

       rgb_ptr1:=RGB+ys*RGB_stride;

       if uv_y_sample_interval > 1
     then rgb_ptr2:=RGB+(ys+1)*RGB_stride;


       //for(x=0; x<(width-(uv_x_sample_interval-1)); x+=uv_x_sample_interval)
    x := 0;
    while x<(width-(uv_x_sample_interval-1)) do
       begin
           // Compute U and V contributions, common to the four pixels

           u_tmp := (( u_ptr^)-128);
           v_tmp := (( v_ptr^)-128);

           r_tmp := (v_tmp*param.v_r_factor);
           g_tmp := (u_tmp*param.u_g_factor + v_tmp*param.v_g_factor);
           b_tmp := (u_tmp*param.u_b_factor);

           // Compute the Y contribution for each pixel

           y_tmp := ((y_ptr1[0]-param.y_shift)*param.y_factor);
           PACK_PIXEL(RGB_FORMAT,y_tmp,r_tmp, g_tmp, b_tmp, rgb_ptr1);

           y_tmp := ((y_ptr1[y_pixel_stride]-param.y_shift)*param.y_factor);
           PACK_PIXEL(RGB_FORMAT,y_tmp,r_tmp, g_tmp, b_tmp, rgb_ptr1);

           if uv_y_sample_interval > 1
     then begin
       y_tmp := ((y_ptr2[0]-param.y_shift)*param.y_factor);
       PACK_PIXEL(RGB_FORMAT,y_tmp,r_tmp, g_tmp, b_tmp, rgb_ptr2);

       y_tmp := ((y_ptr2[y_pixel_stride]-param.y_shift)*param.y_factor);
       PACK_PIXEL(RGB_FORMAT,y_tmp,r_tmp, g_tmp, b_tmp, rgb_ptr2);
           end;

           y_ptr1 := y_ptr1 + 2*y_pixel_stride;
           y_ptr2 := y_ptr2 + 2*y_pixel_stride;
           u_ptr  := u_ptr  + 2*uv_pixel_stride div uv_x_sample_interval;
           v_ptr  := v_ptr  + 2*uv_pixel_stride div uv_x_sample_interval;
     x := x + uv_x_sample_interval
       end;

       //* Catch the last pixel, if needed */
       if (uv_x_sample_interval = 2) and (x = (width-1))
       then begin
           // Compute U and V contributions, common to the four pixels

           u_tmp := (( u_ptr^)-128);
           v_tmp := (( v_ptr^)-128);

           r_tmp := (v_tmp*param.v_r_factor);
           g_tmp := (u_tmp*param.u_g_factor + v_tmp*param.v_g_factor);
           b_tmp := (u_tmp*param.u_b_factor);

           // Compute the Y contribution for each pixel

           y_tmp := ((y_ptr1[0]-param.y_shift)*param.y_factor);
           PACK_PIXEL(RGB_FORMAT,y_tmp,r_tmp, g_tmp, b_tmp, rgb_ptr1);

           if uv_y_sample_interval > 1
     then begin
             y_tmp := ((y_ptr2[0]-param.y_shift)*param.y_factor);
       PACK_PIXEL(RGB_FORMAT,y_tmp,r_tmp, g_tmp, b_tmp, rgb_ptr2);
             //PACK_PIXEL(rgb_ptr2);
           end;
       end;
    ys := ys +uv_y_sample_interval;
    end;

    //* Catch the last line, if needed */
    if (uv_y_sample_interval = 2) and (ys = (height-1))
    then begin
       y_ptr1 :=Y+ys*Y_stride;
    u_ptr  :=U+(ys div uv_y_sample_interval)*UV_stride;
    v_ptr  :=V+(ys div uv_y_sample_interval)*UV_stride;

       rgb_ptr1:=RGB+ys*RGB_stride;

       //for(x=0; x<(width-(uv_x_sample_interval-1)); x+=uv_x_sample_interval)
    x := 0;
    while x < (width-(uv_x_sample_interval-1)) do
       begin
           // Compute U and V contributions, common to the four pixels

           u_tmp := (( u_ptr^)-128);
           v_tmp := (( v_ptr^)-128);

           r_tmp := (v_tmp*param.v_r_factor);
           g_tmp := (u_tmp*param.u_g_factor + v_tmp*param.v_g_factor);
           b_tmp := (u_tmp*param.u_b_factor);

           // Compute the Y contribution for each pixel

           y_tmp := ((y_ptr1[0]-param.y_shift)*param.y_factor);
           //PACK_PIXEL(rgb_ptr1);
     PACK_PIXEL(RGB_FORMAT,y_tmp,r_tmp, g_tmp, b_tmp, rgb_ptr1);
           y_tmp := ((y_ptr1[y_pixel_stride]-param.y_shift)*param.y_factor);
           //PACK_PIXEL(rgb_ptr1);
     PACK_PIXEL(RGB_FORMAT,y_tmp,r_tmp, g_tmp, b_tmp, rgb_ptr1);

           y_ptr1 := y_ptr1 + 2*y_pixel_stride;
           u_ptr  := u_ptr  + 2*uv_pixel_stride div uv_x_sample_interval;
           v_ptr  := v_ptr  + 2*uv_pixel_stride div uv_x_sample_interval;

     x := x + uv_x_sample_interval
       end;

       //* Catch the last pixel, if needed */
       if (uv_x_sample_interval = 2) and (x = (width-1))
       then begin
           // Compute U and V contributions, common to the four pixels

           u_tmp := (( u_ptr^)-128);
           v_tmp := (( v_ptr^)-128);

           r_tmp := (v_tmp*param.v_r_factor);
           g_tmp := (u_tmp*param.u_g_factor + v_tmp*param.v_g_factor);
           b_tmp := (u_tmp*param.u_b_factor);

           // Compute the Y contribution for each pixel

           y_tmp := ((y_ptr1[0]-param.y_shift)*param.y_factor);
           //PACK_PIXEL(rgb_ptr1);
     PACK_PIXEL(RGB_FORMAT,y_tmp,r_tmp, g_tmp, b_tmp, rgb_ptr1);
       end;
    end;

    end;
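The Pascal routine above implements a table-driven, fixed-point per-pixel YUV-to-RGB mapping. As a cross-check for the arithmetic, here is a minimal Python sketch of the same idea for planar YUV 4:2:0 to BGRA, using the integer full-range BT.601 coefficients (the exact factors in the YUV2RGB table above may differ; these are an assumption on my part):

```python
def clamp(v: int) -> int:
    # Clamp an intermediate value into the 0..255 byte range.
    return 0 if v < 0 else 255 if v > 255 else v

def yuv420p_to_bgra(y, u, v, width, height):
    """y is width*height bytes; u and v are (width//2)*(height//2) bytes.
    Returns a flat BGRA bytearray, 4 bytes per pixel (FireMonkey order)."""
    out = bytearray(width * height * 4)
    for row in range(height):
        for col in range(width):
            Y = y[row * width + col]
            # Chroma is subsampled 2x2 in 4:2:0: one U/V pair per 2x2 block.
            U = u[(row // 2) * (width // 2) + col // 2] - 128
            V = v[(row // 2) * (width // 2) + col // 2] - 128
            # Full-range BT.601, 8-bit fixed point (coefficients * 256).
            r = clamp(Y + ((359 * V) >> 8))           # 1.402  * 256 ~ 359
            g = clamp(Y - ((88 * U + 183 * V) >> 8))  # 0.344, 0.714 * 256
            b = clamp(Y + ((454 * U) >> 8))           # 1.772  * 256 ~ 454
            i = (row * width + col) * 4
            out[i:i + 4] = bytes((b, g, r, 255))      # B, G, R, opaque alpha
    return out
```

For an AVFrame decoded by FFmpeg, the Y plane would be `data[0]` with `linesize[0]` as the row stride, and U/V would be `data[1]`/`data[2]` with `linesize[1]` (strides can exceed the visible width because of padding); this per-pixel loop is only a readability sketch, since a real implementation would use sws_scale or at least row-wise pointer arithmetic like the Pascal code.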