
Other articles (39)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. To help us fix it, please provide the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps that led to the problem; and a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats:
    images: png, gif, jpg, bmp and more
    audio: MP3, Ogg, Wav and more
    video: AVI, MP4, OGV, mpg, mov, wmv and more
    text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (8033)

  • Get blur effect inside a drawn box using ass format

    18 September 2024, by Armen Sanoyan

    I have a Node.js application which generates a subtitles file for video editing; the editing is done by ffmpeg. For each word I also generate a backdrop with rounded corners using an ASS drawing command. Example of a dialogue line for the backdrop:

    Dialogue: 0,00:00:38.60,00:00:42.10,Default,,0,0,0,,{\p1\1a&HD8&\c&HE27FA4&\pos(341.07421875,517.5)\an4}  {\p1}m 15 0 b 1202.8515625 .... {\p0}

    Now I want to make the backdrop blurred (the text backdrop is drawn via ASS formatting).

    I have tried to use \blur or \be, but they serve a different purpose. Below is the reference from the docs:

    Enable or disable a subtle softening-effect for the edges of the text

    Is there any other way to achieve the blurred backdrop? I am ready even to mix ffmpeg filters like boxblur with the ass file; I mean, separately paint the backdrops and then apply the ASS subtitles file. Here is an experiment to explain the idea:

    [0:v]split=3[base][blur1_in][blur2_in];

[blur1_in]crop=w=100:h=100:x=20:y=40[region1];
[region1]boxblur=luma_radius=10:luma_power=1[blurred_region1];
[base][blurred_region1]overlay=x=20:y=40:enable='between(t,0,5)'[tmp1];

[blur2_in]crop=w=30:h=30:x=20:y=40[region2];
[region2]boxblur=luma_radius=5:luma_power=1[blurred_region2];
[tmp1][blurred_region2]overlay=x=20:y=40:enable='between(t,5,10)'[tmp2];

[tmp2]ass=../logs/ass.ass:fontsdir=../fonts/Audiowide-Regular.ttf[final_video]

    The problem in this case is that the corners are not rounded. Could anyone explain the easiest way to get a backdrop that has rounded corners and is blurred inside?

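    One possible direction, sketched here under stated assumptions: keep the crop/boxblur/overlay chain from the experiment above, but give the blurred region a rounded-corner alpha mask built with geq, then alphamerge the mask in before overlaying. The helper below only assembles the filtergraph string; the region geometry, corner radius, and file names are illustrative placeholders, and the pipeline itself is untested.

```python
def rounded_blur_graph(w, h, x, y, corner, blur=10, subs="subs.ass"):
    """Build an ffmpeg filtergraph string: blur a w x h region at (x, y),
    clip it to a rounded rectangle with corner radius `corner`, overlay it
    on the original video, then burn in the ASS subtitles."""
    # Rounded-rectangle test: distance from the nearest corner-arc center,
    # clamped to 0 along the straight edges; 255 inside, 0 outside.
    mask = (
        f"if(gt(hypot(max(abs(X-W/2)-(W/2-{corner}),0),"
        f"max(abs(Y-H/2)-(H/2-{corner}),0)),{corner}),0,255)"
    )
    return (
        f"[0:v]split=2[base][blur_in];"
        f"[blur_in]crop=w={w}:h={h}:x={x}:y={y},boxblur={blur}[blurred];"
        f"color=c=black:s={w}x{h},format=gray,geq=lum='{mask}'[m];"
        f"[blurred][m]alphamerge[rounded];"
        f"[base][rounded]overlay=x={x}:y={y}[tmp];"
        f"[tmp]ass={subs}[out]"
    )

print(rounded_blur_graph(100, 100, 20, 40, corner=15))
```

    The same enable='between(t,...)' conditions as in the experiment could be added to each overlay, and the mask could alternatively be pre-rendered to a PNG and looped instead of generated from a color source.
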
  • How to escape an apostrophe with os.system in ffmpeg drawtext in Python

    28 September 2023, by Ishu singh

    I just want to execute this command with os.system('command'), including an ffmpeg drawtext filter, but it fails just because of the ' (apostrophe).

    The code goes here (the \f works like \n, but I'm using it for separating words):

    from PIL import ImageFont
import os

def create_lines(longline, start, end, fontsize=75, fontfile='OpenSansCondensedBold.ttf'):

    fit = fit_text(longline, 700, fontfile)  # fit_text: helper (not shown) that inserts '\f' line breaks

    texts = []
    now = 0
    # breaking line on basis of '\f'
    for wordIndex in range(len(fit)):
        if fit[wordIndex] == '\f' or wordIndex == len(fit)-1:
            texts.append(fit[now:wordIndex+1].strip('\f'))
            now = wordIndex

    # adding multiple lines to video
    string = ''
    count = 0
    for line in texts:
        string += f''',drawtext=fontfile={fontfile}:fontsize={fontsize}:text='{line}':fontcolor=black:bordercolor=white:borderw=4:x=(w-text_w)/2:y=(h-text_h)/2-100+{count}:enable='between(t,{start},{end})' '''
        count += 100

    print(string)
    return string

def createVideo(content):
    input_video = 'video.mp4'
    output_video = 'output.mp4'
    font_file = 'BebasKai.ttf'
    text_file = 'OpenSansCondensedBold.ttf'
    font_size = 75
    font_color = 'white'

    part1 = create_lines(content[1], 0.5, 7)
    part2 = create_lines(content[2], 7.5, 10)

    os.system(
        f"""ffmpeg -i {input_video} -vf "drawtext=fontfile={font_file}:fontsize=95:text={content[0]}:fontcolor={font_color}:box=1:boxcolor=black@0.9:boxborderw=20:x=(w-text_w)/2:y=(h-text_h)/4-100{part1}{part2}" -c:v libx264 -c:a aac -t 10 {output_video} -y""")

my_text =['The Brain', "Your brain can't multitask effectively", "Multitasking is a myth,  it's just rapid switching between tasks"]

createVideo(my_text)

    What I want is to be able to execute this correctly.

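    One way around the quoting problem, as a sketch (filter options copied from the code above; file names are illustrative): ffmpeg's drawtext filter can read its text from a file via the textfile option, so the apostrophe never has to survive the shell and filtergraph quoting layers. Passing the command to subprocess.run as a list also removes the extra shell layer that os.system adds.

```python
import subprocess
import tempfile

def drawtext_via_textfile(line, fontfile, fontsize, start, end, count=0):
    """Write `line` to a temp file and reference it with drawtext's
    textfile option, so apostrophes need no escaping at all.
    Returns (filter_fragment, temp_file_path)."""
    tf = tempfile.NamedTemporaryFile(mode="w", suffix=".txt",
                                     delete=False, encoding="utf-8")
    tf.write(line)
    tf.close()
    # Note: on Windows the drive colon in tf.name would itself need
    # escaping inside a filtergraph; on Linux/macOS the path is safe.
    frag = (
        f"drawtext=fontfile={fontfile}:fontsize={fontsize}"
        f":textfile={tf.name}"
        f":fontcolor=black:bordercolor=white:borderw=4"
        f":x=(w-text_w)/2:y=(h-text_h)/2-100+{count}"
        f":enable='between(t,{start},{end})'"
    )
    return frag, tf.name

frag, _ = drawtext_via_textfile("Your brain can't multitask effectively",
                                "OpenSansCondensedBold.ttf", 75, 0.5, 7)
# A list argument avoids os.system's shell-quoting layer entirely:
cmd = ["ffmpeg", "-i", "video.mp4", "-vf", frag,
       "-c:v", "libx264", "-c:a", "aac", "-t", "10", "output.mp4", "-y"]
```
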
  • Delphi FireMonkey + FFmpeg: fill image/TBitmap with data of AVFrame -> pixelformat -> YUV420P

    9 February 2020, by coban

    I have managed to create a simple video player using the SDL2 + FFmpeg libraries with Delphi VCL. It's about the same as ffplay.exe, but not a console app.
    I've noticed that FFmpeg (I might be wrong) converts/scales (sws_scale) any source pixel format to the destination YUV420P faster than to any other format.

    What I want to achieve is some kind of a (video) surface over which I can put other components, for example a TProgressBar. SDL has a function, SDL_CreateWindowFrom, which can turn a TPanel into a video surface/window where it is possible to put any component over it. But this function is only for Windows.

    Maybe I am looking in the wrong direction to achieve what I want; if so, any hint is welcome.
    I was thinking of drawing the data retrieved in pixel format yuv420p to the TBitmap of a TImage; this way I won't need the SDL2 library, and I will be able to put any other component above it, in this case TImage. Or another component which might be faster.

    It seems like I need to convert the YUV420P into BGRA format, because TBitmap does not seem to support any YUV format. Worse, the FireMonkey TBitmap is always in BGRA format; changing to another format is not possible.

    In the first case, I need a function to convert yuv420 to BGRA. Can anyone help with this? Is there a component/package/function for this which I could use? Or is it anyhow possible to use the yuv420p format directly, without converting?
    I tried to convert some SDL2 functions from the SDL2 source (C/C++) to Delphi, but it's too complicated for me, especially with my knowledge of C/C++. In SDL2 there are methods/functions implemented for converting RGB <-> YUV. (Why did I ever start Delphi programming? My mistake.)

    BTW, I already tried TMediaPlayer; it draws the video (picture) above everything, so nothing other than the video is visible.


    I've made an attempt; what I don't understand is where to get, and what is meant by, "y_stride, uv_stride and rgb_stride".
    Some variable declarations and/or assignments may be incorrect and will need debugging, but first I need to know what to pass for the above variables.

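    For what it's worth, in FFmpeg the strides are not values you compute yourself: they come from the decoded frame itself. AVFrame.linesize[0] is the Y stride and AVFrame.linesize[1] the U/V stride, and the destination stride is the bitmap's bytes-per-row (TBitmapData.Pitch in FireMonkey). For a tightly packed YUV420p buffer they reduce to the arithmetic below (Python is used only to illustrate; the function name is made up):

```python
def packed_strides(width, height):
    """Bytes-per-row for a tightly packed YUV420p frame and a BGRA
    destination. Real decoded frames are often padded, so always prefer
    AVFrame.linesize / TBitmapData.Pitch over these computed values."""
    y_stride = width          # Y plane: 1 byte per pixel, full resolution
    uv_stride = width // 2    # U and V planes: half width and half height
    rgb_stride = width * 4    # BGRA: 4 bytes per pixel
    plane_bytes = {
        "Y": y_stride * height,
        "U": uv_stride * (height // 2),
        "V": uv_stride * (height // 2),
        "BGRA": rgb_stride * height,
    }
    return y_stride, uv_stride, rgb_stride, plane_bytes
```

    For a 640x480 frame this gives strides of 640, 320 and 2560, i.e. a 1.5-bytes-per-pixel YUV420p source expanding to 4 bytes per pixel in BGRA.
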

   procedure STD_FUNCTION_NAME(width, height: Cardinal; Y, U, V: PByte;
                               Y_stride, UV_stride: Cardinal;
                               RGB: PByte; RGB_stride: Cardinal;
                               yuv_type: YCbCrType;
                               YUV_FORMAT, RGB_FORMAT: Word);
    var
     param: PYUV2RGBParam;
     y_pixel_stride,
     uv_pixel_stride,
     uv_x_sample_interval,
     uv_y_sample_interval: Word;

     x, ys: Cardinal;
     y_ptr1, y_ptr2, u_ptr, v_ptr: PByte;
     rgb_ptr1, rgb_ptr2: PByte;

     // signed: the U/V terms go negative after subtracting 128
     u_tmp, v_tmp, r_tmp,
     g_tmp, b_tmp: Integer;
     y_tmp: Integer;
    begin
    param := @(YUV2RGB[integer( yuv_type)]);
    if YUV_FORMAT = YUV_FORMAT_420
    then begin
     y_pixel_stride      := 1;
     uv_pixel_stride     := 1;
     uv_x_sample_interval:= 2;
     uv_y_sample_interval:= 2;
    end;
    if YUV_FORMAT = YUV_FORMAT_422
    then begin
     y_pixel_stride        := 2;
     uv_pixel_stride       := 4;
     uv_x_sample_interval  := 2;
     uv_y_sample_interval  := 1;
    end;
    if YUV_FORMAT = YUV_FORMAT_NV12
    then begin
     y_pixel_stride        := 1;
     uv_pixel_stride       := 2;
     uv_x_sample_interval  := 2;
     uv_y_sample_interval  := 2;
    end;


    // for (y = 0; y < (height-(uv_y_sample_interval-1)); y += uv_y_sample_interval)
    ys := 0;
    while ys < height-(uv_y_sample_interval-1) do
    begin
       y_ptr1  := Y+ys*Y_stride;
     y_ptr2  := Y+(ys+1)*Y_stride;
     u_ptr   := U+(ys div uv_y_sample_interval)*UV_stride;
     v_ptr   := V+(ys div uv_y_sample_interval)*UV_stride;

       rgb_ptr1:=RGB+ys*RGB_stride;

       if uv_y_sample_interval > 1
     then rgb_ptr2:=RGB+(ys+1)*RGB_stride;


        // for (x = 0; x < (width-(uv_x_sample_interval-1)); x += uv_x_sample_interval)
        x := 0;
        while x < (width-(uv_x_sample_interval-1)) do
       begin
           // Compute U and V contributions, common to the four pixels

           u_tmp := (( u_ptr^)-128);
           v_tmp := (( v_ptr^)-128);

           r_tmp := (v_tmp*param.v_r_factor);
           g_tmp := (u_tmp*param.u_g_factor + v_tmp*param.v_g_factor);
           b_tmp := (u_tmp*param.u_b_factor);

           // Compute the Y contribution for each pixel

           y_tmp := ((y_ptr1[0]-param.y_shift)*param.y_factor);
           PACK_PIXEL(RGB_FORMAT,y_tmp,r_tmp, g_tmp, b_tmp, rgb_ptr1);

           y_tmp := ((y_ptr1[y_pixel_stride]-param.y_shift)*param.y_factor);
           PACK_PIXEL(RGB_FORMAT,y_tmp,r_tmp, g_tmp, b_tmp, rgb_ptr1);

           if uv_y_sample_interval > 1
     then begin
       y_tmp := ((y_ptr2[0]-param.y_shift)*param.y_factor);
       PACK_PIXEL(RGB_FORMAT,y_tmp,r_tmp, g_tmp, b_tmp, rgb_ptr2);

       y_tmp := ((y_ptr2[y_pixel_stride]-param.y_shift)*param.y_factor);
       PACK_PIXEL(RGB_FORMAT,y_tmp,r_tmp, g_tmp, b_tmp, rgb_ptr2);
           end;

           y_ptr1 := y_ptr1 + 2*y_pixel_stride;
           y_ptr2 := y_ptr2 + 2*y_pixel_stride;
           u_ptr  := u_ptr  + 2*uv_pixel_stride div uv_x_sample_interval;
           v_ptr  := v_ptr  + 2*uv_pixel_stride div uv_x_sample_interval;
     x := x + uv_x_sample_interval
       end;

       // Catch the last pixel, if needed
       if (uv_x_sample_interval = 2) and (x = (width-1))
       then begin
           // Compute U and V contributions, common to the four pixels

           u_tmp := (( u_ptr^)-128);
           v_tmp := (( v_ptr^)-128);

           r_tmp := (v_tmp*param.v_r_factor);
           g_tmp := (u_tmp*param.u_g_factor + v_tmp*param.v_g_factor);
           b_tmp := (u_tmp*param.u_b_factor);

           // Compute the Y contribution for each pixel

           y_tmp := ((y_ptr1[0]-param.y_shift)*param.y_factor);
           PACK_PIXEL(RGB_FORMAT,y_tmp,r_tmp, g_tmp, b_tmp, rgb_ptr1);

           if uv_y_sample_interval > 1
     then begin
             y_tmp := ((y_ptr2[0]-param.y_shift)*param.y_factor);
       PACK_PIXEL(RGB_FORMAT,y_tmp,r_tmp, g_tmp, b_tmp, rgb_ptr2);
             //PACK_PIXEL(rgb_ptr2);
           end;
       end;
    ys := ys +uv_y_sample_interval;
    end;

    // Catch the last line, if needed
    if (uv_y_sample_interval = 2) and (ys = (height-1))
    then begin
       y_ptr1 :=Y+ys*Y_stride;
    u_ptr  :=U+(ys div uv_y_sample_interval)*UV_stride;
    v_ptr  :=V+(ys div uv_y_sample_interval)*UV_stride;

       rgb_ptr1:=RGB+ys*RGB_stride;

        // for (x = 0; x < (width-(uv_x_sample_interval-1)); x += uv_x_sample_interval)
        x := 0;
        while x < (width-(uv_x_sample_interval-1)) do
       begin
           // Compute U and V contributions, common to the four pixels

           u_tmp := (( u_ptr^)-128);
           v_tmp := (( v_ptr^)-128);

           r_tmp := (v_tmp*param.v_r_factor);
           g_tmp := (u_tmp*param.u_g_factor + v_tmp*param.v_g_factor);
           b_tmp := (u_tmp*param.u_b_factor);

           // Compute the Y contribution for each pixel

           y_tmp := ((y_ptr1[0]-param.y_shift)*param.y_factor);
           //PACK_PIXEL(rgb_ptr1);
     PACK_PIXEL(RGB_FORMAT,y_tmp,r_tmp, g_tmp, b_tmp, rgb_ptr1);
           y_tmp := ((y_ptr1[y_pixel_stride]-param.y_shift)*param.y_factor);
           //PACK_PIXEL(rgb_ptr1);
     PACK_PIXEL(RGB_FORMAT,y_tmp,r_tmp, g_tmp, b_tmp, rgb_ptr1);

           y_ptr1 := y_ptr1 + 2*y_pixel_stride;
           u_ptr  := u_ptr  + 2*uv_pixel_stride div uv_x_sample_interval;
           v_ptr  := v_ptr  + 2*uv_pixel_stride div uv_x_sample_interval;

     x := x + uv_x_sample_interval
       end;

       // Catch the last pixel, if needed
       if (uv_x_sample_interval = 2) and (x = (width-1))
       then begin
           // Compute U and V contributions, common to the four pixels

           u_tmp := (( u_ptr^)-128);
           v_tmp := (( v_ptr^)-128);

           r_tmp := (v_tmp*param.v_r_factor);
           g_tmp := (u_tmp*param.u_g_factor + v_tmp*param.v_g_factor);
           b_tmp := (u_tmp*param.u_b_factor);

           // Compute the Y contribution for each pixel

           y_tmp := ((y_ptr1[0]-param.y_shift)*param.y_factor);
           //PACK_PIXEL(rgb_ptr1);
     PACK_PIXEL(RGB_FORMAT,y_tmp,r_tmp, g_tmp, b_tmp, rgb_ptr1);
       end;
    end;

    end;