
Media (91)

Other articles (16)

  • Adding notes and captions to images

    7 February 2011

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights to create, edit and delete notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Support for all media types

    10 April 2011

    Unlike many programs and other modern document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others); audio (MP3, Ogg, Wav and others); video (AVI, MP4, OGV, mpg, mov, wmv and others); textual content, code or other material (OpenOffice, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)

On other sites (5356)

  • How do I fade out the edge of a video with ffmpeg?

    11 December 2020, by Jared

    I have a transparent .mov and I want to "fade out" only one edge of the video into transparency using ffmpeg.

    


    My video is transparent 1000x1000 (black is transparent):

    



    I am trying this command:

    


     ffmpeg -i movie.mov -b:v 700K -filter_complex "[0]split[v0][v1];[v0]format=yuva420p,geq=r=0:g=0:b=0:a=255*(Y/H),scale=w=1*iw:h=200[fg];[v1][fg]overlay=0:800:shortest=1" converted.mov


    


    This half works, in that the image gets the gradient fade-out, but it loses all transparency:

    



    What am I doing wrong? (One possible cause is sketched after the log below.)

    


    Full output log of the command:

    


    ffmpeg version 4.2.3 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 9.3.1 (GCC) 20200523
  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'movie_014.mov':
  Metadata:
    major_brand     : qt
    minor_version   : 0
    compatible_brands: qt
    creation_time   : 2020-12-10T04:36:09.000000Z
  Duration: 00:00:10.03, start: 0.000000, bitrate: 142307 kb/s
    Stream #0:0(und): Video: prores (XQ) (ap4x / 0x78347061), yuva444p12le(tv, bt709, progressive), 1000x1000, 142222 kb/s, SAR 1:1 DAR 1:1, 30 fps, 30 tbr, 30k tbn, 30k tbc (default)
    Metadata:
      creation_time   : 2020-12-10T04:36:09.000000Z
      handler_name    : Core Media Video
      encoder         : Apple ProRes 4444 XQ
File 'converted.mov' already exists. Overwrite ? [y/N] y
Stream mapping:
  Stream #0:0 (prores) -> split
  overlay -> Stream #0:0 (libx264)
Press [q] to stop, [?] for help
[swscaler @ 0000019691457fc0] No accelerated colorspace conversion found from yuva420p to gbrap.
[libx264 @ 000001968c30f5c0] using SAR=1/1
[libx264 @ 000001968c30f5c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 000001968c30f5c0] profile High, level 3.2, 4:2:0, 8-bit
[libx264 @ 000001968c30f5c0] 264 - core 160 - H.264/MPEG-4 AVC codec - Copyleft 2003-2020 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=abr mbtree=1 bitrate=700 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mov, to 'converted.mov':
  Metadata:
    major_brand     : qt
    minor_version   : 0
    compatible_brands: qt
    encoder         : Lavf58.29.100
    Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 1000x1000 [SAR 1:1 DAR 1:1], q=-1--1, 700 kb/s, 30 fps, 15360 tbn, 30 tbc (default)
    Metadata:
      encoder         : Lavc58.54.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/700000 buffer size: 0 vbv_delay: -1
frame=  301 fps= 22 q=-1.0 Lsize=     843kB time=00:00:09.93 bitrate= 695.3kbits/s speed=0.716x
video:839kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.515537%
[libx264 @ 000001968c30f5c0] frame I:2     Avg QP:24.02  size: 22224
[libx264 @ 000001968c30f5c0] frame P:76    Avg QP:24.00  size:  6595
[libx264 @ 000001968c30f5c0] frame B:223   Avg QP:27.38  size:  1402
[libx264 @ 000001968c30f5c0] consecutive B-frames:  1.0%  0.7%  0.0% 98.3%
[libx264 @ 000001968c30f5c0] mb I  I16..4: 36.6% 56.1%  7.4%
[libx264 @ 000001968c30f5c0] mb P  I16..4:  0.5%  1.4%  0.2%  P16..4: 16.0%  7.4%  3.6%  0.0%  0.0%    skip:70.9%
[libx264 @ 000001968c30f5c0] mb B  I16..4:  0.0%  0.0%  0.0%  B16..8: 18.7%  1.3%  0.1%  direct: 0.1%  skip:79.7%  L0:39.1% L1:58.3% BI: 2.6%
[libx264 @ 000001968c30f5c0] final ratefactor: 24.27
[libx264 @ 000001968c30f5c0] 8x8 transform intra:60.1% inter:78.0%
[libx264 @ 000001968c30f5c0] coded y,uvDC,uvAC intra: 30.9% 22.8% 8.2% inter: 3.7% 1.7% 0.0%
[libx264 @ 000001968c30f5c0] i16 v,h,dc,p: 65% 24%  4%  7%
[libx264 @ 000001968c30f5c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 18% 16% 36%  4%  6%  7%  4%  5%  4%
[libx264 @ 000001968c30f5c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 24% 12% 23%  7%  9%  8%  6%  8%  5%
[libx264 @ 000001968c30f5c0] i8c dc,h,v,p: 79% 10%  9%  2%
[libx264 @ 000001968c30f5c0] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 000001968c30f5c0] ref P L0: 66.4% 21.6%  9.5%  2.6%
[libx264 @ 000001968c30f5c0] ref B L0: 92.1%  6.8%  1.1%
[libx264 @ 000001968c30f5c0] ref B L1: 97.6%  2.4%
[libx264 @ 000001968c30f5c0] kb/s:684.29
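
    Editorial aside, not part of the original question: the command above encodes with libx264 into yuv420p, a pixel format with no alpha channel, so any transparency is discarded at encode time no matter what the filter graph produces. A minimal sketch of one alpha-preserving alternative multiplies the existing alpha plane by a vertical ramp over the bottom 200 pixels and re-encodes to ProRes 4444; the output name, the prores_ks settings and the geq expressions are assumptions, not taken from the post:

     ffmpeg -i movie.mov -filter_complex "[0:v]format=yuva444p,geq=lum='lum(X,Y)':cb='cb(X,Y)':cr='cr(X,Y)':a='alpha(X,Y)*clip((H-Y)/200,0,1)'[v]" -map "[v]" -c:v prores_ks -profile:v 4444 -pix_fmt yuva444p10le faded.mov

    The single quotes inside geq keep the commas in clip() from being read as filter separators; the ramp is 1 above the last 200 rows and falls linearly to 0 at the bottom edge, matching the 200-pixel band placed at y=800 in the original command.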


    


  • Edge of text clipped a few pixels in ffmpeg

    18 May 2024, by THEMOUNTAINSANDTHESKIES

    I'm trying to make a short video from a background image with text and some background audio. It's functional, but the right side of each line of my text is always clipped by a few pixels. What am I missing? (A quick isolation test is sketched at the end of this item.)

    


    Here's my Python code (which builds and runs the ffmpeg command):

    


import subprocess

font_size = 100
text = 'Example Text Example Text \n Example Text'
font_path = 'Bangers-Regular.ttf'
image_path = 'background.jpg'
audio_path = 'audio.wav'
duration = 15  # or whatever the audio's length is
output_path = 'output.mp4'

# Escape the font path for drawtext (Windows drive letters and backslashes).
font_path_escaped = font_path.replace("\\", "\\\\\\\\").replace("C:", "C\\:")

lines = text.split('\n')
num_lines = len(lines)
line_height = font_size + 10
start_y = (1080 - line_height * num_lines) // 2

# Build one drawtext filter per line, each centred horizontally.
filter_complex = []
for i, line in enumerate(lines):
    y_position = start_y + i * line_height
    filter_complex.append(
        f"drawtext=text='{line}':fontfile='{font_path_escaped}':fontsize={font_size}:"
        f"x=((w-text_w)/2):y={y_position}:"
        "fontcolor=white:borderw=6:bordercolor=black"
    )

# Label the graph so that '-map [v]' below can pick up the filtered video stream.
filter_complex_string = '[0:v]' + ','.join(filter_complex) + '[v]'

command = [
    'ffmpeg',
    '-loop', '1',
    '-i', image_path,
    '-i', audio_path,
    '-filter_complex', filter_complex_string,
    '-map', '[v]',
    '-map', '1:a',
    '-c:v', 'hevc_nvenc',
    '-c:a', 'aac',
    '-pix_fmt', 'yuv420p',
    '-t', str(duration),
    '-shortest',
    '-loglevel', 'debug',
    '-y',
    output_path
]

subprocess.run(command, check=True)
print(f"Video created successfully: {output_path}")


    


    And a frame from the output video:

    


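    Editorial aside, not part of the original question: one way to narrow the clipping down is to render a single line with the same drawtext settings straight to a still image, taking hevc_nvenc and the yuv420p conversion out of the picture, and see whether the border is already cut off there. The command below reuses the font, size and border options from the code above; the output file name is made up:

ffmpeg -i background.jpg -vf "drawtext=text='Example Text':fontfile='Bangers-Regular.ttf':fontsize=100:x=(w-text_w)/2:y=(h-text_h)/2:fontcolor=white:borderw=6:bordercolor=black" -frames:v 1 drawtext_test.png

    If that frame is clean, the clipping is being introduced at encode time; if it is already clipped, it comes from drawtext itself, and one thing worth checking is whether the 6-pixel border extends past the width that text_w reports, since x=((w-text_w)/2) centres the glyphs rather than the bordered text.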

    


  • Decoded YUV shows green edge when rendered with OpenGL

    2 February 2023, by Alex

    Any idea why the decoded YUV -> RGB (shader conversion) has that extra green edge on the right side? Almost any 1080x1920 video seems to have this issue.

    



    


    A screen recording of the issue is uploaded here https://imgur.com/a/JtUZq4h

    


    Once I manually scale up the texture width, I can see it fills the viewport, but it would be nice to fix the actual cause. Is it some padding that is part of the YUV colorspace? What else could it be? (A stride-related sketch follows the code below.)

    


    My model is -1 to 1, filling the entire width; the texture coordinates are also in the 0 to 1 range.

    


    float vertices[] = {
    -1.0, 1.0f, 0.0f, 0.0,     // top left
     1.0f, 1.0f, 1.0, 0.0,      // top right
    -1.0f, -1.0f, 0.0f, 1.0f,  // bottom left
     1.0f, -1.0f, 1.0f, 1.0f    // bottom right
};


    


    Fragment Shader

    


    #version 330 core

in vec2 TexCoord;

out vec4 FragColor;
precision highp float;
uniform sampler2D textureY;
uniform sampler2D textureU;
uniform sampler2D textureV;
uniform float alpha;
uniform vec2 texScale;


void main()
{
    float y = texture(textureY, TexCoord / texScale).r;
    float u = texture(textureU, TexCoord / texScale).r - 0.5; 
    float v = texture(textureV, TexCoord / texScale).r - 0.5;
    
    vec3 rgb;
    
    //yuv - 709
    rgb.r = clamp(y + (1.402 * v), 0, 255);
    rgb.g = clamp(y - (0.2126 * 1.5748 / 0.7152) * u - (0.0722 * 1.8556 / 0.7152) * v, 0, 255);
    rgb.b = clamp(y + (1.8556 * u), 0,255);

    FragColor = vec4(rgb, 1.0);
} 


    


    Texture Class

    


    class VideoTexture {
   public:
    VideoTexture(Decoder *dec) : decoder(dec) {
        glGenTextures(1, &texture1);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glBindTexture(GL_TEXTURE_2D, texture1);
        glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, decoder->frameQueue.first().linesize[0], decoder->frameQueue.first().height, 0, format, GL_UNSIGNED_BYTE, 0);
        glGenerateMipmap(GL_TEXTURE_2D);

        glGenTextures(1, &texture2);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glBindTexture(GL_TEXTURE_2D, texture2);
        glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, decoder->frameQueue.first().linesize[1], decoder->frameQueue.first().height / 2, 0, format, GL_UNSIGNED_BYTE, 0);
        glGenerateMipmap(GL_TEXTURE_2D);

        glGenTextures(1, &texture3);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glBindTexture(GL_TEXTURE_2D, texture3);
        glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, decoder->frameQueue.first().linesize[2], decoder->frameQueue.first().height / 2, 0, format, GL_UNSIGNED_BYTE, 0);
        glGenerateMipmap(GL_TEXTURE_2D);
    }

    void Render(Shader *shader, Gui *gui) {
        if (decoder->frameQueue.isEmpty()) {
            return;
        }

        glActiveTexture(GL_TEXTURE0);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, decoder->frameQueue.first().linesize[0], decoder->frameQueue.first().height, format, GL_UNSIGNED_BYTE, decoder->frameQueue.at(currentFrame).data[0]);
        glBindTexture(GL_TEXTURE_2D, texture1);
        shader->setInt("textureY", 0);

        glActiveTexture(GL_TEXTURE1);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, decoder->frameQueue.first().linesize[1], decoder->frameQueue.first().height / 2, format, GL_UNSIGNED_BYTE, decoder->frameQueue.at(currentFrame).data[1]);
        glBindTexture(GL_TEXTURE_2D, texture2);
        shader->setInt("textureU", 1);

        glActiveTexture(GL_TEXTURE2);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, decoder->frameQueue.first().linesize[2], decoder->frameQueue.first().height / 2, format, GL_UNSIGNED_BYTE, decoder->frameQueue.at(currentFrame).data[2]);
        glBindTexture(GL_TEXTURE_2D, texture3);
        shader->setInt("textureV", 2);
    }

    ~VideoTexture() {
        printf("\nVideo texture destructor");
        glDeleteTextures(1, &texture1);
        glDeleteTextures(1, &texture2);
        glDeleteTextures(1, &texture3);
    }

   private:
    GLuint texture1;
    GLuint texture2;
    GLuint texture3;
    GLint internalFormat = GL_RG8;
    GLint format = GL_RED;
    int currentFrame = 0;
    Decoder *decoder;
};
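
    Editorial note, not part of the original thread: the textures above are allocated with linesize[i] as their width. Decoders normally pad each row of a 1080-pixel-wide plane out to a multiple of 32 or 64 bytes, and those padding bytes, once pushed through the YUV-to-RGB conversion, come out green, which matches the edge in the recording. A minimal sketch of one fix, assuming an FFmpeg AVFrame-style frame (width, height, linesize, data) and the same OpenGL headers as the class above: size each texture with the real width and tell OpenGL the source row length so the padding is skipped during upload.

    // Upload one padded 8-bit plane without its row padding.
    // For GL_RED / GL_UNSIGNED_BYTE data one pixel is one byte, so FFmpeg's
    // linesize (in bytes) can be passed directly as a row length in pixels.
    void uploadPlane(GLuint texture, const uint8_t *data, int width, int height, int stride) {
        glBindTexture(GL_TEXTURE_2D, texture);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glPixelStorei(GL_UNPACK_ROW_LENGTH, stride);  // real length of each source row
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, data);
        glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);       // restore the default
    }

    // Hypothetical usage for a yuv420p frame (chroma planes are half-size):
    // uploadPlane(texture1, frame.data[0], frame.width,     frame.height,     frame.linesize[0]);
    // uploadPlane(texture2, frame.data[1], frame.width / 2, frame.height / 2, frame.linesize[1]);
    // uploadPlane(texture3, frame.data[2], frame.width / 2, frame.height / 2, frame.linesize[2]);

    The alternative that the texScale uniform already hints at is to keep the linesize-wide textures and sample only width / linesize of them horizontally. Two smaller observations, stated as notes rather than fixes: the glTexParameteri and glTexSubImage2D calls above run before the matching glBindTexture, so they act on whichever texture happens to be bound at that moment, and GL_R8 is the more usual internal format than GL_RG8 for single-channel GL_RED uploads.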