Media (91)

Other articles (27)

  • Support audio et vidéo HTML5

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)
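    For illustration, the HTML5-with-Flash-fallback pattern described above looks roughly like this (a generic sketch, not MediaSPIP's actual markup; file names are placeholders). Browsers that understand the video tag use it, and older ones fall through to the embedded Flash player:

```html
<!-- Generic sketch: HTML5 video with a Flash fallback.
     File names and dimensions are placeholders. -->
<video controls width="640" height="360">
  <source src="clip.mp4" type="video/mp4">
  <source src="clip.ogv" type="video/ogg">
  <!-- Rendered only by browsers without HTML5 video support -->
  <object type="application/x-shockwave-flash" data="flowplayer.swf"
          width="640" height="360">
    <param name="flashvars" value='config={"clip":"clip.mp4"}'>
  </object>
</video>
```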

  • Gestion de la ferme

    2 March 2010, by

    The farm as a whole is managed by "super admins".
    Certain settings can be adjusted to accommodate the needs of the different channels.
    Initially, it uses the "Gestion de mutualisation" plugin.

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)

On other sites (3937)

  • Center crop image overlay using FFmpeg

    27 May 2020, by HB.

    I currently have an image that is being overlaid on top of a video, as demonstrated in this image:

    [Image: original]

    The blue square represents the video and the purple lines represent the image on top of the video.

    Currently, I have the following command:

    "-i", InputVideoPath, "-i", InputImagePath, "-filter_complex", "[0:v]scale=iw*sar:ih,setsar=1,pad='max(iw\\,2*trunc(ih*9/16/2))':'max(ih\\,2*trunc(ow*16/9/2))':(ow-iw)/2:(oh-ih)/2[v0];[1:v][v0]scale2ref[v1][v0];[v0][v1]overlay=x=(W-w)/2:y=(H-h)/2[v]", "-map", "[v]", "-map", "0:a", "-c:v", "libx264", "-preset", "ultrafast", "-r", "30", OutputPath

    This adds black padding to the sides of the video and outputs the following:

    [Image: Output]

    But I would like to center crop the image that is being overlaid instead, giving me this output:

    [Image: DesiredResult]

    I've seen answers that demonstrate how to crop an image or crop 2 videos, but I couldn't find a way to center crop an image that is being overlaid on top of a video.

    The video I'm testing with is 1920x1080 and the size of the image is not constant.

    Any help in achieving this will be appreciated.

    EDIT (This edit is to add more clarification).

    Please have a look at the image below:

    [Image]

    The image above demonstrates:

    • Purple Lines: the entire screen of the device/player; this will be used as the input image. The user draws on the screen/player.
    • Blue: the input video, scaled to fill the screen.
    • Green: the actual size of the input video.

    With this example, the player/image is 1920x1080 and the actual video size is 640x640. So the video is scaled 440x440 to fill the player.

    I tried to use a simple overlay, hoping that it would crop the video/image and output a video with the image at the same position as displayed on the device, by doing the following:

    ffmpeg -i InputVideo -i InputImage -filter_complex "overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2" -c:v libx264 -preset ultrafast OutputPath

    But the image is not at the same position as it was on the device.

    I suspect I will have to take into account the scaling that was applied to fit the video into the player.

    I'm not sure how to do this.

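    One way to get the desired result (a sketch, not from the post; input.mp4 and overlay.png are placeholder names) is to center-crop the image to the video size before overlaying it, instead of padding the video. FFmpeg's crop filter centers the crop window by default when x and y are omitted:

```shell
# Center-crop the overlay image to at most the video size (1920x1080, per the
# post), then overlay it centered. crop's default x/y are (in_w-out_w)/2 and
# (in_h-out_h)/2, i.e. already a center crop, so they can be omitted.
ffmpeg -i input.mp4 -i overlay.png -filter_complex \
  "[1:v]crop=w='min(iw,1920)':h='min(ih,1080)'[img];[0:v][img]overlay=(W-w)/2:(H-h)/2[v]" \
  -map "[v]" -map 0:a -c:v libx264 -preset ultrafast output.mp4
```

    To also account for the player-to-video scaling described in the edit, the image would first need to be scaled by the same factor that was used to fit the video into the player, before the crop.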
  • How to change the colour of pixels in videos with OpenGL in C++ [closed]

    8 June 2020, by Dennio

    I would like to overlay a specifically coloured pattern or matrix over every frame of a video, so that every pixel changes its colour slightly according to a data matrix which I generate from a bitstream. It would begin with the upper-left pixel and continue to the end of the "line", and so on. I would like to change the red and blue values: if the bitstream begins with a "1", the amount of red should be raised by 5, and if it begins with a "0", the amount of blue should be raised by 5. That would be done for every pixel of the frame.

    I can already open a video using FFmpeg in a self-made video player, and I can also generate the data matrix, but I just don't know which way is suitable for manipulating the video frames in C++. I have already successfully compiled some OpenGL and OpenGL ES triangles on my Raspberry Pi 4. Is it possible to convert the frames and pixels into textures and go from there to display everything? Or is there maybe a better way to do this? I would like to use the GPU of the Raspberry Pi for these tasks to get good performance.

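    As a rough CPU-side sketch of the per-pixel rule described above (function and parameter names are hypothetical; on the Raspberry Pi the same rule would map naturally to a fragment shader, with the frame and the data matrix uploaded as textures), assuming a packed RGB24 buffer and one data bit per pixel:

```c
#include <stdint.h>

/* Hypothetical sketch: walk an RGB24 frame left-to-right, top-to-bottom and,
   for each pixel, raise the red channel by 5 when the matching bit is 1, or
   the blue channel by 5 when it is 0, clamping at 255. */
static void tint_frame_rgb24(uint8_t *pixels, int width, int height,
                             const uint8_t *bits) /* one 0/1 value per pixel */
{
    for (int i = 0; i < width * height; i++) {
        uint8_t *px = pixels + 3 * i;   /* px[0]=R, px[1]=G, px[2]=B */
        int ch = bits[i] ? 0 : 2;       /* bit 1 -> red, bit 0 -> blue */
        int v  = px[ch] + 5;
        px[ch] = (uint8_t)(v > 255 ? 255 : v);
    }
}
```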
  • How do I set the framerate/FPS in FFmpeg code (C)?

    2 June 2020, by Tobias v. Brevern

    I am trying to encode single pictures into a .avi video. The goal is to have every picture displayed for a set amount of seconds to create a slide show. I tried my script with 10 pictures and a delay of 1/5 of a second, but the output file was not even half a second long (though it displayed every picture). For setting the framerate I use the time_base option of the AVCodecContext:

    ctx->time_base = (AVRational) {1, 5};

    When I use the command ffmpeg -framerate 1/3 -i img%03d.png -codec png output.avi everything works fine and I get the file I want. I use the png codec because it was the only one I tried that is playable with Windows Media Player.

    Am I missing anything here? Is there another option that has an impact on the framerate?

    This is my code so far:

    Note: I use a couple of self-made data structures and methods from other classes; they are the ones written in all caps. They basically do what their names suggest but are necessary for my project. The input array contains the pictures that I want to encode.

    #include <libavutil/opt.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/error.h>

    void PixmapsToAVI (ARRAY* arr, String outfile, double secs)
    {
        if (arr != nil && outfile != "" && secs != 0) {
            AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_PNG);
            if (codec) {
                int width  = -1;
                int height = -1;
                int ret = 0;

                AVCodecContext* ctx = avcodec_alloc_context3(codec);
                AVFrame* frame = av_frame_alloc();
                AVPacket* pkt  = av_packet_alloc();

                FILE* file = fopen(outfile, "wb");

                ARRAYELEMENT* e;
                int count = 0;
                forall (e, *arr) {
                    BITMAP bitmap (e->value, false);
                    if (width < 0) {
                        width  = bitmap.Width();
                        height = bitmap.Height();

                        ctx->width = width;
                        ctx->height = height;
                        ctx->time_base = (AVRational){1, 5};
                        ctx->framerate = (AVRational){5, 1};
                        ctx->pix_fmt = AV_PIX_FMT_RGB24;
                        ret = avcodec_open2(ctx, codec, NULL);

                        frame->width  = width;
                        frame->height = height;
                        frame->format = ctx->pix_fmt;
                        av_opt_set(ctx->priv_data, "preset", "slow", 1);
                    }
                    ret = av_frame_get_buffer(frame, 1);
                    frame->linesize[0] = width * 3;

                    bitmap.Convert32();
                    byte* pixels = bitmap.PixelsRGB();
                    // The two methods above convert the pixmap into the RGB
                    // structure we need. They are not needed to get an output
                    // file but are needed to get one that makes sense.

                    fflush(stdout);
                    int writeable = av_frame_make_writable(frame);
                    if (writeable >= 0) {
                        for (int i = 0; i < height * width * 3; i++) {
                            frame->data[0][i] = pixels[i];
                        }
                    }
                    ret = avcodec_send_frame(ctx, frame);
                    while (ret >= 0) {
                        ret = avcodec_receive_packet(ctx, pkt);
                    }
                    count++;
                    avcodec_receive_packet(ctx, pkt);
                    fwrite(pkt->data, 1, pkt->size, file);
                    fflush(stdout);
                    av_packet_unref(pkt);
                }
                fclose(file);
                avcodec_free_context(&ctx);
                av_frame_free(&frame);
                av_packet_free(&pkt);
            }
        }
    }
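    On the timing question: the code never sets frame->pts, yet FFmpeg encoders generally need a monotonically increasing pts per frame (e.g. frame->pts = count; before avcodec_send_frame), expressed in time_base units; and writing raw packets with fwrite leaves no container to carry that timing, so a player can only guess a rate. A minimal sketch of the intended arithmetic (Rational is a local stand-in for AVRational so the snippet stands alone; it is not linked against FFmpeg):

```c
/* Stand-in for FFmpeg's AVRational, so this sketch compiles on its own. */
typedef struct { int num, den; } Rational;

/* With pts values 0, 1, 2, ... and stream time_base tb, n frames should
   play for n * tb.num / tb.den seconds. */
static double slideshow_seconds(int n_frames, Rational tb)
{
    return (double)n_frames * tb.num / tb.den;
}
```

    With time_base {1, 5}, ten pictures should run for 2 seconds; if the output is shorter, the pts values (or the container) are not carrying the timing.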
