
Keyword: - Tags -/ticket

Other articles (36)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player used by MediaSPIP was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
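
    The two extra actions described above (probing the technical information of the audio and video streams, and extracting a thumbnail) correspond to standard ffprobe/ffmpeg operations. The following is only a rough Python sketch of that step, not SPIPMotion's actual code; it assumes ffprobe and ffmpeg are on the PATH and uses the hypothetical file names source.mp4 and thumbnail.jpg.

import json
import subprocess

source = "source.mp4"  # hypothetical stand-in for the attached "source" document

# Retrieve the technical information of the file's audio and video streams.
probe = subprocess.run(
    ["ffprobe", "-v", "error", "-print_format", "json",
     "-show_format", "-show_streams", source],
    capture_output=True, text=True, check=True)
info = json.loads(probe.stdout)
for stream in info["streams"]:
    print(stream["codec_type"], stream.get("codec_name"))

# Generate a thumbnail by extracting a single frame, here one second in.
subprocess.run(
    ["ffmpeg", "-y", "-ss", "1", "-i", source, "-frames:v", "1", "thumbnail.jpg"],
    check=True)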

On other sites (4044)

  • How to add a blurred border on top and bottom based on the video

    5 April 2023, by Noob69

    I am trying to scale a video to 1080x1920, and on the top and bottom borders I want to add a blurred version of the video based on the pixels at the edge.

import subprocess

input_file = "my_video1.mp4"
output_file = "my_video_processed1.mp4"

command = f'ffmpeg -i {input_file} -vf "scale=1080:1920:force_original_aspect_ratio=decrease,pad=1080:1920:(ow-iw)/2:(oh-ih)/2,eq=saturation=2.0:gamma=1.2:contrast=1.2,unsharp=lx=5:ly=5:la=0.5:cx=5:cy=5:ca=0.5" -c:v libx264 -preset slow -crf 18 -c:a copy {output_file}'
subprocess.call(command, shell=True)

    I tried mode=replicate; however, it does not work with the latest version of ffmpeg from the Windows builds by BtbN.
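
    One common way to get blurred top and bottom bars with ffmpeg is to split the input, turn one copy into a blurred 1080x1920 background (scale to cover, crop, blur) and overlay the other copy, scaled to fit, in the centre. The sketch below is only an illustration of that approach, not a tested answer: it reuses the input file name from the question, the output name is made up, and the gblur sigma value is an arbitrary choice.

import subprocess

input_file = "my_video1.mp4"
output_file = "my_video_blurred1.mp4"  # hypothetical output name

# [bg]: scale the video to cover the whole 1080x1920 canvas, crop it and blur it.
# [fg]: scale the original to fit inside 1080x1920 without cropping.
# overlay: centre the sharp copy on top of the blurred background.
filter_complex = (
    "[0:v]split=2[base][top];"
    "[base]scale=1080:1920:force_original_aspect_ratio=increase,"
    "crop=1080:1920,gblur=sigma=30[bg];"
    "[top]scale=1080:1920:force_original_aspect_ratio=decrease[fg];"
    "[bg][fg]overlay=(W-w)/2:(H-h)/2"
)

command = (
    f'ffmpeg -i {input_file} -filter_complex "{filter_complex}" '
    f'-c:v libx264 -preset slow -crf 18 -c:a copy {output_file}'
)
subprocess.call(command, shell=True)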

    


  • How to embed pic_timing SEI wallclock timecodes in RTMP streaming?

    13 November 2023, by MorenoGentili

    I need to stream my desktop to the AWS MediaLive service and, as a requirement, I must include wallclock timecodes in the stream. AWS support kindly informed me that for H.264-encoded streams I need to provide the timecodes as "pic_timing SEI messages".

    I'm streaming with FFmpeg via the RTMP protocol on Windows 10, so I tried adding the use_wallclock_as_timestamps and copyts flags to my command.

    ffmpeg -f gdigrab -framerate 30 -offset_x 0 -offset_y 0 -video_size 1920x1080 -show_region 1 -use_wallclock_as_timestamps 1 -i desktop -vf scale=320:240 -c:v libx264 -c:a aac -profile:v main -level 3.1 -pix_fmt yuv420p -copyts -f flv rtmp://<ip>:1935/<app>/<stream>

    However, the timecodes are not picked up by AWS MediaLive.

    My questions are:

    • Does the use_wallclock_as_timestamps flag actually create timecodes as "pic_timing SEI messages", as required?

    • If not, how do I add wallclock timecodes as "pic_timing SEI messages"? It doesn't have to be every frame; every 2 or 3 seconds would suffice.


    I'm stumped. I couldn't find the answer in the FFmpeg documentation.


    Thanks for your help.
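
    One way to check whether pic_timing SEI messages actually end up in the encoded stream is to save a short test capture of it and scan the raw H.264 elementary stream for SEI NAL units (nal_unit_type 6) whose first payload type is 1, which is pic_timing. The following Python sketch is only a rough diagnostic under that assumption; out.flv is a hypothetical file name for the capture, and the check only looks at the first SEI payload in each NAL unit.

import re
import subprocess

# Extract the raw Annex B H.264 stream from the hypothetical capture out.flv.
subprocess.run(
    ["ffmpeg", "-y", "-i", "out.flv", "-c:v", "copy",
     "-bsf:v", "h264_mp4toannexb", "-f", "h264", "out.h264"],
    check=True)

with open("out.h264", "rb") as f:
    data = f.read()

pic_timing_count = 0
# Walk the Annex B start codes and inspect each NAL unit header byte.
for match in re.finditer(b"\x00\x00\x01", data):
    pos = match.end()
    if pos + 1 >= len(data):
        break
    nal_type = data[pos] & 0x1F
    # SEI NAL unit is type 6; SEI payload type 1 is pic_timing.
    if nal_type == 6 and data[pos + 1] == 1:
        pic_timing_count += 1

print("pic_timing SEI messages found:", pic_timing_count)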


  • How to send OpenCV video to ffmpeg

    20 May 2016, by Jahan Balasubramaniam

    I'm trying to send a processed OpenCV Mat as video to ffmpeg. I'm encoding each frame, writing it to standard output and then piping it to ffmpeg. Here is my code.

    C++:

    if(!cap.isOpened()) {
       cout << "Video not accessible" << endl;
       return -1;
    } else {
       cout << "Video is accessible" << endl;
    }

    while (true) {

       cap >> frame;

       //some processing

       cv::imencode(".jpg", frame, buff);
       for (i = buff.begin(); i != buff.end(); ++i)
           std::cout << *i;
    }

    My input video resolution is 640x418. I do not alter the video size.

    After building, I use the following command to run it.

    ./a.out | ffmpeg -f rawvideo -pix_fmt bgr24 -s 640x418 -r 30 -i - -an -f mpegts udp://0.0.0.0:8182

    and also this

    ./a.out | ffmpeg -i pipe:0 -f rawvideo -pix_fmt bgr24 -s 640x418 -r 30 -i - -an -f mpegts udp://0.0.0.0:8182

    However, neither of these seems to work.

    Kindly help.
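
    A likely mismatch here is that the program writes JPEG-encoded bytes while both ffmpeg commands declare raw BGR input (-f rawvideo -pix_fmt bgr24). The sketch below shows the raw-frame variant of the same piping approach; it is written in Python with OpenCV's cv2 bindings rather than C++, and input.mp4 is a hypothetical stand-in for the capture source.

import subprocess
import cv2

cap = cv2.VideoCapture("input.mp4")   # hypothetical source
width, height = 640, 418              # resolution stated in the question
fps = 30

# ffmpeg reads raw bgr24 frames from stdin; the declared geometry and pixel
# format must match the bytes we write, mirroring the command in the question.
ffmpeg = subprocess.Popen(
    ["ffmpeg", "-f", "rawvideo", "-pix_fmt", "bgr24",
     "-s", f"{width}x{height}", "-r", str(fps), "-i", "-",
     "-an", "-f", "mpegts", "udp://0.0.0.0:8182"],
    stdin=subprocess.PIPE)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # some processing ...
    ffmpeg.stdin.write(frame.tobytes())  # raw bgr24 bytes, no JPEG encoding

ffmpeg.stdin.close()
ffmpeg.wait()
cap.release()

    If the frames are to stay JPEG-encoded as in the original loop, ffmpeg would instead need an image pipe input (for example -f image2pipe -c:v mjpeg -i -) rather than -f rawvideo.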