
Other articles (87)

  • What is a form mask

    13 June 2013

    A form mask is a customization of the form used to publish media, sections, news items, editorials and links to other sites.
    Each object publication form can therefore be customized.
    To customize form fields, go to the administration area of your MediaSPIP and select "Configuration des masques de formulaires".
    Then select the form to modify by clicking on its object type. (...)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable MediaSPIP release.
    Its official release date is June 21, 2013, and it is announced here.
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • MediaSPIP Player: the controls

    26 May 2010

    Mouse controls for the player
    In addition to the actions triggered by clicking the visible buttons of the player interface, other actions can be performed with the mouse: Click: clicking on the video, or on the audio logo, switches it between play and pause depending on its current state; Wheel (scrolling): when the mouse hovers over the area used by the media, the mouse wheel no longer scrolls the page as usual, but decreases or (...)

On other sites (5852)

  • How can we merge two videos, one with a horizontal resolution and the other with a vertical resolution, without stretching them, using ffmpeg?

    26 February 2021, by Rohan Patil

    I want to make an application that merges two videos, where one video has a vertical resolution and the other a horizontal one. I managed to merge them, but it stretches the video, which ideally shouldn't happen. Does anyone have an idea how to do this? Thank you!

    



    command = new String[]{"-y", "-i", "video1.mp4", "-i", "video2.mp4", "-strict", "experimental", "-filter_complex",
            "[0:v]scale=1920x1080,setdar=4:3[v0];[1:v]scale=1920x1080,setdar=4:3[v1];[v0][0:a][v1][1:a] concat=n=2:v=1:a=1",
            "-ab", "48000", "-ac", "2", "-ar", "22050", "-s", "1920x1080", "-vcodec", "libx264", "-crf", "27", "-q", "4", "-preset", "ultrafast", "output.mp4"};


    

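    The stretching comes from the scale filter forcing both inputs to 1920x1080 regardless of their original aspect ratio. Below is a hedged sketch of an alternative, shown as a plain ffmpeg command line rather than the Java array above: scale each input with force_original_aspect_ratio=decrease and pad the remainder, so the vertical video is pillarboxed instead of distorted before concat. The file names and encoding settings are placeholders carried over from the question, not tested values.

    # Fit each input inside 1920x1080, pad to a common frame size and SAR, then concatenate
    ffmpeg -y -i video1.mp4 -i video2.mp4 \
        -filter_complex "[0:v]scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1[v0];[1:v]scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1[v1];[v0][0:a][v1][1:a]concat=n=2:v=1:a=1[v][a]" \
        -map "[v]" -map "[a]" -vcodec libx264 -crf 27 -preset ultrafast output.mp4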

  • avcodec/indeo2: Check input size against resolution in ir2_decode_plane()

    17 March 2019, by Michael Niedermayer
    avcodec/indeo2: Check input size against resolution in ir2_decode_plane()
    

    Fixes: Timeout (56 sec -> 14 sec)
    Fixes: 13708/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_INDEO2_fuzzer-5656342004498432

    Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
    Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

    • [DH] libavcodec/indeo2.c
  • Can't fix timestamp of WebM video with dynamic resolution using FFmpeg

    11 February 2019, by Yurii Kit

    I’m developing a platform for 1-to-1 video calls with recording. For my purposes, I work with the following stack: WebRTC, Kurento Media Server, FFmpeg.

    1. It works perfectly in an ideal environment, but if my users have a poor connection, after the recording I see a lot of problems with out-of-sync audio and video tracks.

      As I understand it, the problem appears due to incorrect timestamps, so I do a bit of post-processing where I generate new timestamps, and it helps!

      Here is an example of the command:

      ffmpeg -fflags +genpts -acodec libopus -vcodec libvpx \
         -i in.webm \
         -filter_complex "fps=30, setpts=PTS-STARTPTS" \
         -acodec libvorbis -vcodec libvpx \
         -vsync 1 -async 1 -r 30 -threads 4 out.webm
    2. After that, I faced one more problem. If the user has a poor connection, WebRTC can dynamically change the video resolution. After post-processing such videos (with different resolutions during the video), I see a frozen image until the end of the video, starting from the moment the resolution was dynamically changed. There are no errors in the FFmpeg logs, just information about the resolution change:

      [libvpx @ 0x559335713440] dimension change! 480x270 -> 320x180
      -async is forwarded to lavfi similarly to -af aresample=async=1:min_hard_comp=0.100000.

      After analyzing the logs, I realized that the problem was due to the STARTPTS parameter, which became very large after the resolution changed automatically (equal to the number of frames that preceded it). I tried to remove STARTPTS and leave only PTS.
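      For illustration, that variant is roughly the same command with setpts=PTS instead of setpts=PTS-STARTPTS; this is a sketch, not the exact command from the post:

      # Regenerate the frame rate but keep the original PTS values (no start-offset subtraction)
      ffmpeg -fflags +genpts -acodec libopus -vcodec libvpx \
         -i in.webm \
         -filter_complex "fps=30, setpts=PTS" \
         -acodec libvorbis -vcodec libvpx \
         -vsync 1 -async 1 -r 30 -threads 4 out.webm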

      After that, the video started to work well, but only until the video resolution is dynamically changed; then the audio and video tracks go out of sync again.

      I’ve tried scaling the videos to a static resolution before fixing the timestamps, and it helps. But it’s a bit of extra work. Command example:

      ffmpeg -acodec libopus -vcodec libvpx \
         -i in.webm \
         -vf scale=640:480 \
         -acodec libvorbis -vcodec libvpx \
         -threads 4 out.webm

      I’ve also tried to combine both commands using filter_complex, but it didn’t work.
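      For reference, one way the two steps could be chained in a single pass is sketched below: scale to a fixed resolution first, then regenerate the frame rate and timestamps in the same -vf filter chain. This is only an assumption about what such a combined command might look like, not one taken from the post, and whether it avoids the freeze is unverified.

      # Scale to a static resolution, then fix frame rate and timestamps in one filter chain
      ffmpeg -fflags +genpts -acodec libopus -vcodec libvpx \
         -i in.webm \
         -vf "scale=640:480, fps=30, setpts=PTS-STARTPTS" \
         -acodec libvorbis -vcodec libvpx \
         -vsync 1 -async 1 -r 30 -threads 4 out.webm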

      I haven’t worked with FFmpeg for very long, so maybe I’m doing something wrong? Maybe there are easier ways to do this?

      Since Kurento uses GStreamer for video recording, maybe it would be a better option to reconfigure Kurento so that it fixes the timestamps during recording?

    I can provide any of the videos and commands that I use.

    I’m using:
    Kurento Media Server 6.9.0,
    FFmpeg 4.1