Other articles (108)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Adding user-specific information and other author-related behaviour changes

    12 April 2011, by

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to change certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.

  • Possibility of farm deployment

    12 April 2011, by

    MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
    This makes it possible, for example: to share the set-up costs between several projects / individuals; to quickly deploy a multitude of unique sites; to avoid having to put all the creations into a digital catch-all, as is the case for the large general-public platforms scattered across the (...)

On other sites (11082)

  • ffmpeg concat drops audio frames

    5 October 2017, by Shaun

    I have an mp4 file and I want to take two sequential sections of the video out and render them as individual files, later recombining them back into the original video. For instance, with my video video.mp4, I can run

    ffmpeg -i video.mp4 -ss 56 -t 4 out1.mp4
    ffmpeg -i video.mp4 -ss 60 -t 4 out2.mp4

    creating out1.mp4 which contains 00:00:56 to 00:01:00 of video.mp4, and out2.mp4 which contains 00:01:00 to 00:01:04. However, later I want to be able to recombine them again quickly (i.e., without reencoding), so I use the concat demuxer,

    ffmpeg -f concat -safe 0 -i files.txt -c copy concat.mp4

    where files.txt contains

    file out1.mp4
    file out2.mp4

    which should, in theory, give me back 00:00:56 to 00:01:04 of video.mp4; however, there are always dropped audio frames where the concatenation occurs, creating a very unpleasant sound artifact (an audio blip, if you will).

    [image: missing audio frames]

    I have tried using -async and -af apad when initially creating the two sections of the video, but I am still faced with the same problem and have not found a solution elsewhere. I have experienced this issue in multiple different use cases, so hopefully this simple example will shed some light on the real problem.
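
    For reference, one workaround that is often suggested for this kind of audio gap (it is not part of the original question) is to keep the video stream copied but re-encode only the audio while concatenating, so that a continuous audio stream is produced across the join. The sketch below reuses the file names from the question; the AAC bitrate is an assumed value.

    # minimal sketch: copy video, re-encode only the audio across the join (192k is an assumed bitrate)
    ffmpeg -f concat -safe 0 -i files.txt -c:v copy -c:a aac -b:a 192k concat.mp4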

  • displaying a stream of baseline h264 frames in browsers

    6 August 2021, by Thabet Sabha

    So, I have a server that receives a live RTSP stream and generates baseline h264 frames using ffmpeg, which are then sent to the browser via an RTCDataChannel. While the frames arrive as intended, I can't figure out a way to display them in my HTML5 video element. Here is a simplified version of my current approach:

    


    const remoteStream = new MediaSource();
    myVideoElement.src = window.URL.createObjectURL(remoteStream);

    // called when remoteStream.readyState === "open"
    let sourceBuffer = remoteStream.addSourceBuffer('video/mp4; codecs="avc1.4d002a"');

    // this gets called whenever a new frame is received from the webrtc data channel.
    function onFrame(frame) {
        sourceBuffer.appendBuffer(new Uint8Array(frame));

        /*
        console.log(frame) ==> <Buffer 00 01 41 9b a0 22 80 a5 d7 42 ea 34 14 85 ba bc 1b f2 71 0d 8b e1 3c 52 d5 8c ef c1 89 10 c5 05 78 ee 1d 03 8d ... 2896 more bytes>
        */
    }

    ffmpeg options:

    [
        "-rtsp_transport", "tcp",
        "-i", `${rtspCamURL}`,
        "-framerate", "15",
        "-c:v", "libx264",
        "-vprofile", "baseline",
        "-b:v", "600k",
        "-bufsize", "600k",
        "-pix_fmt", "yuv420p",
        "-tune", "zerolatency",
        "-preset", "ultrafast",
        "-f", "rawvideo",
        "-"
    ]

    The ffmpeg stream is then split on the NAL delimiter (to generate individual frames), and each frame is sent via the data channel like so: Buffer.concat([nalDelimiter, frame]).

    I am not sure if I'm missing something, as I'm not getting any helpful errors; the MediaSource just closes as soon as the first frame arrives, for some reason.

    Or does MediaSource simply not support raw h264 frames, and if so, is there a workaround for this issue (even if it has to do with changing the ffmpeg params)?
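
    For context (this is not part of the original question): Media Source Extensions generally expect fragmented MP4 segments rather than raw Annex B h264 NAL units, so one commonly suggested adjustment is to have ffmpeg mux the encoded stream into fragmented MP4 before it is split and sent over the data channel. The command below is only a minimal sketch of that idea; the RTSP URL is a placeholder, most encoding options mirror the question, and -r 15 is used here for the output frame rate.

    # minimal sketch: same encoding, but muxed as fragmented MP4 for MSE (rtsp://camera-url is a placeholder)
    ffmpeg -rtsp_transport tcp -i rtsp://camera-url -r 15 -c:v libx264 -vprofile baseline -b:v 600k -bufsize 600k -pix_fmt yuv420p -tune zerolatency -preset ultrafast -movflags frag_keyframe+empty_moov+default_base_moof -f mp4 -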

  • ffmpeg Error: Pattern type 'glob' was selected but globbing is not supported by this libavformat build

    14 September 2017, by Aryan Naim

    I'm trying to convert a group of ".jpg" files, acting as individual frames, into a single mpeg video ".mp4".

    Example parameters I used:

    frame duration  = 2 secs
    frame rate      = 30  fps
    encoder         = libx264 (mpeg)
    input pattern   = "*.jpg"
    output pattern  = video.mp4

    Based on the ffmpeg wiki instructions (https://trac.ffmpeg.org/wiki/Create%20a%20video%20slideshow%20from%20images), I issued this command:

    ffmpeg -framerate 1/2 -pattern_type glob -i "*.jpg" -c:v libx264 -r 30 -pix_fmt yuv420p video.mp4

    But I'm getting this error:

    [image2 @ 049ab120] Pattern type 'glob' was selected but globbing is not
    supported by this libavformat build *.jpg: Function not implemented

    This probably means the pattern-matching commands for my build/version have changed. By the way, this is my Windows 32-bit ffmpeg download build (ffmpeg-20150702-git-03b2b40-win32-static).

    How can I choose a group of files using pattern matching with ffmpeg?
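
    For reference (this is not part of the original question): Windows builds of ffmpeg are commonly compiled without glob support, and the usual workaround is to rename the images to a numbered sequence and use ffmpeg's default sequence pattern instead of -pattern_type glob. The file name pattern below (img%03d.jpg) is an assumed naming scheme.

    # minimal sketch: numbered-sequence input instead of glob (img%03d.jpg is an assumed naming scheme)
    ffmpeg -framerate 1/2 -i img%03d.jpg -c:v libx264 -r 30 -pix_fmt yuv420p video.mp4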