
Other articles (102)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
Changing the publication date
21 June 2013
How do you change the publication date of a media item?
You first need to add a "Date de publication" field to the appropriate form mask:
Administrer > Configuration des masques de formulaires > select "Un média"
Under "Champs à ajouter", tick "Date de publication"
Click Enregistrer at the bottom of the page
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (12189)
-
lavf: Remove codec_tag from dashenc and smoothstreamingenc
30 June 2017, by Martin Storsjö
lavf: Remove codec_tag from dashenc and smoothstreamingenc
Currently, the tags enforced and set on the segmenter muxer level mismatch what the mp4/ismv muxer uses (since 713efb2c0d013).
Skip the codec_tag altogether here, to let the user (try to) set whichever codec/tag is preferred; the individual chained muxer will reject invalid codecs anyway.
Signed-off-by: Martin Storsjö <martin@martin.st>
-
ffmpeg concat drops audio frames
5 October 2017, by Shaun
I have an mp4 file and I want to take two sequential sections of the video out and render them as individual files, later recombining them back into the original video. For instance, with my video video.mp4, I can run
ffmpeg -i video.mp4 -ss 56 -t 4 out1.mp4
ffmpeg -i video.mp4 -ss 60 -t 4 out2.mp4
creating out1.mp4, which contains 00:00:56 to 00:01:00 of video.mp4, and out2.mp4, which contains 00:01:00 to 00:01:04. However, later I want to be able to recombine them again quickly (i.e., without re-encoding), so I use the concat demuxer,
ffmpeg -f concat -safe 0 -i files.txt -c copy concat.mp4
where files.txt contains
file out1.mp4
file out2.mp4
which theoretically should give me back 00:00:56 to 00:01:04 of video.mp4. However, there are always dropped audio frames where the concatenation occurs, creating a very unpleasant sound artifact, an audio blip, if you will.
I have tried using async and -af apad when initially creating the two sections of the video, but I am still faced with the same problem and have not found a solution elsewhere. I have experienced this issue in multiple different use cases, so hopefully this simple example will shed some light on the real problem.
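One workaround often suggested for audio gaps at concat-demuxer joins is to go through the concat filter instead, at the cost of a re-encode; a minimal sketch, assuming out1.mp4 and out2.mp4 from above each carry exactly one video and one audio stream:

# join the two clips by re-encoding through the concat filter
# (streams are decoded and concatenated, which avoids container-level audio gaps)
ffmpeg -i out1.mp4 -i out2.mp4 \
  -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" concat.mp4

This gives up the speed of -c copy, so it is only one possible direction rather than a drop-in replacement for the demuxer approach.
-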
displaying a baseline h264 frames stream in browsers
6 August 2021, by Thabet Sabha
So, I have a server that receives a live RTSP stream and generates baseline h264 frames using ffmpeg, which are then sent to the browser via an rtcDataChannel. While the frames arrive as intended, I can't figure out a way to display them on my HTML5 videoElement.
Here is a simplified version of my current approach:


const remoteStream = new MediaSource();
myVideoElement.src = window.URL.createObjectURL(remoteStream);

// called when remoteStream.readyState === "open"
let sourceBuffer = remoteStream.addSourceBuffer('video/mp4; codecs="avc1.4d002a"');

// this gets called when ever a new frame is received from the webrtc data channel.
function onFrame(frame) {
 sourceBuffer.appendBuffer(new Uint8Array(frame));

 /*
 console.log(frame) ==> <Buffer 00 01 41 9b a0 22 80 a5 d7 42 ea 34 14 85 ba bc 1b f2 71 0d 8b e1 3c 52 d5 8c ef c1 89 10 c5 05 78 ee 1d 03 8d ... 2896 more bytes>
 */
}


ffmpeg options:


[
 "-rtsp_transport", "tcp",
 "-i", `${rtspCamURL}`, 
 "-framerate", "15",
 "-c:v", "libx264",
 "-vprofile", "baseline",
 "-b:v", "600k",
 "-bufsize", "600k",
 "-pix_fmt", "yuv420p",
 '-tune', 'zerolatency',
 "-preset", "ultrafast",
 "-f", "rawvideo",
 '-'
]; 



The ffmpeg stream is then split on the NAL delimiter (to generate individual frames), and each frame is sent via the data channel like so:

Buffer.concat([nalDelimiter, frame])

I am not sure if I'm missing something, as I'm not getting any helpful errors, because the MediaSource closes as soon as the first frame arrives, for some reason.


Or does the MediaSource just not support raw h264 frames, and if so, is there a workaround to solve this issue (even if it has to do with changing the ffmpeg params)?
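For context on that last question: Media Source Extensions generally expect fragmented MP4 (or WebM) segments rather than raw Annex B h264 NAL units, so one direction sometimes suggested is to have ffmpeg emit fragmented MP4 instead of a raw bitstream. A rough sketch of such a command, reusing the input and encoder options from the question (the ${rtspCamURL} placeholder comes from there):

# write fragmented MP4 to stdout; the initial ftyp/moov segment and each
# subsequent moof+mdat fragment are what would be appended to the SourceBuffer
ffmpeg -rtsp_transport tcp -i "${rtspCamURL}" \
  -c:v libx264 -profile:v baseline -pix_fmt yuv420p \
  -b:v 600k -bufsize 600k -tune zerolatency -preset ultrafast \
  -f mp4 -movflags empty_moov+frag_keyframe+default_base_moof pipe:1

The data sent over the channel would then need to be split on fragment boundaries rather than NAL delimiters; this is only a sketch, not a confirmed fix for the MediaSource closing issue.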