
Media (1)
-
Carte de Schillerkiez
13 May 2011
Updated: September 2011
Language: English
Type: Text
Other articles (61)
-
Emballe Médias: uploading documents simply
29 October 2010
The emballe médias plugin was developed primarily for the MediaSPIP distribution, but it is also used in other related projects such as géodiversité. Required and compatible plugins
For this plugin to work, the following other plugins must be installed: CFG, Saisies, SPIP Bonux, Diogène, swfupload, jqueryui.
Other plugins can be used alongside it to extend its capabilities: Ancres douces, Légendes, photo_infos, spipmotion (...)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows media playback on major mobile platforms with the above (...)
-
The plugin: Podcasts.
14 July 2010
The podcasting problem is, once again, one that reveals the state of standardization of data transport on the Internet.
Two interesting formats exist: the one developed by Apple, strongly tied to iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "open" and is notably backed by Yahoo and the Miro software.
File types supported in the feeds
Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg, .m4a audio/x-m4a, .mp4 (...)
On other sites (11687)
-
OpenCV+ffmpeg: Set max_analyze_duration?
3 April 2013, by Christian Jonassen
import sys
import cv

video = "http://192.168.1.15:8000/video.mjpg" # from motion
capture = cv.CaptureFromFile(video)
print capture

cv.NamedWindow('Stream', 1)
while True:
    frame = cv.QueryFrame(capture)
    if frame is None:
        print "No frame, exiting program"
        break
    else:
        cv.ShowImage('Stream', frame)
        cv.WaitKey(20)
This sort of works, meaning that after a while I get
Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later.
Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later.
[mjpeg @ 0xb46720] max_analyze_duration reached
[mjpeg @ 0xb46720] Estimating duration from bitrate, this may be inaccurate
Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later.
Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later.
[mjpeg @ 0xb46720] max_analyze_duration reached
[mjpeg @ 0xb46720] Estimating duration from bitrate, this may be inaccurate
<Capture 0x7f0aed25c630>
The stream then works fine (it's a bit speedy at the start).
It makes sense: FFmpeg finds the stream mysterious and therefore analyzes it for the maximum duration, which fills up the buffer and hence causes the speediness at the start. Now, is there a way I can (without recompiling FFmpeg) set max_analyze_duration, or otherwise pass some hints to FFmpeg so that it doesn't keep analysing the stream until max_analyze_duration has passed?
The stream comes from the motion program: an MJPEG stream at 10 FPS, 640x480.
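One possible workaround (not part of the original question) is to bypass cv.CaptureFromFile and open the stream with the libavformat C API directly, where the probing limits can be passed as the standard "analyzeduration" and "probesize" demuxer options. The sketch below assumes a reasonably recent FFmpeg; the helper name open_mjpeg_stream is only illustrative, and the URL is the one from the question:

#include <libavformat/avformat.h>

/* Hypothetical helper: open the MJPEG stream with reduced probing limits. */
static int open_mjpeg_stream(AVFormatContext **fmt_ctx)
{
    AVDictionary *opts = NULL;
    int ret;

    avformat_network_init();                            /* also silences the "global network initialization" warning above */
    av_dict_set(&opts, "analyzeduration", "500000", 0); /* probe for at most 0.5 s (value is in microseconds) */
    av_dict_set(&opts, "probesize", "32", 0);           /* read as little data as possible while probing */

    ret = avformat_open_input(fmt_ctx, "http://192.168.1.15:8000/video.mjpg", NULL, &opts);
    av_dict_free(&opts);
    if (ret < 0)
        return ret;

    /* With the limits above, stream probing stops quickly instead of
       buffering several seconds of MJPEG before playback starts. */
    return avformat_find_stream_info(*fmt_ctx, NULL);
}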
-
FFmpeg libraries: bitstream "h264_mp4toannexb" filter does not work
25 December 2013, by user2677612
We are using the FFmpeg libraries, git-ee94362, libavformat v55.2.100. Our purpose is to mux two streams (video and audio) into an M3U8 playlist using HLS. We are using the AV_CODEC_ID_H264 output encoder, the AV_PIX_FMT_YUV420P output video pixel format, and the CODEC_FLAG_GLOBAL_HEADER flag for the encoder. The latter causes us to use the "h264_mp4toannexb" bitstream filter.
So, here is the code snippet:
AVPacket outpkt = {0};
int isGotVideoPacket = 0;
av_init_packet(&outpkt);
out_video_frame->pts = (int64_t) (video_frame_count * in_video_frame_duration / av_q2d(out_video_stream->time_base));
int ret = avcodec_encode_video2(enc_out_video_ctx, &outpkt, out_video_frame, &isGotVideoPacket);
assert(0 <= ret);

if ((1 == isGotVideoPacket) && (0 < outpkt.size)) {
    if ((AVBitStreamFilterContext*) 0 != vbsf_ctx) {
        AVPacket new_outpkt = outpkt;
        ret = av_bitstream_filter_filter(vbsf_ctx, enc_out_video_ctx, (const char*) 0,
                                         &new_outpkt.data, &new_outpkt.size,
                                         outpkt.data, outpkt.size,
                                         outpkt.flags & AV_PKT_FLAG_KEY);
        if (ret > 0)
        {
            outpkt = new_outpkt;
        }
        else
        {
            // We get ret = -22
            char errbuf[128] = "";
            // Both functions return "Error number -22 occurred", which doesn't explain anything
            av_strerror(ret, errbuf, 128);
            av_make_error_string(errbuf, 128, ret);
        }
        assert(0 <= ret);
    }
    outpkt.stream_index = output_video_stream->index;
    // If av_bitstream_filter_filter() and the "if-else" above are commented out, then
    // at frame #37 we get the following error from av_interleaved_write_frame():
    // [mpegts @ 09628140] H.264 bitstream malformed, no startcode found, use the h264_mp4toannexb bitstream filter (-bsf h264_mp4toannexb).
    ret = av_interleaved_write_frame(ofmt_ctx, &outpkt);
    assert(0 <= ret);
}
Our questions:
1. What is the meaning of the "-22" error from av_bitstream_filter_filter()?
2. Where can we get a full list of FFmpeg error code descriptions?
3. Are we using av_bitstream_filter_filter() correctly? If not, what is the right way?
Andrey Mochenov.
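As a partial answer to question 1 (not part of the original post): FFmpeg builds its system-style error codes by negating POSIX errno values, so on Linux -22 is AVERROR(EINVAL), i.e. "Invalid argument"; the FFmpeg-specific codes such as AVERROR_INVALIDDATA are listed in libavutil/error.h. A minimal sketch, assuming a POSIX errno layout:

#include <stdio.h>
#include <errno.h>
#include <libavutil/error.h>

int main(void)
{
    char buf[AV_ERROR_MAX_STRING_SIZE];

    /* EINVAL is 22 on Linux, so AVERROR(EINVAL) is -22: the value returned above. */
    printf("AVERROR(EINVAL) = %d\n", AVERROR(EINVAL));

    /* av_strerror() resolves errno-based codes via strerror_r() when FFmpeg was built
       with it; builds without it only give the generic "Error number -22 occurred"
       message seen in the question. */
    if (av_strerror(AVERROR(EINVAL), buf, sizeof(buf)) == 0)
        printf("%s\n", buf); /* typically prints "Invalid argument" */

    return 0;
}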
-
Prepare mp4 videos for Media Source Extensions API using ffmpeg
6 February 2018, by ler
This command produces init.mp4 plus a bunch of m4s files, which I'm trying to play using MSE:
ffmpeg -i <input file> -f hls -hls_segment_type fmp4 -c:v copy playlist.m3u8
This is the client-side code I'm using:
var socket = io();
var video = document.querySelector('video');
var mimeCodec = 'video/mp4; codecs="avc1.64000d,mp4a.40.2"';

if ('MediaSource' in window && MediaSource.isTypeSupported(mimeCodec)) {
    var mediaSource = new MediaSource();
    video.src = URL.createObjectURL(mediaSource);
    mediaSource.addEventListener('sourceopen', sourceOpen);
} else {
    console.error('Unsupported MIME type or codec: ', mimeCodec);
}

function sourceOpen(_) {
    var mediaSource = this;
    var sourceBuffer = mediaSource.addSourceBuffer(mimeCodec);
    sourceBuffer.mode = 'sequence';
    socket.on('broadcast', function (newPiece) {
        // here I'm getting the buffer of the video == buffer
        sourceBuffer.addEventListener('updateend', function (_) {
            video.play().then(function () { }).catch(function (error) { });
        });
        sourceBuffer.appendBuffer(buffer); // when the second piece comes I append its buffer
    });
}
Everything works fine when I send the init.mp4 file followed by playlist0.m4s, playlist1.m4s, playlist2.m4s, and so on. But when I try to play the init.mp4 file followed immediately by segments 6, 7, 8 rather than 0, 1, 2, meaning playlist6.m4s, playlist7.m4s, playlist8.m4s, ..., it doesn't work.
I don't know why. This is supposed to be live video: a viewer who watches the live stream from the beginning gets init.mp4, playlist0.m4s, playlist1.m4s, playlist2.m4s, and so on, while someone who joins after 5 minutes gets something like init.mp4, playlist32.m4s, playlist33.m4s, playlist34.m4s, and so on. So far it only works for the viewer who gets init.mp4, playlist0.m4s, playlist1.m4s, playlist2.m4s, ...; the video can't play for the others.