
Media (1)
-
SWFUpload Process
6 September 2011
Updated: September 2011
Language: French
Type: Text
Other articles (103)
-
Customising by adding a logo, banner or background image
5 September 2013 — Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
HTML5 audio and video support
13 April 2011 — MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
Support for all media types
10 April 2011 — Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others); audio (MP3, Ogg, Wav and others); video (Avi, MP4, Ogv, mpg, mov, wmv and others); textual content, code and more (OpenOffice, Microsoft Office (spreadsheets, presentations), web (HTML, CSS), LaTeX, Google Earth) (...)
On other sites (11618)
-
ffmpeg - RTSP stream with no audio to html5 video tag over mp4 fragmented stream
17 May 2021, by Play_it — I am trying to set up a live video feed on my website. I found this code, but it did not work as nicely as I wanted, so I modified it to process only the video feed, not the screenshots.


I get stuttering, and also lags where playback continues but the picture does not update.


This is my modification:


if (isset($_GET["type"]) && !empty($_GET["type"])) {
    // Disable caching so the browser always pulls the live stream
    header("Cache-Control: no-store, no-cache, must-revalidate, max-age=0");
    header("Cache-Control: post-check=0, pre-check=0", false);
    header("Pragma: no-cache");
    header("Accept-Ranges: bytes");
    header("Connection: keep-alive");

    $rtsp = "rtsp://[some ip]:554/some/path";

    $ffmpeg_base = "ffmpeg -rtsp_transport tcp -probesize 32 -stimeout 5000000 -i \"{$rtsp}\" -loglevel quiet";
    $duration_limit = 600;
    $def_rate = 6;

    $type = $_GET["type"];
    switch ($type) {
        case "mjpeg":
            $Ct = "multipart/x-mixed-replace;boundary=mjpegboundary";
            $ffmpeg_params = "-b:v 512k -s 426x240 -r 2";
            break;
        case "mp4":
            $Ct = "video/mp4";
            $ffmpeg_params = "-c copy -an -movflags +frag_keyframe+separate_moof+omit_tfhd_offset+empty_moov";
            break;
        case "webm":
            $Ct = "video/webm";
            $ffmpeg_params = "-c:v libvpx -b:v 256k -an -s 426x240 -r {$def_rate}";
            break;
        case "ogg":
            $Ct = "video/ogg";
            $ffmpeg_params = "-c:v libtheora -b:v 256k -an -s 426x240 -r {$def_rate}";
            break;
        default:
            exit;
    }
    header("Content-type: $Ct");
    passthru("{$ffmpeg_base} -t {$duration_limit} {$ffmpeg_params} -f $type pipe:");
}



I don't know much about ffmpeg and am having some difficulty setting the stream up.


My approach would be to decode the stream and re-encode it as fragmented MP4. Maybe you have a better solution?
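For reference, the "mp4" branch of the PHP above boils down to one ffmpeg invocation that remuxes the RTSP feed into a fragmented MP4 written to stdout. The sketch below builds that command as an argument list; the RTSP URL is a placeholder, and the flag set simply mirrors the question's code rather than being a recommendation.

```python
# Sketch: assemble the ffmpeg command used for the fragmented-MP4 case.
# The URL is a placeholder; flags mirror the "mp4" branch above.

def fragmented_mp4_cmd(rtsp_url, duration_limit=600):
    return [
        "ffmpeg",
        "-rtsp_transport", "tcp",   # TCP avoids UDP packet loss over RTSP
        "-i", rtsp_url,
        "-t", str(duration_limit),  # cap the stream length in seconds
        "-c", "copy",               # remux only, no re-encode
        "-an",                      # drop audio
        "-movflags", "+frag_keyframe+separate_moof+omit_tfhd_offset+empty_moov",
        "-f", "mp4",
        "pipe:1",                   # write the fMP4 to stdout
    ]

cmd = fragmented_mp4_cmd("rtsp://CAMERA_URL:554/some/path")
```

Run with subprocess.Popen and forward stdout chunks to the HTTP response body to reproduce what passthru does in the PHP version.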


-
Different between fragmented mp4 files generated by ffmpeg and by code
13 May 2021, by dinhnguyen — Currently I have a problem generating a fragmented MP4 file in code using libavformat. My file can be played by VLC, but it can't be streamed (via WebSocket) and played (via MediaSource) in the (Chrome) browser. (I used this to test streaming a fragmented MP4 file to the browser via WebSocket.)



Note: the files below are encoded with the Baseline profile, level 4, so you should change the MIME type (in index.html) to const mimeCodec = 'video/mp4; codecs="avc1.42C028"'; to be able to play them.
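The codec string in that MIME type follows the RFC 6381 "avc1.PPCCLL" pattern: profile_idc, constraint flags and level_idc as hex bytes. This small sketch (illustrative only, not part of the original code) decodes it, which is handy for checking that "avc1.42C028" really means Baseline level 4.0:

```python
# Sketch: decode an RFC 6381 "avc1.PPCCLL" codec string.
# PP = profile_idc, CC = constraint flags, LL = level_idc (level * 10).

H264_PROFILES = {66: "Baseline", 77: "Main", 100: "High"}

def parse_avc1(codec):
    hexpart = codec.split(".")[1]          # e.g. "42C028"
    profile_idc = int(hexpart[0:2], 16)
    constraints = int(hexpart[2:4], 16)
    level_idc = int(hexpart[4:6], 16)
    return {
        "profile": H264_PROFILES.get(profile_idc, str(profile_idc)),
        "constraints": constraints,
        "level": level_idc / 10,
    }

info = parse_avc1("avc1.42C028")
# → profile "Baseline", level 4.0
```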



I checked and found that my generated MP4 file differs slightly from the file generated by the ffmpeg tool.


Here's what I've done:



I have a .h264 file



- 

-
In the first approach, I use ffmpeg to generate the fragmented MP4 file:



ffmpeg -i temp.h264 -vcodec copy -f mp4 -movflags empty_moov+default_base_moof+frag_keyframe ffmpeg.mp4




The generated file can be played by both the QuickTime player and the VLC player.
-
In the second approach, I programmatically generate the fragmented MP4 file using libavformat.



First I initialize the context; in the code below, codec is the AVCodecContext* of the input stream.


av_register_all();
avcodec_register_all();

int ret;
AVOutputFormat* fmt = av_guess_format("mp4", 0, 0);
if (!fmt) {
    return;
}

AVFormatContext* ctx = avformat_alloc_context();

// Create an AVIO context to capture the generated MP4 content
uint8_t* avio_ctx_buffer = NULL;
size_t avio_ctx_buffer_size = 50000;
IOOutput buffer = {};
const size_t bd_buf_size = 50000;
avio_ctx_buffer = (uint8_t*)av_malloc(avio_ctx_buffer_size);
buffer.buff = (uint8_t*)av_malloc(bd_buf_size);
buffer.size = bd_buf_size;

AVIOContext* ioCtx = avio_alloc_context(avio_ctx_buffer, (int)avio_ctx_buffer_size,
                                        1, &buffer, NULL, MP4Formatter::onWritePacket, NULL);

ctx->oformat = fmt;
if (ctx->oformat->flags & AVFMT_GLOBALHEADER)
    ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
ctx->pb = ioCtx;
av_opt_set(ctx, "movflags", "frag_keyframe+empty_moov+default_base_moof", 0);

AVStream* st = avformat_new_stream(ctx, codec->codec);
if (!st) {
    return;
}

st->id = (ctx->nb_streams - 1);
avcodec_parameters_from_context(st->codecpar, codec);
st->time_base = codec->time_base;
ioCtx->seekable = 0;




Second, I implement the onWritePacket callback:



int MP4Formatter::onWritePacket(void* opaque, uint8_t* buf, int buf_size) {
    file.write((char*)buf, buf_size);
    return buf_size;
}




Third, I write every packet from the h264 file using av_interleaved_write_frame:



if (firstFrame) {
    AVDictionary* opts = NULL;
    av_dict_set(&opts, "movflags", "frag_keyframe+empty_moov+default_base_moof", 0);
    if (!parseSPSPPS(data, length)) {
        return;
    }
    cout << "spslen " << spslen << " ppslen " << ppslen << endl;

    auto c = st->codecpar;
    // Extradata contains the SPS & PPS in AVCC format
    int extradata_len = 8 + spslen + 1 + 2 + ppslen;
    c->extradata = (uint8_t*)av_mallocz(extradata_len);
    c->extradata_size = extradata_len;
    c->extradata[0] = 0x01;        // configurationVersion
    c->extradata[1] = sps[1];      // AVCProfileIndication
    c->extradata[2] = sps[2];      // profile_compatibility
    c->extradata[3] = sps[3];      // AVCLevelIndication
    c->extradata[4] = 0xFC | 3;    // lengthSizeMinusOne = 3 (4-byte NAL lengths)
    c->extradata[5] = 0xE0 | 1;    // numOfSequenceParameterSets = 1
    int tmp = spslen;
    c->extradata[6] = (tmp >> 8) & 0x00ff;
    c->extradata[7] = tmp & 0x00ff;
    int i = 0;
    for (i = 0; i < tmp; i++) {
        c->extradata[8 + i] = sps[i];
    }
    c->extradata[8 + tmp] = 0x01;  // numOfPictureParameterSets = 1
    int tmp2 = ppslen;
    c->extradata[8 + tmp + 1] = (tmp2 >> 8) & 0x00ff;
    c->extradata[8 + tmp + 2] = tmp2 & 0x00ff;
    for (i = 0; i < tmp2; i++) {
        c->extradata[8 + tmp + 3 + i] = pps[i];
    }

    int ret = avformat_write_header(ctx, &opts);
    if (ret < 0) {
        return;
    }
    firstFrame = false;
}

AVPacket pkt;
av_init_packet(&pkt);
pkt.buf = av_buffer_alloc(length);
memcpy(pkt.buf->data, data, length);
pkt.buf->size = length;
pkt.data = pkt.buf->data;
pkt.size = pkt.buf->size;
pkt.pts = ts;
pkt.dts = ts;
pkt.flags = keyFrame ? AV_PKT_FLAG_KEY : 0;
pkt.stream_index = st->id;

av_interleaved_write_frame(ctx, &pkt);
av_buffer_unref(&pkt.buf);
av_packet_unref(&pkt);








Can you take a look at my file and see what's wrong?
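The hand-written extradata above is an avcC box payload (AVCDecoderConfigurationRecord), and its byte layout can be cross-checked independently. This sketch reproduces the same layout in Python; the SPS/PPS bytes are dummy values invented for the example, not taken from the question's stream:

```python
# Sketch: build the avcC (AVCDecoderConfigurationRecord) layout that the
# C++ code above fills into codecpar->extradata, so the offsets and total
# size can be verified. The SPS/PPS bytes are dummies.

def build_avcc(sps: bytes, pps: bytes) -> bytes:
    out = bytearray()
    out.append(0x01)                    # configurationVersion
    out.append(sps[1])                  # AVCProfileIndication
    out.append(sps[2])                  # profile_compatibility
    out.append(sps[3])                  # AVCLevelIndication
    out.append(0xFC | 3)                # lengthSizeMinusOne = 3
    out.append(0xE0 | 1)                # one SPS follows
    out += len(sps).to_bytes(2, "big")  # SPS length
    out += sps
    out.append(0x01)                    # one PPS follows
    out += len(pps).to_bytes(2, "big")  # PPS length
    out += pps
    return bytes(out)

sps = bytes([0x67, 0x42, 0xC0, 0x28, 0x8C, 0x8D, 0x40])  # dummy Baseline SPS
pps = bytes([0x68, 0xCE, 0x3C, 0x80])                    # dummy PPS
avcc = build_avcc(sps, pps)
# Total size matches the 8 + spslen + 1 + 2 + ppslen formula in the C++ code
```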


-
Mux ts segments to fragmented mp4 in realtime
10 April 2021, by bsack — I have an HLS stream which plays in the browser. What I want to do is have a URL like /stream.mp4 that plays the livestream back as a fragmented MP4 from the current point in time. This way you could download the stream as a single file, and it would work without any JS dependencies.

The method I thought of is:


1. send an MP4 header
2. wait for a new segment
3. convert the segment into an MP4 fragment with ffmpeg
4. send the MP4 fragment
5. go to 2.

(I know you can concatenate .ts segments, but they don't play in Firefox.)

This answer describes how to create a fragmented MP4 when you have all the segments at hand (tl;dr: use -movflags frag_keyframe+empty_moov). But I want to do this on the fly, so each new HLS segment is converted to an MP4 fragment as soon as it's created.

I think this is similar to what DASH does natively, where any chunk chunk-$n.m4s can be appended to init.m4s and the result is a proper fragmented MP4. But can this be done without using DASH?

How can I transmux a ts segment into an mp4 fragment?


Edit: I found a way of doing what I wanted, though not by transmuxing MPEG-TS segments to fMP4. It turns out that later versions of HLS do support fragmented MP4 (as DASH does), and FFmpeg provides the -hls_segment_type fmp4 option for this.
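The solution mentioned in the edit can be sketched as a single ffmpeg invocation. The argument list below is illustrative; the input path, playlist name, segment duration and init-segment filename are all placeholder choices, not values from the question:

```python
# Sketch: an ffmpeg command producing an HLS playlist whose segments are
# fragmented MP4 (.m4s) files, via -hls_segment_type fmp4. Paths are
# placeholders.

def hls_fmp4_cmd(src, playlist="stream.m3u8"):
    return [
        "ffmpeg",
        "-i", src,
        "-c", "copy",                    # remux without re-encoding
        "-f", "hls",
        "-hls_segment_type", "fmp4",     # .m4s fragments + an init segment
        "-hls_time", "4",                # target segment duration in seconds
        "-hls_fmp4_init_filename", "init.mp4",
        playlist,
    ]

cmd = hls_fmp4_cmd("input.ts")
```

Each .m4s fragment produced this way can be appended to the init segment to form a playable fragmented MP4, which is exactly the DASH-like behaviour the question asks about.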