
Media (91)
-
Corona Radiata
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Lights in the Sky
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Head Down
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Echoplex
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Discipline
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Letting You
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
Other articles (98)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
Videos
21 April 2011, by
Like "audio" documents, MediaSPIP displays videos wherever possible using the HTML5 <video> tag.
One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name one) and that each browser only handles certain video formats natively.
Its main advantage, on the other hand, is native video support in the browser, which makes it possible to do without Flash and (...) -
Customizing by adding your logo, banner or background image
5 September 2013, by
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
On other sites (5461)
-
read videos from a .m3u8 file with HTML5
6 September 2016, by LeCintas
I am looking to use a .m3u8 file to play a video playlist on my local HTML page, but nothing happens when I run the page.
Here is my HTML code:
<video height="360" controls="controls" autoplay="autoplay">
<source src="testlist.m3u8" type="application/x-mpegURL">
</video>
testlist.m3u8 is in the same folder as the HTML file and the .mp4 files.
Here is the command that I used to create the .m3u8 file and the sliced .mp4 files:
ffmpeg -i "video.mp4" -vcodec libx264 -acodec copy -flags -global_header -map 0:0 -map 0:1 -f segment -segment_time 4 -segment_list_size 0 -segment_list testlist.m3u8 -segment_format mpegts stream%d.mp4
-
ffmpeg : How to define AVFormatContext input to take the output of another AVFormatContext ?
21 August 2016, by Victor.dMdB
I have 2 functions running in 2 threads. One is written with OpenCV to parse images then write them to an AVFormatContext:
VideoCapture vidSrc = cv::VideoCapture(vidURL);
int vidWidth = vidSrc.get(CV_CAP_PROP_FRAME_WIDTH);
int vidHeight = vidSrc.get(CV_CAP_PROP_FRAME_HEIGHT);
AVRational vidFPS = {(int) vidSrc.get(CV_CAP_PROP_FPS), 1};
AVFormatContext* av_fmt_ctx = avformat_alloc_context();
av_fmt_ctx->flags = AVFMT_NOFILE;
cmd_data_ptr->video_data_conf.av_fmt_ctx = av_fmt_ctx;
av_fmt_ctx->oformat = av_guess_format( NULL, props.vidURL.c_str(), NULL );
AVCodec* vcodec = avcodec_find_encoder(av_fmt_ctx->oformat->video_codec);
AVStream* vstrm = avformat_new_stream(av_fmt_ctx, vcodec);
vstrm->codec = avcodec_alloc_context3(vcodec);
vstrm->codec->width = vidWidth;
vstrm->codec->height = vidHeight;
vstrm->codec->pix_fmt = vcodec->pix_fmts[0];
vstrm->codec->time_base = vstrm->time_base = av_inv_q(vidFPS);
vstrm->r_frame_rate = vstrm->avg_frame_rate = vidFPS;
if (av_fmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
vstrm->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
avcodec_open2(vstrm->codec, vcodec, nullptr);
int got_pkt = 0;
AVFrame* frame = av_frame_alloc();
std::vector<uint8_t> framebuf(av_image_get_buffer_size(vstrm->codec->pix_fmt, vidWidth, vidHeight,32));
avpicture_fill(reinterpret_cast<AVPicture*>(frame), framebuf.data(), vstrm->codec->pix_fmt, vidWidth, vidHeight);
frame->width = vidWidth;
frame->height = vidHeight;
frame->format = static_cast<int>(vstrm->codec->pix_fmt);
while(true){
vidSrc >> in;
cv::Mat towrite;
const int stride[] = { static_cast<int>(towrite.step[0]) };
sws_scale(swsctx, &towrite.data, stride, 0, towrite.rows, frame->data, frame->linesize);
frame->pts = frame_pts++;
AVPacket pkt;
pkt.data = nullptr;
pkt.size = 0;
pkt.stream_index = vstrm->id;
av_init_packet(&pkt);
avcodec_encode_video2(vstrm->codec, &pkt, end_of_stream ? nullptr : frame, &got_pkt);
if (got_pkt) {
pkt.duration = 1;
av_packet_rescale_ts(&pkt, vstrm->codec->time_base, vstrm->time_base);
av_write_frame(av_fmt_ctx, &pkt);
}
av_packet_unref(&pkt);
}
av_frame_free(&frame);
avcodec_close(vstrm->codec);
And another thread reads those frames:
while(1){
AVPacket packet;
av_read_frame(av_fmt_ctx, &packet);
...
}
The first function doesn't seem to be working properly, as there is a segmentation error on av_write_frame, or even with av_dump_format. Is it correct that the first function is created as an output context? Or should I be setting it up as an input context for the other thread? I'm a bit confused. I'm still trying to wrap my head around ffmpeg. -
AVFormatContext Transcode : Input or Output ?
21 August 2016, by Victor.dMdB
(Same question body as the previous entry.)