
Media (2)
-
Core Media Video
4 April 2013, by
Updated: June 2013
Language: French
Type: Video
-
Video of a bee in portrait
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (111)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)
-
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
APPENDIX: Plugins used specifically for the farm
5 March 2010, by
The central/master site of the farm needs to use several plugins, in addition to those of the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualisation instance when users sign up; the verifier plugin, which provides a field-verification API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
On other sites (11019)
-
send h264 video to nginx-rtmp server using ffmpeg API
11 December 2019, by Glen
I have C++ code that grabs frames from a GigE camera and writes them out to a file. I'm using the libx264 codec and ffmpeg version 4.0.
Writing to the file works fine; however, I would also like to send the video to nginx configured with the nginx-rtmp plugin to make the video available live via HLS.
I can use the ffmpeg command line program to stream one of my previously captured files to my nginx server and rebroadcast it as HLS; however, if I try to stream from my C++ code, the nginx server closes the connection after one or two frames are sent.
To test further, I used the ffmpeg command line program to receive an RTMP stream and write it out to a file. I am able to send video to ffmpeg from my C++ program over RTMP; however, every frame generates a warning like this:
[avi @ 0x1b6b6f0] Non-monotonous DTS in output stream 0:0; previous: 1771, current: 53; changing to 1772. This may result in incorrect timestamps in the output file.
[avi @ 0x1b6b6f0] Non-monotonous DTS in output stream 0:0; previous: 1772, current: 53; changing to 1773. This may result in incorrect timestamps in the output file.
[avi @ 0x1b6b6f0] Non-monotonous DTS in output stream 0:0; previous: 1773, current: 53; changing to 1774. This may result in incorrect timestamps in the output file.
[avi @ 0x1b6b6f0] Non-monotonous DTS in output stream 0:0; previous: 1774, current: 53; changing to 1775. This may result in incorrect timestamps in the output file.
[avi @ 0x1b6b6f0] Non-monotonous DTS in output stream 0:0; previous: 1775, current: 53; changing to 1776. This may result in incorrect timestamps in the output file.
[avi @ 0x1b6b6f0] Non-monotonous DTS in output stream 0:0; previous: 1776, current: 53; changing to 1777. This may result in incorrect timestamps in the output file.
[avi @ 0x1b6b6f0] Non-monotonous DTS in output stream 0:0; previous: 1777, current: 53; changing to 1778. This may result in incorrect timestamps in the output file.
[avi @ 0x1b6b6f0] Non-monotonous DTS in output stream 0:0; previous: 1778, current: 53; changing to 1779. This may result in incorrect timestamps in the output file.
[avi @ 0x1b6b6f0] Non-monotonous DTS in output stream 0:0; previous: 1779, current: 53; changing to 1780. This may result in incorrect timestamps in the output file.
I printed PTS and DTS for my packet before writing it, and the numbers were monotonically increasing (for example, for this last frame the PTS and DTS printed from my code were 1780, not the 'current: 53' that ffmpeg reports).
Also, unless I tell ffmpeg what the output framerate should be, I end up with a file that plays at 2x speed.
After ffmpeg receives the RTMP stream and writes it to the file, I am then able to successfully send that file to my nginx server using ffmpeg.
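(A hypothetical example of such a receiving-side invocation, using ffmpeg's RTMP listen option, with a placeholder URL and output name:
ffmpeg -listen 1 -i rtmp://0.0.0.0:1935/live/test -r 30 received.avi)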
Here is some relevant code:
//configuring the codec context
// make sure that config.codec is something we support
// for now we are only supporting LIBX264
if (config.codec() != codecs::LIBX264) {
throw std::invalid_argument("currently only libx264 codec is supported");
}
// lookup specified codec
ffcodec_ = avcodec_find_encoder_by_name(config.codec().c_str());
if (!ffcodec_) {
throw std::invalid_argument("unable to get codec " + config.codec());
}
// unique_ptr to manage the codec_context
codec_context_ = av_pointer::codec_context(avcodec_alloc_context3(ffcodec_));
if (!codec_context_) {
throw std::runtime_error("unable to initialize AVCodecContext");
}
// setup codec_context_
codec_context_->width = frame_width;
codec_context_->height = frame_height;
codec_context_->time_base = (AVRational){1, config.target_fps()};
codec_context_->framerate = (AVRational){config.target_fps(), 1};
codec_context_->global_quality = 0;
codec_context_->compression_level = 0;
codec_context_->bits_per_raw_sample = 8;
codec_context_->gop_size = 1;
codec_context_->max_b_frames = 1;
codec_context_->pix_fmt = AV_PIX_FMT_YUV420P;
// x264 only settings
if (config.codec() == codecs::LIBX264) {
av_opt_set(codec_context_->priv_data, "preset", config.compression_target().c_str(), 0);
av_opt_set(codec_context_->priv_data, "crf", std::to_string(config.crf()).c_str(), 0);
}
// Open up the codec
if (avcodec_open2(codec_context_.get(), ffcodec_, NULL) < 0) {
throw std::runtime_error("unable to open ffmpeg codec");
}
// setup the output format context and stream for RTMP
AVFormatContext *tmp_f_context;
avformat_alloc_output_context2(&tmp_f_context, NULL, "flv", uri.c_str());
rtmp_format_context_ = av_pointer::format_context(tmp_f_context);
rtmp_stream_ = avformat_new_stream(rtmp_format_context_.get(), ffcodec_);
avcodec_parameters_from_context(rtmp_stream_->codecpar, codec_context_.get());
rtmp_stream_->time_base = codec_context_->time_base;
rtmp_stream_->r_frame_rate = codec_context_->framerate;
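// Note (my assumption, not verified): the FLV muxer works on a 1/1000 time base, and
// avformat_write_header() may overwrite rtmp_stream_->time_base with that value, so the
// time base set here is only a hint; packets would need to be rescaled to whatever
// rtmp_stream_->time_base ends up being after the header has been written.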
/* open the output file */
if (!(rtmp_format_context_->flags & AVFMT_NOFILE)) {
int r = avio_open(&rtmp_format_context_->pb, uri.c_str(), AVIO_FLAG_WRITE);
if (r < 0) {
throw std::runtime_error("unable to open " + uri + " : " + av_err2str(r));
}
}
if (avformat_write_header(rtmp_format_context_.get(), NULL) < 0) {
throw std::runtime_error("unable to write header");
}
av_dump_format(rtmp_format_context_.get(), 0, uri.c_str(), 1);
At this point the av_dump_format produces this output:
Output #0, flv, to 'rtmp://[MY URI]':
Metadata:
encoder : Lavf58.12.100
Stream #0:0, 0, 1/1000: Video: h264 (libx264), 1 reference frame ([7][0][0][0] / 0x0007), yuv420p, 800x800 (0x0), 0/1, q=-1--1, 30 tbr, 1k tbn
Encoding and writing the frame:
// send the frame to the encoder, filtering first if necessary
void VideoWriter::Encode(AVFrame *frame)
{
int rval;
if (!apply_filter_) {
//send frame to encoder
rval = avcodec_send_frame(codec_context_.get(), frame);
if (rval < 0) {
throw std::runtime_error("error sending frame for encoding");
}
} else {
// push frame to filter
// REMOVED, currently testing without filtering
}
// get packets from encoder
while (rval >= 0) {
// create smart pointer to allocated packet
av_pointer::packet pkt(av_packet_alloc());
if (!pkt) {
throw std::runtime_error("unable to allocate packet");
}
rval = avcodec_receive_packet(codec_context_.get(), pkt.get());
if (rval == AVERROR(EAGAIN) || rval == AVERROR_EOF) {
return;
} else if (rval < 0) {
throw std::runtime_error("error during encoding");
}
// if I print pkt->pts and pkt->dts here, I see sequential numbers
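// Hedged guess (not part of my original code): these PTS/DTS values are in the codec
// time base (1/30), while after avformat_write_header() the FLV/RTMP stream uses a
// 1/1000 time base, so the usual pattern is to rescale before writing, e.g.:
// av_packet_rescale_ts(pkt.get(), codec_context_->time_base, rtmp_stream_->time_base);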
// write packet
rval = av_interleaved_write_frame(rtmp_format_context_.get(), pkt.get());
if (rval < 0 ) {
std::cerr << av_err2str(rval) << std::endl;
}
}
}
Since I am able to send video from a previously recorded file to nginx with the ffmpeg command line program, I believe the problem is in my code and not my nginx configuration.
EDIT: I think it may have to do with SPS/PPS, as I see a bunch of these error messages in the nginx log before it closes the stream:
2019/12/11 11:11:31 [error] 10180#0: *4 hls: failed to read 5 byte(s), client: XXX, server: 0.0.0.0:1935
2019/12/11 11:11:31 [error] 10180#0: *4 hls: error appenging SPS/PPS NALs, client: XXX, server: 0.0.0.0:1935
As I mentioned, this code works fine if I set it up to write to an AVI file rather than stream over RTMP, and I can stream to ffmpeg listening for RTMP (albeit with lots of warnings about the DTS), but if I try to send to nginx, it closes the connection almost immediately. My first thought was that there was something wrong with the frame timestamps, but when I print PTS and DTS prior to writing the packet to the stream they look okay to me.
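If it really is an SPS/PPS problem, one guess of mine (not confirmed) is that the FLV muxer expects the SPS/PPS to arrive once as global extradata rather than in-band, which would mean setting AV_CODEC_FLAG_GLOBAL_HEADER on the codec context before calling avcodec_open2. The usual pattern from the FFmpeg muxing example looks roughly like the sketch below; in my code the format context is only created after the codec is opened, so the check would have to be moved or the flag set unconditionally:
// sketch (untested): request global SPS/PPS extradata when the container wants it
if (rtmp_format_context_->oformat->flags & AVFMT_GLOBALHEADER) {
    codec_context_->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}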
My end goal is to capture video to a file and also be able to turn on the RTMP stream on demand, but for now I'm just trying to get the RTMP stream working continuously (without writing to a file).
Thanks for any insights.
-
FFmpeg Get the previous 30 seconds of m3u8 stream
17 November 2019, by Garret Harp
I am currently trying to use the m3u8 file of a live stream to get the previous 30 seconds of what happened and save it to an mp4 file, basically a clip of the last 30 seconds of the live stream.
I am currently running the following FFmpeg command:
ffmpeg -live_start_index 0 -i source.m3u8 -c copy -ss 30 -t 30 output.mp4
Rather than getting the previous 30 seconds of what happened on the stream, it captures 30 seconds of the stream starting from when the command is run.
Does anyone know how I could change this to go back 30 seconds in the stream and save what previously happened?
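From the HLS demuxer documentation, live_start_index can also take negative values, counted back from the end of the live playlist, so one idea (assuming segments of roughly 5 seconds, which I have not verified for this stream) would be something like:
ffmpeg -live_start_index -6 -i source.m3u8 -c copy -t 30 output.mp4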
-
broadcast HLS/TS Stream from FFMPEG
2 August 2014, by Dylan Lundy
I'm still wrapping my head around most of this, so sorry if my issue is an obvious one!
I'm trying to broadcast an FFmpeg 'stream' as an HLS stream I can load into a webpage.
My understanding is that I need a media server to broadcast it from FFmpeg as an HLS stream, with FFmpeg feeding the data to the server. I'm probably wrong though.
I previously semi-achieved what I want to do with stream-m; however, it was too slow (less than 1 FPS) and wasn't broadcasting in HLS.
Previously, the steps I went through were to launch the stream-m server, then launch FFmpeg with the following command:
ffmpeg.exe -f dshow -i video="screen-capture-recorder" -r 1 -g 2 -vcodec libvpx -vb 1024 -f matroska http://localhost:8080/publish/first?password=secret
I could connect to the stream with Chrome using the following HTML:
<video src="http://localhost/consume" autoplay="autoplay">
</video>
The issue is that this was slow (unusable, in fact), and it's not HLS/H.264.
I need the feed in that specific format, as I intend to send it to a Wii U browser, which only supports "M3U8+TS(HTTPLiveStreaming)":
http://www.nintendo.com/wiiu/built-in-software/browser-specs/
Cheers
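For what it's worth, ffmpeg can also write HLS (H.264 video in MPEG-TS segments plus an M3U8 playlist) directly with its hls muxer, without a separate media server, so a sketch along these lines might be a starting point (the encoder settings and output path below are untested placeholders):
ffmpeg.exe -f dshow -i video="screen-capture-recorder" -c:v libx264 -preset veryfast -g 50 -f hls -hls_time 4 -hls_list_size 5 C:\webserver\stream\playlist.m3u8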