
Media (1)
-
Richard Stallman and free software
19 October 2011, by
Updated: May 2013
Language: French
Type: Text
Other articles (61)
-
Updating from version 0.1 to 0.2
24 June 2013, by
An explanation of the notable changes made when moving from version 0.1 of MediaSPIP to version 0.3. What's new?
Software dependencies: the latest versions of FFMpeg (>= v1.2.1) are used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
-
Customizing by adding your logo, banner or background image
5 September 2013, by
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013, by
Present changes to your MédiaSPIP, or news about your projects, using the news section of your MédiaSPIP.
In the default MédiaSPIP theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form: for a document of type "news item", the default fields are: publication date (customize the publication date) (...)
On other sites (7307)
-
pyav / ffmpeg / libav receiving too many keyframes
26 May 2021, by user1315621
I am streaming from an RTSP source. It looks like half of the frames received are keyframes. Is there a way to reduce this percentage and have a higher number of P-frames and B-frames? If possible, I would like to increase the number of P-frames (not the number of B-frames).

I am using pyav, which is a Python wrapper for libav (ffmpeg).

Code:


container = av.open(
    url, 'r',
    options={
        'rtsp_transport': 'tcp',   # force TCP transport to avoid UDP packet loss
        'stimeout': '5000000',     # socket timeout, in microseconds
        'max_delay': '5000000',
    }
)
stream = container.streams.video[0]
codec_context = stream.codec_context
codec_context.export_mvs = True
codec_context.gop_size = 25  # no effect on a decoder: the GOP size is fixed by the sender

for packet in container.demux(video=0):  # was self.container; container is a local variable here
    for video_frame in packet.decode():
        print(video_frame.is_key_frame)



Output:


True
False
True
False
...



Note 1 : I can't edit the source. I can just edit the code used to stream the video.


Note 2 : same solution should apply to
pyav
,libavi
andffmpeg
.

Edit : it seems that B-frames are disabled :
codec_context.has_b_frames
isFalse
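Since the source cannot be edited, the receiver can only observe the keyframe interval decided by the sending encoder; changing it would require re-encoding. The ratio can at least be measured rather than eyeballed from the printed flags. A minimal pure-Python helper (hypothetical name keyframe_ratio) over the is_key_frame values collected in the demux loop:

```python
def keyframe_ratio(flags):
    """Return the fraction of decoded frames flagged as keyframes.

    `flags` is a list of booleans, one per decoded frame, as produced by
    collecting `video_frame.is_key_frame` in the demux loop above.
    """
    if not flags:
        return 0.0
    return sum(flags) / len(flags)

# With the alternating True/False pattern shown in the output above,
# half of the received frames are keyframes:
print(keyframe_ratio([True, False, True, False]))  # 0.5
```

Collecting a few hundred flags this way gives a reliable estimate of the sender's GOP size before deciding whether re-encoding is worth it.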


-
How do you set the framerate of an AVStream in FFmpeg muxing?
16 April 2021, by TheNeuronalCoder
Almost everything works fine; this code at least produces a playable video. The issue is with all the time-related metadata of the file: the FPS and bitrate are far higher than what I specified, and the duration is only milliseconds. After some experimenting with the framerate data, I found that this all ultimately stems from incorrectly specifying the framerate. What did I do wrong? How do I fix it? How do you properly set the framerate of a video stream in FFmpeg?


ffprobe version N-101948-g870bfe1 Copyright (c) 2007-2021 the FFmpeg developers
 built with Apple LLVM version 10.0.1 (clang-1001.0.46.4)
 configuration: --disable-asm --enable-shared --enable-libx264 --enable-gpl
 libavutil 56. 72.100 / 56. 72.100
 libavcodec 58.136.101 / 58.136.101
 libavformat 58. 78.100 / 58. 78.100
 libavdevice 58. 14.100 / 58. 14.100
 libavfilter 7.111.100 / 7.111.100
 libswscale 5. 10.100 / 5. 10.100
 libswresample 3. 10.100 / 3. 10.100
 libpostproc 55. 10.100 / 55. 10.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'animation.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.78.100
 Duration: 00:00:00.01, start: 0.000000, bitrate: 990171 kb/s
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080, 1082928 kb/s, 12409.66 fps, 12288 tbr, 12288 tbn, 48 tbc (default)
 Metadata:
 handler_name : VideoHandler
 vendor_id : [0][0][0][0]



class Camera {
 private:
 AVFrame* frame_;
 AVStream* stream_;
 AVPacket* packet_;
 AVCodecContext* context_;
 AVFormatContext* format_;

 public:
 const int fps_;
 bool recording_;
 std::string output_;

 Camera(std::string path, int fps) : fps_(fps), output_(path) {
 recording_ = false;
 }

 void Record() {
 if (recording_ == true)
 throw std::runtime_error(
 "you must close your camera before starting another recording"
 );
 recording_ = true;

 avformat_alloc_output_context2(&format_, NULL, "mp4", output_.c_str());
 if (!format_)
 throw std::runtime_error("failed to find output format");

 stream_ = avformat_new_stream(format_, NULL);
 if (!stream_)
 throw std::runtime_error("failed to allocate video stream");

 stream_->id = format_->nb_streams-1;
 stream_->codecpar->codec_id = AV_CODEC_ID_H264;
 stream_->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
 stream_->codecpar->width = 1920;
 stream_->codecpar->height = 1080;
 stream_->codecpar->bit_rate = 0.15 * fps_ * stream_->codecpar->width *
 stream_->codecpar->height;
 stream_->codecpar->format = AV_PIX_FMT_YUV420P;
 stream_->time_base = (AVRational){ 1, fps_ };

 AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
 if (!codec)
 throw std::runtime_error("failed to find codec");

 context_ = avcodec_alloc_context3(codec);
 if (!context_)
 throw std::runtime_error("failed to allocate video codec context");

 context_->width = stream_->codecpar->width;
 context_->height = stream_->codecpar->height;
 context_->bit_rate = stream_->codecpar->bit_rate;
 context_->time_base = stream_->time_base;
 context_->framerate = (AVRational){ fps_, 1 };
 context_->gop_size = 10;
 context_->max_b_frames = 1;
 context_->pix_fmt = AV_PIX_FMT_YUV420P;

 if (avcodec_open2(context_, codec, NULL) < 0)
 throw std::runtime_error("failed to open codec");

 if (format_->oformat->flags & AVFMT_GLOBALHEADER)
 context_->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

 if (!(format_->flags & AVFMT_NOFILE))
 if (avio_open(&format_->pb, output_.c_str(), AVIO_FLAG_WRITE) < 0)
 throw std::runtime_error("failed to open video file");

 if (avformat_write_header(format_, NULL) < 0)
 throw std::runtime_error("failed to write headers");

 frame_ = av_frame_alloc();
 if (!frame_)
 throw std::runtime_error("failed to allocate video frame");

 frame_->width = context_->width;
 frame_->height = context_->height;
 frame_->format = AV_PIX_FMT_YUV420P;

 if (av_frame_get_buffer(frame_, 32) < 0)
 throw std::runtime_error("failed to allocate the video frame data");
 }

 void Encode(Frame frame) {
 if (av_frame_make_writable(frame_) < 0)
 throw std::runtime_error("failed to write to frame");

 for (int y = 0; y < frame.height_; ++y) {
 for (int x = 0; x < frame.width_; ++x) {
 int r = frame.pixels_[3 * frame.width_ * y + 3 * x];
 int g = frame.pixels_[3 * frame.width_ * y + 3 * x + 1];
 int b = frame.pixels_[3 * frame.width_ * y + 3 * x + 2];
 frame_->data[0][y * frame_->linesize[0] + x] = (uint8_t)((66*r+129*g+25*b+128) >> 8) + 16;
 }
 }

 for (int y = 0; y < frame.height_/2; ++y) {
 for (int x = 0; x < frame.width_/2; ++x) {
 int r = frame.pixels_[3 * frame.width_ * y + 3 * x];
 int g = frame.pixels_[3 * frame.width_ * y + 3 * x + 1];
 int b = frame.pixels_[3 * frame.width_ * y + 3 * x + 2];
 frame_->data[1][y * frame_->linesize[1] + x] = (uint8_t)((-38*r-74*g+112*b+128) >> 8) + 128;
 frame_->data[2][y * frame_->linesize[2] + x] = (uint8_t)((112*r-94*g-18*b+128) >> 8) + 128;
 }
 }

 if (avcodec_send_frame(context_, frame_) < 0)
 throw std::runtime_error("failed to send a frame for encoding");

 int ret = 0;
 while (ret >= 0) {
 AVPacket packet = { 0 };
 ret = avcodec_receive_packet(context_, &packet);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 break;
 else if (ret < 0)
 throw std::runtime_error("failed to encode frame");

 av_packet_rescale_ts(&packet, context_->time_base, stream_->time_base);
 packet.pos = -1;
 packet.stream_index = stream_->index;

 ret = av_interleaved_write_frame(format_, &packet);
 av_packet_unref(&packet);
 }
 }

 void Close() {
 if (recording_ == false)
 throw std::runtime_error(
 "you cannot close the camera without starting a recording"
 );
 recording_ = false;

 av_write_trailer(format_);

 avcodec_free_context(&context_);
 av_frame_free(&frame_);

 if (!(format_->oformat->flags & AVFMT_NOFILE))
 avio_closep(&format_->pb);
 avformat_free_context(format_);
 }
};
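For reference, two likely culprits in the code above: frame_->pts is never set before avcodec_send_frame, so every frame is encoded with an unset timestamp (hence the millisecond duration and absurd fps), and the mp4 muxer rewrites stream_->time_base inside avformat_write_header (to 1/12288 here, visible as tbn in the ffprobe output), so av_packet_rescale_ts must be given the stream's post-header time base rather than the 1/fps value assigned earlier. The rescaling itself is plain rational arithmetic; a pure-Python sketch (hypothetical helper, nearest-integer rounding) of what av_rescale_q computes:

```python
from fractions import Fraction

def rescale_q(ts, src_tb, dst_tb):
    """Convert a timestamp between time bases, like FFmpeg's av_rescale_q.

    src_tb and dst_tb are (num, den) pairs; the result is rounded to the
    nearest integer tick of the destination time base.
    """
    return round(ts * Fraction(*src_tb) / Fraction(*dst_tb))

# A frame with pts=30 in a 1/30 time base sits at the 1-second mark; in the
# muxer's 1/12288 time base that same instant is tick 12288.
print(rescale_q(30, (1, 30), (1, 12288)))  # 12288
```

With monotonically increasing per-frame pts values (frame_->pts = frame_index++ in the encoder's 1/fps time base) feeding this conversion, the muxed file's duration and fps come out as intended.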



-
what filters affect ffmpeg encoding speed
16 January 2021, by ohroblot
What are the options in this command that would cause my encoding speed to be 0.999x instead of 1.0x or higher?


ffmpeg -y \
-loop 1 -framerate 30 -re \
-i ./1280x720.jpg \
-stream_loop -1 -re \
-i ./audio.mp3 \
-vcodec libx264 -pix_fmt yuv420p \
-b:v 2500k -maxrate 2500k -bufsize 10000k \
-preset slow -tune stillimage \
-b:a 128k -ar 44100 -ac 2 -acodec aac \
-af "dynaudnorm=f=150:g=15" \
-g 60 \
-f flv tmp.flv



I am trying to figure out why this only encodes at 0.999x speed. Is there anything I could do to speed it up? Two-pass encoding? I cannot understand why the encoding speed is so slow.


Also, please note I've tried every preset from slow to ultrafast; the encoding speed stays relatively unchanged.
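For context, the -re flags in the command above are the usual explanation: -re tells ffmpeg to read its input at the native frame rate, deliberately throttling the whole pipeline to real time (as needed for live streaming), so roughly 1.0x, reported as 0.999x, is the expected ceiling no matter which preset is used. If real-time pacing is not actually needed, dropping -re lets the encoder run as fast as -preset slow allows; a sketch of the same command without throttling (the -t 60 duration cap is a hypothetical addition, needed because the looped inputs are otherwise endless):

```
# Same command without -re: input reading is no longer paced to real time,
# so the encoder runs as fast as the preset allows.
ffmpeg -y \
  -loop 1 -framerate 30 \
  -i ./1280x720.jpg \
  -stream_loop -1 \
  -i ./audio.mp3 \
  -t 60 \
  -vcodec libx264 -pix_fmt yuv420p \
  -b:v 2500k -maxrate 2500k -bufsize 10000k \
  -preset slow -tune stillimage \
  -b:a 128k -ar 44100 -ac 2 -acodec aac \
  -af "dynaudnorm=f=150:g=15" \
  -g 60 \
  -f flv tmp.flv
```

If the stream must be pushed live to an RTMP endpoint, -re has to stay, and the ~1.0x speed is by design rather than a performance problem.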