
Media (1)
-
Ogg detection bug
22 March 2013
Updated: April 2013
Language: French
Type: Video
Other articles (87)
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP. You can of course add your own using the form at the bottom of the page. -
Improving the base version
13 September 2013
A nicer multiple select
The Chosen plugin improves the usability of multiple-select form fields. See the two images below for a comparison.
To do so, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen), enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...) -
User profiles
12 April 2011
Each user has a profile page for editing their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
Users can edit their profile from their author page; a "Modify your profile" link in the navigation is (...)
Sur d’autres sites (9894)
-
Problems in my sample C code using FFmpeg API
11 July 2019, by Tina J
I've been trying to change an FFmpeg example code (HERE) to call other filters using its C API. Say the filter is
freezedetect=n=-60dB:d=8
which normally runs like this:
ffmpeg -i small.mp4 -vf "freezedetect=n=-60dB:d=8" -map 0:v:0 -f null -
and prints output like this:
[freezedetect @ 0x25b91c0] lavfi.freezedetect.freeze_start: 5.005
[freezedetect @ 0x25b91c0] lavfi.freezedetect.freeze_duration: 2.03537
[freezedetect @ 0x25b91c0] lavfi.freezedetect.freeze_end: 7.04037
However, the original example displays frames, not this metadata. How can I change the code to print the metadata (and not the frames)?
I've been trying to turn the display_frame function below into a display_metadata function. The frame variable has a metadata dictionary, which looks promising, but my attempts to use it have failed. I'm also new to the C language.
Original display_frame function:
static void display_frame(const AVFrame *frame, AVRational time_base)
{
int x, y;
uint8_t *p0, *p;
int64_t delay;
if (frame->pts != AV_NOPTS_VALUE) {
if (last_pts != AV_NOPTS_VALUE) {
/* sleep roughly the right amount of time;
* usleep is in microseconds, just like AV_TIME_BASE. */
delay = av_rescale_q(frame->pts - last_pts,
time_base, AV_TIME_BASE_Q);
if (delay > 0 && delay < 1000000)
usleep(delay);
}
last_pts = frame->pts;
}
/* Trivial ASCII grayscale display. */
p0 = frame->data[0];
puts("\033c");
for (y = 0; y < frame->height; y++) {
p = p0;
for (x = 0; x < frame->width; x++)
putchar(" .-+#"[*(p++) / 52]);
putchar('\n');
p0 += frame->linesize[0];
}
fflush(stdout);
}
My new display_metadata function that needs to be completed:
static void display_metadata(const AVFrame *frame)
{
 // printf("%d\n", frame->height);
 /* AVDictionary is opaque, so dic->count cannot be dereferenced;
  * av_dict_count() returns the number of entries instead. */
 AVDictionary *dic = frame->metadata;
 printf("%d\n", av_dict_count(dic));
 // fflush(stdout);
}
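For what it's worth, a minimal sketch of how the completed function could walk that dictionary, assuming the filter graph already contains freezedetect: since AVDictionary is opaque, entries are read with av_dict_get(), and an empty key combined with the AV_DICT_IGNORE_SUFFIX flag matches every entry.

static void display_metadata(const AVFrame *frame)
{
 const AVDictionaryEntry *entry = NULL;

 /* An empty key plus AV_DICT_IGNORE_SUFFIX matches all keys,
  * so this loop visits every metadata entry on the frame. */
 while ((entry = av_dict_get(frame->metadata, "", entry, AV_DICT_IGNORE_SUFFIX)))
  printf("%s: %s\n", entry->key, entry->value);

 fflush(stdout);
}

When freezedetect fires, this prints the lavfi.freezedetect.freeze_start / freeze_duration / freeze_end keys shown in the log above.
-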
FFmpeg wrong output duration after av_seek_frame
18 September 2022, by Gogogo
I am trying to transcode and also cut a video, but the output file reports the wrong duration (the video stream duration is correct). It happens when I seek: for example, if I cut from 60000 to 63000 ms I get:
Format : WebM
Format version : Version 2
File size : 17.6 KiB
Duration : 1 min 4 s
Overall bit rate : 2 232 b/s
Writing application : Lavf59.31.100
Writing library : Lavf59.31.100


Video
ID : 1
Format : VP9
Codec ID : V_VP9
Duration : 2 s 961 ms
Width : 100 pixels
Height : 100 pixels
Display aspect ratio : 1.000
Frame rate mode : Constant
Frame rate : 24.000 FPS
Default : No
Forced : No


Here is my code. What am I doing wrong?


namespace {
 
 constexpr auto maxDurationMs = 3000;
 constexpr auto maxFileSizeByte = 100000;
 
 struct StreamingParams {
 std::string output_extension;
 std::string muxer_opt_key;
 std::string muxer_opt_value;
 std::string video_codec;
 std::string codec_priv_key;
 std::string codec_priv_value;
 };
 
 struct StreamingContext {
 AVFormatContext* avfc = nullptr;
 AVCodec* video_avc = nullptr;
 AVStream* video_avs = nullptr;
 AVCodecContext* video_avcc = nullptr;
 int video_index = 0;
 std::string filename;
 ~StreamingContext() {}
 };
 
 struct StreamingContextDeleter {
 void operator()(StreamingContext* context) {
 if (context) {
 auto* avfc = &context->avfc;
 auto* avcc = &context->video_avcc;
 if (avfc)
 avformat_close_input(avfc);
 if (avcc)
 avcodec_free_context(avcc);
 if (context->avfc)
 avformat_free_context(context->avfc);
 }
 }
 };
 
 struct AVFrameDeleter {
 void operator()(AVFrame* frame) {
 if (frame)
 av_frame_free(&frame);
 }
 };
 
 struct AVPacketDeleter {
 void operator()(AVPacket* packet) {
 if (packet)
 av_packet_free(&packet);
 }
 };
 
 struct SwsContextDeleter {
 void operator()(SwsContext* context) {
 if (context)
 sws_freeContext(context);
 }
 };
 
 struct AVDictionaryDeleter {
 void operator()(AVDictionary* dictionary) {
 if (dictionary)
 av_dict_free(&dictionary);
 }
 };
 
 int fill_stream_info(AVStream* avs, AVCodec** avc, AVCodecContext** avcc) {
 *avc = const_cast<AVCodec*>(avcodec_find_decoder(avs->codecpar->codec_id));
 if (!*avc) return -1;

 *avcc = avcodec_alloc_context3(*avc);
 if (!*avcc) return -1;
 if (avcodec_parameters_to_context(*avcc, avs->codecpar) < 0) return -1;
 if (avcodec_open2(*avcc, *avc, nullptr) < 0) return -1;

 return 0;
 }
 
 int open_media(const char* in_filename, AVFormatContext** avfc) {
 *avfc = avformat_alloc_context();
 if (!*avfc) return -1;
 if (avformat_open_input(avfc, in_filename, nullptr, nullptr) != 0) return -1;
 if (avformat_find_stream_info(*avfc, nullptr) < 0) return -1;
 
 return 0;
 }
 
 int prepare_decoder(std::shared_ptr<StreamingContext> sc) {
 for (int i = 0; i < sc->avfc->nb_streams; i++) {
 if (sc->avfc->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
 sc->video_avs = sc->avfc->streams[i];
 sc->video_index = i;
 
 if (fill_stream_info(sc->video_avs, &sc->video_avc, &sc->video_avcc)) return -1;
 }
 }
 
 return 0;
 }
 
 int prepare_video_encoder(std::shared_ptr<StreamingContext> sc,
 AVCodecContext* decoder_ctx,
 AVRational input_framerate,
 const StreamingParams& sp) {
 sc->video_avs = avformat_new_stream(sc->avfc, nullptr);
 
 sc->video_avc = const_cast<AVCodec*>(
 avcodec_find_encoder_by_name(sp.video_codec.c_str()));
 if (!sc->video_avc) return -1;
 
 sc->video_avcc = avcodec_alloc_context3(sc->video_avc);
 if (!sc->video_avcc) return -1;
 
 av_opt_set(sc->video_avcc->priv_data, "preset", "fast", 0);

 sc->video_avcc->height = 100;
 sc->video_avcc->width = 100;
 sc->video_avcc->sample_aspect_ratio = decoder_ctx->sample_aspect_ratio;
 if (sc->video_avc->pix_fmts)
 sc->video_avcc->pix_fmt = sc->video_avc->pix_fmts[0];
 else
 sc->video_avcc->pix_fmt = decoder_ctx->pix_fmt;
 
 constexpr int64_t maxBitrate = maxFileSizeByte / (maxDurationMs / 1000.0) - 1;
 
 sc->video_avcc->bit_rate = maxBitrate;
 sc->video_avcc->rc_buffer_size = decoder_ctx->rc_buffer_size;
 sc->video_avcc->rc_max_rate = maxBitrate;
 sc->video_avcc->rc_min_rate = maxBitrate;
 sc->video_avcc->time_base = av_inv_q(input_framerate);
 sc->video_avs->time_base = sc->video_avcc->time_base;
 
 if (avcodec_open2(sc->video_avcc, sc->video_avc, nullptr) < 0) return -1;
 avcodec_parameters_from_context(sc->video_avs->codecpar, sc->video_avcc);
 
 return 0;
 }
 
 int encode_video(std::shared_ptr<StreamingContext> decoder,
 std::shared_ptr<StreamingContext> encoder,
 AVFrame* input_frame) {
 if (input_frame)
 input_frame->pict_type = AV_PICTURE_TYPE_NONE;
 
 AVPacket* output_packet = av_packet_alloc();
 if (!output_packet) return -1;
 
 int response = avcodec_send_frame(encoder->video_avcc, input_frame);
 
 while (response >= 0) {
 response = avcodec_receive_packet(encoder->video_avcc, output_packet);
 if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
 break;
 } else if (response < 0) return -1;
 
 output_packet->stream_index = decoder->video_index;
 output_packet->duration = encoder->video_avs->time_base.den /
 encoder->video_avs->time_base.num /
 decoder->video_avs->avg_frame_rate.num *
 decoder->video_avs->avg_frame_rate.den;
 
 av_packet_rescale_ts(output_packet, decoder->video_avs->time_base,
 encoder->video_avs->time_base);
 
 response = av_interleaved_write_frame(encoder->avfc, output_packet);
 if (response != 0) return -1;
 }
 av_packet_unref(output_packet);
 av_packet_free(&output_packet);
 return 0;
 }
 
 int transcode_video(std::shared_ptr<StreamingContext> decoder,
 std::shared_ptr<StreamingContext> encoder,
 AVPacket* input_packet,
 AVFrame* input_frame) {
 int response = avcodec_send_packet(decoder->video_avcc, input_packet);
 if (response < 0) return response;

 
 while (response >= 0) {
 response = avcodec_receive_frame(decoder->video_avcc, input_frame);
 if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
 break;
 } else if (response < 0) return response;
 
 if (response >= 0) {
 if (encode_video(decoder, encoder, input_frame)) return -1;
 }
 av_frame_unref(input_frame);
 }
 
 return 0;
 }
 
 } // namespace
 
 
 int VideoToGifConverter::convert(VideoProp input, QString output) {
 StreamingParams sp;
 sp.output_extension = ".webm";
 sp.video_codec = "libvpx-vp9";
 
 auto inputStd = input.path.toStdString();
 auto outputStd =
 (output + '/' + QUuid::createUuid().toString(QUuid::StringFormat::Id128))
 .toStdString() +
 sp.output_extension;
 
 auto decoder = std::shared_ptr<StreamingContext>(new StreamingContext,
 StreamingContextDeleter{});
 auto encoder = std::shared_ptr<StreamingContext>(new StreamingContext,
 StreamingContextDeleter{});
 
 encoder->filename = std::move(outputStd);
 decoder->filename = std::move(inputStd);
 
 if (open_media(decoder->filename.c_str(), &decoder->avfc))
 return -1;
 if (prepare_decoder(decoder))
 return -1;
 
 avformat_alloc_output_context2(&encoder->avfc, nullptr, nullptr,
 encoder->filename.c_str());
 if (!encoder->avfc) return -1;
 
 AVRational input_framerate =
 av_guess_frame_rate(decoder->avfc, decoder->video_avs, nullptr);
 prepare_video_encoder(encoder, decoder->video_avcc, input_framerate, sp);
 
 if (encoder->avfc->oformat->flags & AVFMT_GLOBALHEADER)
 encoder->avfc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

 if (!(encoder->avfc->oformat->flags & AVFMT_NOFILE)) {
 if (avio_open(&encoder->avfc->pb, encoder->filename.c_str(),
 AVIO_FLAG_WRITE) < 0) return -1;
 }
 
 AVDictionary* muxer_opts = nullptr;
 
 if (!sp.muxer_opt_key.empty() && !sp.muxer_opt_value.empty()) {
 av_dict_set(&muxer_opts, sp.muxer_opt_key.c_str(),
 sp.muxer_opt_value.c_str(), 0);
 }
 
 if (avformat_write_header(encoder->avfc, &muxer_opts) < 0) return -1;
 
 auto inputFrame = std::unique_ptr<AVFrame, AVFrameDeleter>(av_frame_alloc());
 if (!inputFrame) return -1;
 
 auto inputPacket =
 std::unique_ptr<AVPacket, AVPacketDeleter>(av_packet_alloc());
 if (!inputPacket) return -1;
 
 auto** streams = decoder->avfc->streams;
 
 const auto fps = static_cast<double>(
 streams[inputPacket->stream_index]->avg_frame_rate.num) /
 streams[inputPacket->stream_index]->avg_frame_rate.den;
 const size_t beginFrame = input.beginPosMs * fps / 1000;
 const size_t endFrame = input.endPosMs * fps / 1000;
 const auto totalFrames = endFrame - beginFrame;
 
 size_t count = 0;
 
 int64_t startTime =
 av_rescale_q(input.beginPosMs * AV_TIME_BASE / 1000, {1, AV_TIME_BASE},
 decoder->video_avs->time_base);
 
 av_seek_frame(decoder->avfc, inputPacket->stream_index, startTime, 0);
 avcodec_flush_buffers(decoder->video_avcc);
 
 while (count < totalFrames &&
 av_read_frame(decoder->avfc, inputPacket.get()) >= 0) {
 if (streams[inputPacket->stream_index]->codecpar->codec_type ==
 AVMEDIA_TYPE_VIDEO) {
 if (transcode_video(decoder, encoder, inputPacket.get(), inputFrame.get())) {
 return -1;
 }
 ++count;
 }
 av_packet_unref(inputPacket.get());
 }
 
 if (encode_video(decoder, encoder, nullptr)) return -1;
 
 av_write_trailer(encoder->avfc);
 
 return 0;
 }
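One likely culprit, for what it's worth: the container duration in the header is derived from the highest packet timestamp the muxer sees, and after av_seek_frame() the packets keep their original timestamps (starting near 60 s), so the WebM header reports about 64 s even though only ~3 s of video is written. A minimal sketch of one possible fix, assuming startTime (the seek target in the decoder stream's time base, as computed in convert()) is made visible to encode_video(), for example as an extra parameter:

 // In encode_video(), before av_packet_rescale_ts():
 // rebase timestamps so the first kept packet starts near zero.
 if (output_packet->pts != AV_NOPTS_VALUE)
  output_packet->pts -= startTime;
 if (output_packet->dts != AV_NOPTS_VALUE)
  output_packet->dts -= startTime;
 av_packet_rescale_ts(output_packet, decoder->video_avs->time_base,
  encoder->video_avs->time_base);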


-
Qt - H.264 video streaming using FFmpeg libraries
9 July 2021, by franz
I am trying to get my IP camera's stream into my Qt Widgets application. First, I connect to the camera's UDP port. The camera streams H.264-encoded video. After the socket is bound, on each readyRead() signal I fill a buffer with the received datagrams in order to assemble a full frame.

Variable initialization:


AVCodec *codec;
AVCodecContext *codecCtx;
AVFrame *frame;
AVPacket packet;
this->buffer.clear();
this->socket = new QUdpSocket(this);

QObject::connect(this->socket, &QUdpSocket::connected, this, &H264VideoStreamer::connected);
QObject::connect(this->socket, &QUdpSocket::disconnected, this, &H264VideoStreamer::disconnected);
QObject::connect(this->socket, &QUdpSocket::readyRead, this, &H264VideoStreamer::readyRead);
QObject::connect(this->socket, &QUdpSocket::hostFound, this, &H264VideoStreamer::hostFound);
QObject::connect(this->socket, SIGNAL(error(QAbstractSocket::SocketError)), this, SLOT(error(QAbstractSocket::SocketError)));
QObject::connect(this->socket, &QUdpSocket::stateChanged, this, &H264VideoStreamer::stateChanged);

avcodec_register_all();

codec = avcodec_find_decoder(AV_CODEC_ID_H264);
if (!codec){
 qDebug() << "Codec not found";
 return;
}

codecCtx = avcodec_alloc_context3(codec);
if (!codecCtx){
 qDebug() << "Could not allocate video codec context";
 return;
}

if (codec->capabilities & CODEC_CAP_TRUNCATED)
 codecCtx->flags |= CODEC_FLAG_TRUNCATED;

codecCtx->flags2 |= CODEC_FLAG2_CHUNKS;

AVDictionary *dictionary = nullptr;

if (avcodec_open2(codecCtx, codec, &dictionary) < 0) {
 qDebug() << "Could not open codec";
 return;
}
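As a side note, this uses the pre-4.0 API. A minimal sketch of the equivalent setup against current FFmpeg, where avcodec_register_all() is no longer needed and flag names carry an AV_ prefix (e.g. AV_CODEC_FLAG2_CHUNKS); error handling trimmed for brevity:

 const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
 AVCodecContext *codecCtx = avcodec_alloc_context3(codec);
 codecCtx->flags2 |= AV_CODEC_FLAG2_CHUNKS; // frames may span packet boundaries
 if (avcodec_open2(codecCtx, codec, nullptr) < 0)
  qDebug() << "Could not open codec";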



The algorithm is as follows:


void H264VideoImageProvider::readyRead() {
 QByteArray datagram;
 datagram.resize(this->socket->pendingDatagramSize());
 QHostAddress sender;
 quint16 senderPort;

 this->socket->readDatagram(datagram.data(), datagram.size(), &sender, &senderPort);

 QByteArray rtpHeader = datagram.left(12);
 datagram.remove(0, 12);

 int nal_unit_type = datagram[0] & 0x1F;
 bool start = (datagram[1] & 0x80) != 0;

 int seqNo = rtpHeader[3] & 0xFF;

 qDebug() << "H264 video decoder::readyRead()"
 << "from: " << sender.toString() << ":" << QString::number(senderPort)
 << "\n\tDatagram size: " << QString::number(datagram.size())
 << "\n\tH264 RTP header (hex): " << rtpHeader.toHex()
 << "\n\tH264 VIDEO data (hex): " << datagram.toHex();

 qDebug() << "nal_unit_type = " << nal_unit_type << " - " << getNalUnitTypeStr(nal_unit_type);
 if (start)
 qDebug() << "START";

 if (nal_unit_type == 7){
 this->sps = datagram;
 qDebug() << "Sequence parameter found = " << this->sps.toHex();
 return;
 } else if (nal_unit_type == 8){
 this->pps = datagram;
 qDebug() << "Picture parameter found = " << this->pps.toHex();
 return;
 }

 //VIDEO_FRAME
 if (start){
 if (!this->buffer.isEmpty())
 decodeBuf();

 this->buffer.clear();
 qDebug() << "Initializing new buffer...";

 this->buffer.append(char(0x00));
 this->buffer.append(char(0x00));
 this->buffer.append(char(0x00));
 this->buffer.append(char(0x01));

 this->buffer.append(this->sps);

 this->buffer.append(char(0x00));
 this->buffer.append(char(0x00));
 this->buffer.append(char(0x00));
 this->buffer.append(char(0x01));

 this->buffer.append(this->pps);

 this->buffer.append(char(0x00));
 this->buffer.append(char(0x00));
 this->buffer.append(char(0x00));
 this->buffer.append(char(0x01));
 }

 qDebug() << "Appending buffer data...";
 this->buffer.append(datagram);
}



- the first 12 bytes of the datagram are the RTP header
- everything else is VIDEO DATA
- the last 5 bits of the first VIDEO DATA byte give the NAL unit type; I always get one of the following 4 values (1 - coded non-IDR slice, 5 - coded IDR slice, 7 - SPS, 8 - PPS)
- the most significant bit of the 2nd VIDEO DATA byte (masked with 0x80 in the code) says whether this datagram is the START of a frame
- all VIDEO DATA is stored in the buffer, starting with START
- once a new frame arrives (START is set), the previous buffer is decoded and a new buffer is started
- the frame for decoding is assembled like this:
 00 00 00 01
 SPS
 00 00 00 01
 PPS
 00 00 00 01
 concatenated VIDEO DATA
- decoding is done with the avcodec_decode_video2() function from the FFmpeg library:

void H264VideoStreamer::decode() {
 av_init_packet(&packet);
 av_new_packet(&packet, this->buffer.size());
 // QByteArray has no data_ptr(); constData() returns the raw bytes
 memcpy(packet.data, this->buffer.constData(), this->buffer.size());
 packet.size = this->buffer.size();
 frame = av_frame_alloc();
 if (!frame){
  qDebug() << "Could not allocate video frame";
  return;
 }
 int got_frame = 1;
 int len = avcodec_decode_video2(codecCtx, frame, &got_frame, &packet);
 if (len < 0){
  qDebug() << "Error while decoding frame.";
  return;
 }
 //if (got_frame > 0){ // got_frame is always 0
 // qDebug() << "Data decoded: " << frame->data[0];
 //}
 char *frameData = (char *) frame->data[0];
 QByteArray decodedFrame;
 decodedFrame.setRawData(frameData, len);
 qDebug() << "Data decoded: " << decodedFrame;
 av_frame_unref(frame);
 av_free_packet(&packet);
 emit imageReceived(decodedFrame);
}
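As an aside, avcodec_decode_video2() is deprecated and has been removed from current FFmpeg. A minimal sketch of the same step using the send/receive API, which also makes the "did I get a full frame?" question explicit (handleDecodedFrame() is a hypothetical helper, not part of the original code):

 // Sketch: decode the assembled Annex-B buffer with the current API.
 AVPacket *pkt = av_packet_alloc();
 av_new_packet(pkt, this->buffer.size());
 memcpy(pkt->data, this->buffer.constData(), this->buffer.size());

 int ret = avcodec_send_packet(codecCtx, pkt);
 av_packet_free(&pkt);
 if (ret < 0)
  return;

 AVFrame *f = av_frame_alloc();
 // 0 means a complete frame is ready; AVERROR(EAGAIN) means the decoder
 // needs more input first - the equivalent of got_frame == 0.
 while (avcodec_receive_frame(codecCtx, f) == 0) {
  // f->data[] / f->linesize[] now hold the decoded YUV planes.
  handleDecodedFrame(f); // e.g. convert to QImage, see below
 }
 av_frame_free(&f);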

My idea is that the UI thread, which receives the imageReceived signal, converts decodedFrame directly into a QImage and refreshes the display whenever a new frame is decoded and sent to the UI.

Is this a good approach for decoding an H.264 stream? I am facing the following problems:


- avcodec_decode_video2() returns a value equal to the encoded buffer size. Is it possible that encoded and decoded data are always the same size?
- got_frame is always 0, so I never actually receive a full frame in the result. What can be the reason? Is the video frame created incorrectly, or incorrectly converted from QByteArray to AVFrame?
- How can I convert the decoded AVFrame back to a QByteArray, and can it simply be converted to a QImage? (See the sketch after this list.)
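On the last point, a common route (though not the only one) is to convert the decoded frame to RGB with libswscale and copy it into a QImage. A minimal sketch, assuming a decoded frame whose pixel format sws_scale() understands (the helper name avframeToQImage is illustrative):

 QImage avframeToQImage(const AVFrame *frame)
 {
  // One-off conversion context; a real implementation would cache it.
  SwsContext *sws = sws_getContext(frame->width, frame->height,
   static_cast<AVPixelFormat>(frame->format),
   frame->width, frame->height, AV_PIX_FMT_RGB24,
   SWS_BILINEAR, nullptr, nullptr, nullptr);
  QImage image(frame->width, frame->height, QImage::Format_RGB888);
  uint8_t *dst[4] = { image.bits(), nullptr, nullptr, nullptr };
  int dstStride[4] = { static_cast<int>(image.bytesPerLine()), 0, 0, 0 };

  sws_scale(sws, frame->data, frame->linesize, 0, frame->height,
   dst, dstStride);
  sws_freeContext(sws);
  return image; // QImage owns a deep copy, safe after the frame is freed
 }

The resulting QImage can then be emitted to the UI thread instead of the raw QByteArray.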