
Other articles (44)
-
Writing a news item
21 June 2013, by
Present the changes in your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customise the form used to create a news item.
News item creation form: in the case of a document of the news type, the fields offered by default are: Publication date (customise the publication date) (...)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
-
Submitting improvements and additional plugins
10 April 2011
If you have developed a new extension adding one or more features useful to MediaSPIP, let us know and its integration into the official distribution will be considered.
You can use the development mailing list to let us know, or to ask for help with building the plugin. Since MediaSPIP is based on SPIP, it is also possible to use SPIP's SPIP-zone mailing list to (...)
On other sites (5499)
-
FFMPEG Compile Custom
14 December 2016, by Madskillz RCE
I need to compile FFmpeg version 2.1.8 from source with the following options:
./configure --arch=x86 --target-os=mingw32 --cross-prefix=i686-w64-mingw32- --cc=i686-w64-mingw32-gcc --disable-postproc --enable-shared --disable-static --disable-decoder=libvpx --disable-encoder=aac --enable-avisynth --enable-gpl --enable-version3 --enable-pthreads --enable-avfilter --enable-runtime-cpudetect --enable-nonfree --pkg-config=pkg-config --enable-libquvi --enable-libfaac --enable-libnut --enable-libgsm --enable-libfreetype --enable-libvorbis --enable-libspeex --enable-libmp3lame --enable-zlib --enable-libtheora --enable-bzlib --enable-libvpx --enable-libxvid --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libschroedinger --enable-librtmp --enable-libass --enable-libx264 --enable-libbluray --enable-openssl --enable-libflite --enable-libsox --disable-ffplay --enable-libcdio --enable-libcelt --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libxavs --disable-outdev=sdl --disable-muxers --enable-muxer=encx --extra-cflags=-I/usr/i686-mingw32/include --extra-ldflags=-L/usr/i686-mingw32/lib --extra-libs='-lx264 -lpthread -lwinmm -llua -liconv -lcurl -lws2_32 -lssl -lcrypto -lwldap32 -lgdi32 -lwsock32'
I need to know which system would be best to compile this on.
Also, some of these libraries might be obsolete, so where can I find them?
Please share how you would go about compiling it.
The configuration was extracted from a custom build of avcodec-55.dll; I need to demux a video using the newly compiled FFmpeg.
Regards
-
Custom IO writes only header, rest of frames seem omitted
18 September 2023, by Daniel
I'm using libavformat to read packets from RTSP and remux them to MP4 (fragmented).


The video frames are intact, meaning I don't want to transcode/modify/change anything.
Video frames shall be remuxed into MP4 in their original form (i.e. the NALUs shall remain the same).


I have updated libavformat to the latest version (currently 4.4).


Here is my snippet:


//open input, probesize is set to 32, we don't need to decode anything
avformat_open_input

//open output with custom io
avformat_alloc_output_context2(&ofctx,...);
ofctx->pb = avio_alloc_context(buffer, bufsize, 1/*write flag*/, 0, 0, &writeOutput, 0);
ofctx->flags |= AVFMT_FLAG_NOBUFFER | AVFMT_FLAG_FLUSH_PACKETS | AVFMT_FLAG_CUSTOM_IO;

avformat_write_header(...);

//loop
av_read_frame()
LOGPACKET_DETAILS //<- this works, packets are coming
av_write_frame() //<- this doesn't work, my write callback is not called. Also tried av_interleaved_write_frame, doesn't seem to work either.

int writeOutput(void *opaque, uint8_t *buffer, int buffer_size) {
 printf("writeOutput: writing %d bytes: ", buffer_size);
 return buffer_size; //the write callback must return the number of bytes handled
}



avformat_write_header works; it prints the header correctly.

I'm looking for the reason why my custom IO is not called after a frame has been read.


There must be some more flags that should be set to tell avformat not to care about decoding and just write out whatever comes in.
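For what it's worth, avformat does not decode anything during remuxing; the usual way to get packets written out untouched is to copy the stream parameters into the output context before avformat_write_header. A minimal sketch of that setup (the elided parts of the snippet above may already do this; ifctx stands for the opened input context, ofctx for the custom-IO output context):

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

//sketch: copy stream parameters so packets can be remuxed with no decoding at all
static int copy_streams(AVFormatContext *ifctx, AVFormatContext *ofctx) {
 for (unsigned i = 0; i < ifctx->nb_streams; i++) {
  AVStream *out = avformat_new_stream(ofctx, NULL);
  if (!out)
   return AVERROR(ENOMEM);
  int ret = avcodec_parameters_copy(out->codecpar, ifctx->streams[i]->codecpar);
  if (ret < 0)
   return ret;
  out->codecpar->codec_tag = 0; //let the mp4 muxer pick its own tag
 }
 return 0;
}

//per packet, after av_read_frame(ifctx, pkt):
// av_packet_rescale_ts(pkt, ifctx->streams[pkt->stream_index]->time_base,
//                      ofctx->streams[pkt->stream_index]->time_base);
// av_interleaved_write_frame(ofctx, pkt);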


More information:
The input stream is VBR-encoded H.264. It seems av_write_frame calls my write function only in the case of an SPS, PPS or IDR frame. Non-IDR frames are not passed at all.

Update


I found out that if I request an IDR frame every second (I can ask the encoder for this), writeOutput is called every second.

I created a test: after a client joins, I request the encoder to create IDRs at 1 Hz, ten times. Libav calls writeOutput at 1 Hz for 10 seconds, but then the encoder sets itself back to creating an IDR only every 10 seconds, and from then on libav calls writeOutput only every 10 s, which makes my decoder fail. With 1 Hz IDRs the decoder is fine.
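One possible explanation worth checking (this is an assumption, not something established above): if the fragmented MP4 output was set up with the usual frag_keyframe movflag, the mov/mp4 muxer buffers everything internally and only hands a finished fragment to the custom AVIO write callback, and fragments are only cut at keyframes, which would match writeOutput firing exactly at the IDR rate. Asking the muxer to fragment more often should then make the callback fire per frame; a minimal sketch (opts is a placeholder name, ofctx is the output context from the snippet above):

//sketch: make the mp4 muxer cut fragments more often so the custom IO callback
//is invoked per frame instead of once per GOP
AVDictionary *opts = NULL;
av_dict_set(&opts, "movflags", "empty_moov+default_base_moof+frag_every_frame", 0);
int ret = avformat_write_header(ofctx, &opts);
av_dict_free(&opts);

//alternatively, with the frag_custom movflag set, a fragment can be cut manually
//at any point by flushing the muxer with a NULL packet:
//av_write_frame(ofctx, NULL);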

-
How to handle metadata in the Chromium FFmpegDecodingLoop with a custom AVFilter?
14 September 2020, by michal-b
I'm trying to add a deinterlacing filter to the decoding loop of FFmpeg in Chromium by using a custom AVFilter.


What I have now:

I made all the changes required for FFmpeg to work with the yadif filter.

I created an AVFilter graph:

void FFmpegDecodingLoop::InitFilterGraph(AVFrame *frame) {
 if (media_log_) MEDIA_LOG(DEBUG, media_log_) << "entering InitFilterGraph : " << filter_initialised;
 if (filter_initialised) return;

 int result;

 const AVFilter *buffer_src = avfilter_get_by_name("buffer");
 const AVFilter *buffer_sink = avfilter_get_by_name("buffersink");
 AVFilterInOut *inputs = avfilter_inout_alloc();
 AVFilterInOut *outputs = avfilter_inout_alloc();

 AVCodecContext *ctx = context_;
 char args[512];

 int frame_fix = 0; // fix bad width on some streams
 if (frame->width < 704) frame_fix = 2;
 else if (frame->width > 704) frame_fix = -16;

 snprintf(args, sizeof(args),
 "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
 frame->width + frame_fix,
 frame->height,
 frame->format,// ctx->pix_fmt,
 ctx->time_base.num,
 ctx->time_base.den,
 ctx->sample_aspect_ratio.num,
 ctx->sample_aspect_ratio.den);

 const char *description = "yadif";

 filter_graph = avfilter_graph_alloc();
 if (media_log_) MEDIA_LOG(DEBUG, media_log_) << "Filter graph - args : " << args;
 result = avfilter_graph_create_filter(&buffersrc_ctx_, buffer_src, "in", args, NULL, filter_graph);
 if (result < 0) {
 if (media_log_) MEDIA_LOG(ERROR, media_log_) << "Filter graph - Unable to create buffer source : " << result;
 return;
 }

 AVBufferSinkParams *params = av_buffersink_params_alloc();
 enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_YUV420P9, AV_PIX_FMT_NONE };

 params->pixel_fmts = pix_fmts;
 result = avfilter_graph_create_filter(&buffersink_ctx_, buffer_sink, "out", NULL, params, filter_graph);
 av_free(params);
 if (result < 0) {
 if (media_log_) MEDIA_LOG(ERROR, media_log_) << "Filter graph - Unable to create buffer sink";
 return;
 }

 inputs->name = av_strdup("out");
 inputs->filter_ctx = buffersink_ctx_;
 inputs->pad_idx = 0;
 inputs->next = NULL;

 outputs->name = av_strdup("in");
 outputs->filter_ctx = buffersrc_ctx_;
 outputs->pad_idx = 0;
 outputs->next = NULL;


 result = avfilter_graph_parse_ptr(filter_graph, description, &inputs, &outputs, NULL);
 if (result < 0 && media_log_) MEDIA_LOG(ERROR, media_log_) << "Filter graph - avfilter_graph_parse_ptr ERROR : " << result;

 result = avfilter_graph_config(filter_graph, NULL);
 if (result < 0 && media_log_) MEDIA_LOG(ERROR, media_log_) << "Filter graph - avfilter_graph_config error";

 filter_initialised = true;
}



And when I receive the frame, depending on whether it's interlaced or not, I send it to the AVFilter graph or straight to frame_ready_cb.

...
 bool frame_processing_success = false;
 if (!frame_.get()->interlaced_frame) { // not interlaced
 if (media_log_) MEDIA_LOG(DEBUG, media_log_) << "Detected not interlaced video frame";
 frame_processing_success = frame_ready_cb.Run(frame_.get());
 } else {
 if (media_log_) MEDIA_LOG(DEBUG, media_log_) << "Detected interlaced video frame" << frame_.get()->metadata;

 if (!filter_initialised) {
 this->InitFilterGraph(frame_.get());
 if (media_log_) MEDIA_LOG(DEBUG, media_log_) << "Media filter ok";
 } 

 if (av_buffersrc_add_frame_flags(buffersrc_ctx_, frame_.get(), AV_BUFFERSRC_FLAG_KEEP_REF) < 0) {
 if (!continue_on_decoding_errors_)
 return DecodeStatus::kDecodeFrameFailed;
 decoder_error = true;
 continue;
 }

 while (true) {
 const int ret = av_buffersink_get_frame(buffersink_ctx_, filter_frame_.get());
 if (media_log_) MEDIA_LOG(DEBUG, media_log_) << "ret = " << ret;
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF){
 if (media_log_) MEDIA_LOG(DEBUG, media_log_) << "ret error but waiting for more frames" << ret;
 frame_processing_success = true;
 if (media_log_) MEDIA_LOG(DEBUG, media_log_) << "ret error but waiting for more frames ; filter description = " << buffersink_ctx_->filter->description;
 break;
 }
 
 if (ret < 0) {
 if (!continue_on_decoding_errors_)
 return DecodeStatus::kDecodeFrameFailed;
 decoder_error = true;
 frame_processing_success = true;
 break;
 }
 frame_processing_success = frame_ready_cb.Run(filter_frame_.get());
 if (media_log_) MEDIA_LOG(DEBUG, media_log_) << "producing deinterlaced frame : " << frame_processing_success;
 av_frame_unref(filter_frame_.get());
 }
 }
 av_frame_unref(frame_.get());
 if (!frame_processing_success)
 return DecodeStatus::kFrameProcessingFailed;
}



This seems to work; I can send frames to the AVFilter and get them back, but it seems I'm having an issue with the metadata.


The issue:

When I get a frame from the AVFilter (av_buffersink_get_frame(buffersink_ctx_, filter_frame_.get())) and send it to the frame-ready callback, I get this kind of error:

[36723:38755:0914/105148.322600:FATAL:values.cc(516)] Check failed: is_dict(). 
0 libbase.dylib 0x00000001052febff base::debug::CollectStackTrace(void**, unsigned long) + 31
1 libbase.dylib 0x0000000104f9717b base::debug::StackTrace::StackTrace(unsigned long) + 75
2 libbase.dylib 0x0000000104f971fd base::debug::StackTrace::StackTrace(unsigned long) + 29
3 libbase.dylib 0x0000000104f971d8 base::debug::StackTrace::StackTrace() + 40
4 libbase.dylib 0x0000000104fe4897 logging::LogMessage::~LogMessage() + 183
5 libbase.dylib 0x0000000104fe3515 logging::LogMessage::~LogMessage() + 21
6 libbase.dylib 0x00000001052df4dc base::Value::SetKey(std::__1::basic_string, std::__1::allocator<char> >&&, base::Value&&) + 188
7 libmedia.dylib 0x000000010eda0dea media::VideoFrameMetadata::SetBoolean(media::VideoFrameMetadata::Key, bool) + 90
8 libmedia.dylib 0x000000010efb6943 media::FFmpegVideoDecoder::OnNewFrame(AVFrame*) + 403
9 libmedia.dylib 0x000000010efb716b bool base::internal::FunctorTraits<bool void="void">::Invoke<bool>(bool (media::FFmpegVideoDecoder::*)(AVFrame*), media::FFmpegVideoDecoder*&&, AVFrame*&&) + 155
10 libmedia.dylib 0x000000010efb7046 bool base::internal::InvokeHelper::MakeItSo<bool>(bool (media::FFmpegVideoDecoder::* const&)(AVFrame*), media::FFmpegVideoDecoder*&&, AVFrame*&&) + 102
11 libmedia.dylib 0x000000010efb6fa5 bool base::internal::Invoker<base::internal::BindState<bool> >, bool (AVFrame*)>::RunImpl<bool> > const&, 0ul>(bool (media::FFmpegVideoDecoder::* const&)(AVFrame*), std::__1::tuple<base::internal::UnretainedWrapper > const&, std::__1::integer_sequence<unsigned 0ul="0ul">, AVFrame*&&) + 101
12 libmedia.dylib 0x000000010efb6f0a base::internal::Invoker<base::internal::BindState<bool> >, bool (AVFrame*)>::Run(base::internal::BindStateBase*, AVFrame*) + 106
13 libmedia.dylib 0x000000010f127101 base::RepeatingCallback<bool>::Run(AVFrame*) const & + 113
14 libmedia.dylib 0x000000010f126e70 media::FFmpegDecodingLoop::DecodePacket(AVPacket const*, base::RepeatingCallback<bool>) + 2528


So the error is in FFmpegVideoDecoder::OnNewFrame when it calls media::VideoFrameMetadata::SetBoolean.
I get the same kind of error when VideoFrameMetadata::SetTimeTicks gets called.

So obviously I have an issue with the metadata of the video frame I get back from the AVFilter, which contains just a yadif filter.

Is there something I missed with the metadata, or something else?
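One direction that might be worth checking (an assumption on my part, nothing above confirms it): the frame coming out of the buffersink may no longer carry everything FFmpegVideoDecoder::OnNewFrame expects, for example the opaque pointer that the decoder's get_buffer2 callback attaches to decoded frames. Explicitly copying the source frame's properties onto the filtered frame before handing it to frame_ready_cb is a cheap experiment:

//sketch: carry per-frame properties (metadata, timestamps, side data, opaque)
//from the decoded frame onto the filtered one; av_frame_copy_props() does not
//touch the pixel data
const int ret = av_buffersink_get_frame(buffersink_ctx_, filter_frame_.get());
if (ret >= 0) {
 //note: this also overwrites pts, which matters if yadif runs in field-rate (2x) mode
 av_frame_copy_props(filter_frame_.get(), frame_.get());
 frame_processing_success = frame_ready_cb.Run(filter_frame_.get());
 av_frame_unref(filter_frame_.get());
}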