
Other articles (73)
-
Update from version 0.1 to 0.2
24 June 2013
An explanation of the notable changes made in moving from MediaSPIP version 0.1 to version 0.2. What's new?
Software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favor of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
-
Writing a news item
21 June 2013
Present the changes in your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
In the default MediaSPIP theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form: for a document of the "news item" type, the default fields offered are: publication date (customize the publication date) (...)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact your MediaSPIP administrator to find out.
On other sites (9919)
-
libavcodec initialization to achieve real time playback with frame dropping when necessary
20 October 2019, by Blake Senftner
I have a C++ computer vision application linking with the ffmpeg libraries that provides frames from video streams to analysis routines. The idea is that one can provide a moderately generic video stream identifier, and that video source will be decompressed and passed frame after frame to an analysis routine (which runs the user's analysis functions). The "moderately generic video identifier" covers 3 generic video stream types: paths to video files on disk, IP video streams (cameras or video streaming services), and USB webcam pins with a desired format and rate.
My current video player is as generic as possible: video only, ignoring audio and other streams. It has a switch case for retrieving a stream's frame rate based upon the stream's source and codec, which is used to estimate the delay between decompressing frames. I've had many issues trying to get reliable timestamps from the streams, so I am currently ignoring pts and dts. I know ignoring pts/dts is bad for variable frame rate streams; I plan to special-case them later. The player currently checks whether the last decompressed frame is more than 2 frames late (assuming a constant frame rate), and if so "drops the frame" - does not pass it to the user's analysis routine.
Essentially, the video player's logic determines when to skip frames (not pass them to the time-consuming analysis routine) so the analysis is fed video frames as close to real time as possible.
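In rough terms, the player-level check looks like this (a simplified sketch, not my real code; now_seconds() stands in for whatever monotonic clock the player uses):

// Player-level drop test, assuming a constant frame rate.
static int should_drop_frame(double decode_start, long frames_decoded, double fps)
{
    double frame_duration = 1.0 / fps;
    double expected = frames_decoded * frame_duration;   // ideal presentation time
    double elapsed = now_seconds() - decode_start;       // actual wall-clock progress
    // More than two frame periods behind: skip the analysis callback
    // for this frame rather than fall further behind real time.
    return (elapsed - expected) > 2.0 * frame_duration;
}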
I am looking for examples or discussions of how one can initialize and/or maintain the AVFormatContext, AVStream, and AVCodecContext, using (presumably but not limited to) AVDictionary options, such that whatever frame dropping is necessary to maintain real time is performed at the libav libraries level, and not at my video player level. If achieving this requires separate AVDictionaries (or more) for each stream type and codec, then so be it. I am interested in understanding the pros and cons of both approaches: dropping frames at the player level or at the libav level.
(When some analysis requires every frame, the existing player implementation with frame dropping disabled is fine. I suspect that if I can get frame dropping to occur at the libav level, I'll save the packet-to-frame decompression time as well, reducing the processing load more than my current version does.)
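For concreteness, the sketch below shows the decoder-level knobs I've found so far: the documented "skip_frame" and "skip_loop_filter" AVOptions (equivalently, the skip_frame / skip_loop_filter fields of AVCodecContext) plus AVStream->discard. The function wrapper and variable names are just illustration, and whether this combination is the intended real-time mechanism is exactly my question:

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/dict.h>

static int open_decoder_with_skipping(AVFormatContext* fmt_ctx,
                                      int video_stream_index,
                                      const AVCodec* codec,
                                      AVCodecContext* codec_ctx)
{
    AVDictionary* opts = NULL;
    av_dict_set(&opts, "skip_frame", "noref", 0);     // decoder may drop non-reference frames
    av_dict_set(&opts, "skip_loop_filter", "all", 0); // cheaper (lower quality) decoding
    int ret = avcodec_open2(codec_ctx, codec, &opts);
    av_dict_free(&opts);

    // Equivalent per-field form:
    //   codec_ctx->skip_frame = AVDISCARD_NONREF;
    //   codec_ctx->skip_loop_filter = AVDISCARD_ALL;

    // Tell the demuxer to discard packets from streams I never read.
    for (unsigned i = 0; i < fmt_ctx->nb_streams; i++)
        if ((int)i != video_stream_index)
            fmt_ctx->streams[i]->discard = AVDISCARD_ALL;

    return ret;
}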
-
H.264 to DNxHR 444 issue. Colors are not transcoded correctly (HDR project)
21 January 2020, by Raulo1985
I'm having an issue transcoding an H.264 UHD HDR file to a DNxHR file in an mxf container with FFmpeg. The issue is that the two files don't look the same at all: the colors look washed out on the DNxHR video, even though I tried to make the transcoding as lossless as possible (DNxHR 444 flavor). The original file is a movie I ripped a while ago: H.264, UHD, HDR, in a mkv container.
My goal: to create an almost lossless DNxHR file to use as the source file in Adobe Premiere Pro, and another DNxHR file with less quality as a proxy for editing. I wanted to do it that way, rather than use the original H.264 as the source file, because it's out of sync with the proxy file (I mean, when I toggle the proxy icon on and off, you can tell that there's a short delay between them, which defeats the purpose of editing). My guess is that it may be because H.264 is compressed and DNxHR isn't, and since I edit making a lot of fast cuts, I need the source file and the proxy file to be as synced as possible. When the source file and the proxy file are both DNxHR, no matter the flavor, they are perfectly synced. I don't want to go with ProRes for the proxies, because the sync problem is a lot worse (several seconds of delay between files), maybe because it's a VBR codec and my original file and DNxHR are CBR (for the record, I always prefer CBR).
Well, the thing is that when I import the original H.264 file into Premiere Pro, use a DNxHR proxy, edit a little, and export directly from the original file (H.264 10 bits, with all the settings required for HDR output enabled), the colors look as they should. When I do the same with the high-quality DNxHR as the source file, with the exact same export settings, the colors look washed out. The same happens with any DNxHR flavor.
Then I opened both files (the original H.264 and the high-quality DNxHR transcoded from it) with VLC, and I can also tell that the mxf file looks washed out while the H.264 file doesn't. So it's not an export issue on Premiere's side; it's something to do with the original transcoding.
I understand that DNxHR 444 is as lossless as you can get with that codec, preserving all the required HDR data, and I believe that the mxf container has some advantages over MOV, which is the other container that supports DNxHD/DNxHR. So I don't know what's happening, really.
The command I used was:
ffmpeg -channel_layout 63 -i input.mkv -map 0:0 -c:v dnxhd -vf "scale=in_range=limited:out_range=full" -color_range 2 -profile:v dnxhr_444 -pix_fmt yuv444p10le -acodec pcm_s24le -ar 48000 -ac 6 -channel_layout 63 -map 0:2 -hide_banner output.mxf
Like I said, after the transcoding the two video files look a lot different from each other, color-wise. And after using them in Premiere and exporting with the exact same settings, the output files show the same difference.
MediaInfo shows the expected data for both files:
10 bits, Main 10, Level 5, 4:2:0, CBR, BT.2020 for the original H.264 file.
10 bits, 4:4:4, CBR for the DNxHR 444 file.
One thing I noticed in MediaInfo is that both files list YUV as the color space, but the DNxHR 444 video has an extra field that says ColorSpace_Original: RGB. Honestly, I don't know what that means, since the original is YUV. The color range is fine, from 0 to 1023 (and chroma range 1023). The other thing is that it says "limited" in the color range field of the H.264 file, but I've read that that could be a bug or a misinterpretation of the file by MediaInfo.
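For what it's worth, one variant I'm considering (untested; the explicit color tags are my assumption about what the decoder and players should see, not a confirmed fix) drops the range-expanding scale filter entirely and instead tags the HDR color metadata on the output stream explicitly:

ffmpeg -i input.mkv -map 0:0 -map 0:2 -c:v dnxhd -profile:v dnxhr_444 -pix_fmt yuv444p10le -color_range tv -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc -acodec pcm_s24le -ar 48000 -ac 6 output.mxf

Whether the dnxhd encoder and the mxf muxer actually carry those tags through to Premiere and VLC is part of what I'd like to confirm.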
Well, that's it; any help would be appreciated. I'd really like to edit with DNxHR 444 as the source file and DNxHR LB for the proxies, so I can edit at a fast pace and without sync issues, but the color is just not acceptable. I do understand that I'm adding an extra transcoding step (from the original to DNxHR), but the sync issue between the original and the DNxHR proxies, even if the delay is a fraction of a second, makes my workflow a lot harder, since I'd have to export many times to check that the cuts land exactly where I want them. Not ideal by any means. And ProRes is apparently not an option; the sync issue is a lot worse. For me, it all comes down to getting a DNxHR 444 file that looks, well, as close to lossless as it can be, and that goal obviously involves the colors.
Thanks in advance.
PS: file size is not an issue for me, so having an entire UHD HDR movie transcoded to DNxHR 444 is not a problem.
PS2: I tried with a different chroma subsampling (DNxHR HQX 10 bits, which is 4:2:2); same result. I haven't tried 8 bits yet, but I don't see the point, since this is an HDR project.
EXTRA INFO:
1) FFprobe output of the MXF DNxHR file (this one is 4:2:2; the only difference from the command given above is -pix_fmt yuv422p10le in place of -pix_fmt yuv444p10le):
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
[mxf @ 000001f4d17fbac0] Stream #0: not enough frames to estimate rate; consider increasing probesize
Input #0, mxf, from 'Interstellar_Master_DNxHR_444_UHD_422_PCM24_5.1.mxf':
Metadata:
operational_pattern_ul: 060e2b34.04010101.0d010201.01010900
uid : adab4424-2f25-4dc7-92ff-29bd000c0000
generation_uid : adab4424-2f25-4dc7-92ff-29bd000c0001
company_name : FFmpeg
product_name : OP1a Muxer
product_version : 58.29.100
product_uid : adab4424-2f25-4dc7-92ff-29bd000c0002
material_package_umid: 0x060A2B340101010501010D001393EE79529471348D93EE7900529471348D9300
timecode : 00:00:00:00
Duration: 02:49:03.97, start: 0.000000, bitrate: 1404833 kb/s
Stream #0:0: Video: dnxhd (DNXHR 444), yuv444p10le(bt709/unknown/unknown, progressive), 3840x2160, SAR 1:1 DAR 16:9, 23.98 tbr, 23.98 tbn, 23.98 tbc
Metadata:
file_package_umid: 0x060A2B340101010501010D001393EE79529471348D93EE7900529471348D9301
Stream #0:1: Audio: pcm_s24le, 48000 Hz, 6 channels, s32 (24 bit), 6912 kb/s
Metadata:
file_package_umid: 0x060A2B340101010501010D001393EE79529471348D93EE7900529471348D9301
2) FFprobe output of the MP4 H.264 source file (this one is 4:2:0, 10 bits, HDR):
Stream #0:0(eng): Video: hevc (Main 10) (hev1 / 0x31766568), yuv420p10le(tv, bt2020nc/bt2020/smpte2084), 3840x2160 [SAR 1:1 DAR 16:9], 15584 kb/s, 23.98 fps, 23.98 tbr, 16k tbn, 23.98 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(eng): Audio: ac3 (ac-3 / 0x332D6361), 48000 Hz, 5.1(side), fltp, 640 kb/s (default)
Metadata:
handler_name : SoundHandler
Side data:
audio service type: main
Stream #0:2(eng): Data: bin_data (text / 0x74786574)
Metadata:
handler_name : SubtitleHandler
Unsupported codec with id 100359 for input stream 2
-
FFmpeg C demo generates "Could not update timestamps for skipped samples" warning
14 July 2024, by aabiji
When I run my demo code I get these warnings when testing it on a webm video:


[opus @ 0x5ec0fc1b4580] Could not update timestamps for skipped samples.
[opus @ 0x5ec0fc1b4580] Could not update timestamps for discarded samples.



But it's not limited to webm; I also get this warning when running with an mp4:


[aac @ 0x61326fb83700] Could not update timestamps for skipped samples.



I know I'm getting these warnings because ffmpeg must be skipping packets, but I have no idea why. Why are we skipping packets (and if that's not the problem, what is), and how can we fix the problem?


Here's my code for context:


#include <stdio.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

int main()
{
    int ret = 0;

    const AVCodec* codec;
    AVFormatContext* fmt_ctx = avformat_alloc_context();

    const char* file2 = "/home/aabiji/Videos/sync-test.webm";
    if ((ret = avformat_open_input(&fmt_ctx, file2, NULL, NULL)) < 0) {
        av_log(NULL, AV_LOG_ERROR, "Couldn't open input file\n");
        return -1;
    }

    // Pick the "best" audio stream and the decoder for it
    ret = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_AUDIO, -1, -1, &codec, 0);
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Couldn't find a media stream\n");
        return -1;
    }

    int stream_index = ret;
    AVStream* media = fmt_ctx->streams[stream_index];

    AVCodecContext* codec_context = avcodec_alloc_context3(codec);
    if (avcodec_parameters_to_context(codec_context, media->codecpar) < 0) {
        return -1;
    }

    if ((ret = avcodec_open2(codec_context, codec, NULL)) < 0) {
        av_log(NULL, AV_LOG_ERROR, "Couldn't open media decoder\n");
        return -1;
    }

    AVPacket* packet = av_packet_alloc();
    AVFrame* frame = av_frame_alloc();

    while ((ret = av_read_frame(fmt_ctx, packet)) >= 0) {
        if (packet->stream_index != stream_index) {
            av_packet_unref(packet); // unref packets from other streams too, or they leak
            continue;
        }

        ret = avcodec_send_packet(codec_context, packet);
        if (ret < 0) {
            break; // Error
        }

        // Drain every frame the decoder is ready to hand back
        while (ret >= 0) {
            ret = avcodec_receive_frame(codec_context, frame);
            if (ret == AVERROR_EOF || ret == AVERROR(EAGAIN)) {
                break;
            } else if (ret < 0) {
                fprintf(stderr, "Error during decoding\n");
                break;
            }
            av_frame_unref(frame);
        }

        av_packet_unref(packet);
    }

    avcodec_flush_buffers(codec_context);

    av_packet_unref(packet);
    av_frame_free(&frame);
    av_packet_free(&packet);
    avcodec_free_context(&codec_context);
    avformat_close_input(&fmt_ctx);
    return 0;
}
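One detail I'm unsure about (an observation, not necessarily the cause of the warnings): the loop above never drains the decoder once av_read_frame() hits EOF. A minimal sketch of the standard drain sequence, using only documented libavcodec calls, would slot in right before avcodec_flush_buffers():

    // Enter draining mode by sending a NULL packet, then pull out any
    // buffered frames until the decoder itself reports AVERROR_EOF.
    avcodec_send_packet(codec_context, NULL);
    while (1) {
        ret = avcodec_receive_frame(codec_context, frame);
        if (ret == AVERROR_EOF || ret == AVERROR(EAGAIN))
            break;
        if (ret < 0)
            break; // decoding error
        av_frame_unref(frame);
    }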