
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (71)
-
Sites built with MediaSPIP
2 May 2011. This page presents some of the sites running MediaSPIP.
You can of course add your own via the form at the bottom of the page. -
Changing the publication date
21 June 2013. How do you change the publication date of a media item?
You first need to add a "Publication date" field to the appropriate form mask:
Administer > Configure form masks > Select "A media item"
In the "Fields to add" section, tick "Publication date"
Click Save at the bottom of the page -
Customising by adding a logo, banner or background image
5 September 2013. Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
On other sites (10876)
-
Why is ffmpeg so fast?
8 July 2017, by jx.xu
I have written an ffmpeg-based C++ program that converts YUV to RGB using libswscale, similar to the official example.
I simply copied and modified the official example before building it in Visual Studio 2017 on Windows 10. However, its performance is much worse than that of the ffmpeg.exe executable: 36 seconds versus 12 seconds.
I already know that ffmpeg uses optimization techniques such as SIMD instructions, but in my profiling the bottleneck is disk I/O writing, which takes at least two thirds of the time.
I then developed a concurrent version in which a dedicated thread handles all the I/O, but the situation did not improve. It is worth noting that I use the Boost C++ libraries for multi-threading and asynchronous events. So I would like to know how I can modify my program using the ffmpeg libraries, or whether the performance gap with ffmpeg.exe simply cannot be closed.
As requested in the answers, I am posting my code. The compiler is MSVC in VS2017 and I turn on full optimization (/Ox).
To supplement my main question: I ran a plain disk I/O test that merely copies a file of the same size. Surprisingly, the plain sequential copy takes 28 seconds, while my code takes 36 seconds in total... Does anyone know how ffmpeg finishes the same job in only 12 seconds? It must use some optimization technique, such as random disk I/O or memory-buffer reuse?
#include "stdafx.h"
#define __STDC_CONSTANT_MACROS
extern "C" {
#include <libavutil/imgutils.h>
#include <libavutil/parseutils.h>
#include <libswscale/swscale.h>
}
#ifdef _WIN64
#pragma comment(lib, "avformat.lib")
#pragma comment(lib, "avcodec.lib")
#pragma comment(lib, "avutil.lib")
#pragma comment(lib, "swscale.lib")
#endif
#include <common/cite.hpp> // just includes headers of the C++ STL/Boost
int main(int argc, char **argv)
{
chrono::duration<double> period;
auto pIn = fopen("G:/Panorama/UHD/originalVideos/DrivingInCountry_3840x1920_30fps_8bit_420_erp.yuv", "rb");
auto time_mark = chrono::steady_clock::now();
int src_w = 3840, src_h = 1920, dst_w, dst_h;
enum AVPixelFormat src_pix_fmt = AV_PIX_FMT_YUV420P, dst_pix_fmt = AV_PIX_FMT_RGB24;
const char *dst_filename = "G:/Panorama/UHD/originalVideos/out.rgb";
const char *dst_size = "3840x1920";
FILE *dst_file;
int dst_bufsize;
struct SwsContext *sws_ctx;
int i, ret;
if (av_parse_video_size(&dst_w, &dst_h, dst_size) < 0) {
fprintf(stderr,
"Invalid size '%s', must be in the form WxH or a valid size abbreviation\n",
dst_size);
exit(1);
}
dst_file = fopen(dst_filename, "wb");
if (!dst_file) {
fprintf(stderr, "Could not open destination file %s\n", dst_filename);
exit(1);
}
/* create scaling context */
sws_ctx = sws_getContext(src_w, src_h, src_pix_fmt,
dst_w, dst_h, dst_pix_fmt,
SWS_BILINEAR, NULL, NULL, NULL);
if (!sws_ctx) {
fprintf(stderr,
"Impossible to create scale context for the conversion "
"fmt:%s s:%dx%d -> fmt:%s s:%dx%d\n",
av_get_pix_fmt_name(src_pix_fmt), src_w, src_h,
av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h);
ret = AVERROR(EINVAL);
exit(1);
}
io_service srv; // Boost.Asio class
auto work = make_shared<io_service::work>(srv);
thread t{ bind(&io_service::run,&srv) }; // I/O worker thread
vector<unique_future<void>> result;
/* utilize function class so that lambda can capture itself */
function<void(int, unique_future<bool>)> recursion;
recursion = [&](int left, unique_future<bool> writable)
{
if (left <= 0)
{
writable.wait();
return;
}
uint8_t *src_data[4], *dst_data[4];
int src_linesize[4], dst_linesize[4];
/* promise-future pair used for thread synchronizing so that the file part is written in the correct sequence */
promise<bool> sync;
/* allocate source and destination image buffers */
if ((ret = av_image_alloc(src_data, src_linesize,
src_w, src_h, src_pix_fmt, 16)) < 0) {
fprintf(stderr, "Could not allocate source image\n");
}
/* buffer is going to be written to rawvideo file, no alignment */
if ((ret = av_image_alloc(dst_data, dst_linesize,
dst_w, dst_h, dst_pix_fmt, 1)) < 0) {
fprintf(stderr, "Could not allocate destination image\n");
}
dst_bufsize = ret;
fread(src_data[0], src_h*src_w, 1, pIn);
fread(src_data[1], src_h*src_w / 4, 1, pIn);
fread(src_data[2], src_h*src_w / 4, 1, pIn);
result.push_back(async([&] {
/* convert to destination format */
sws_scale(sws_ctx, (const uint8_t * const*)src_data,
src_linesize, 0, src_h, dst_data, dst_linesize);
if (left>0)
{
assert(writable.get() == true);
srv.post([=]
{
/* write scaled image to file */
fwrite(dst_data[0], 1, dst_bufsize, dst_file);
av_freep((void*)&dst_data[0]);
});
}
sync.set_value(true);
av_freep(&src_data[0]);
}));
recursion(left - 1, sync.get_future());
};
promise<bool> root;
root.set_value(true);
recursion(300, root.get_future()); // .yuv file only has 300 frames
wait_for_all(result.begin(), result.end()); // wait for all unique_future to callback
work.reset(); // io_service::work releses
srv.stop(); // io_service stops
t.join(); // I/O thread joins
period = steady_clock::now() - time_mark; // calculate valid time
fprintf(stderr, "Scaling succeeded. Play the output file with the command:\n"
"ffplay -f rawvideo -pix_fmt %s -video_size %dx%d %s\n",
av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h, dst_filename);
cout << "period" << et << period << en;
end:
fclose(dst_file);
// av_freep(&src_data[0]);
// av_freep(&dst_data[0]);
sws_freeContext(sws_ctx);
return ret < 0;
}
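One thing worth checking before blaming the format conversion itself: many small fwrite() calls go much faster when stdio is given a large fully-buffered stream, so that writes coalesce before hitting the disk. The sketch below is not taken from the question's code; the file name, frame sizes, and the write_frames helper are hypothetical, and it only illustrates the setvbuf technique.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Write `frames` frames of `frame_size` bytes each to `path` through a
// large fully-buffered stdio stream; returns the total bytes written.
std::size_t write_frames(const char *path, int frames, std::size_t frame_size) {
    std::FILE *f = std::fopen(path, "wb");
    if (!f) return 0;

    // Give stdio a 4 MiB buffer so many small fwrite() calls
    // coalesce into a few large disk writes.
    std::vector<char> iobuf(4u << 20);
    std::setvbuf(f, iobuf.data(), _IOFBF, iobuf.size());

    std::vector<char> frame(frame_size, 0x42); // dummy frame payload
    std::size_t total = 0;
    for (int i = 0; i < frames; ++i)
        total += std::fwrite(frame.data(), 1, frame.size(), f);

    std::fclose(f);       // flushes; iobuf must stay alive until here
    std::remove(path);    // clean up the test file
    return total;
}
```

The same idea applies to the rawvideo output above: one setvbuf call on dst_file right after fopen, with the buffer kept alive until fclose.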
-
How do I initialize multiple libffmpeg contexts in multiple threads and convert the video streams of multiple IP cameras into JPEG images?
10 July 2017, by hipitt
I access multiple IP cameras, and the resolution of each camera may be different. I need to decode each camera's video stream with libffmpeg and convert it into JPG pictures (the video streams are H.264-encoded). Can I use multi-threading, with each thread initializing (instantiating) its own libffmpeg context for decoding? Or what should I do, use multiple processes?
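As far as I know, one decoder context per thread is the usual pattern: the FFmpeg libraries can be used from several threads concurrently as long as each AVCodecContext is created, used, and freed by a single thread and never shared. A minimal sketch of that ownership pattern follows; CameraDecoder and run_cameras are hypothetical stand-ins (plain C++, no FFmpeg calls) so the example stays self-contained. In real code the struct would hold an AVCodecContext from avcodec_alloc_context3()/avcodec_open2(), created and freed inside the same thread.

```cpp
#include <atomic>
#include <string>
#include <thread>
#include <vector>

// Stand-in for a per-camera decoder. In real code this would wrap an
// AVCodecContext opened inside the owning thread, never shared.
struct CameraDecoder {
    std::string url;
    int frames_decoded = 0;
    void decode_one() { ++frames_decoded; } // placeholder for send_packet/receive_frame
};

// Spawn one worker thread per camera URL; each worker owns its own
// decoder context for its whole lifetime. Returns total frames decoded.
int run_cameras(const std::vector<std::string> &urls, int frames_each) {
    std::atomic<int> total{0};
    std::vector<std::thread> workers;
    for (const auto &u : urls)
        workers.emplace_back([&total, u, frames_each] {
            CameraDecoder ctx{u};            // owned by exactly one thread
            for (int i = 0; i < frames_each; ++i)
                ctx.decode_one();
            total += ctx.frames_decoded;
        });                                  // ctx destroyed by its own thread
    for (auto &t : workers) t.join();
    return total.load();
}
```

Note that the backtrace below runs through liblog4cxx inside a signal handler, so it is also worth ruling out the logging layer (or heap corruption elsewhere) before concluding the decoder contexts themselves are the problem.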
When two threads each initialize their own ffmpeg context, the program crashes:
[h264 @ 0x7f1a08000a60] no frame!
[2017-07-10 09:52:58,443 WARN ] H264 Error while decoding for send frame: 0 0 0 1
[h264 @ 0x7f1a08000a60] non-existing PPS 0 referenced
[h264 @ 0x7f1a08000a60] [2017-07-10 09:52:58,444 ERROR] Signal caught:11, dumping backtrace...
decode_slice_header error
[h264 @ 0x7f1a08000a60] no frame!
[2017-07-10 09:52:58,444 WARN ] H264 Error while decoding for send frame: 0 0 0 1
[h264 @ 0x7f1a08000a60] non-existing PPS 0 referenced
[h264 @ 0x7f1a08000a60] decode_slice_header error
[h264 @ 0x7f1a08000a60] no frame!
[2017-07-10 09:52:58,444 WARN ] H264 Error while decoding for send frame: 0 0 0 1
[h264 @ 0x7f1a08000a60] non-existing PPS 0 referenced
[h264 @ 0x7f1a08000a60] decode_slice_header error
[h264 @ 0x7f1a08000a60] no frame!
[2017-07-10 09:52:58,445 WARN ] H264 Error while decoding for send frame: 0 0 0 1
[h264 @ 0x7f1a08000a60] non-existing PPS 0 referenced
[h264 @ 0x7f1a08000a60] decode_slice_header error
[h264 @ 0x7f1a08000a60] no frame!
[2017-07-10 09:52:58,445 WARN ] H264 Error while decoding for send frame: 0 0 0 1
[h264 @ 0x7f1a08000a60] non-existing PPS 0 referenced
[h264 @ 0x7f1a08000a60] decode_slice_header error
[h264 @ 0x7f1a08000a60] no frame!
[2017-07-10 09:52:58,445 WARN ] H264 Error while decoding for send frame: 0 0 0 1
[h264 @ 0x7f1a08000a60] non-existing PPS 0 referenced
[h264 @ 0x7f1a08000a60] decode_slice_header error
[h264 @ 0x7f1a08000a60] no frame!
[2017-07-10 09:52:58,446 WARN ] H264 Error while decoding for send frame: 0 0 0 1
[h264 @ 0x7f1a08000a60] non-existing PPS 0 referenced
[h264 @ 0x7f1a08000a60] decode_slice_header error
[h264 @ 0x7f1a08000a60] no frame!
[2017-07-10 09:52:58,446 WARN ] H264 Error while decoding for send frame: 0 0 0 1
[h264 @ 0x7f1a08000a60] non-existing PPS 0 referenced
[h264 @ 0x7f1a08000a60] decode_slice_header error
[h264 @ 0x7f1a08000a60] no frame!
[2017-07-10 09:52:58,446 WARN ] H264 Error while decoding for send frame: 0 0 0 1
*** Error in `./camera-stream': corrupted size vs. prev_size: 0x00007f1a0801cbc0 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f1a4a6877e5]
/lib/x86_64-linux-gnu/libc.so.6(+0x82aec)[0x7f1a4a692aec]
/lib/x86_64-linux-gnu/libc.so.6(__libc_malloc+0x54)[0x7f1a4a694184]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(_Znwm+0x18)[0x7f1a4af86e78]
/compile/lib/liblog4cxx.so.10(+0x14f34d)[0x7f1a4c99a34d]
/compile/lib/liblog4cxx.so.10(_ZN7log4cxx7rolling27RollingFileAppenderSkeleton9subAppendERKNS_7helpers10ObjectPtrTINS_3spi12LoggingEventEEERNS2_4PoolE+0x67)[0x7f1a4c99b707]
/compile/lib/liblog4cxx.so.10(_ZN7log4cxx16AppenderSkeleton8doAppendERKNS_7helpers10ObjectPtrTINS_3spi12LoggingEventEEERNS1_4PoolE+0x222)[0x7f1a4c92a692]
/compile/lib/liblog4cxx.so.10(_ZN7log4cxx7helpers22AppenderAttachableImpl21appendLoopOnAppendersERKNS0_10ObjectPtrTINS_3spi12LoggingEventEEERNS0_4PoolE+0x3f)[0x7f1a4c92838f]
/compile/lib/liblog4cxx.so.10(_ZNK7log4cxx6Logger13callAppendersERKNS_7helpers10ObjectPtrTINS_3spi12LoggingEventEEERNS1_4PoolE+0xe8)[0x7f1a4c970058]
/compile/lib/liblog4cxx.so.10(_ZNK7log4cxx6Logger9forcedLogERKNS_7helpers10ObjectPtrTINS_5LevelEEERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_3spi12LocationInfoE+0xbe)[0x7f1a4c9702ae]
./camera-stream(_Z10handleCorei+0x2ca)[0x62fa17]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7f1a4b4ae390]
/lib/x86_64-linux-gnu/libc.so.6(cfree+0x22)[0x7f1a4a694512]
./camera-stream(av_fast_padded_malloc+0x78)[0xa692d8]
./camera-stream(ff_h2645_extract_rbsp+0x131)[0xd68591]
./camera-stream(ff_h2645_packet_split+0x204)[0xd689d4]
./camera-stream[0x7dd357]
./camera-stream(avcodec_decode_video2+0x184)[0xa6bf74]
./camera-stream[0xa6cd50]
./camera-stream(avcodec_send_packet+0xb8)[0xa71a38]
./camera-stream(_ZN11H264Capture6decodeEPhiRN2cv3MatE+0x98)[0x637fe6]
./camera-stream(_ZN7Capture13face_work_funEv+0x532)[0x630944]
./camera-stream(_ZN7Capture15face_thread_funEPv+0x20)[0x63040a]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba)[0x7f1a4b4a46ba]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f1a4a7173dd] -
Stream specifier ':v:0' in filtergraph description [1:v:0] [1:a:0] [2:v:0] [2:a:0] [3:v:0] [3:v:0] concat=n=4:v=1:a=1 [v] [a] matches no streams
13 September 2022, by Timur Ridjanovic
This is my command (url1, url2, url3, url4 are placeholders):

ffmpeg -i url1 -i url2 -i url3 -i url4 -filter_complex "[1:v:0] [1:a:0] [2:v:0] [2:a:0] [3:v:0] [3:v:0] concat=n=4:v=1:a=1 [v] [a]" -map [v] -map [a] /Users/myname/Downloads/f1-2017-07-12.mp4 -y

I get this error:

Stream specifier ':v:0' in filtergraph description [1:v:0] [1:a:0] [2:v:0] [2:a:0] [3:v:0] [3:v:0] concat=n=4:v=1:a=1 [v] [a] matches no streams.

Not sure what is going on. I tried all the URLs individually and they all work (video and audio). I only get this error when I try to concatenate them.

I also tried another syntax for filter_complex:

ffmpeg -i url1 -i url2 -i url3 -i url4 -filter_complex [0:0] [0:1] [1:0] [1:1] [2:0] [2:1] [3:0] [3:1] concat=n=4:v=1:a=1 [v] [a] -map [v] [a] /Users/timurridjanovic/Downloads/f1-2017-07-12.mp4 -y

And I get this error:

[AVFilterGraph @ 0x7ffe91703a00] No such filter: ''
Error initializing complex filters.
Invalid argument
Can someone help me?
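For what it's worth, ffmpeg numbers inputs from 0, so with four inputs the specifiers should run from [0:...] to [3:...]; the failing graph starts at [1:v:0] and lists [3:v:0] twice, so the first input's streams are never consumed and one audio pad is missing. The second attempt fails differently because the unquoted filtergraph is split apart by the shell, producing the "No such filter: ''" error. A hedged sketch of a corrected command, assuming every input has exactly one video and one audio stream (output path illustrative):

```shell
ffmpeg -i url1 -i url2 -i url3 -i url4 \
  -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0][2:v:0][2:a:0][3:v:0][3:a:0]concat=n=4:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" out.mp4 -y
```

Note that each output pad gets its own -map option, and the whole filtergraph is quoted so the shell passes it as a single argument.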