
Media (2)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
-
Map of Schillerkiez
13 May 2011, by
Updated: September 2011
Language: English
Type: Text
Other articles (18)
-
Contribute to its documentation
10 April 2011. Documentation is one of the most important and most demanding tasks in building a technical tool.
Any outside contribution on this subject is essential: critiquing the existing documentation; helping to write articles aimed at users (MediaSPIP administrators or simply content producers) or at developers; creating explanatory screencasts; translating the documentation into a new language.
To do so, you can register at (...) -
Contribute to a better visual interface
13 April 2011. MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community. -
Keeping control of your media in your hands
13 April 2011. The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
On other sites (7311)
-
Keep ffmpeg listening to new inputs
14 December 2018, by Vincent Bavaro. Good morning, I have a server that generates images, and I have to stream them via HLS to a client while they are being generated. I need ffmpeg to start working as soon as images start being transmitted and to stop when no more images are received. I tried feeding them to ffmpeg via stdin and via image2pipe, but it seems to need an end of input before it will process anything. I want to know whether ffmpeg can turn an image slideshow into video segments WHILE the images come in, or whether I need multiple commands (one for each .ts)?
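In principle, no stopping point is needed: with the image2pipe demuxer, ffmpeg keeps encoding frames as they arrive on stdin and only finishes when the producer closes the pipe, so a single command can emit HLS segments while the images are still being generated. A minimal sketch (the generator command, frame rate, and segment length here are assumptions, not from the question):

```shell
# Pipe image frames into ffmpeg as they are produced; encoding and HLS
# segmenting continue until the producer closes the pipe.
your_image_generator \
  | ffmpeg -f image2pipe -framerate 25 -i - \
      -c:v libx264 -pix_fmt yuv420p \
      -f hls -hls_time 4 -hls_list_size 0 -hls_flags append_list \
      stream/playlist.m3u8
```

New .ts segments appear in stream/ as soon as each hls_time window of frames has been encoded, so the client can start playing before generation ends.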
-
why is ffmpeg so fast
8 July 2017, by jx.xu. I have written an ffmpeg-based C++ program that converts YUV to RGB using libswscale, similar to the official example.
I simply copied and modified the official example, then built it with Visual Studio 2017 on Windows 10. However, it runs much more slowly than the ffmpeg.exe executable: 36 seconds versus 12 seconds.
I already know that ffmpeg uses optimization techniques such as SIMD instructions. But in my profiling the bottleneck is disk I/O writing, which takes at least two thirds of the time.
I then developed a concurrent version in which a dedicated thread handles all the I/O, but the situation did not improve. It's worth noting that I use the Boost C++ libraries for multi-threading and asynchronous events. So I want to know how I can modify the program using the ffmpeg libraries, or whether the performance gap with ffmpeg.exe simply cannot be closed.
As requested in the answers, I post my code below. The compiler is MSVC in VS2017, and full optimization (/Ox) is turned on.
To supplement my main question: I ran another plain disk I/O test that merely copies a file of the same size. Surprisingly, plain sequential disk I/O takes 28 seconds, while the code below takes 36 seconds in total... Does anyone know how ffmpeg finishes the same job in only 12 seconds? It must use some optimization technique, perhaps overlapped disk I/O or memory buffer reuse?
#include "stdafx.h"
#define __STDC_CONSTANT_MACROS
extern "C" {
#include <libavutil/imgutils.h>
#include <libavutil/parseutils.h>
#include <libswscale/swscale.h>
}
#ifdef _WIN64
#pragma comment(lib, "avformat.lib")
#pragma comment(lib, "avcodec.lib")
#pragma comment(lib, "avutil.lib")
#pragma comment(lib, "swscale.lib")
#endif
#include <common/cite.hpp> // just include headers of c++ STL/Boost
int main(int argc, char **argv)
{
chrono::duration<double> period;
auto pIn = fopen("G:/Panorama/UHD/originalVideos/DrivingInCountry_3840x1920_30fps_8bit_420_erp.yuv", "rb");
auto time_mark = chrono::steady_clock::now();
int src_w = 3840, src_h = 1920, dst_w, dst_h;
enum AVPixelFormat src_pix_fmt = AV_PIX_FMT_YUV420P, dst_pix_fmt = AV_PIX_FMT_RGB24;
const char *dst_filename = "G:/Panorama/UHD/originalVideos/out.rgb";
const char *dst_size = "3840x1920";
FILE *dst_file;
int dst_bufsize;
struct SwsContext *sws_ctx;
int i, ret;
if (av_parse_video_size(&dst_w, &dst_h, dst_size) < 0) {
fprintf(stderr,
"Invalid size '%s', must be in the form WxH or a valid size abbreviation\n",
dst_size);
exit(1);
}
dst_file = fopen(dst_filename, "wb");
if (!dst_file) {
fprintf(stderr, "Could not open destination file %s\n", dst_filename);
exit(1);
}
/* create scaling context */
sws_ctx = sws_getContext(src_w, src_h, src_pix_fmt,
dst_w, dst_h, dst_pix_fmt,
SWS_BILINEAR, NULL, NULL, NULL);
if (!sws_ctx) {
fprintf(stderr,
"Impossible to create scale context for the conversion "
"fmt:%s s:%dx%d -> fmt:%s s:%dx%d\n",
av_get_pix_fmt_name(src_pix_fmt), src_w, src_h,
av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h);
ret = AVERROR(EINVAL);
exit(1);
}
io_service srv; // Boost.Asio class
auto work = make_shared<io_service::work>(srv);
thread t{ bind(&io_service::run,&srv) }; // I/O worker thread
vector<unique_future<void>> result;
/* utilize function class so that lambda can capture itself */
function<void(int, unique_future<bool>)> recursion;
recursion = [&](int left, unique_future<bool> writable)
{
if (left <= 0)
{
writable.wait();
return;
}
uint8_t *src_data[4], *dst_data[4];
int src_linesize[4], dst_linesize[4];
/* promise-future pair used for thread synchronizing so that the file part is written in the correct sequence */
promise<bool> sync;
/* allocate source and destination image buffers */
if ((ret = av_image_alloc(src_data, src_linesize,
src_w, src_h, src_pix_fmt, 16)) < 0) {
fprintf(stderr, "Could not allocate source image\n");
}
/* buffer is going to be written to rawvideo file, no alignment */
if ((ret = av_image_alloc(dst_data, dst_linesize,
dst_w, dst_h, dst_pix_fmt, 1)) < 0) {
fprintf(stderr, "Could not allocate destination image\n");
}
dst_bufsize = ret;
fread(src_data[0], src_h*src_w, 1, pIn);
fread(src_data[1], src_h*src_w / 4, 1, pIn);
fread(src_data[2], src_h*src_w / 4, 1, pIn);
result.push_back(async([&] {
/* convert to destination format */
sws_scale(sws_ctx, (const uint8_t * const*)src_data,
src_linesize, 0, src_h, dst_data, dst_linesize);
if (left>0)
{
assert(writable.get() == true);
srv.post([=]
{
/* write scaled image to file */
fwrite(dst_data[0], 1, dst_bufsize, dst_file);
av_freep((void*)&dst_data[0]);
});
}
sync.set_value(true);
av_freep(&src_data[0]);
}));
recursion(left - 1, sync.get_future());
};
promise<bool> root;
root.set_value(true);
recursion(300, root.get_future()); // .yuv file only has 300 frames
wait_for_all(result.begin(), result.end()); // wait for all unique_future to callback
work.reset(); // io_service::work releses
srv.stop(); // io_service stops
t.join(); // I/O thread joins
period = steady_clock::now() - time_mark; // calculate valid time
fprintf(stderr, "Scaling succeeded. Play the output file with the command:\n"
"ffplay -f rawvideo -pix_fmt %s -video_size %dx%d %s\n",
av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h, dst_filename);
cout << "period " << period.count() << " s" << endl;
end:
fclose(dst_file);
// av_freep(&src_data[0]);
// av_freep(&dst_data[0]);
sws_freeContext(sws_ctx);
return ret < 0;
}
-
ffmpeg getting filter logs
5 August 2020, by Mahdi. I've used ffmpeg's libavfilter/af_silencedetect.c in my program and it works like a charm.

This filter logs information about the duration of silences in its output, as follows:

[silencedetect @ 0x2a894c0] silence_start: 0
[silencedetect @ 0x2a894c0] silence_end: 1.61725 | silence_duration: 1.61725
[silencedetect @ 0x2a894c0] silence_start: 3.19175
[silencedetect @ 0x2a894c0] silence_end: 4.70413 | silence_duration: 1.51238

But I need these durations in my program. How can I get these values as variables in my program? It's worth noting that, for the sake of modularity, I don't want to make changes to the af_silencedetect.c file.

Thanks
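Without modifying af_silencedetect.c, a common approach is to intercept the log output: libavutil's av_log_set_callback() lets the application receive every formatted log line, including the ones shown above, and the silence values can then be parsed out of each line. A minimal sketch of just the parsing step (the capture via the log callback is assumed to have happened already; the parser itself is self-contained):

```cpp
#include <string>
#include <cstdlib>

// Extract the numeric value following "key: " from a silencedetect log
// line, e.g. "[silencedetect @ 0x2a894c0] silence_end: 1.61725 | ...".
// Returns true and stores the value when the key is present.
bool parse_silence_value(const std::string &line,
                         const std::string &key, double &value) {
    const std::string needle = key + ": ";
    const std::string::size_type pos = line.find(needle);
    if (pos == std::string::npos) return false;
    value = std::strtod(line.c_str() + pos + needle.size(), nullptr);
    return true;
}
```

Each captured line can be passed to parse_silence_value once per key of interest; pairing a silence_start with the following silence_end then yields one silence interval.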