
Media (1)
-
DJ Dolores - Oslodum 2004 (includes (cc) sample of “Oslodum” by Gilberto Gil)
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
Other articles (51)
-
Publish on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...)
-
From upload to the final video [standalone version]
31 January 2010, by
The path of an audio or video document through SPIPMotion is divided into three distinct steps.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are performed in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)
On other sites (8567)
-
avformat: add AV1 RTP depacketizer and packetizer
26 August 2024, by Chris Hodges
Add RTP packetizer and depacketizer according to (most) of the official AV1 RTP specification. This enables streaming via RTSP between ffmpeg and ffmpeg, and has also been tested to work with AV1 RTSP streams via GStreamer. It also adds the required SDP attributes for AV1.
AV1 RTP encoding is marked as experimental due to the draft status of the specification; the amount of debug output was reduced and other changes suggested by Tristan were applied.
Optional code was added for searching the sequence header to determine the first packet, for broken AV1 encoders / parsers.
Depacketizing now stops on corruption until the next keyframe, and a packet is no longer prematurely issued on decoding if the temporal unit was not yet complete.
Change-Id: I90f5c5b9d577908a0d713606706b5654fde5f910
Signed-off-by: Chris Hodges <chrishod@axis.com>
Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
-
Introducing the Matomo Connector for Looker Studio (Formerly Google Data Studio)
26 January 2024, by Erin — Community
-
why is ffmpeg so fast
8 July 2017, by jx.xu
I have written an ffmpeg-based C++ program that converts YUV to RGB using libswscale, similar to the official example.
I simply copied and modified the official example before building it in Visual Studio 2017 on Windows 10. However, it is much slower than the ffmpeg.exe executable: 36 seconds versus 12 seconds.
I already know that ffmpeg uses optimization techniques such as SIMD instructions. In my profiling, however, the bottleneck is disk I/O writing, which takes at least 2/3 of the time.
I then developed a concurrent version in which a dedicated thread handles all I/O tasks, but the situation did not improve. It is worth noting that I use the Boost C++ libraries for multi-threading and asynchronous events. So I would like to know how I can improve my program using the ffmpeg libraries, or whether the performance gap with ffmpeg.exe simply cannot be closed.
As requested in the answers, I am posting my code. The compiler is MSVC in VS2017 and I turned on full optimization (/Ox).
To supplement my main question: I also ran a plain disk I/O test that merely copies a file of the same size. Surprisingly, plain sequential disk I/O alone costs 28 seconds, while the code below costs 36 seconds in total... Does anyone know how ffmpeg finishes the same job in only 12 seconds? It must use some optimization techniques, such as random disk I/O or reusing memory buffers?
#include "stdafx.h"
#define __STDC_CONSTANT_MACROS
extern "C" {
#include <libavutil></libavutil>imgutils.h>
#include <libavutil></libavutil>parseutils.h>
#include <libswscale></libswscale>swscale.h>
}
#ifdef _WIN64
#pragma comment(lib, "avformat.lib")
#pragma comment(lib, "avcodec.lib")
#pragma comment(lib, "avutil.lib")
#pragma comment(lib, "swscale.lib")
#endif
#include <common></common>cite.hpp> // just include headers of c++ STL/Boost
int main(int argc, char **argv)
{
chrono::duration<double> period;
auto pIn = fopen("G:/Panorama/UHD/originalVideos/DrivingInCountry_3840x1920_30fps_8bit_420_erp.yuv", "rb");
auto time_mark = chrono::steady_clock::now();
int src_w = 3840, src_h = 1920, dst_w, dst_h;
enum AVPixelFormat src_pix_fmt = AV_PIX_FMT_YUV420P, dst_pix_fmt = AV_PIX_FMT_RGB24;
const char *dst_filename = "G:/Panorama/UHD/originalVideos/out.rgb";
const char *dst_size = "3840x1920";
FILE *dst_file;
int dst_bufsize;
struct SwsContext *sws_ctx;
int i, ret;
if (av_parse_video_size(&dst_w, &dst_h, dst_size) < 0) {
fprintf(stderr,
"Invalid size '%s', must be in the form WxH or a valid size abbreviation\n",
dst_size);
exit(1);
}
dst_file = fopen(dst_filename, "wb");
if (!dst_file) {
fprintf(stderr, "Could not open destination file %s\n", dst_filename);
exit(1);
}
/* create scaling context */
sws_ctx = sws_getContext(src_w, src_h, src_pix_fmt,
dst_w, dst_h, dst_pix_fmt,
SWS_BILINEAR, NULL, NULL, NULL);
if (!sws_ctx) {
fprintf(stderr,
"Impossible to create scale context for the conversion "
"fmt:%s s:%dx%d -> fmt:%s s:%dx%d\n",
av_get_pix_fmt_name(src_pix_fmt), src_w, src_h,
av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h);
ret = AVERROR(EINVAL);
exit(1);
}
io_service srv; // Boost.Asio class
auto work = make_shared(srv);
thread t{ bind(&io_service::run,&srv) }; // I/O worker thread
vector> result;
/* utilize function class so that lambda can capture itself */
function)> recursion;
recursion = [&](int left, unique_future<bool> writable)
{
if (left <= 0)
{
writable.wait();
return;
}
uint8_t *src_data[4], *dst_data[4];
int src_linesize[4], dst_linesize[4];
/* promise-future pair used for thread synchronizing so that the file part is written in the correct sequence */
promise<bool> sync;
/* allocate source and destination image buffers */
if ((ret = av_image_alloc(src_data, src_linesize,
src_w, src_h, src_pix_fmt, 16)) < 0) {
fprintf(stderr, "Could not allocate source image\n");
}
/* buffer is going to be written to rawvideo file, no alignment */
if ((ret = av_image_alloc(dst_data, dst_linesize,
dst_w, dst_h, dst_pix_fmt, 1)) < 0) {
fprintf(stderr, "Could not allocate destination image\n");
}
dst_bufsize = ret;
fread(src_data[0], src_h*src_w, 1, pIn);
fread(src_data[1], src_h*src_w / 4, 1, pIn);
fread(src_data[2], src_h*src_w / 4, 1, pIn);
result.push_back(async([&] {
/* convert to destination format */
sws_scale(sws_ctx, (const uint8_t * const*)src_data,
src_linesize, 0, src_h, dst_data, dst_linesize);
if (left>0)
{
assert(writable.get() == true);
srv.post([=]
{
/* write scaled image to file */
fwrite(dst_data[0], 1, dst_bufsize, dst_file);
av_freep((void*)&dst_data[0]);
});
}
sync.set_value(true);
av_freep(&src_data[0]);
}));
recursion(left - 1, sync.get_future());
};
promise<bool> root;
root.set_value(true);
recursion(300, root.get_future()); // .yuv file only has 300 frames
wait_for_all(result.begin(), result.end()); // wait for all unique_future to callback
work.reset(); // io_service::work releses
srv.stop(); // io_service stops
t.join(); // I/O thread joins
period = steady_clock::now() - time_mark; // calculate valid time
fprintf(stderr, "Scaling succeeded. Play the output file with the command:\n"
"ffplay -f rawvideo -pix_fmt %s -video_size %dx%d %s\n",
av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h, dst_filename);
cout << "period" << et << period << en;
end:
fclose(dst_file);
// av_freep(&src_data[0]);
// av_freep(&dst_data[0]);
sws_freeContext(sws_ctx);
return ret < 0;
}
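
For comparison, below is a minimal single-threaded sketch (an illustration, not code from the post and not ffmpeg's actual implementation) that isolates the two suspected costs: it allocates the source and destination images once and reuses them for every frame instead of calling av_image_alloc per frame, and it gives the output FILE* a large fully buffered stdio buffer via setvbuf so that fwrite issues fewer, larger writes. The helper name convert_yuv_to_rgb, the 8 MiB buffer size, and the plane-by-plane reads are assumptions made for illustration only.

extern "C" {
#include <libavutil/imgutils.h>
#include <libswscale/swscale.h>
}
#include <cstdio>
#include <vector>

// Hypothetical helper: convert a raw YUV420P file to packed RGB24,
// reusing one pair of image buffers and one large stdio write buffer.
static int convert_yuv_to_rgb(const char *in_path, const char *out_path,
                              int w, int h, int frames)
{
    std::FILE *in  = std::fopen(in_path, "rb");
    std::FILE *out = std::fopen(out_path, "wb");
    if (!in || !out)
        return -1;

    // Assumed 8 MiB fully-buffered stdio buffer so fwrite() issues fewer, larger writes.
    std::vector<char> iobuf(8 << 20);
    std::setvbuf(out, iobuf.data(), _IOFBF, iobuf.size());

    SwsContext *sws = sws_getContext(w, h, AV_PIX_FMT_YUV420P,
                                     w, h, AV_PIX_FMT_RGB24,
                                     SWS_BILINEAR, nullptr, nullptr, nullptr);
    if (!sws)
        return -1;

    uint8_t *src[4], *dst[4];
    int src_ls[4], dst_ls[4];
    // Allocate once and reuse for every frame instead of one av_image_alloc per frame.
    av_image_alloc(src, src_ls, w, h, AV_PIX_FMT_YUV420P, 16);
    int dst_size = av_image_alloc(dst, dst_ls, w, h, AV_PIX_FMT_RGB24, 1);

    for (int i = 0; i < frames; i++) {
        // Plane-by-plane read; assumes linesize == width, which holds for 3840x1920.
        if (std::fread(src[0], 1, (size_t)w * h, in)     != (size_t)w * h ||
            std::fread(src[1], 1, (size_t)w * h / 4, in) != (size_t)w * h / 4 ||
            std::fread(src[2], 1, (size_t)w * h / 4, in) != (size_t)w * h / 4)
            break;
        sws_scale(sws, (const uint8_t * const *)src, src_ls, 0, h, dst, dst_ls);
        std::fwrite(dst[0], 1, dst_size, out);
    }

    av_freep(&src[0]);
    av_freep(&dst[0]);
    sws_freeContext(sws);
    std::fclose(in);
    std::fclose(out);
    return 0;
}

A call such as convert_yuv_to_rgb("G:/Panorama/UHD/originalVideos/DrivingInCountry_3840x1920_30fps_8bit_420_erp.yuv", "G:/Panorama/UHD/originalVideos/out.rgb", 3840, 1920, 300) would then process the same 300-frame file used in the question, making it easier to tell how much of the gap comes from per-frame allocation and small writes rather than from sws_scale itself.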