
Media (2)
-
Granite de l’Aber Ildut
9 September 2011
Updated: September 2011
Language: French
Type: Text
-
Geodiversity
9 September 2011
Updated: August 2018
Language: French
Type: Text
Other articles (47)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out. -
Adding notes and captions to images
7 February 2011
To add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is enabled, you can configure it in the configuration area to change the rights for creating, modifying, and deleting notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...) -
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (6488)
-
My RTMP server (FreeBSD) won't let me hear the video when I play a huge file over the server itself :/
19 April 2021, by Engi Gang

rm -rf /mnt/hls/loool && ffmpeg -re -i "$file" -c:v libx264 -c:a aac -b:v 300k -b:a 95k -f flv -flvflags no_duration_filesize rtmp://lambright.xyz:1935/live/loool


Any workaround? I literally can't play the audio :( I can only hear (my source file is an MKV and 3 GB).


rtmp {
    server {
        listen 1935;        # Listen on standard RTMP port
        chunk_size 4000;

        application live {
            allow play all;
            live on;
            record off;

            hls on;
            hls_nested on;
            hls_path /mnt/hls/;
            hls_fragment 2s;
        }
    }
}
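One thing worth checking (a hedged diagnostic sketch, not a confirmed fix; the file path and stream layout are assumptions) is whether the MKV's audio stream is actually selected and transcoded. ffmpeg only picks one audio stream by default, so inspecting the source with ffprobe and mapping the streams explicitly can rule that out:

# Inspect which audio streams the source MKV actually contains
ffprobe -v error -select_streams a -show_entries stream=index,codec_name,channels -of csv "$file"

# Explicitly map one video and one audio stream, and force a stereo 44.1 kHz AAC
# track that RTMP/HLS players handle reliably
rm -rf /mnt/hls/loool && ffmpeg -re -i "$file" \
    -map 0:v:0 -map 0:a:0 \
    -c:v libx264 -b:v 300k \
    -c:a aac -b:a 95k -ac 2 -ar 44100 \
    -f flv -flvflags no_duration_filesize rtmp://lambright.xyz:1935/live/loool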



-
Huge memory leak when filtering video with libavfilter
29 May 2017, by Captain Jack

I have a relatively simple FFmpeg C program to which a video frame is fed, processed via a filter graph, and sent to a frame renderer.
Here are some code snippets:
/* Filter graph here */
char args[512];
enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_RGB32 };
AVFilterGraph *filter_graph;

avfilter_register_all();

AVFilter *buffersrc = avfilter_get_by_name("buffer");
AVFilter *buffersink = avfilter_get_by_name("ffbuffersink");
AVBufferSinkParams *buffersink_params;
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs = avfilter_inout_alloc();

filter_graph = avfilter_graph_alloc();

snprintf(args, sizeof(args),
    "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
    av->codec_ctx->width, av->codec_ctx->height, av->codec_ctx->pix_fmt,
    av->codec_ctx->time_base.num, av->codec_ctx->time_base.den,
    av->codec_ctx->sample_aspect_ratio.num, av->codec_ctx->sample_aspect_ratio.den);

if(avfilter_graph_create_filter(&av->buffersrc_ctx, buffersrc, "in", args, NULL, filter_graph) < 0)
{
    fprintf(stderr, "Cannot create buffer source\n");
    return(0);
}

/* buffer video sink: to terminate the filter chain. */
buffersink_params = av_buffersink_params_alloc();
buffersink_params->pixel_fmts = pix_fmts;

if(avfilter_graph_create_filter(&av->buffersink_ctx, buffersink, "out", NULL, buffersink_params, filter_graph) < 0)
{
    printf("Cannot create buffer sink\n");
    return(HACKTV_ERROR);
}

/* Endpoints for the filter graph. */
outputs->name = av_strdup("in");
outputs->filter_ctx = av->buffersrc_ctx;
outputs->pad_idx = 0;
outputs->next = NULL;

inputs->name = av_strdup("out");
inputs->filter_ctx = av->buffersink_ctx;
inputs->pad_idx = 0;
inputs->next = NULL;

const char *filter_descr = "vflip";

if (avfilter_graph_parse_ptr(filter_graph, filter_descr, &inputs, &outputs, NULL) < 0)
{
    printf("Cannot parse filter graph\n");
    return(0);
}

if (avfilter_graph_config(filter_graph, NULL) < 0)
{
    printf("Cannot configure filter graph\n");
    return(0);
}

av_free(buffersink_params);
avfilter_inout_free(&inputs);
avfilter_inout_free(&outputs);

The above code is called from elsewhere as follows:
av->frame_in->pts = av_frame_get_best_effort_timestamp(av->frame_in);

/* push the decoded frame into the filtergraph */
if (av_buffersrc_add_frame(av->buffersrc_ctx, av->frame_in) < 0)
{
    printf("Error while feeding the filtergraph\n");
    break;
}

/* pull filtered pictures from the filtergraph */
if(av_buffersink_get_frame(av->buffersink_ctx, av->frame_out) < 0)
{
    printf("Error while sourcing the filtergraph\n");
    break;
}
/* do stuff with frame */

Now, the code works absolutely fine and the video comes out the way I expect it to (vertically flipped for testing purposes).
The biggest issue is a massive memory leak: a high-resolution video consumes 2 GB in a matter of seconds and crashes the program. I traced the leak to this piece of code:
/* push the decoded frame into the filtergraph */
if (av_buffersrc_add_frame(av->buffersrc_ctx, av->frame_in) < 0)

If I bypass the filter by doing
av->frame_out=av->frame_in;
without pushing the frame into it (and obviously not pulling from it), there is no leak and memory usage is stable.

Now, I am very new to C, so be gentle, but it seems like I should be clearing out the buffersrc_ctx somehow, though I have no idea how. I've looked in the official documentation but couldn't find anything.
Can someone advise?
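A likely culprit for this kind of growth (a hedged sketch of the usual fix, assuming av->frame_in and av->frame_out are AVFrames allocated once and reused each iteration, not a confirmed diagnosis of this exact program) is that the frame filled by av_buffersink_get_frame() carries new buffer references on every iteration; unless those references are dropped with av_frame_unref() after each frame is consumed, every filtered frame's buffers stay allocated:

/* Per-frame loop body: same calls as in the question, plus the unrefs that
 * release the buffers once the filtered frame has been used.
 * av_frame_unref() is declared in libavutil/frame.h. */
av->frame_in->pts = av_frame_get_best_effort_timestamp(av->frame_in);

if (av_buffersrc_add_frame(av->buffersrc_ctx, av->frame_in) < 0)
{
    printf("Error while feeding the filtergraph\n");
    break;
}

if (av_buffersink_get_frame(av->buffersink_ctx, av->frame_out) < 0)
{
    printf("Error while sourcing the filtergraph\n");
    break;
}

/* ... do stuff with av->frame_out ... */

av_frame_unref(av->frame_out); /* release the reference taken from the sink */
av_frame_unref(av->frame_in);  /* leave the input frame clean for the next decode */

The unref of frame_out after each iteration is the part that stops the growth; the buffersrc context itself does not need to be cleared per frame.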
-
FFMpeg sws_scale Static and Shared Huge Performance Difference
6 November 2018, by Ali

I used swscale in my code as a shared library, then managed to compile FFmpeg (4.1) to static libraries with Visual Studio using this command, just to get swscale:
./configure --toolchain=msvc --arch=x86_32 --disable-everything --disable-programs
I have nasm and yasm installed. This is my config output:
install prefix /usr/local
source path .
C compiler cl
C library msvcrt
ARCH x86 (generic)
big-endian no
runtime cpu detection yes
standalone assembly yes
x86 assembler nasm
MMX enabled yes
MMXEXT enabled yes
3DNow! enabled yes
3DNow! extended enabled yes
SSE enabled yes
SSSE3 enabled yes
AESNI enabled yes
AVX enabled yes
AVX2 enabled yes
AVX-512 enabled yes
XOP enabled yes
FMA3 enabled yes
FMA4 enabled yes
i686 features enabled yes
CMOV is fast no
EBX available no
EBP available no
debug symbols yes
strip symbols no
optimize for size no
optimizations yes
static yes
shared no
postprocessing support no
network support yes
threading support w32threads
safe bitstream reader yes
texi2html enabled no
perl enabled no
pod2man enabled no
makeinfo enabled no
makeinfo supports HTML no
External libraries:
schannel
External libraries providing hardware acceleration:
d3d11va dxva2
Libraries:
avcodec avdevice avfilter avformat avutil swresample swscale
Programs:
Enabled decoders:
Enabled encoders:
Enabled hwaccels:
Enabled parsers:
Enabled demuxers:
Enabled muxers:
Enabled protocols:
Enabled filters:
Enabled bsfs:
null
Enabled indevs:
Enabled outdevs:

This compiled successfully and I replaced the lib files with the .a files in Qt:
INCLUDEPATH += $$PWD/ffmpeg/inc/
LIBS += $$files($$PWD/ffmpeg/lib/*.a, true)

I didn't change anything else. The EXE works correctly without dependencies, but the problem is that the static swscale is much slower than the shared one. For 1080p, the shared .DLL takes 2 ms to shrink and convert YUV to RGB, while the static .A takes 6 ms.
I also tried removing
--disable-everything --disable-programs
but it's still the same. I want to know if it's because of the cl compiler, or whether I missed a library or a setting?

BTW, this is my system: Win10 / i7 4820K / 16 GB / GTX 970
EDIT:
I got this in the app output:
No accelerated colorspace conversion found from yuv420p to bgra.
Although the x86 folder in swscale is compiled, it seems it's not linked into the output.
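As a first check (a hedged diagnostic sketch, not a confirmed explanation of the slowdown), you can ask the statically linked libavutil at runtime which SIMD features it detected; if these flags come back as expected but swscale still prints "No accelerated colorspace conversion found", the x86 assembly objects were most likely left out of the static libswscale at link time:

#include <stdio.h>
#include <libavutil/cpu.h>

int main(void)
{
    /* CPU features as detected by the libavutil that was actually linked in */
    int flags = av_get_cpu_flags();

    printf("MMX:   %s\n", (flags & AV_CPU_FLAG_MMX)   ? "yes" : "no");
    printf("SSE2:  %s\n", (flags & AV_CPU_FLAG_SSE2)  ? "yes" : "no");
    printf("SSSE3: %s\n", (flags & AV_CPU_FLAG_SSSE3) ? "yes" : "no");
    printf("AVX2:  %s\n", (flags & AV_CPU_FLAG_AVX2)  ? "yes" : "no");

    return 0;
}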