
Other articles (58)
- Participating in its translation
  10 April 2011
  You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities. This is done through SPIP's translation interface, where all of MediaSPIP's language modules are available. You only need to subscribe to the translators' mailing list to ask for more information. At present, MediaSPIP is only available in French and (...)
- Support for all media types
  10 April 2011
  Unlike many modern software packages and document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others); audio (MP3, Ogg, Wav and others); video (Avi, MP4, Ogv, mpg, mov, wmv and others); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)
- MediaSPIP Player: potential problems
  22 February 2011
  The player does not work on Internet Explorer. On Internet Explorer (at least 8 and 7), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module. If the configuration of that Apache module contains a line resembling the following, try removing it or commenting it out to see whether the player then works correctly: (...)
On other sites (14246)
- How To Play Hardware Accelerated Video on A Mac
  28 May 2013, by Multimedia Mike — General
  I have a friend who was considering purchasing a Mac Mini recently. At the time of this writing, there are 3 desktop models (and 2 more “server” models).
The cheapest one is a Core i5 2.5 GHz. Then there are 2 Core i7 models: 2.3 GHz and 2.6 GHz. The difference between the latter 2 is US$100. The only appreciable technical difference is the extra 0.3 GHz, and the choice came down to those 2.
He asked me which one would be able to play HD video at full frame rate. I found this query puzzling. But then, I have been “in the biz” for a bit too long. Whether or not a computer or device can play a video well depends on a lot of factors.
Hardware Support
First of all, looking at the raw speed of the general-purpose CPU inside of a computer as a gauge of video playback performance is generally misguided in this day and age. In general, we have a video standard (H.264, which I'll focus on for this post) and many bits of hardware are able to accelerate decoding. So the question is not whether the CPU can decode the data in real time, but whether any other hardware in the device (likely the graphics hardware) can handle it. These machines have Intel HD 4000 graphics and, per my reading of the literature, they are capable of accelerating H.264 video decoding.

Great, so the hardware supports accelerated decoding. So it's a done deal, right? Not quite…
Operating System Support
An application can't do anything pertaining to hardware without permission from the operating system. So the next question is: does Mac OS X allow an application to access accelerated video decoding hardware if it's available? This used to be a contentious matter (notably, Adobe Flash Player was unable to accelerate H.264 playback on Mac in the absence of such an API), but then Apple released an official API, detailed in Technical Note TN2267.

So, does this mean that video is magically accelerated? Nope, we're still not there yet…
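As an aside, the probing sequence TN2267 describes looks roughly like the following. This is a hedged sketch rather than Apple's sample code: the helper names (probe_hw_h264, noop_output) are mine, and it assumes you already have the stream's dimensions and its avcC extradata in hand.

#include <CoreFoundation/CoreFoundation.h>
#include <VideoDecodeAcceleration/VDADecoder.h>

/* Called for each decoded frame; a real player would display the image. */
static void noop_output(void *refcon, CFDictionaryRef frame_info,
                        OSStatus status, uint32_t flags,
                        CVImageBufferRef image) { (void)refcon; }

/* Returns noErr (0) if the GPU can hardware-decode this H.264 stream. */
static OSStatus probe_hw_h264(SInt32 width, SInt32 height, CFDataRef avcc)
{
    SInt32 fmt = 'avc1';
    CFNumberRef w = CFNumberCreate(NULL, kCFNumberSInt32Type, &width);
    CFNumberRef h = CFNumberCreate(NULL, kCFNumberSInt32Type, &height);
    CFNumberRef f = CFNumberCreate(NULL, kCFNumberSInt32Type, &fmt);

    CFMutableDictionaryRef cfg = CFDictionaryCreateMutable(
        kCFAllocatorDefault, 4,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(cfg, kVDADecoderConfiguration_Width, w);
    CFDictionarySetValue(cfg, kVDADecoderConfiguration_Height, h);
    CFDictionarySetValue(cfg, kVDADecoderConfiguration_SourceFormat, f);
    CFDictionarySetValue(cfg, kVDADecoderConfiguration_avcCData, avcc);

    VDADecoder decoder = NULL;
    OSStatus err = VDADecoderCreate(cfg, NULL, /* default pixel buffers */
                                    (VDADecoderOutputCallback *)noop_output,
                                    NULL, &decoder);
    /* kVDADecoderHardwareNotSupportedErr means no acceleration here. */
    if (decoder)
        VDADecoderDestroy(decoder);
    CFRelease(cfg); CFRelease(w); CFRelease(h); CFRelease(f);
    return err;
}

If the create call returns kVDADecoderHardwareNotSupportedErr, the application has to fall back to software decoding on the CPU.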
Application Support
It's great that all of these underlying pieces are in place, but if an individual application chooses to decode the video directly on the CPU, it's all for naught. An application needs to query the facilities and direct data through the API if it wants to leverage the acceleration. Obviously, at this point it becomes a matter of “which application?”

My friend eventually opted for the pricier of the desktop Mac Mini models, and we ran some ad-hoc tests since I was curious how widespread acceleration support is among Mac multimedia players. Here are the programs I wanted to test, playing 1080p H.264:
- Apple QuickTime Player
- VLC
- YouTube with Flash Player (any browser)
- YouTube with Safari/HTML5
- YouTube with Chrome/HTML5
- YouTube with Firefox/HTML5
- Netflix
I didn't take exhaustive notes, but my impromptu tests revealed that QuickTime Player was, far and away, the most performant player, occupying only around 5% of the CPU according to the Mac OS X System Profiler graph (which is likely largely spent on audio decoding).
VLC consistently required 20-30% CPU, so it's probably leveraging some acceleration facilities. I think that Flash Player and the various HTML5 <video> elements performed similarly (their multi-process architectures can make such a trivial profiling test difficult).
The outlier was Netflix running in Firefox via Microsoft's Silverlight plugin. Of course, the inner workings of Netflix's technology are opaque to outsiders, and we don't even know if it uses H.264. It may very well use Microsoft's VC-1, which is not a capability provided by the Mac OS X acceleration API (it doesn't look like the Intel HD 4000 chip can handle it either). I have never seen any data one way or another about how Netflix encodes video. However, I was able to see that Netflix required an enormous amount of CPU muscle on the Mac platform.
Conclusion
The foregoing is a slight simplification of the video playback pipeline. There are some other considerations, most notably how the video is displayed afterwards. To circle back around to the original question: can the Mac Mini handle full HD video playback? As my friend found, the meager Mac Mini can do an admirable job of playing full HD video without loading down the CPU.
- FFmpeg filter config with aecho fails to configure all the links and formats - avfilter_graph_config
  23 January 2021, by cs guy
  I am following FFmpeg's official tutorial for creating a filter chain. The tutorial shows how to pass data through a chain such as:




(input) -> abuffer -> volume -> aformat -> abuffersink -> (output)




Here is my code - sorry for the boilerplate, it is just the FFmpeg way :(


frame = av_frame_alloc();
 filterGraph = avfilter_graph_alloc();

 if (!frame) {
 *mediaLoadPointer = FAILED_TO_LOAD;
 LOGE("FXProcessor::FXProcessor Could not allocate memory for frame");
 return;
 }

 if (!filterGraph) {
 *mediaLoadPointer = FAILED_TO_LOAD;
 LOGE("FXProcessor::FXProcessor FXProcessor! %s", av_err2str(AVERROR(ENOMEM)));
 return;
 }

 const AVFilter *abuffer;
 const AVFilter *abuffersink;
 AVFilterContext *aformat_ctx;
 const AVFilter *aformat;
 AVFilterContext *choisen_beat_fx_ctx;
 const AVFilter *choisen_beat_fx;

 /* Create the abuffer filter;
 * it will be used for feeding the data into the graph. */
 abuffer = avfilter_get_by_name("abuffer");
 if (!abuffer) {
 *mediaLoadPointer = FAILED_TO_LOAD;
 LOGE("FXProcessor::FXProcessor Could not find the abuffer filter!");
 return;
 }
 abuffer_ctx = avfilter_graph_alloc_filter(filterGraph, abuffer, "src");
 if (!abuffer_ctx) {
 *mediaLoadPointer = FAILED_TO_LOAD;
 LOGE("FXProcessor::FXProcessor Could not allocate the abuffer_ctx instance! %s",
 av_err2str(AVERROR(ENOMEM)));
 return;
 }

 char ch_layout[64];
 /* Set the filter options through the AVOptions API. */
 av_get_channel_layout_string(ch_layout, sizeof(ch_layout), 0, AV_CH_LAYOUT_STEREO);
 av_opt_set(abuffer_ctx, "channel_layout", ch_layout, AV_OPT_SEARCH_CHILDREN);
 av_opt_set(abuffer_ctx, "sample_fmt", av_get_sample_fmt_name(AV_SAMPLE_FMT_FLT),
 AV_OPT_SEARCH_CHILDREN);
 av_opt_set_q(abuffer_ctx, "time_base", (AVRational) {1, defaultSampleRate},
 AV_OPT_SEARCH_CHILDREN);
 av_opt_set_int(abuffer_ctx, "sample_rate", defaultSampleRate, AV_OPT_SEARCH_CHILDREN);
 /* Now initialize the filter; we pass NULL options, since we have already
 * set all the options above. */

 if (avfilter_init_str(abuffer_ctx, nullptr) < 0) {
 *mediaLoadPointer = FAILED_TO_LOAD;
 LOGE("FXProcessor::FXProcessor Could not initialize the abuffer filter!");
 return;
 }

 // TODO: select FX's dynamically
 /* Create the effect filter (currently volume). */
 if (true) {

 choisen_beat_fx = avfilter_get_by_name("volume");
 if (!choisen_beat_fx) {
 *mediaLoadPointer = FAILED_TO_LOAD;
 LOGE("FXProcessor::FXProcessor Could not find the aecho filter!");
 return;
 }

 choisen_beat_fx_ctx = avfilter_graph_alloc_filter(filterGraph, choisen_beat_fx, "echo");
 if (!choisen_beat_fx_ctx) {
 *mediaLoadPointer = FAILED_TO_LOAD;
 LOGE("FXProcessor::FXProcessor Could not allocate the choisen_beat_fx_ctx instance! %s",
 av_err2str(AVERROR(ENOMEM)));
 return;
 }

 av_opt_set (choisen_beat_fx_ctx, "volume", AV_STRINGIFY(0.5), AV_OPT_SEARCH_CHILDREN);

 if (avfilter_init_str(choisen_beat_fx_ctx, nullptr) < 0) {
 *mediaLoadPointer = FAILED_TO_LOAD;
 LOGE("FXProcessor::FXProcessor Could not initialize the choisen_beat_fx_ctx filter!");
 return;
 }
 }

 /* Create the aformat filter;
 * it ensures that the output is of the format we want. */
 aformat = avfilter_get_by_name("aformat");
 if (!aformat) {
 *mediaLoadPointer = FAILED_TO_LOAD;
 LOGE("FXProcessor::FXProcessor Could not find the aformat filter!");
 return;
 }
 aformat_ctx = avfilter_graph_alloc_filter(filterGraph, aformat, "aformat");
 if (!aformat_ctx) {
 *mediaLoadPointer = FAILED_TO_LOAD;
 LOGE("FXProcessor::FXProcessor Could not allocate the aformat instance!");
 return;
 }

 av_opt_set(aformat_ctx, "sample_fmts", av_get_sample_fmt_name(AV_SAMPLE_FMT_FLT),
 AV_OPT_SEARCH_CHILDREN);
 av_opt_set_int(aformat_ctx, "sample_rates", defaultSampleRate, AV_OPT_SEARCH_CHILDREN);
 av_get_channel_layout_string(ch_layout, sizeof(ch_layout), 0, AV_CH_LAYOUT_STEREO);
 av_opt_set(aformat_ctx, "channel_layouts", ch_layout, AV_OPT_SEARCH_CHILDREN);

 if (avfilter_init_str(aformat_ctx, nullptr) < 0) {
 *mediaLoadPointer = FAILED_TO_LOAD;
 LOGE("FXProcessor::FXProcessor Could not initialize the aformat filter!");
 return;
 }

 /* Finally create the abuffersink filter;
 * it will be used to get the filtered data out of the graph. */
 abuffersink = avfilter_get_by_name("abuffersink");
 if (!abuffersink) {
 *mediaLoadPointer = FAILED_TO_LOAD;
 LOGE("FXProcessor::FXProcessor Could not find the abuffersink filter!");
 return;
 }

 abuffersink_ctx = avfilter_graph_alloc_filter(filterGraph, abuffersink, "sink");
 if (!abuffersink_ctx) {
 *mediaLoadPointer = FAILED_TO_LOAD;
 LOGE("FXProcessor::FXProcessor Could not allocate the abuffersink instance!");
 return;
 }

 /* This filter takes no options. */
 if (avfilter_init_str(abuffersink_ctx, nullptr) < 0) {
 *mediaLoadPointer = FAILED_TO_LOAD;
 LOGE("FXProcessor::FXProcessor Could not initialize the abuffersink instance.!");
 return;
 }

 /* Connect the filters;
 * in this simple case the filters just form a linear chain. */
 if (avfilter_link(abuffer_ctx, 0, choisen_beat_fx_ctx, 0) != 0) {
 *mediaLoadPointer = FAILED_TO_LOAD;
 LOGE("FXProcessor::FXProcessor Error connecting filters.!");
 return;
 }
 if (avfilter_link(choisen_beat_fx_ctx, 0, aformat_ctx, 0) != 0) {
 *mediaLoadPointer = FAILED_TO_LOAD;
 LOGE("FXProcessor::FXProcessor Error connecting filters.!");
 return;
 }
 if (avfilter_link(aformat_ctx, 0, abuffersink_ctx, 0) != 0) {
 *mediaLoadPointer = FAILED_TO_LOAD;
 LOGE("FXProcessor::FXProcessor Error connecting filters.!");
 return;
 }

 /* Configure the graph. */
 if (avfilter_graph_config(filterGraph, nullptr) < 0) {
 *mediaLoadPointer = FAILED_TO_LOAD;
 LOGE("FXProcessor::FXProcessor Error configuring the filter graph!");
 return;
 }
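For completeness: once avfilter_graph_config() succeeds, the tutorial I am following pushes frames in through the source and pulls them out of the sink roughly as below (a sketch; process_output is a placeholder name for my own consumer, not an FFmpeg call):

#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>

 /* Push one input frame into the graph through the abuffer source... */
 if (av_buffersrc_add_frame(abuffer_ctx, frame) < 0) {
     LOGE("Error feeding the filter graph");
 }

 /* ...then drain every frame the graph has ready from the sink. */
 while (av_buffersink_get_frame(abuffersink_ctx, frame) >= 0) {
     process_output(frame);   /* placeholder consumer */
     av_frame_unref(frame);   /* reuse the same AVFrame for the next pull */
 }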



This code works fine when the chain is:

(input) -> abuffer -> volume -> aformat -> abuffersink -> (output)
However, I would like to use aecho instead of the volume filter. So I want:

(input) -> abuffer -> aecho -> aformat -> abuffersink -> (output)

I changed the line

choisen_beat_fx = avfilter_get_by_name("volume");

to


choisen_beat_fx = avfilter_get_by_name("aecho");



and removed the line:


av_opt_set (choisen_beat_fx_ctx, "volume", AV_STRINGIFY(0.5), AV_OPT_SEARCH_CHILDREN);



Everything goes smoothly until the last call: avfilter_graph_config fails and returns a negative value. The function's documentation says:

avfilter_graph_config: Check validity and configure all the links and formats in the graph.
So my guess is that I need extra links to insert aecho into my chain? How can I insert aecho into my filter chain?
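For reference, FFmpeg's filtering_audio.c example builds the middle of the chain from a textual description instead of manual avfilter_link() calls. A sketch of that approach for this graph (reusing filterGraph, abuffer_ctx and abuffersink_ctx created above; the aecho parameters are taken from the filter's documentation examples) looks like:

 AVFilterInOut *outputs = avfilter_inout_alloc();
 AVFilterInOut *inputs  = avfilter_inout_alloc();

 /* Open pads: abuffer_ctx feeds the string's [in] label and
  * abuffersink_ctx consumes its [out] label. */
 outputs->name       = av_strdup("in");
 outputs->filter_ctx = abuffer_ctx;
 outputs->pad_idx    = 0;
 outputs->next       = NULL;

 inputs->name        = av_strdup("out");
 inputs->filter_ctx  = abuffersink_ctx;
 inputs->pad_idx     = 0;
 inputs->next        = NULL;

 /* The parser instantiates and links aecho and aformat itself;
  * avfilter_graph_config() then runs the same validation as before. */
 int ret = avfilter_graph_parse_ptr(filterGraph,
         "[in] aecho=0.8:0.9:1000:0.3, aformat=sample_fmts=flt [out]",
         &inputs, &outputs, nullptr);
 if (ret >= 0)
     ret = avfilter_graph_config(filterGraph, nullptr);

 avfilter_inout_free(&inputs);
 avfilter_inout_free(&outputs);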


- How to prevent gray overlays, transparency issues, and similar shader defects when using gl-transition filters
  29 January 2021, by Soren Wray
  I compiled and installed a local build of ffmpeg with support for gl-transition, adapted from the official compilation guide for Ubuntu. The build is configured with all the relevant packages and seems to work as intended. See the code samples at the end.


I know the gl-transition filter is installed because ./ffmpeg -v 0 -filters | grep gltransition outputs:

T.. gltransition VV->V OpenGL blend transitions


All sources were tested with the custom command string:

./ffmpeg -i ~/PATH/TO/INPUT1.mp4 -i ~/PATH/TO/INPUT2.mp4 -filter_complex "gltransition=duration=3:offset=1:source=/PATH/TO/EFFECT.glsl" -y ~/PATH/TO/OUTPUT.mp4

This produces a 3 second transition effect (duration=3) starting at 1 second (offset=1).

I've been testing the code sources for various transition effects listed in the gl-transition gallery and have encountered some unusual gray overlays at the transition points, likely having to do with alpha channel transparency. In many cases there are also shader or animation defects, e.g. windowslice.glsl renders only 1 slice when there are supposed to be 10, and WaterDrop.glsl simply fades out the clip in place of the intended ripple effect. Most complex animations seem to default to this monotonous gray overlay. I provide a gif example below for the GlitchedMemories.glsl transition.



I couldn't locate any other reports of this particular issue online. The documentation for gl-transitions is sorely lacking and the Stack Exchange network has very little information about this custom filter. I don't know how to fix the problem. It could have something to do with the codec or pixel format used, or some quirk of my build, but the technical details are beyond me.


Please note my compilation and configuration steps; perhaps the error is there:


sudo apt-get update -qq && sudo apt-get -y install autoconf automake build-essential cmake git-core libass-dev libfreetype6-dev libgnutls28-dev libsdl2-dev libtool libunistring-dev libva-dev libvdpau-dev libvorbis-dev libxcb1-dev libxcb-shm0-dev libxcb-xfixes0-dev pkg-config texinfo wget yasm zlib1g-dev

mkdir -p ~/ffmpeg_sources ~/bin

sudo apt-get install nasm libx264-dev libx265-dev libnuma-dev libvpx-dev libfdk-aac-dev libmp3lame-dev libopus-dev

cd ~/ffmpeg_sources && wget -O ffmpeg-snapshot.tar.bz2 https://ffmpeg.org/releases/ffmpeg-snapshot.tar.bz2 && tar xjvf ffmpeg-snapshot.tar.bz2

cd ~/ && git clone https://github.com/transitive-bullshit/ffmpeg-gl-transition.git



Open ~/ffmpeg-gl-transition/vf_gltransition.c in an editor and delete the line:

#define GL_TRANSITION_USING_EGL // remove this line if you don't want to use EGL


cd ~/ffmpeg_sources/ffmpeg && cp ~/ffmpeg-gl-transition/vf_gltransition.c libavfilter/

git apply ~/ffmpeg-gl-transition/ffmpeg.diff

PATH="$HOME/bin:$PATH" PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure \
 --prefix="$HOME/ffmpeg_build" \
 --pkg-config-flags="--static" \
 --extra-cflags="-I$HOME/ffmpeg_build/include" \
 --extra-ldflags="-L$HOME/ffmpeg_build/lib" \
 --extra-libs="-lpthread -lm" \
 --bindir="$HOME/bin" \
 --enable-gpl \
 --enable-opengl \
 --enable-gnutls \
 --enable-libass \
 --enable-libfdk-aac \
 --enable-libfreetype \
 --enable-libmp3lame \
 --enable-libopus \
 --enable-libvorbis \
 --enable-libvpx \
 --enable-libx264 \
 --enable-libx265 \
 --disable-shared \
 --enable-static \
 --enable-runtime-cpudetect \
 --enable-filter=gltransition \
 --extra-libs='-lGLEW -lglfw' \
 --enable-nonfree && \
PATH="$HOME/bin:$PATH" make

source ~/.profile