
Other articles (35)
-
Installation in farm mode
4 February 2011
Farm mode makes it possible to host several MediaSPIP-type sites while installing its functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge, since SPIP's usual private area is no longer used.
First of all, you must have installed the same files as the installation (...)
-
Apache-specific configuration
4 February 2011
Specific modules
For the Apache configuration, it is advisable to enable certain modules that are not specific to MediaSPIP but that improve performance: mod_deflate and mod_headers, to have Apache compress pages automatically (see this tutorial); mod_expires, to handle hit expiration correctly (see this tutorial).
It is also advisable to add Apache support for the WebM MIME type, as described in this tutorial.
Creation of a (...)
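As a rough illustration, the corresponding directives in an Apache configuration might look like the sketch below; the compressed MIME types, cached file types and cache lifetime are assumptions to adapt to your own setup:

    # Compress common text responses (requires mod_deflate)
    AddOutputFilterByType DEFLATE text/html text/css application/javascript

    # Let browsers cache static assets for a while (requires mod_expires)
    ExpiresActive On
    ExpiresByType image/png "access plus 1 month"

    # Serve WebM files with the correct MIME type
    AddType video/webm .webm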
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player used has been created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (4740)
-
Which is better for pixel-level analysis of television (TV) video, OpenCV or ffmpeg? [closed]
5 December 2011, by Randall Cook
I need to do some pixel-level analysis of television (TV) video. I have used ffmpeg in the past for analyzing video from files, but it wasn't exactly easy. I am thinking of giving OpenCV a try. Any recommendations or advice?
Let's assume that I am starting with an MPEG-2 transport stream, and the analysis needs to run in real-time on Linux. I was also planning on using Intel's IPP library for some of the number crunching.
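For a sense of what the OpenCV route can look like, here is a minimal sketch using the Python bindings; the capture source and the per-frame statistic are placeholders, and opening an MPEG-2 transport stream relies on OpenCV's FFmpeg backend:

    import cv2
    import numpy as np

    # Placeholder source: a file path or stream URL that OpenCV's
    # FFmpeg backend can open (e.g. a captured MPEG-2 TS).
    cap = cv2.VideoCapture("capture.ts")

    while True:
        ok, frame = cap.read()  # frame is a BGR numpy array (H x W x 3)
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        print(float(np.mean(gray)))  # stand-in for real pixel-level analysis

    cap.release()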
-
How to write a video encoder with ffmpeg?
27 December 2013, by SunnyShah
I want to write an encoder with ffmpeg which can put iFrames (keyframes) at positions I want. Where can I find tutorials or reference material for it?
P.S.
Is it possible to do this with mencoder or any open-source encoder? I want to encode an H.263 file. I am writing for Linux.
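As a pointer, ffmpeg's -force_key_frames option already lets you request keyframes at chosen timestamps without writing a custom encoder; a rough sketch driving it from Python, where the file names, timestamps and frame size are placeholders (H.263 only accepts a few fixed sizes such as CIF):

    import subprocess

    # Hypothetical keyframe positions, in seconds.
    keyframe_times = "0,2.5,7,12"

    subprocess.run([
        "ffmpeg", "-i", "input.avi",
        "-c:v", "h263",                       # H.263 encoder, as asked in the question
        "-s", "352x288",                      # CIF: one of the sizes H.263 supports
        "-force_key_frames", keyframe_times,  # request keyframes at these timestamps
        "output.3gp",
    ], check=True)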
-
Multiple video sources combined into one
28 September 2011, by Oded
I am looking for an efficient way to do the following:
Using several source videos (of approximately the same length), I need to generate an output video that is composed of all of the original sources, each running in its own area (like a bunch of PIPs in several different sizes). So, the end result is that all the originals are running side-by-side, each in its own area/box.
The source and output need to be flv, and the platform I am using is Windows (dev on Windows 7 64-bit, deployment to Windows Server 2008).
I have looked at AviSynth, but unfortunately it can't handle flv, and none of the plugins and flv splitters I have tried worked.
My current process uses ffmpeg in the following manner:
- Use ffmpeg to generate 25 PNGs per second per video, resizing the original as needed.
- Use the System.Drawing namespace to combine each set of frames into a new image: starting with a static background, each frame is loaded into an Image and drawn onto the background's Graphics object, which gives me the combined frames.
- Use ffmpeg to combine the generated images into a video (the two ffmpeg steps are sketched below).
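For illustration, the two ffmpeg steps might look roughly like this when scripted from Python; the file names, box size and 25 fps rate are placeholders:

    import subprocess

    # Step 1: extract 25 PNG frames per second from one source video,
    # resized to the size of the box it will occupy in the composite.
    subprocess.run([
        "ffmpeg", "-i", "source1.flv",
        "-r", "25", "-s", "320x240",
        "frames/source1_%05d.png",
    ], check=True)

    # Step 3: reassemble the composited frames into the output video.
    subprocess.run([
        "ffmpeg", "-r", "25",
        "-i", "combined/frame_%05d.png",
        "output.flv",
    ], check=True)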
All this is very IO intensive (which is my processing bottleneck at the moment) and I feel there must be a more efficient way to reach my goal. I do not have much experience with video processing, and don't know what options are out there.
Can anyone suggest a more efficient way of processing these?