
Media (1)
-
Video of a bee in portrait orientation
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (82)
-
Participating in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
At present MediaSPIP is only available in French and (...)
-
Retrieving information from the master site when installing an instance
26 November 2010, by
Purpose
On the main site, a shared-hosting (mutualisation) instance is defined by several things: the data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, matching an id_auteur in the spip_auteurs table), who will be the only one able to finalize the creation of the shared instance.
It can therefore make perfect sense to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)
-
Authorizations overridden by plugins
27 April 2010, by
MediaSPIP core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page
On other sites (13839)
-
Loading and unloading ffmpeg jni library based on when it's needed
30 September 2014, by Alin
I finally managed to compile ffmpeg for Android and I have been able to use it in my app.
Here is the scenario of my app:
- I show the user a gridview with thumbnails of images and videos
- the user can click on a cell and is taken to the image/video details, where they can see the full image or play the video
- the user can apply an image over a video, and this is when ffmpeg is used
So basically, the user might never actually use the watermarking option, or may use it only rarely, since there are far fewer videos available than images.
I am loading the ffmpeg library the first time it is needed, by running:
static {
    System.loadLibrary("ffmpeglib");
}
Now here are my questions:
- does loading the library like this use up the app's memory and resources?
- can I unload the library, or rather, does it need to be unloaded? I have not found any Java call such as System.unloadLibrary to take care of unloading
- since the library might be used rarely, wouldn't a load => encode => unload cycle be a better approach? Or maybe keeping it loaded would allow easy reuse, since no loading is necessary (see the lazy-loading sketch after this list)
- if I use an IntentService to load the library and do the encoding, does the library get unloaded when the service completes the job?
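A minimal sketch of one way to approach the questions above, under stated assumptions: the wrapper class FfmpegLib is invented for illustration, while "ffmpeglib" is the library name used in the post. The idea is to defer the native load until the watermarking path actually runs, instead of a static initializer that fires as soon as the enclosing class is first used.
// Hypothetical lazy-loading wrapper (class name invented for this sketch).
// Note there is no System.unloadLibrary(): the runtime only unloads a native
// library when the ClassLoader that loaded it is garbage-collected.
public final class FfmpegLib {
    private static boolean loaded = false;

    private FfmpegLib() {}

    // Call this only from the watermarking code path, so the .so is never
    // mapped into the process for users who never watermark a video.
    public static synchronized void ensureLoaded() {
        if (!loaded) {
            System.loadLibrary("ffmpeglib");
            loaded = true;
        }
    }
}
On the unload question: finishing an IntentService does not unload the library, because it stays tied to the app's ClassLoader. If reclaiming that memory really matters, one common pattern is to run the encoding service in its own process (android:process in the manifest) so the OS frees everything when that process exits; otherwise keeping the library loaded for reuse is the simpler choice.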
-
Decode h264 video
27 July 2020, by john bowring
I am looking for a way to decode h264 (or indeed any video format) using C#. The ultimate goal is to be able to decode the images and very strictly control playback in real time. The project I am working on is a non-linear video art piece in which the HD footage has to loop and edit itself on the fly, playing back certain frame ranges and then jumping seamlessly to the next randomly selected frame range.
I have created an app which reads image files (JPEGs) from disk and plays them on screen in order. I have total control over which frame is loaded and when it is displayed, but at full HD resolution it takes slightly longer than I want to load the images from the hard drive (they are about 500 KB each). I am thinking that a compressed video format would be smaller and therefore faster to read and decode into a particular frame, but I cannot find any readily available way to do this.
Are there any libraries which can do this? That is, can they extract an arbitrary frame from a video file and serve it to my app in less time than it takes to show the frame (running at 25 fps)? I have looked into the VLC libraries and wrappers for ffmpeg, but I don't know which would be better, or whether there is an even better option. I also don't know which codec would be the best choice, since some are keyframe-based, which probably makes arbitrary frame extraction very difficult.
Any advice welcome, thanks.
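For the keyframe concern, the usual approach is to seek to the nearest keyframe and decode forward to the requested frame, keeping the keyframe interval short (or using an all-intra codec) so the forward decode stays cheap. The only code on this page is Java, so the sketch below illustrates that idea with the JavaCV wrapper around ffmpeg rather than C#; the class and method names are JavaCV's, the file name and frame index are placeholders, and a C# solution would need an equivalent ffmpeg binding.
// Sketch of seek-then-decode with JavaCV (org.bytedeco.javacv); illustrative only.
import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.Java2DFrameConverter;
import java.awt.image.BufferedImage;

public class FrameSeekSketch {
    public static void main(String[] args) throws Exception {
        FFmpegFrameGrabber grabber = new FFmpegFrameGrabber("clip.mp4"); // placeholder path
        grabber.start();
        // Seek to an arbitrary frame index: ffmpeg jumps to the nearest keyframe
        // and decodes forward, so mid-GOP seeks cost extra decoding time.
        grabber.setFrameNumber(250); // placeholder frame index
        Frame frame = grabber.grabImage(); // next decoded video frame
        BufferedImage img = new Java2DFrameConverter().convert(frame);
        System.out.println("Decoded frame: " + img.getWidth() + "x" + img.getHeight());
        grabber.stop();
        grabber.release();
    }
}
The trade-off the question hints at is real: a compressed stream reads far less data from disk than per-frame JPEGs, but random access costs decode time that grows with the distance back to the previous keyframe, so the keyframe interval is the main knob to tune for seamless jumps.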
-
VP8 Codec SDK "Aylesbury" Release
28 October 2010, by noreply@blogger.com (John Luther)
Today we're making available "Aylesbury," our first named release of libvpx, the VP8 codec SDK. VP8 is the video codec used in WebM. Note that the VP8 specification has not changed, only the SDK.
What's an Aylesbury? It's a breed of duck. We like ducks, so we plan to use duck-related names for each major libvpx release, in alphabetical order. Our goal is to have one named release of libvpx per calendar quarter, each with a theme.
You can download the Aylesbury libvpx release from our Downloads page or check it out of our Git repository and build it yourself. In the coming days Aylesbury will be integrated into all of the WebM project components (DirectShow filters, QuickTime plugins, etc.). We encourage anyone using our components to upgrade to the Aylesbury releases.
For Aylesbury the theme was faster decoder, better encoder. We used our May 19, 2010 launch release of libvpx as the benchmark. We're very happy with the results (see graphs below):
- 20-40% (average 28%) improvement in libvpx decoder speed
- Over 7% overall PSNR improvement (6.3% SSIM) in VP8 "best" quality encoding mode, and up to 60% improvement on very noisy, still or slow moving source video.
The main improvements to the decoder are:
- Single-core assembly "hot spot" optimizations, including improved vp8_sixtap_predict() and SSE2 loopfilter functions
- Threading improvements for more efficient use of multiple processor cores
- Improved memory handling and reduced footprint
- Combining IDCT and reconstruction steps
- SSSE3 usage in functions where appropriate
On the encoder front, we concentrated on clips in the 30-45 dB range and saw the biggest gains in higher-quality source clips (greater than 38 dB), low to medium-motion clips, and clips with noisy source material. Many code contributions made this possible, but a few of the highlights were:
- Adaptive width and strength alternate reference frame noise suppression filter with optional motion compensation.
- Transform improvements (improved accuracy and reduction in round trip error)
- Trellis-based quantized coefficient optimization
- Two-pass rate control and quantizer changes
- Rate distortion changes
- Zero bin and rounding changes
- Work on MB-level quality control and bit allocation
We’re targeting Q1 2011 for the next named libvpx release, which we’re calling Bali. The theme for that release will be faster encoder. We are constantly working on improvements to video quality in the encoder, so after Aylesbury we won’t tie that work to specific named releases.
WebM at Streaming Media West
Members of the WebM project will discuss Aylesbury during a session at the Streaming Media West conference on November 3rd (session C203: WebM Open Video Project Update). For more information, visit www.streamingmedia.com/west.
John Luther is Product Manager of the WebM Project.