
Media (2)
-
Granite of the Aber Ildut
9 September 2011
Updated: September 2011
Language: French
Type: Text
-
Geodiversity
9 September 2011
Updated: August 2018
Language: French
Type: Text
Other articles (38)
-
Media quality after processing
21 June 2013
Getting the settings right in the software that processes media matters for striking a balance between the parties involved (the host's bandwidth, media quality for the editor and the visitor, accessibility for the visitor). How should you set the quality of your media?
The higher the media quality, the more bandwidth is used, and visitors with a low-bandwidth internet connection will have to wait longer. Conversely, the lower the media quality, the more degraded the media becomes, or even (...)
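As a rough illustration of this trade-off, and not the article's actual settings (the file names, codecs and bitrates below are placeholders), the same source can be encoded at two different bitrates and the resulting file sizes compared:
# higher bitrate: better quality, more bandwidth used per visitor
ffmpeg -i source.mp4 -c:v libx264 -b:v 1500k -c:a aac -b:a 128k high_quality.mp4
# lower bitrate: smaller file, quicker to load on slow connections, visibly degraded
ffmpeg -i source.mp4 -c:v libx264 -b:v 400k -c:a aac -b:a 64k low_quality.mp4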
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can of course add yours using the form at the bottom of the page.
-
Managing rights to create and edit objects
8 February 2011
By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, configurable in the form-template management; adding notes to articles; adding captions and annotations to images;
On other sites (4810)
-
Transcode HLS Segments individually using FFMPEG
27 May 2013, by rayh
I am recording a continuous, live stream to a high-bitrate HLS stream. I then want to asynchronously transcode this to different formats/bitrates. I have this working, mostly, except that audio artefacts appear between each segment (gaps and pops).
Here is an example ffmpeg command line:
ffmpeg -threads 1 -nostdin -loglevel verbose \
-y -i input.ts -c:a libfdk_aac \
-ac 2 -b:a 64k -vn output.ts
Inspecting an example sound file shows that there is a gap at the end of the audio:
And the start of the file looks suspiciously attenuated (although this may not be an issue):
My suspicion is that these artefacts appear because each segment is transcoded without the context of the stream as a whole.
Any ideas on how to convince FFMPEG to produce audio that will fit back into an HLS stream?
** UPDATE 1 **
Here are the start/end of the original segment. As you can see, the start still appears the same, but the end is cleanly cut at 30s. I expect some degree of padding with lossy encoding, but presumably there is some way that HLS manages to do gapless playback (is this related to the iTunes method with custom metadata?)
** UPDATE 2 **
So, I converted both the original (128k aac in MPEG2 TS) and the transcoded (64k aac in aac/adts container) to WAV and put the two side-by-side. This is the result:
I'm not sure if this is representative of how a client will play it back, but it seems a bit odd that decoding the transcoded one introduces a gap at the start and makes the segment longer. Given they are both lossy encoding, I would have expected padding to be equally present in both (if at all).
** UPDATE 3 **
According to http://en.wikipedia.org/wiki/Gapless_playback, only a handful of encoders support gapless playback. For MP3, I've switched to lame in ffmpeg, and the problem, so far, appears to have gone away.
For AAC (see http://en.wikipedia.org/wiki/FAAC), I have tried libfaac (as opposed to libfdk_aac) and it also seems to produce gapless audio. However, the quality of the latter isn't that great and I'd rather use libfdk_aac if possible.
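As a minimal sketch of the LAME workaround mentioned above (not the asker's exact command; the file names and the 64k bitrate are placeholders), a single segment's audio could be re-encoded like this:
# re-encode only the audio of one HLS segment with LAME instead of libfdk_aac (hypothetical file names)
ffmpeg -nostdin -y -i input.ts -vn -c:a libmp3lame -b:a 64k -ac 2 output.mp3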
-
Frame Accurate Seeking in WebM
11 January 2016, by SapphireSun
I'm trying to do a somewhat tricky thing with WebM. I am trying to encode a stack of 256 biological images as a WebM. The time dimension of motion is very similar to the space dimension of the image stack, so the compression ratios are insanely good. However, I am having trouble decoding the movie frames. I know that WebM uses an IPB predictive coding scheme, but I was reading several blog posts and discussion groups from WHATWG from 2011, and they said that frame-accurate seeking was working in Chrome at that time.
When I do video.currentTime = 0, I correctly get this:
However, if I do video.currentTime = 0.34 (for example), I get something that looks like this:
It looks like I'm getting a random, poorly predicted frame. Am I just encoding the video wrong? When I play it normally it looks fine.
I encoded the video from 256 PNGs using ffmpeg compiled with libvpx, with the VP8 codec.
ffmpeg -y -framerate 60 -start_number 0 -pattern_type glob -i '*.png' -qmin 10 -qmax 42 out.webm
References to the WHATWG and some other info from 2011:
WHATWG discusses frame accuracy:
https://lists.w3.org/Archives/Public/public-whatwg-archive/2011Jan/0372.html
BBC Tech Director talking about frame accuracy:
http://www.bbc.co.uk/blogs/bbcinternet/2011/02/frame_accurate_video_in_html5.html
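If the garbled frames come from seeking to a predicted (inter) frame, one thing worth trying, purely as an assumption layered on the question's own command, is forcing every frame to be a keyframe so that any currentTime lands on an independently decodable frame, at the cost of a much larger file:
# -g 1 makes every frame a keyframe (intra-only), trading file size for seekability
ffmpeg -y -framerate 60 -start_number 0 -pattern_type glob -i '*.png' \
-c:v libvpx -g 1 -qmin 10 -qmax 42 out.webm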
-
Posthoc connect FFMPEG to opencv-python binary for Google Cloud Dataflow job
16 July 2017, by bw4sz
I would like to run some video processing jobs on Linux-based Compute Engine workers on Google Cloud Dataflow. Cloud Dataflow requires you to build a setup.py file, or supply dependencies in a requirements.txt.
https://cloud.google.com/dataflow/pipelines/dependencies-python
My video processing requires OpenCV in Python with FFMPEG support. I would like to avoid building OpenCV from source, as this takes nearly 35 minutes for each worker to git clone/make/make install.
There is a Linux Python binary .whl that works great, but it's specifically compiled without FFMPEG.
From https://pypi.python.org/pypi/opencv-python
"IMPORTANT NOTE
MacOS and Linux wheels have currently some limitations:
video related functionality is not supported (not compiled with FFmpeg)"
Is it possible to post-hoc connect FFMPEG to the binary? That is, download FFMPEG and its libraries separately and still read video? I know this is contrived, but are there any options here besides building OpenCV from source for every new worker?
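One quick check before committing to a source build, sketched here under my own assumptions rather than anything stated in the question, is to install the wheel plus a system FFmpeg on a worker image and inspect OpenCV's build report; the Video I/O section shows whether FFMPEG support was compiled in, which installing FFmpeg afterwards does not by itself change:
# install a system ffmpeg and the prebuilt wheel (hypothetical Debian-based worker image)
apt-get update && apt-get install -y ffmpeg
pip install opencv-python
# print the build configuration and look for FFMPEG under "Video I/O"
python -c "import cv2; print(cv2.getBuildInformation())"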