
Media (2)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
-
Carte de Schillerkiez
13 May 2011
Updated: September 2011
Language: English
Type: Text
Other articles (40)
-
Automatic backup of SPIP channels
1 April 2010. As part of setting up an open platform, it is important for hosts to have fairly regular backups on hand to cope with any problem that might arise.
This task relies on two SPIP plugins: Saveauto, which makes regular backups of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (documents, elements (...) -
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out. -
Supported formats
28 January 2010. The following commands give information about the formats and codecs supported by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 Part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
Possible output video formats
As a first step we (...)
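The two probe commands in this teaser can be tried directly; a quick sanity check (assuming ffmpeg is on the PATH) is to filter the codec list for H.264 support:

```shell
# List every codec this ffmpeg build knows, then keep only the H.264 entries.
ffmpeg -hide_banner -codecs | grep -i h264
```

The same pattern works with `ffmpeg -formats` to check container support.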
On other sites (6513)
-
ffmpeg: why isn't metadata being printed?
13 March 2020, by akc42
I would like to capture real-time metadata about the volume of my audio stream while encoding it into a FLAC file. Ultimately I want to embed all this in a Node.js-based web application. I asked a question a while back using the showvolume filter and got all of that working with streaming video, running ffmpeg as a subprocess. I want to repeat this with a text-based output, such as I believe you can get with the astats filter.
Here is the command I tried:
ffmpeg -hide_banner -nostats -f alsa -acodec pcm_s16le -ac:0 2 -ar 480000 -i hw:CARD=Microphone -af astats=metadata=1:length=1:reset=1 -af ametadata=mode=print:key=lavfi.astats.Overall.Peak_level_dB:file=- -acodec flac test.flac 2>log.txt
I expected to see stats on the standard output, but saw nothing. The flac file was fine and the log file didn’t show any problems.
I thought I understood how astats worked; as configured, it should add metadata to the audio stream in one-second samples of the audio. The second filter watches the metadata in the stream and, when the peak level in dB is seen, should write it to the file called standard output.
Obviously I have misunderstood something, and I haven't really found any examples to check against. Can someone help?
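For what it's worth, one likely culprit in the command above is that `-af` is given twice: ffmpeg keeps only the last `-af` for a stream, so the astats filter never runs. A sketch with both filters chained in a single `-af`, using a lavfi sine source as a stand-in for the ALSA capture, and assuming the metadata key is `lavfi.astats.Overall.Peak_level` (without the `_dB` suffix):

```shell
# One -af chain: astats injects per-frame metadata, ametadata prints the
# overall peak level to standard output (file=-). The sine test source
# replaces the ALSA input here; swap it back for real capture.
ffmpeg -hide_banner -nostats -f lavfi -i "sine=frequency=440:duration=2" \
  -af "astats=metadata=1:reset=1,ametadata=mode=print:key=lavfi.astats.Overall.Peak_level:file=-" \
  -f null -
```

With the real ALSA input, the FLAC encoder output would replace `-f null -` as in the original command.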
-
How to remove silence from an audio file while leaving a bit of the otherwise deleted portion?
26 February 2020, by A Question Asker
Pardon the title; it's a bit difficult to explain concisely!
I have a bunch of short (1s-4s) recordings of people reading sentences, made by the speakers themselves. Often there is a bit of silence at the beginning or end, and I would like to automate removing that.
As a first pass, I used this command:
sox original.mp3 edited.mp3 silence 1 0.1 2% reverse silence 1 0.1 2% reverse
The problem is that when people speak, sometimes the very beginning of their speech is quiet and quickly ramps up. This command thus clipped the very beginning of a number of sentences.
So what I would like to do is essentially the same as what the sox command does, but ideally once it detects the boundary, it leaves 50ms before the boundary, in recognition that there may be some quiet but important sounds.
I think that ffmpeg might be a good tool for this, as I don't think sox can be configured like that, but I'm tool-agnostic. I don't know ffmpeg at all, though, so I would appreciate any help putting together a command given ffmpeg's rather arcane syntax!
Another nice-to-have over the aforementioned sox command would be for it to remove silence only at the beginning and end (if it exists), not from the middle of the recording.
UPDATE: based on time spent with ffmpeg, it looks like I want something like this:
ffmpeg -i input.mp3 -af "silenceremove=start_periods=1:start_threshold=0.02:start_silence=0.1:detection=peak,areverse,silenceremove=start_periods=1:start_threshold=0.02:start_silence=0.1:detection=peak,areverse" output.mp3
Oddly, this comes out garbled. I thought it wasn't reversing properly, but when I try something like
ffmpeg -i input.mp3 -af areverse output.mp3
it doesn't work: I just get the input back. I'm not sure what is going wrong.
If I just do the beginning,
ffmpeg -i input.mp3 -af "silenceremove=start_periods=1:start_threshold=0.02:start_silence=0.1:detection=peak" output.mp3
it works fine for the beginning. But I'm not sure why the reverse trick results in garbled output.
UPDATE 2: it actually looks like the command works, but it changes the metadata in a way that makes iTunes wonky. If I play it with VLC instead, it works fine.
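If the problem really is metadata-related, a hedged workaround is to add `-map_metadata -1`, which drops all container/tag metadata from the output. Note too that `start_silence` is what keeps a little of the trimmed silence (0.1s in the command above; 0.05 would keep the 50ms asked for). A sketch, with a lavfi sine source standing in for input.mp3 and WAV output so the example does not depend on an MP3 encoder being built in:

```shell
# Same double-reverse trim as in the update, plus -map_metadata -1 to strip
# all container metadata from the trimmed file.
ffmpeg -y -hide_banner -f lavfi -i "sine=frequency=440:duration=2" \
  -af "silenceremove=start_periods=1:start_threshold=0.02:start_silence=0.05:detection=peak,areverse,silenceremove=start_periods=1:start_threshold=0.02:start_silence=0.05:detection=peak,areverse" \
  -map_metadata -1 trimmed.wav
```

For the real files, `trimmed.wav` would become an .mp3 output again.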
-
How to create a radio or video "station" continuous TS stream with ffmpeg
25 February 2020, by Zibri
I can't find any answer to this scenario:
Produce a continuous TS stream that initially contains silence (or blank video) and, as I add files to a text file, queues them into the output TS stream.
Think of it as a radio or tv station.
I found everything on how to stream anything to anything, but not a TV/radio broadcast-like stream.
A good start seems to be this:
ffmpeg -re -y -nostats -nostdin -hide_banner -loglevel quiet -fflags +genpts -f concat -i list.txt -map 0:a? -map 0:v? -map 0:s? -strict -2 -dn -c copy -hls_flags delete_segments -hls_time 10 -hls_list_size 6 /var/www/html/showname_.m3u8
list.txt example
file '/mnt/vusolo-tv/show/Season 1/S01E01.mp4'
file '/mnt/vusolo-tv/show/Season 1/S01E02.mp4'
But I think there can be better ways...
This is not a good answer either : From multiple video files to single output
because "All files must have streams with identical encoding. Timebase for streams should be the same. Duration of all streams within a file should be the same, in order to maintain sync."
And I want to be able to add arbitrary files and produce a TS stream playable by any TV (once broadcast over DVB).
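One possible angle on the "station" idea, sketched here as an assumption rather than a tested recipe: the concat demuxer appears to parse its list file lazily, so the list can be seeded with a silence/blank filler and have entries appended while ffmpeg runs, as long as each new line is written before the current entry finishes. A minimal list file for the command above (the filler filename is hypothetical):

```
ffconcat version 1.0
file 'silence_filler.ts'
file '/mnt/vusolo-tv/show/Season 1/S01E01.mp4'
```

The identical-encoding caveat quoted above still applies: the filler and the shows would need matching codecs and stream parameters for `-c copy` to produce a clean TS.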