
Other articles (61)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page -
Videos
21 April 2011
Like "audio"-type documents, MediaSPIP displays videos wherever possible using the HTML5 <video> tag.
One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name one), and each browser natively supports only certain video formats.
Its main advantage is native video playback in the browser, which avoids relying on Flash and (...) -
Using and configuring the script
19 January 2011
Information specific to the Debian distribution
If you use this distribution, you will need to enable the "debian-multimedia" repositories as explained here:
Since version 0.3.1 of the script, the repository can be enabled automatically after a prompt.
Retrieving the script
The installation script can be retrieved in two different ways.
Via svn, using the command to fetch the up-to-date source code:
svn co (...)
On other sites (6924)
-
FFMPEG encoding in multiple resolutions for adaptive streaming
9 January 2020, by thatman
I am using the following ffmpeg command to encode an mp4 video into different resolutions for adaptive HLS/DASH streaming:
ffmpeg -y -nostdin -loglevel error -i INPUT.mp4 \
-map 0:v:0 -map 0:v:0 -map 0:v:0 -map 0:v:0 -map 0:v:0 -map 0:v:0 -map 0:a\?:0 \
-maxrate:v:0 350k -bufsize:v:0 700k -c:v:0 libx264 -filter:v:0 "scale=320:-2" \
-maxrate:v:1 1000k -bufsize:v:1 2000k -c:v:1 libx264 -filter:v:1 "scale=640:-2" \
-maxrate:v:2 3000k -bufsize:v:2 6000k -c:v:2 libx264 -filter:v:2 "scale=1280:-2" \
-maxrate:v:3 300k -bufsize:v:3 600k -c:v:3 libvpx-vp9 -filter:v:3 "scale=320:-2" \
-maxrate:v:4 1088k -bufsize:v:4 2176k -c:v:4 libvpx-vp9 -filter:v:4 "scale=640:-2" \
-maxrate:v:5 1500k -bufsize:v:5 3000k -c:v:5 libvpx-vp9 -filter:v:5 "scale=1280:-2" \
-use_timeline 1 -use_template 1 -adaptation_sets "id=0,streams=v id=1,streams=a" \
-threads 8 -seg_duration 5 -hls_init_time 1 -hls_time 5 -hls_playlist true -f dash OUTPUT.mpd
But the command fails with this error:
Only '-vf scale=320:640' read, ignoring remaining -vf options: Use ',' to separate filters
Only '-vf scale=640:1280' read, ignoring remaining -vf options: Use ',' to separate filters
Only '-af (null)' read, ignoring remaining -af options: Use ',' to separate filters
Please help in resolving the issue. Thanks in advance!
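One common way to express per-rendition scaling without repeating `-filter:v:N` per output stream is a single `-filter_complex` graph that splits the decoded video once. The sketch below is not the poster's corrected command; the bitrates, buffer sizes, and pad labels are illustrative, and it produces only the three H.264 renditions for brevity:

```shell
# Sketch: derive three H.264 renditions from one input with a single
# filter graph instead of per-stream -filter:v:N options.
# Bitrate/buffer numbers are illustrative, not tuned values.
ffmpeg -y -i INPUT.mp4 \
  -filter_complex "[0:v]split=3[v1][v2][v3];\
[v1]scale=320:-2[v1out];[v2]scale=640:-2[v2out];[v3]scale=1280:-2[v3out]" \
  -map "[v1out]" -c:v:0 libx264 -maxrate:v:0 350k  -bufsize:v:0 700k \
  -map "[v2out]" -c:v:1 libx264 -maxrate:v:1 1000k -bufsize:v:1 2000k \
  -map "[v3out]" -c:v:2 libx264 -maxrate:v:2 3000k -bufsize:v:2 6000k \
  -map 0:a:0? -c:a aac \
  -use_timeline 1 -use_template 1 -seg_duration 5 \
  -adaptation_sets "id=0,streams=v id=1,streams=a" \
  -f dash OUTPUT.mpd
```

Each `split` output is scaled once and mapped as its own output stream, so no two streams compete for the same `-filter` option.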
-
How do you determine the end of a file in a stream containing multiple streams? (nodejs)
26 December 2019, by Danielkent
I would like to split an audio file into multiple segments using ffmpeg in an AWS Lambda (NodeJS) function.
Due to the limitations of (and to optimise for) the lambda environment I would like to stream the audio into ffmpeg, perform the split on the audio file in the stream and then stream the now multiple smaller files out to s3.
After doing some research I have found the AWS S3 SDK doesn’t support multiple file uploads in one stream. I could resolve this by finding the end of each new segment (file in the output stream) and creating a separate upload to s3.
Is there a way to determine the end of a file in a stream (containing multiple files), without saving it to the file system or loading it into memory?
I have searched around and I can’t seem to find an answer.
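The splitting step itself (separate from the S3 upload question) can be expressed with ffmpeg's segment muxer, which emits each piece as a distinct file. This is a sketch; the filename, pattern, and duration are illustrative:

```shell
# Split an audio file into roughly 60-second pieces without re-encoding.
# Each segment becomes its own file (out000.mp3, out001.mp3, ...),
# so each can then be uploaded to S3 as a separate object.
ffmpeg -i input.mp3 -f segment -segment_time 60 -c copy out%03d.mp3
```

Because `-c copy` avoids re-encoding, segment boundaries fall on the nearest frame boundary rather than at exactly 60 seconds.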
-
Redirect FFMPEG's output to multiple named pipes on Windows
27 August 2024, by tearvisus
I am trying to stream video and audio data into two separate named pipes on Windows.



ffmpeg.exe -f dshow -i video="My camera name":audio="My microphone name" -map 0:1 -ac 1 -f f32le \\.\pipe\audioStream -map 0:0 -f mjpeg \\.\pipe\videoStream




The problem is that FFMPEG does not seem to understand that the outputs \\.\pipe\audioStream and \\.\pipe\videoStream are pipes, and treats them like files:

- If the pipes are already created when FFMPEG starts, it wants to overwrite them and fails.
- Otherwise, it complains that the path does not exist and fails.

As far as I understand, specifying the pipe: protocol should do the trick, but I can't figure out how to use it properly, even with a single pipe. I have tried:


- pipe:pipeName
- pipe:pipe\pipeName
- pipe:\\.\pipe\pipeName
- pipe://pipeName
- pipe://pipe\pipeName
- pipe://\\.\pipe\pipeName

I always end up with the same result: the output is written to the console and not to the pipe. If the pipe already exists when FFMPEG starts, nothing connects to it.



Is it possible to use FFMPEG with named pipes on Windows? If yes, what is the proper way to do this?
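For context, ffmpeg's pipe: protocol is documented to address numeric file descriptors (pipe:0, pipe:1, ...) rather than pipe names, so one pattern is to let the process that owns the named pipe hand it to ffmpeg as stdout. The sketch below assumes a pipe server for \\.\pipe\videoStream already exists and that the shell redirection can open it as a client; both are assumptions, not from the original post:

```shell
# pipe:1 is ffmpeg's stdout; pipe:2 would be stderr. A parent process
# that created \\.\pipe\videoStream as a pipe server can attach the
# pipe's client handle to ffmpeg's stdout, e.g. via redirection:
ffmpeg -f dshow -i video="My camera name" -f mjpeg pipe:1 > \\.\pipe\videoStream
```

With two outputs (audio and video), only one can use stdout, so the second stream would need a separately inherited handle or a second ffmpeg process.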