
Other articles (80)
-
Improving the base version
13 September 2013. A nicer multiple select
The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
To use it, enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
-
Emballe médias: what is it for?
4 February 2011. This plugin is intended for managing sites that publish documents of all kinds.
It creates "media" items, meaning: a "media" item is a SPIP article created automatically when a document is uploaded, whether it is audio, video, image or text; only one document can be attached to a given "media" article;
-
Contributing to its translation
10 April 2011. You can help us improve the wording used in the software, or translate it into any new language so it can reach new linguistic communities.
To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. Just subscribe to the translators' mailing list to ask for more information.
MediaSPIP is currently available only in French and (...)
On other sites (9881)
-
ffmpeg-python : combine live video/audio streams into one file
3 April 2021, by Greendrake
I am using python-dvr to pull raw video/audio streams from IP cameras. It parses the binary data into chunks of video and audio and writes them to separate files like this:


with open("file.video", "wb") as v, open("file.audio", "wb") as a:
    def receiver(frame, meta, user):
        if 'frame' in meta:
            v.write(frame)
        if 'type' in meta and meta["type"] == "g711a":
            a.write(frame)

    cam.start_monitor(receiver)



I could then use the ffmpeg binary to combine the two files into one. But I want the script to do this straight away, continuously (splitting the combined stream into, say, 10-minute clips, but that would be a separate question).

It looks like ffmpeg-python's ffmpeg.output could do it. But, with virtually no experience in Python, I can't immediately get my head around it. The syntax goes:


ffmpeg.output(stream1[, stream2, stream3…], filename, **ffmpeg_args)



How do I use that in the code above? I do not have "streams" as such there. Instead, the receiver function is called in a loop with frames/chunks of binary data, which could be video or audio. How do I "pipe" them into the ffmpeg.output function?
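
One possible approach, sketched below and not taken from the original post, is to skip the intermediate files and feed ffmpeg live through two named pipes, using plain subprocess rather than the ffmpeg-python wrapper. The pipe names, the input formats (-f h264 for the video chunks, -f alaw for the G.711a audio) and the output settings are assumptions based on the question; cam is the python-dvr handle from the snippet above.

import os
import queue
import subprocess
import threading

# Hypothetical sketch: mux the two raw streams live via named pipes.
os.mkfifo("video.fifo")
os.mkfifo("audio.fifo")

proc = subprocess.Popen([
    "ffmpeg", "-y",
    "-f", "h264", "-i", "video.fifo",                  # raw H.264 elementary stream (assumed)
    "-f", "alaw", "-ar", "8000", "-i", "audio.fifo",   # raw G.711 A-law samples (assumed)
    "-c:v", "copy", "-c:a", "aac",
    "combined.mp4",
])

# The camera callback must never block, so it only queues frames;
# one writer thread per pipe does the blocking open()/write() calls.
video_q, audio_q = queue.Queue(), queue.Queue()

def drain(path, q):
    with open(path, "wb") as f:   # blocks until ffmpeg opens the read end
        while True:
            f.write(q.get())

threading.Thread(target=drain, args=("video.fifo", video_q), daemon=True).start()
threading.Thread(target=drain, args=("audio.fifo", audio_q), daemon=True).start()

def receiver(frame, meta, user):
    if 'frame' in meta:
        video_q.put(frame)
    if 'type' in meta and meta["type"] == "g711a":
        audio_q.put(frame)

# cam is the python-dvr device object from the original snippet.
cam.start_monitor(receiver)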

-
ffmpeg - using bash variable for multiple inputs. Problem with spaces in filenames [duplicate]
30 August 2019, by drake7
I want to concatenate files with different codecs. My code only works for filenames without spaces.
I have a fair amount of experience with using ffmpeg with bash.
However, everything I've tried to solve the problem didn't work. Let's say my videos are "01 video.mov" and "02 video.mp4".
Not quoting $i (this is the current state of my script):
ffmpegInput="$ffmpegInput -i ${i}"
results in:
01: No such file or directory
Quoting $i:
ffmpegInput="$ffmpegInput -i \"${i}\""
results in:
"01: No such file or directory
This code only works for filenames without spaces:

ffmpegInput=""
for i in *
do
    ffmpegInput="$ffmpegInput -i ${i}"
done
ffmpeg ${ffmpegInput} -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[outv][outa]" -map "[outv]" -map "[outa]" ../output.mkv

I didn't expect to encounter problems with quoting.
But the fact that I store all inputs in a variable or an array makes it difficult.
Although the variable stores the string:
-i "01 video.mov" -i "02 video.mp4"
ffmpeg doesn't seem to treat the first " as a valid quote.
Since ffmpeg throws the error "01: No such file or directory,
it seems as if the " is actually there.
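
One way to sidestep the shell-quoting problem entirely, sketched below as an alternative rather than a bash fix, is to build the argument list in Python and hand it to ffmpeg as a list, so no shell ever re-parses the filenames. The glob pattern, output name and two-input concat assumption are illustrative, not taken from the question.

import glob
import subprocess

# Hypothetical sketch: each filename stays a single argument, spaces and all,
# because no shell word-splitting is involved.
inputs = sorted(glob.glob("*.mov") + glob.glob("*.mp4"))

cmd = ["ffmpeg"]
for name in inputs:
    cmd += ["-i", name]

# Same concat filter as in the question, assuming exactly two inputs.
cmd += [
    "-filter_complex",
    "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[outv][outa]",
    "-map", "[outv]", "-map", "[outa]",
    "../output.mkv",
]

subprocess.run(cmd, check=True)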
-
Live jpeg sequence image to RTMP fps drop
6 April 2021, by m0j1
I want to encode a sequence of JPEG images (captured from my camera in Unity) into an H.264 live video stream. I'm using the following ffmpeg command:


-f image2pipe -vcodec mjpeg -r 25 -i udp://127.0.0.1:6000 -r 25 -preset slow -vcodec libx264 -tune zerolatency -b:v 800k -maxrate 800k -bufsize 400k -f mpegts udp://127.0.0.1:5701



On the other side, I play back this live stream using ffplay (ffplay -fflags nobuffer -flags low_delay -framedrop udp://192.168.189.112:5701) and everything seems to be fine.

After that, I want to use a media server to restream this to other clients, so I chose Node Media Server, and with the following ffmpeg command I stream my image sequence to the media server:

-f image2pipe -vcodec mjpeg -r 25 -i udp://127.0.0.1:6000 -r 25 -preset slow -vcodec libx264 -tune zerolatency -b:v 800k -maxrate 800k -bufsize 400k -f flv rtmp://192.168.189.112/live/STREAM_NAME



But when I use ffplay to play back this live stream with the following command, I get a choppy stream that looks like it is running at about 5 fps:


ffplay -fflags nobuffer -flags low_delay -framedrop rtmp://192.168.189.112/live/STREAM_NAME

 



I also tried first recording the live stream with Node Media Server and then playing it back, to check whether the data was being received correctly, and surprisingly the recorded video is perfect.

I wanted to ask if anyone has any tips or experience with this.