
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (56)
-
Publier sur MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
-
Installation in standalone mode
4 February 2011, by
Installing the MediaSPIP distribution is done in several steps: retrieving the necessary files (at this point two methods are possible: installing the ZIP archive containing the whole distribution, or retrieving the sources of each module separately via SVN); preconfiguration; final installation.
[mediaspip_zip] Installing the MediaSPIP ZIP archive
This installation mode is the simplest way to install the whole distribution (...)
-
Installation in farm mode
4 February 2011, by
Farm mode makes it possible to host several MediaSPIP sites while installing its functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since the usual SPIP private area is no longer used.
To begin with, you must have installed the same files as the installation (...)
On other sites (9509)
-
avfilter/zoompan: add in_time variable
19 June 2020, by exwm

Currently, the zoompan filter exposes a 'time' variable (missing from docs) for use in the 'zoom', 'x', and 'y' expressions. This variable is perhaps better named 'out_time' as it represents the timestamp in seconds of each output frame produced by zoompan. This patch adds aliases 'out_time' and 'ot' for 'time'.

This patch also adds an 'in_time' (alias 'it') variable that provides access to the timestamp in seconds of each input frame to the zoompan filter. This helps to design zoompan filters that depend on the input video timestamps. For example, it makes it easy to zoom in instantly for only some portion of a video.

Both the 'out_time' and 'in_time' variables have been added to the documentation for zoompan.

Example usage of 'in_time' in the zoompan filter to zoom in 2x for the first second of the input video and 1x for the rest:
zoompan=z='if(between(in_time,0,1),2,1)':d=1

V2: Fix zoompan filter documentation stating that the time variable would be NAN if the input timestamp is unknown.
V3: Add 'it' alias for 'in_time'. Add 'out_time' and 'ot' aliases for 'time'. Minor corrections to zoompan docs.

Signed-off-by: exwm <thighsman@protonmail.com>
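As a rough illustration (not part of the commit message), the example expression could be dropped into a complete command. This is only a sketch: the input/output file names, output size, and the centering expressions are assumptions, and 'in_time' is only available in FFmpeg builds that include this patch:

ffmpeg -i input.mp4 \
  -vf "zoompan=z='if(between(in_time,0,1),2,1)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=1:s=1280x720" \
  -c:v libx264 output.mp4

Here d=1 emits one output frame per input frame, and the x/y expressions keep the zoom centered on the frame.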
-
ffmpeg x11grab moov atom not found
30 March 2021, by Jintor

Two FFmpeg processes:

(1) an ffmpeg x11grab capture that writes to an .mp4 file
(2) a second ffmpeg that takes the .mp4 and restreams it simultaneously to multiple RTMP endpoints


ISSUE: the file generated in (1) gives the error "moov atom not found".


This is the command that generates (1):


ffmpeg -re -y -f x11grab -draw_mouse 0 -framerate 30 \
  -video_size $RESOLUTION -i :$DISPLAY_NUM -c:a aac -c:v libx264 \
  -movflags +faststart -preset ultrafast -crf 28 -refs 4 -qmin 4 \
  -pix_fmt yuv420p -filter:v fps=30 file.mp4



In (2), when I try to run ffmpeg -i file.mp4 and output it somewhere, I get "moov atom not found", so (2) can't read or open the file produced by (1).


What am I missing?


In (1), adding
-movflags +faststart
doesn't seem to fix the issue.

EDIT: more details on the context


I'm using OpenVidu: WebRTC with Kurento and coturn.


The recording feature creates an .mp4 on the fly as the chat is going on.


To start the recording, there is an API call I can make to my server; recording stops automatically when all users leave the chatroom, or via another API call. See "composed video" at this link: https://docs.openvidu.io/en/2.17.0/advanced-features/recording/


OpenVidu also has webhooks.


My problem is not how to stop ffmpeg, but getting FFmpeg to encode while the .mp4 (or other container) is being generated "on the fly".


There are 2 options:


OPTION 1: individual => one .webm per camera => ffmpeg can restream this .webm as HLS or RTMP => it's working.


OPTION 2: composed => the issue is with the "Composed" video => it uses ffmpeg to x11grab the session... but the .mp4 has no moov atom, so ffmpeg can't do anything with it.


See the composed.sh script here:
https://github.com/OpenVidu/openvidu/blob/master/openvidu-server/docker/openvidu-recording/scripts/composed.sh
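For context (not part of the original post): the moov atom of a plain MP4 is only written when encoding finishes, and +faststart merely relocates it to the front of the file after the fact, so a file still being written by (1) has no moov atom for (2) to read. A possible workaround, sketched below with the file names kept from the question, is to write a container that stays readable while it grows, either fragmented MP4 or MPEG-TS:

# Fragmented MP4: an empty moov is written up front and fragments are appended,
# so the file can be opened while it is still being recorded
ffmpeg -re -y -f x11grab -draw_mouse 0 -framerate 30 \
  -video_size $RESOLUTION -i :$DISPLAY_NUM \
  -c:v libx264 -preset ultrafast -crf 28 -pix_fmt yuv420p \
  -movflags +frag_keyframe+empty_moov file.mp4

# Or an MPEG-TS file, which has no moov atom at all
ffmpeg -re -y -f x11grab -draw_mouse 0 -framerate 30 \
  -video_size $RESOLUTION -i :$DISPLAY_NUM \
  -c:v libx264 -preset ultrafast -crf 28 -pix_fmt yuv420p file.ts

Process (2) could then read the growing file and fan it out, for example once per endpoint with ffmpeg -re -i file.ts -c copy -f flv rtmp://..., or with the tee muxer to reach several RTMP URLs from a single input.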


-
ffmpeg - overlay multiple fading texts with different colors
18 November 2017, by Abc123

I have a problem with this ffmpeg command: it works fine if the fading text is in a white font color, but if I change the fontcolor to something else (for example black), the fading text does not appear. Any ideas?
ffmpeg -i ./based_video/480/clip3.mp4 -filter_complex "color=black:100x100[c]; [c][0]scale2ref[ct][mv0]; \
[ct]setsar=1,split=3[t1][t2][t3]; \
[t1]drawtext=fontfile=/usr/share/fonts/truetype/roboto/Roboto-Bold.ttf:text='\$30,000.0':fontsize=40:fontcolor=white,split[text1][alpha1]; \
[text1][alpha1]alphamerge,fade=t=in:st=1:d=1:alpha=1,fade=t=out:st=5:d=1:alpha=1[txta1]; \
[t2]drawtext=fontfile=/usr/share/fonts/truetype/roboto/Roboto-Bold.ttf:text='\$30,000.0':fontsize=40:fontcolor=white,split[text2][alpha2]; \
[text2][alpha2]alphamerge,fade=t=in:st=1:d=1:alpha=1,fade=t=out:st=5:d=1:alpha=1[txta2]; \
[t3]drawtext=fontfile=/usr/share/fonts/truetype/roboto/Roboto-Bold.ttf:text='\$30,000.0':fontsize=40:fontcolor=white,split[text3][alpha3]; \
[text3][alpha3]alphamerge,fade=t=in:st=1:d=1:alpha=1,fade=t=out:st=5:d=1:alpha=1[txta3]; \
[mv0][txta1]overlay=x='100':y='200':shortest=1[mv1]; \
[mv1][txta2]overlay=x='300':y='200':shortest=1[mv2]; \
[mv2][txta3]overlay=x='500':y='200':shortest=1" \
-c:v libx264 -c:a copy ./output_video/testnew-clip3-output.mp4

The full log is here:
https://docs.google.com/document/d/1y9Dnn0Df75J8P_hZ6LjHTX2dk-8z97UnTjlX8dnc0v0/edit?usp=sharing

Thanks in advance.
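For context (not part of the original post): alphamerge takes the grayscale of its second input as the alpha plane, so when the text is drawn in black the mask is black everywhere and the overlay becomes fully transparent, which is why only white text shows up. One possible alternative, sketched below with the same fade timings (in at 1s over 1s, out at 5s over 1s) and with the font path, position, and output name as assumptions, is to drop the split/alphamerge/fade chain and let drawtext fade itself through its alpha expression:

ffmpeg -i ./based_video/480/clip3.mp4 \
  -vf "drawtext=fontfile=/usr/share/fonts/truetype/roboto/Roboto-Bold.ttf:text='\$30,000.0':fontsize=40:fontcolor=black:x=100:y=200:alpha='if(lt(t,1),0,if(lt(t,2),t-1,if(lt(t,5),1,if(lt(t,6),1-(t-5),0))))'" \
  -c:v libx264 -c:a copy ./output_video/black-text-fade.mp4

If the split/alphamerge structure has to stay (for example to reuse one mask for several overlays), another option is to keep the mask branch drawn in white and change fontcolor only on the visible text branch.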