
Media (1)
-
The pirate bay from Belgium
1 April 2013, by
Updated: April 2013
Language: French
Type: Image
Other articles (54)
-
Configuring language support
15 November 2010, by
Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administer" section of the site.
From there, the navigation menu gives access to a "Language management" section where support for new languages can be enabled.
Each newly added language can still be disabled as long as no object has been created in that language; after that, it is greyed out in the configuration and (...) -
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out. -
Emballe médias: what is it for?
4 February 2011, by
This plugin is designed to manage sites for publishing documents of all types.
It creates "media" items, namely: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether it is audio, video, image or text; only a single document can be linked to a so-called "media" article;
On other sites (8470)
-
When ffmpeg drops frames, some things aren't played back in real time
8 February 2024, by Alex028502
I am trying to run a bunch of ffmpeg processes that act as simulators for cameras, and something funny happens when the processor can't keep up with the configured frame rate.


I have replaced the RTSP stream with an output file and managed to reproduce the issue, so I will just show that to keep it simple.


First, here is a makefile that creates my source movie:


clock.mp4: Makefile
 rm -f $@
 ffmpeg -f lavfi -i color=c=black:s=4096x2160:r=25 -vf \
"drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf:fontsize=72:fontcolor=white:x=(w-text_w)/2:y=(h-text_h)/2: \
text='%{eif\:trunc(n/25)\:d}':start_number=0:rate=25" \
-t 60 -r 25 $@



That gives me a one-minute-long movie that prints the current second to the screen. I have tested it out and the seconds are close enough. I used a lot of pixels to make it easier to jam up my CPU.
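A quick way to sanity-check the generated clip (a sketch, assuming ffprobe is installed alongside ffmpeg) is to confirm its duration, frame rate and frame count:


# Report the average frame rate, frame count and container duration of clock.mp4.
ffprobe -v error -select_streams v:0 \
 -show_entries stream=avg_frame_rate,nb_frames:format=duration \
 -of default=noprint_wrappers=1 clock.mp4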


Here is the script that creates a process similar to the one I am trying to debug (called experiment.sh):

I am actually using H.264, but H.265 is easier to overwhelm the processor with


#! /usr/bin/env bash

set -e

echo starting > message$1.txt

rm -f superclock$1.mp4
ffmpeg -re -stream_loop -1 -i clock.mp4 \
 -an -vcodec libx265 -preset ultrafast -sc_threshold -1 -x265-params repeat-headers=1 \
 -vf "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf:fontsize=24:fontcolor=white:x=10:y=10:text='%{localtime\:%X}', \
 drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf:fontsize=24:fontcolor=white:x=10:y=(h-text_h-10):textfile=message$1.txt:reload=1, \
 scale=1920x1080,fps=25" \
 -b:v 3M -minrate 3M -maxrate 3M \
 -bufsize 6M -g 25 superclock$1.mp4 &
pid=$!

for x in $(seq 0 10)
do
 echo $x > message$1.txt
 sleep 10
done

kill -INT $pid || true



It should


- put the second in the middle of the screen, since it gets it from the source video
- put the approximate sixth of the minute in the lower left corner (only approximate because of the sleep, but close enough)
- put the wall clock time in the upper left corner








and it works


make clock.mp4
./experiment.sh 0
vlc superclock0.mp4



shows something like this



Now here is the interesting part


If I run the script in four different terminals at the same time


./experiment.sh 1
./experiment.sh 2
./experiment.sh 3
./experiment.sh 4



It can't keep up with the frame rate, and I see this in the output:


frame= 1515 fps= 16 q=0.0 size= 768kB time=00:00:59.96 bitrate= 104.9k



I was hoping the end result would still look OK when I watch it, just with fewer frames, because the timestamps of the frames would make everything play back as expected.
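To see what the encoder actually wrote, one thing that can help (a sketch, assuming ffprobe is available, with superclock1.mp4 standing in for any of the outputs) is to dump the per-frame presentation timestamps and check whether they advance in steps of 1/25th of a second or jump around:


# Print the presentation timestamp of each video frame in one of the outputs.
ffprobe -v error -select_streams v:0 -show_frames \
 -show_entries frame=pts_time -of csv=p=0 superclock1.mp4 | head -n 50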


However...


- The time in the middle of the screen, i.e. the seconds inherited from the source video, stays in sync with VLC's clock.
- The wall clock in the upper left seems to play at 150% speed.
- The every-ten-seconds counter in the lower left seems to increment every 7 seconds.
- The video is only 1:25 long even though it was recording for at least 1:40 according to the sleeps.
- The wall clock in the upper left corner makes it past 1'40", and the counter in the lower left makes it to 10.














Here are the states at three points in time to compare:


|                 | Start    | 30" in   | End      |
|-----------------|----------|----------|----------|
| Video time      | 00:00    | 00:30    | 01:24    |
| Wall clock time | 15:05:50 | 15:06:39 | 15:07:39 |
| Sixth of minute | 0        | 4        | 10       |
| Seconds counter | 0        | 30       | 24       |





So you can see the VLC clock keeps pace with the original clock from the source movie, even when ffmpeg is only able to produce frames at 2/3 of the rate. However, it is taking about 50% longer to get through the whole source movie, I guess?


I am having trouble coming up with a theory that can explain exactly how this happens.


Does anybody know how I can "correct" this? (Make it so that the movie plays back at the same rate it was recorded.)


I am thinking of using a lower frame rate and size for the input video, but it would be nice to have something that always works as expected, just with a lower frame rate, no matter how busy the processor is.
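As a rough sketch of that idea (untested; the 1920:1080 target, the message1.txt text file and the output name are just placeholders), the encode can be made much cheaper by scaling down before the drawtext filters run, so the filters and libx265 work on a small frame instead of the full 4096x2160 source and the encoder has a better chance of keeping up with -re:


# Lighter variant of the experiment command (sketch): scale first, then draw text.
ffmpeg -re -stream_loop -1 -i clock.mp4 \
 -an -vcodec libx265 -preset ultrafast -x265-params repeat-headers=1 \
 -vf "scale=1920:1080, \
 drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf:fontsize=24:fontcolor=white:x=10:y=10:text='%{localtime\:%X}', \
 drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf:fontsize=24:fontcolor=white:x=10:y=(h-text_h-10):textfile=message1.txt:reload=1, \
 fps=25" \
 -b:v 3M -maxrate 3M -bufsize 6M -g 25 superclock_light.mp4


If the fps value in ffmpeg's progress line stays at about 25 under load, the wall-clock overlay and the source clock should stay in step.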


-
Thumbnails from S3 Videos using FFMPEG - "No such file or directory : '/bin/ffmpeg'"
28 June 2022, by Nico
I am trying to generate thumbnails from videos in an S3 bucket every x frames by following this documentation: https://aws.amazon.com/blogs/media/processing-user-generated-content-using-aws-lambda-and-ffmpeg/


I am at the point where I'm testing the Lambda code provided in the documentation, but I receive this error in CloudWatch Logs:




Here is the portion of the Lambda code associated with this error:




Any help is appreciated. Thanks!
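For reference, a sketch of the extraction step under two assumptions: the ffmpeg static binary ships in a Lambda layer (layers are unpacked under /opt, so the binary normally sits at /opt/bin/ffmpeg rather than /bin/ffmpeg), and the video has already been downloaded to Lambda's writable /tmp directory. The file names and the every-100-frames interval are placeholders:


# Hypothetical paths: /tmp is Lambda's writable directory, /opt/bin/ffmpeg is
# where a layer-provided binary usually lands.
FFMPEG=/opt/bin/ffmpeg

# Keep one frame out of every 100 and write numbered JPEG thumbnails.
"$FFMPEG" -i /tmp/input.mp4 \
 -vf "select=not(mod(n\,100))" -vsync vfr \
 /tmp/thumb_%04d.jpg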


-
How do I use a buffer object as Ffmpeg's source input
6 September 2016, by Marwan Sulaiman
I'm using node.js as a web server (fluent-ffmpeg as the ffmpeg library).
-
I have two videos on Amazon S3; when I retrieve them, they come as Buffer objects.
-
I’d like to retrieve them, combine them, and send them to the client, without having to save them as files.
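One way to avoid temporary files altogether (a sketch, assuming the two S3 objects can be exposed as presigned HTTPS URLs, which ffmpeg can read directly, and that both clips have compatible resolutions for the concat filter; the URLs below are placeholders) is to let ffmpeg pull both inputs over HTTPS and stream a fragmented MP4 to stdout, which the web server can then pipe to the client:


# Hypothetical presigned URLs for the two S3 objects.
URL1="https://example-bucket.s3.amazonaws.com/first.mp4?X-Amz-Signature=..."
URL2="https://example-bucket.s3.amazonaws.com/second.mp4?X-Amz-Signature=..."

# Concatenate both inputs (one video and one audio stream each) and stream a
# fragmented MP4 to stdout instead of writing a file.
ffmpeg -i "$URL1" -i "$URL2" \
 -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" \
 -map "[v]" -map "[a]" \
 -movflags frag_keyframe+empty_moov -f mp4 pipe:1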
-