
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011
Updated: October 2011
Language: English
Type: Text
Other articles (21)
-
MediaSPIP Core: Configuration
9 November 2010
MediaSPIP Core provides three configuration pages by default (these pages rely on the CFG configuration plugin): a page for the general configuration of the skeleton; a page for the configuration of the site's home page; a page for the configuration of sections.
It also provides an additional page that only appears when certain plugins are enabled, allowing control over the display and over plugin-specific features (...) -
Customising by adding a logo, a banner or a background image
5 September 2013
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Media-specific libraries and software
10 December 2010
For correct and optimal operation, several things must be taken into account.
After installing apache2, mysql and php5, it is important to install other required software whose installation is described in the relevant linked pages: a set of multimedia libraries (x264, libtheora, libvpx) used for encoding and decoding video and audio, in order to support as many file formats as possible (see this tutorial); FFmpeg with the maximum number of decoders and (...)
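The install steps above can be outlined as a short script. This is a hypothetical sketch for a Debian-style system; the package names are assumptions (the actual procedures are in the tutorials the text links to), and the commands are printed rather than executed.

```shell
#!/bin/sh
# Base web stack, then the media libraries used for encoding/decoding.
# Package names are assumptions for illustration only.
base_pkgs="apache2 mysql-server php5"
media_pkgs="libx264-dev libtheora-dev libvpx-dev"
echo "apt-get install $base_pkgs"
echo "apt-get install $media_pkgs"
# FFmpeg is then built from source with those codecs enabled:
echo "./configure --enable-gpl --enable-libx264 --enable-libtheora --enable-libvpx && make && make install"
```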
On other sites (7079)
-
Scene detection and concat makes my video longer (FFMPEG)
12 April 2019, by araujo
I'm encoding videos by scenes. At the moment I have two solutions for doing so. The first is a Python application which gives me a list of frames that mark the scene cuts, like this:
285
378
553
1145
...
The first scene runs from frame 1 to 285, the second from 285 to 378, and so on. So I wrote a bash script that encodes all these scenes. It takes the current and previous frame numbers, converts them to times, and runs the ffmpeg command:
begin=$(awk 'BEGIN{ print "'$previous'"/"'24'" }')
end=$(awk 'BEGIN{ print "'$current'"/"'24'" }')
time=$(awk 'BEGIN{ print "'$end'"-"'$begin'" }')
ffmpeg -i $video -r 24 -c:v libx265 -f mp4 -c:a aac -strict experimental -b:v 1.5M -ss $begin -t $time "output$count.mp4" -nostdin
This works perfectly. The second method uses ffmpeg itself: I run its scene-detection and it gives me a list of times, like this:
15.75
23.0417
56.0833
71.2917
...
Again I wrote a bash script that encodes all these times. In this case I don't have to convert anything, because what I get are already times:
time=$(awk 'BEGIN{ print "'$current'"-"'$previous'" }')
ffmpeg -i $video -r 24 -c:v libx265 -f mp4 -c:a aac -strict experimental -b:v 1.5M -ss $previous -t $time "output$count.mp4" -nostdin
With all that explained, here is the problem. Once all the scenes are encoded I need to concatenate them, and for that I create a list of the video file names and then run the ffmpeg concat command.
list.txt
file 'output1.mp4'
file 'output2.mp4'
file 'output3.mp4'
file 'output4.mp4'
command:
ffmpeg -f concat -i list.txt -c copy big_buck_bunny.mp4
The problem is that the concatenated video is 2.11 seconds longer than the original. The original lasts 596.45 seconds and the encoded one lasts 598.56. I added up every segment's duration and got 598.56, so I think the problem is in the encoding process. Both videos have the same number of frames. My goal is to get metrics about the encoding process; when I run VQMT to get PSNR and SSIM I get odd results, which I think is due to this problem.
By the way, I’m using the big_buck_bunny video.
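To see where such drift accumulates, one could compare the container duration of each segment against its intended cut length (AAC encoding, for instance, often pads each file by a fraction of a frame, and concat sums those paddings). A minimal sketch of the summing step, with stubbed durations since real ffprobe output depends on the files:

```shell
#!/bin/sh
# Per real file, the duration query would be:
#   ffprobe -v error -show_entries format=duration -of csv=p=0 outputN.mp4
# Here sample durations are stubbed in so the summing logic runs standalone.
total=$(printf '%s\n' 11.875 3.875 7.291667 24.666667 |
  awk '{ sum += $1 } END { printf "%.3f", sum }')
echo "total $total"
```

Comparing this total with the original's duration shows how much each segment contributes to the overall excess.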
-
FFMPEG - frame PTS and DTS increasing faster than it should
20 July 2022, by hi im Bacon
I am pulling footage from an RTSP camera, standardising it, and then segmenting the footage for processing. I standardise by reducing the resolution and setting the frame rate to 12 fps.


I am encoding the wall time of each frame into the PTS, as the camera is a live source and I'd like to know exactly when each frame occurred (I'm not fussed about it being perfectly accurate; if it's all out by a second or two because of latency, that's fine by me).


FFMPEG is run from Python via subprocess using the following command:


command = [
 "ffmpeg", "-y",
 "-loglevel", "error",
 "-rtsp_transport", "tcp",
 "-i", URL,
 "-pix_fmt", "yuvj422p",
 "-c:v", "libx264", # Change the video codec to the Kinesis-required codec
 "-an", # Remove any audio channels
 "-vf", "scale='min(1280,iw)':'min(720,ih)':force_original_aspect_ratio=decrease",
 "-r", "12",
 "-vsync", "cfr",
 "-x264opts", "keyint=12:min-keyint=12",
 "-f", "segment", # Set the output format as chunked segments
 "-segment_format", segment_format, # Set each segment's format, e.g. matroska, mp4
 "-segment_time", str(segment_length), # Set the length of each segment in seconds
 "-initial_offset", str(initial_pts_offset),
 "-strftime", "1", # Use strftime notation when naming the video segments
 "{}/%Y-%m-%dT%H%M%S.{}".format(directory, extension), # Name and location of the segments
]



The problem I am having is that the timestamps of the frames increase faster than real time. The initial offset is set to the time FFMPEG is started, so received frames should always be stamped earlier than the present moment. I am using a segment length of 30 seconds, and after only 5 minutes, finished segments have a start timestamp greater than the present wall time.


The rate of increase looks to be around 3 to 4 times faster than it should be.


Why is this the case? How can I avoid it? Is my understanding of -r right?

I believed that -r drops extra frames and evens out the frame times, creating new frames where needed, but without actually changing the perceived speed of the footage. The final frame time should therefore never be more than the segment length away from the first frame time.

I have tried a filter that sets the PTS from the consumer's wall time, setpts='time(0)/TB', but this led to quite choppy footage, as frames can be received/processed at different rates depending on the connection.

The quality of the segments is great and all the data is there; it is just getting the times right that seems impossible.
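One alternative to computing an initial offset at all is to let the demuxer stamp frames with the wall clock at receive time: -use_wallclock_as_timestamps is a generic libavformat input option. This is a hypothetical variant of the question's command, not the asker's solution; the URL and output pattern are placeholders, and the command is printed rather than executed.

```shell
#!/bin/sh
# Sketch: wall-clock input timestamps plus an fps filter instead of -r,
# so frame times come from receive time rather than a fixed offset.
cmd="ffmpeg -rtsp_transport tcp -use_wallclock_as_timestamps 1 -i rtsp://camera/stream \
-an -vf scale='min(1280,iw)':'min(720,ih)':force_original_aspect_ratio=decrease,fps=12 \
-c:v libx264 -x264opts keyint=12:min-keyint=12 \
-f segment -segment_time 30 -strftime 1 segments/%Y-%m-%dT%H%M%S.mkv"
echo "$cmd"
```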


-
Slowing down audio using FFMPEG
24 January 2020, by Maxim_A
For example, I have a source file with a duration of 6.40 seconds. I divide this duration into 10 sections, then slow each section down by a certain value. This works great:
ffmpeg.exe -i preresult.mp4 -filter_complex
"[0:v]trim=0:0.5,setpts=PTS-STARTPTS[vv0];
[0:v]trim=0.5:1,setpts=PTS-STARTPTS[vv1];
[0:v]trim=1:1.5,setpts=PTS-STARTPTS[vv2];
[0:v]trim=1.5:2,setpts=PTS-STARTPTS[vv3];
[0:v]trim=2:2.5,setpts=PTS-STARTPTS[vv4];
[0:v]trim=2.5:3,setpts=PTS-STARTPTS[vv5];
[0:v]trim=3:3.5,setpts=PTS-STARTPTS[vv6];
[0:v]trim=3.5:4,setpts=PTS-STARTPTS[vv7];
[0:v]trim=4:4.5,setpts=PTS-STARTPTS[vv8];
[0:v]trim=4.5:6.40,setpts=PTS-STARTPTS[vv9];
[vv0]setpts=PTS*2[slowv0];
[vv1]setpts=PTS*4[slowv1];
[vv2]setpts=PTS*5[slowv2];
[vv3]setpts=PTS*2[slowv3];
[vv4]setpts=PTS*3[slowv4];
[vv5]setpts=PTS*6[slowv5];
[vv6]setpts=PTS*3[slowv6];
[vv7]setpts=PTS*5[slowv7];
[vv8]setpts=PTS*2[slowv8];
[vv9]setpts=PTS*6[slowv9];
[slowv0][slowv1][slowv2][slowv3][slowv4][slowv5][slowv6][slowv7][slowv8][slowv9]concat=n=10:v=1:a=0[v1]"
-r 30 -map "[v1]" -y result.mp4
Then I needed to slow down the audio stream together with the video. In the documentation I found the atempo filter. The documentation says that this filter's value must lie between 0.5 and 100; to slow down by half, you use the value 0.5. I also learned that to slow the audio down by 4 times, you simply chain two filters:
[aa0]atempo=0.5[aslowv0] //Slowdown x2
[aa0]atempo=0.5, atempo=0.5[aslowv0] //Slowdown x4
Question 1:
How can I slow down audio by an odd factor, for example 3, 5 or 7 times? There is no explanation of this point in the documentation.
Question 2:
Do I understand correctly that if you slow down the audio stream and the video stream separately by the same factor, they will end up with the same duration?
Thank you all in advance!
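On Question 1, the chaining idea from the text generalises: a common approach (stated here as a sketch, not from the documentation quoted above) is to chain atempo stages whose product equals 1/N, keeping each stage within the [0.5, 100] range. A small script that computes such a chain for an arbitrary slowdown factor:

```shell
#!/bin/sh
# Build an atempo filter chain for a slowdown factor N: the overall tempo
# is 1/N, and any remainder below 0.5 is peeled off as atempo=0.5 stages.
# factor=3 illustrates the non-power-of-two case from the question.
factor=3
awk -v n="$factor" 'BEGIN {
  t = 1 / n
  chain = ""
  while (t < 0.5) { chain = chain "atempo=0.5,"; t = t / 0.5 }
  printf "%satempo=%.6f\n", chain, t
}'
```

For factor 3 this prints atempo=0.5,atempo=0.666667, whose product is 1/3. On Question 2, in principle setpts=PTS*N on the video and an atempo product of 1/N on the audio should yield equal durations, up to a frame or sample of rounding.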