
Other articles (108)

  • Use it, talk about it, critique it

    10 April 2011

    The first thing to do is talk about it, either directly with the people involved in its development, or around you, to convince new people to use it.
    The bigger the community, the faster the software will evolve ...
    A mailing list is available for any exchange between users.

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for indexing by search engines, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Customisable form

    21 June 2013, by

    This page presents the fields available in the form for publishing a media item, and lists the fields that can be added.
    Media creation form
    For a media-type document, the default fields are: Text; Enable/disable the forum (the comment prompt can be disabled for each article); Licence; Add/remove authors; Tags.
    This form can be modified under:
    Administration > Configuration des masques de formulaire. (...)

On other sites (10539)

  • Slowing down audio using FFMPEG

    24 January 2020, by Maxim_A

    For example, I have a source file whose duration is 6.40 seconds.
    I divide this duration into 10 sections, then slow each section down by a certain factor. This works great.

    ffmpeg.exe -i preresult.mp4 -filter_complex
    "[0:v]trim=0:0.5,setpts=PTS-STARTPTS[vv0];
    [0:v]trim=0.5:1,setpts=PTS-STARTPTS[vv1];
    [0:v]trim=1:1.5,setpts=PTS-STARTPTS[vv2];
    [0:v]trim=1.5:2,setpts=PTS-STARTPTS[vv3];
    [0:v]trim=2:2.5,setpts=PTS-STARTPTS[vv4];
    [0:v]trim=2.5:3,setpts=PTS-STARTPTS[vv5];
    [0:v]trim=3:3.5,setpts=PTS-STARTPTS[vv6];
    [0:v]trim=3.5:4,setpts=PTS-STARTPTS[vv7];
    [0:v]trim=4:4.5,setpts=PTS-STARTPTS[vv8];
    [0:v]trim=4.5:6.40,setpts=PTS-STARTPTS[vv9];

    [vv0]setpts=PTS*2[slowv0];
    [vv1]setpts=PTS*4[slowv1];
    [vv2]setpts=PTS*5[slowv2];
    [vv3]setpts=PTS*2[slowv3];
    [vv4]setpts=PTS*3[slowv4];
    [vv5]setpts=PTS*6[slowv5];
    [vv6]setpts=PTS*3[slowv6];
    [vv7]setpts=PTS*5[slowv7];
    [vv8]setpts=PTS*2[slowv8];
    [vv9]setpts=PTS*6[slowv9];

    [slowv0][slowv1][slowv2][slowv3][slowv4][slowv5][slowv6][slowv7][slowv8][slowv9]concat=n=10:v=1:a=0[v1]"  
    -r 30 -map "[v1]" -y result.mp4
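    The hand-built filtergraph above can also be generated programmatically. A minimal Python sketch (my own helper, not part of the original post; the section bounds and factors mirror the command above):

```python
def build_slowdown_graph(bounds, factors):
    """Build an ffmpeg -filter_complex string that trims a video into
    sections and slows each section down by its matching factor.

    bounds  -- section edges in seconds, e.g. [0, 0.5, 1, ..., 6.40]
    factors -- one setpts multiplier per section (len(bounds) - 1 of them)
    """
    trims, slows, labels = [], [], []
    for i, (start, end) in enumerate(zip(bounds, bounds[1:])):
        # Cut out one section and reset its timestamps to start at zero.
        trims.append(f"[0:v]trim={start:g}:{end:g},setpts=PTS-STARTPTS[vv{i}]")
        # Stretch the section's timestamps to slow it down.
        slows.append(f"[vv{i}]setpts=PTS*{factors[i]:g}[slowv{i}]")
        labels.append(f"[slowv{i}]")
    concat = "".join(labels) + f"concat=n={len(factors)}:v=1:a=0[v1]"
    return ";".join(trims + slows + [concat])
```

    Passing the result to -filter_complex reproduces the command above for any number of sections.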

    Then I needed to slow down the audio stream along with the video. In the documentation I found the atempo filter. The documentation says this filter accepts values from 0.5 to 100; to slow down by half, you use the value 0.5. I also learned that to slow the audio down by 4 times, you simply chain two filters.

    [aa0]atempo=0.5[aslowv0] //Slowdown x2
    [aa0]atempo=0.5, atempo=0.5[aslowv0] //Slowdown x4

    Question 1:
    How can I slow down audio by an odd factor, for example 3, 5, or 7 times? There is no explanation of this point in the documentation.
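    One common approach to Question 1 (a sketch of my own, not from the post): since atempo only accepts values between 0.5 and 100, an arbitrary slowdown factor can be reached by chaining atempo=0.5 links until the remaining factor fits in [0.5, 1.0]:

```python
def atempo_chain(slowdown):
    """Build an atempo filter chain whose overall tempo is 1/slowdown.

    atempo only accepts values in [0.5, 100], so slowing down by more
    than 2x requires chaining: each atempo=0.5 link absorbs a 2x
    slowdown, and one final link carries the remainder.
    """
    target = 1.0 / slowdown  # e.g. slowdown 3 -> overall tempo 1/3
    factors = []
    while target < 0.5:
        factors.append(0.5)
        target /= 0.5
    factors.append(target)
    return ",".join(f"atempo={f:g}" for f in factors)
```

    For example, a 3x slowdown yields atempo=0.5,atempo=0.666667, whose product is 1/3.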

    Question 2:
    Do I understand correctly that if the audio stream and the video stream are slowed down separately, they will end up with the same duration?

    Thank you all in advance !

  • FFMPEG - frame PTS and DTS increasing faster than it should

    20 July 2022, by hi im Bacon

    I am pulling footage from an RTSP camera, standardising it, and then segmenting the footage for processing. I am standardising by reducing the resolution and setting the frame rate to 12 fps.

    I am encoding the wall time of each frame into the PTS, as the camera is a live source and I'd like to know exactly when each frame occurred. (I'm not fussed about it being perfectly accurate; if it's all out by a second or two because of latency, that's fine by me.)

    FFmpeg is run from a Python subprocess using the following command:

    command = [
        "ffmpeg", "-y",
        "-rtsp_transport", "tcp", "-i", URL,
        "-pix_fmt", "yuvj422p",
        "-c:v", "libx264",  # Change the video codec to the Kinesis-required codec
        "-an",  # Remove any audio channels
        "-vf", "scale='min(1280,iw)':'min(720,ih)':force_original_aspect_ratio=decrease",
        "-r", "12",
        "-vsync", "cfr",
        "-x264opts", "keyint=12:min-keyint=12",
        "-loglevel", "error",
        "-f", "segment",  # Set the output format as chunked segments
        "-segment_format", segment_format,  # Set each segment's format, e.g. matroska, mp4
        "-segment_time", str(segment_length),  # Set the length of each segment in seconds
        "-initial_offset", str(initial_pts_offset),
        "-strftime", "1",  # Use strftime notation when naming the video segments
        "{}/%Y-%m-%dT%H%M%S.{}".format(directory, extension),  # Name and location of the segments
    ]
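    For reference, an argument list like this is typically built and launched from Python along these lines (a sketch with placeholder values; the URL and segment settings below are assumptions, and ffmpeg must be on the PATH):

```python
import subprocess

def build_command(url, segment_format, segment_length, initial_pts_offset,
                  directory, extension):
    """Rebuild a simplified version of the ffmpeg argument list above."""
    return [
        "ffmpeg", "-y",
        "-rtsp_transport", "tcp", "-i", url,
        "-an",                 # drop audio
        "-c:v", "libx264",
        "-r", "12", "-vsync", "cfr",
        "-loglevel", "error",  # options must precede the output path
        "-f", "segment",
        "-segment_format", segment_format,
        "-segment_time", str(segment_length),
        "-initial_offset", str(initial_pts_offset),
        "-strftime", "1",
        "{}/%Y-%m-%dT%H%M%S.{}".format(directory, extension),
    ]

def start_ffmpeg(command):
    """Launch ffmpeg as a child process (not called here)."""
    return subprocess.Popen(command, stderr=subprocess.PIPE)
```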


    The problem I am having is that the frame timestamps increase faster than real time. The initial offset is set to the time FFmpeg is kicked off, so the timestamps of received frames should always be earlier than the current wall time. Yet, with a segment length of 30 seconds, after only 5 minutes the finished segments have a start timestamp greater than the present wall time.

    The rate of increase looks to be around 3-4 times faster than it should be.

    Why is this the case? How can I avoid it? Is my understanding of -r right?

    I believed that -r drops extra frames and evens out the frame times, creating new frames where needed, but without actually changing the perceived speed of the footage. The final frame time should be no more than the segment length after the first frame time.

    I have tried a filter that sets the PTS according to the consumer's wall time (setpts='time(0)/TB'), but this leads to quite choppy footage, since frames can be received and processed at varying rates depending on the connection.

    The quality of the segments is great and all the data is there... just getting the times right seems impossible.

  • Scene detection and concat makes my video longer (FFMPEG)

    12 April 2019, by araujo

    I’m encoding videos scene by scene. At the moment I have two ways of doing so. The first uses a Python application which gives me a list of frames that mark scene changes, like this:

    285
    378
    553
    1145
    ...

    The first scene runs from frame 1 to 285, the second from 285 to 378, and so on. So I made a bash script which encodes all these scenes. Basically, it takes the current and previous frame numbers, converts them to times, and runs the ffmpeg command:

    begin=$(awk 'BEGIN{ print "'$previous'"/"'24'" }')
    end=$(awk 'BEGIN{ print "'$current'"/"'24'" }')
    time=$(awk 'BEGIN{ print "'$end'"-"'$begin'" }')

    ffmpeg -i $video -r 24 -c:v libx265  -f mp4 -c:a aac -strict experimental -b:v 1.5M -ss $begin -t $time "output$count.mp4" -nostdin
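    The awk arithmetic above (frame number divided by the 24 fps frame rate) can also be sketched in Python; the cut frames are the example values from the list above:

```python
def scenes_to_times(frame_cuts, fps=24):
    """Convert scene-cut frame numbers into (start, duration) pairs in seconds."""
    starts = [0] + frame_cuts[:-1]
    return [(prev / fps, (cur - prev) / fps)
            for prev, cur in zip(starts, frame_cuts)]

# The example cut list from the question: scene 1 is frames 1-285, etc.
segments = scenes_to_times([285, 378, 553, 1145])
```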

    This works perfectly. The second method uses ffmpeg itself: I run a scene-detection command that gives me a list of times, like this:

    15.75
    23.0417
    56.0833
    71.2917
    ...

    Again I made a bash script that encodes all these segments. In this case I don’t have to convert anything, because what I already have are times:

    time=$(awk 'BEGIN{ print "'$current'"-"'$previous'" }')
    ffmpeg -i $video -r 24 -c:v libx265  -f mp4 -c:a aac -strict experimental -b:v 1.5M -ss $previous -t $time "output$count.mp4" -nostdin

    With all that explained, here comes the problem. Once all the scenes are encoded, I need to concatenate them; for that, I create a list of the video file names and then run the ffmpeg concat command.

    list.txt

    file 'output1.mp4'
    file 'output2.mp4'
    file 'output3.mp4'
    file 'output4.mp4'

    command:

    ffmpeg -f concat -i list.txt -c copy big_buck_bunny.mp4

    The problem is that the concatenated video is longer than the original by 2.11 seconds: the original lasts 596.45 seconds and the encoded one 598.56. I added up every segment’s duration and also got 598.56, so I think the problem lies in the encoding process. Both videos have the same number of frames. My goal is to get metrics about the encoding process, but when I run VQMT to compute PSNR and SSIM I get weird results, which I think is due to this problem.
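    One plausible source of the extra 2.11 seconds (an assumption on my part, not something confirmed in the question): each re-encoded segment can only be a whole number of frames long at 24 fps, and AAC audio additionally pads each file with encoder-delay (priming) samples, so every cut can add a small amount that accumulates across segments. A sketch that quantifies just the frame-rounding part:

```python
import math

def frame_aligned_duration(duration, fps=24):
    """Duration after rounding up to a whole number of frames at the given fps."""
    return math.ceil(duration * fps - 1e-9) / fps

def predicted_drift(cut_times, fps=24):
    """Total extra duration if every segment grows to a frame boundary."""
    starts = [0.0] + cut_times[:-1]
    raw = [cur - prev for prev, cur in zip(starts, cut_times)]
    rounded = [frame_aligned_duration(d, fps) for d in raw]
    return sum(rounded) - sum(raw)
```

    Comparing the sum of the real per-segment durations (e.g. read with ffprobe) against this prediction would show whether the drift comes from frame rounding, from AAC priming samples, or both.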

    By the way, I’m using the big_buck_bunny video.