Other articles (62)

  • Encoding and transformation into formats readable on the Internet

    10 April 2011

    MediaSPIP transforms and re-encodes uploaded documents in order to make them readable on the Internet and automatically usable without any intervention from the content creator.
    Videos are automatically encoded into the formats supported by HTML5: MP4, Ogv and WebM. The "MP4" version is also used for the fallback Flash player needed by older browsers.
    Audio documents are likewise re-encoded into the two formats usable with HTML5: MP3 and Ogg. The "MP3" version (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010, by

    For its proper operation, the central/master site of the farm needs several additional plugins compared to the channels: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and the requests to create a mutualisation instance upon user sign-up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (9516)

  • ffmpeg: concatenating files creates audio artefacts

    28 October 2022, by LML

    I'm currently trying to create a video out of multiple short video files. However, the final video always has audio artefacts: it sounds like a short high pitch or echo at certain points in the audio. All of the audio is a text-to-speech generated voice, no music. The artefacts appear sometimes more, sometimes less, but I would obviously prefer to have none at all.

    My starting point is a long audio file (mono, with audio codec "mp3" according to ffprobe). Within that file there are a number of short pauses of 4-5 seconds. I detect the silences and create individual audio files from them. Afterwards I create an mp4 file with this audio and a still image. Up to this point, the audio is perfectly fine and sounds exactly the same as in the original file.
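    For context, detecting those pauses can be done with ffmpeg's silencedetect filter, roughly like this; the noise threshold and minimum duration below are only placeholders, not my exact values:
.\ffmpeg.exe -hide_banner -i TTSAudio.mp3 -af "silencedetect=noise=-35dB:d=3" -f null -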

    After this I want to create the final video: each of the individual parts joined into one long video. There is a transition between each file to mark the change of image and audio. But even when I skip the transition and simply join all of these clips, which were all generated the same way, the artefacts are still present.

    The commands I use to create the different files are the following.

    Create individual audio files:
.\ffmpeg.exe -y -hide_banner -i TTSAudio.mp3 -ss 359.944 -to 372.02479 -c copy partXY.mp3
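    For comparison, a re-encoded variant of this cut, which is one of the re-encoding attempts mentioned further down, would look roughly like this; the libmp3lame settings are only an example:
.\ffmpeg.exe -y -hide_banner -i TTSAudio.mp3 -ss 359.944 -to 372.02479 -c:a libmp3lame -q:a 2 partXY.mp3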

    Create individual video files by using a .png file as the video stream and the partXY.mp3 as the audio stream:
.\ffmpeg.exe -y -hide_banner -framerate 30 -loop 1 -i XY_full.png -i partXY.mp3 -c:v libx265 -c:a copy -shortest partXY.mp4
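    Again for comparison, a re-encoded variant of this step, writing AAC into the mp4 instead of copying the mp3 stream, would look roughly like this (the bitrate is just an example):
.\ffmpeg.exe -y -hide_banner -framerate 30 -loop 1 -i XY_full.png -i partXY.mp3 -c:v libx265 -c:a aac -b:a 192k -shortest partXY.mp4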

    For concatenating the files:
.\ffmpeg.exe -y -hide_banner -i part000.mp4 -i part001.mp4 -i part002.mp4 -filter_complex "[0:v] [0:a] [1:v] [1:a] [2:v] [2:a] concat=n=3:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" -c:v copy -c:a copy final_video.mp4

    I've tried a lot of different things and codecs for the audio, without any luck. I use h265 because h264 was causing weird video artefacts after uploading the file to YouTube.
I have tried re-encoding instead of copying (-c:a copy) at various stages, especially for the final video, all without any luck.
I've also used the other concatenation method where you provide a list of files, which created a whole different set of problems.
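    By the list-of-files approach I mean the concat demuxer, used roughly like this with a text file (the name parts.txt is arbitrary) containing one line per clip, such as file 'part000.mp4':
.\ffmpeg.exe -y -hide_banner -f concat -safe 0 -i parts.txt -c copy final_video.mp4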

    I've managed to filter the artefacts out by using -af "lowpass=f=2800", but that changes the voice a lot. I was also not able to spot the pitch visually when opening the audio in Audacity, for example.
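    Applied to the finished file, that filter looks roughly like this (the audio has to be re-encoded for the filter to take effect; the encoder settings are just an example):
.\ffmpeg.exe -y -hide_banner -i final_video.mp4 -c:v copy -af "lowpass=f=2800" -c:a libmp3lame -q:a 2 final_video_lowpass.mp4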

    Example:
https://soundcloud.com/thelml/sets/ffmpeg-audio-artefacts/s-LNr6UaMPgz9?si=f7b30e1e64bf4333ad055fa1fe21e9ec
Because the files are so short, I sometimes have to replay the bugged file to hear the artefact.

    So my question is: how do I fix this without using a lowpass that basically changes the whole voice?

  • Video editing: faded speedup/slowdown with free software

    25 October 2013, by Valio

    I would like to do "faded" speedups or slowdowns within videos.
    To illustrate what I mean, here's a short clip of what I call a "faded slowdown".

    So in a scene running at normal speed, there is a timeframe where the video slows down (or speeds up) and then runs at normal speed again. However, the speed does not drop from 1 to e.g. 0.2 within a millisecond; there is a smooth transition in between (I'm not certain whether it is linear or not).

    I guess my video editing/cutting terminology is entirely flawed; don't hesitate to correct it.

    1. Is anyone aware of a free software (*) video editor that offers such functionality?
    2. Has anyone tried to implement it themselves? I guess it's not really that hard using a script that generates the necessary ffmpeg calls or the like; a rough sketch of what I mean is at the end of this question.

    The speed change should happen gradually, over at least one second: we have one scene that runs at normal speed and at some point gets a short slowdown or speedup. I have not done any experiments yet. If my terminology is confusing, the clip above is an example of what I mean.

    (* as in free speech)
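    The kind of call I imagine a script would generate (file names and the 10-12 s range are placeholders, and I have not tested this) slows one time range to half speed with ffmpeg's trim, setpts and atempo filters; the script would emit one such call per segment and then concatenate the pieces:
ffmpeg -i in.mp4 -filter_complex "[0:v]trim=10:12,setpts=(PTS-STARTPTS)*2[v];[0:a]atrim=10:12,asetpts=PTS-STARTPTS,atempo=0.5[a]" -map "[v]" -map "[a]" slow_part.mp4
    A genuinely smooth ramp would need a time-varying factor, either via an expression or via many small steps, which is exactly the part I have not figured out.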

  • ffmpeg: create a video of 10 seconds before and 10 seconds after the occurrence of a certain event [closed]

    28 August 2023, by pozzugno

    From a video stream (RTSP) transmitted by an IPCam, I'd like to generate a short video that starts 10 seconds before an event and has a total duration of 20 seconds, so the event is in the middle of the video.

    The event is an alarm triggered by an external device. I'd like to see what happened immediately before and after this alarm.

    Because I need the video from before the event (and I don't know when the event will occur), I am forced to receive the stream continuously. The idea is to use the segment muxer, so I'm testing the following command:

    ffmpeg -hide_banner -rtsp_transport tcp -stimeout 10000000 -i rtsp://... -f segment -segment_time 2 -reset_timestamps 1 -segment_wrap 12 -c copy -map 0:v /var/ramdisk/C1/%d.mp4

    In the output folder I will have 12 files: 0.mp4, 1.mp4, ..., 11.mp4. Each file is a 2-second video.

    When my Linux service (always running) detects the alarm from the external device, it waits for 10 seconds and then launches a new ffmpeg command that concatenates the short video files into one bigger video file. However, I'm struggling with the right command line to do what I need.
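    What I have in mind for that final step is roughly the following sketch: build a concat-demuxer list from the 12 wrapped segments and stream-copy them together. Ordering the list by file modification time (oldest first) is only an assumption on my part to get around the wrap-around numbering, and the paths are examples:
    (for f in $(ls -tr /var/ramdisk/C1/*.mp4); do echo "file '$f'"; done) > /tmp/list.txt
    ffmpeg -y -hide_banner -f concat -safe 0 -i /tmp/list.txt -c copy /tmp/event.mp4
    The wrap-around numbering of the segments is exactly what I don't know how to handle cleanly, hence the modification-time trick above.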