Other articles (55)

  • Diogene: creating specific masks for content editing forms

    26 October 2010

    Diogene is one of the SPIP plugins enabled by default (as an extension) when MediaSPIP is initialised.
    What this plugin is for
    Creating form masks
    The Diogène plugin lets you create sector-specific form masks for the three SPIP objects: articles; sections (rubriques); sites.
    It thus makes it possible to define, for a given sector, a form mask per object, adding or removing fields so as to make the form (...)

  • The plugin: Podcasts.

    14 July 2010

    The podcasting problem is, once again, a problem that reveals how data transport is standardised on the Internet.
    Two interesting formats exist: the one developed by Apple, strongly oriented towards the use of iTunes, whose SPEC is here; the "Media RSS Module" format, which is more "open" and is notably backed by Yahoo and the Miro software.
    File types supported in the feeds
    Apple's format only allows the following types in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

  • Using and configuring the script

    19 January 2011

    Information specific to the Debian distribution
    If you use this distribution, you will need to enable the "debian-multimedia" repositories as explained here:
    Since version 0.3.1 of the script, the repository can be enabled automatically in response to a prompt.
    Getting the script
    The installation script can be obtained in two different ways.
    Via svn, using the command to fetch the up-to-date source code:
    svn co (...)

On other sites (5787)

  • ffmpeg x11grab moov atom not found

    30 March 2021, by Jintor

    Two FFmpeg processes:

    (1) an ffmpeg x11grab capture that writes to a .mp4 file;
    (2) a second ffmpeg that takes the .mp4 and restreams it simultaneously to multiple RTMP endpoints.

    ISSUE: the file generated in (1) gives this error: "moov atom not found".

    This is the command that generates (1):

    ffmpeg -re -y -f x11grab -draw_mouse 0 -framerate 30 \
      -video_size $RESOLUTION -i :$DISPLAY_NUM -c:a aac -c:v libx264 \
      -movflags +faststart -preset ultrafast -crf 28 -refs 4 -qmin 4 \
      -pix_fmt yuv420p -filter:v fps=30 file.mp4

    In (2), when I try to run ffmpeg -i file.mp4 with some output, I get "moov atom not found", so (2) can't read or open the file produced by (1).

    What am I missing?

    In (1), -movflags +faststart doesn't seem to fix the issue.

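    A hedged note on the likely cause: an MP4's moov atom is normally written only when the muxer closes the file, and +faststart merely relocates it to the front after encoding finishes, so a file that is still being recorded has no moov atom for a second process to read. Two common workarounds, sketched against the same capture command (the flags shown are standard FFmpeg options, but treat the exact combination as an assumption to test in this setup):

    # Workaround 1: write a fragmented MP4. An empty moov is emitted at the start and the
    # media follows in self-contained fragments, so the growing file stays readable.
    ffmpeg -re -y -f x11grab -draw_mouse 0 -framerate 30 \
      -video_size $RESOLUTION -i :$DISPLAY_NUM -c:a aac -c:v libx264 \
      -movflags +frag_keyframe+empty_moov -preset ultrafast -crf 28 \
      -pix_fmt yuv420p -filter:v fps=30 file.mp4

    # Workaround 2: use MPEG-TS as the intermediate container; TS has no moov atom at all
    # and is designed to be read while it is still being written.
    ffmpeg -re -y -f x11grab -draw_mouse 0 -framerate 30 \
      -video_size $RESOLUTION -i :$DISPLAY_NUM -c:a aac -c:v libx264 \
      -preset ultrafast -crf 28 -pix_fmt yuv420p -filter:v fps=30 -f mpegts file.ts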

    EDIT: more details on the context

    I'm using OpenVidu: WebRTC with Kurento and coturn.

    The recording feature creates a .mp4 on the fly while the chat is going on.

    To start the recording there is an API call I can make to my server, and it stops automatically when all users leave the chat room, or when another API call stops it. See "composed video" at this link: https://docs.openvidu.io/en/2.17.0/advanced-features/recording/

    OpenVidu also has webhooks.

    My problem is not how to stop ffmpeg, but how to get FFmpeg to encode while the .mp4 (or another container) is still being generated "on the fly".

    There are two options:

    OPTION 1: individual => one .webm per camera => ffmpeg can restream this .webm as HLS or RTMP => this works.

    OPTION 2: the issue is with the "composed" video => it uses ffmpeg to x11grab the session... but the result is an .mp4 without a moov atom, so ffmpeg can't do anything with it.

    See the composed.sh script here:
    https://github.com/OpenVidu/openvidu/blob/master/openvidu-server/docker/openvidu-recording/scripts/composed.sh

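    Regarding restreaming to multiple RTMP endpoints in step (2): once the intermediate file is in a readable form (fragmented MP4 or MPEG-TS as sketched above), FFmpeg's tee muxer can feed several outputs from a single decode/encode. A minimal sketch; the RTMP URLs and stream keys are hypothetical placeholders:

    ffmpeg -re -i file.ts -map 0 -c:v libx264 -preset veryfast -c:a aac \
      -f tee "[f=flv:onfail=ignore]rtmp://endpoint-one/live/key1|[f=flv:onfail=ignore]rtmp://endpoint-two/live/key2"

    If the recorded streams are already FLV-compatible (H.264 video, AAC audio), -c copy could be used instead of re-encoding.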

  • Produce waveform video from audio using FFMPEG

    30 November 2020, by RhythmicDevil

    I am trying to create a waveform video from audio. My goal is to produce a video that looks something like this:

    [reference waveform image]

    For my test I have an mp3 that plays a short clipped sound. There are 4 bars of 1/4 notes and 4 bars of 1/8 notes played at 120 bpm. I am having some trouble coming up with the right combination of preprocessing and filtering to produce a video that looks like the image. The colors don't have to be exact; I am more concerned with the shape of the beats. I tried a couple of different approaches using showwaves and showspectrum. I can't quite wrap my head around why, when using showwaves, the beats go past so quickly, whereas showspectrum produces a video where I can see each individual beat.

    ShowWaves

    ffmpeg -i beat_test.mp3 -filter_complex "[0:a]showwaves=s=1280x100:mode=cline:rate=25:scale=sqrt,format=yuv420p[v]" -map "[v]" -map 0:a output_wav.mp4

    This link will download the output of that command.

    ShowSpectrum

    ffmpeg -i beat_test.mp3 -filter_complex "[0:a]showspectrum=s=1280x100:mode=combined:color=intensity:saturation=5:slide=1:scale=cbrt,format=yuv420p[v]" -map "[v]" -an -map 0:a output_spec.mp4

    This link will download the output of that command.

    I posted the simple examples because I didn't want to confuse the issue by adding all the variations I have tried.

    In practice I suppose I can get away with the output from showspectrum, but I'd like to understand where/how I am thinking about this incorrectly. Thanks for any advice.

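    A possible explanation for the difference, offered tentatively: showwaves only draws the samples that fall inside each output frame (at rate=25 that is 40 ms of audio per frame), so every beat flashes past, whereas showspectrum with slide=1 scrolls and keeps earlier columns on screen, which is why individual beats stay visible. If the goal is the reference look, one workaround is to render the whole clip as a static waveform with showwavespic and sweep a cursor across it. A rough sketch; the 16-second duration (8 bars of 4/4 at 120 bpm) and the colours are assumptions to adjust:

    # Step 1: render the full clip as a single waveform image (showwavespic emits one frame).
    ffmpeg -i beat_test.mp3 -filter_complex "showwavespic=s=1280x100:colors=white" -frames:v 1 waveform.png

    # Step 2: loop that image, overlay a moving vertical bar, and mux the original audio back in.
    ffmpeg -loop 1 -i waveform.png -i beat_test.mp3 -filter_complex \
      "color=c=red:s=4x100[bar];[0:v][bar]overlay=x='t/16*1280':y=0,format=yuv420p[v]" \
      -map "[v]" -map 1:a -c:v libx264 -c:a aac -shortest output_cursor.mp4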

    Here is a link to the source audio file.

  • ffmpeg png to png quality loss

    13 April 2018, by kilo

    I wrote a Python script that manages to unshuffle a shuffled (PNG) image according to a specific pattern; the script uses ffmpeg and does 12 encodes to unshuffle it (by cropping a specific part and pasting it over the existing picture).
    As such, the same file is re-encoded into a new file each time, which shouldn't be a problem since I am doing PNG conversion (lossless, right?), but I still lose quality on it.

    Here are the pictures:

    Notice the loss of quality on the "ONE PUNCH MAN" text. The rest of the picture is, seemingly, literally identical. So the problem seems to be with the colors.

    Here are the ffmpeg commands I ran to get the output:

    ffmpeg -loglevel panic -y -i "output/001.png" -i "001.png" -qscale:v 2 -filter_complex "[0:v]crop=200:280:200:0[t];[0:v][t]overlay=0:280" "output/001.png"
    ffmpeg -loglevel panic -y -i "output/001.png" -i "001.png" -qscale:v 2 -filter_complex "[0:v]crop=200:280:400:0[t];[0:v][t]overlay=0:560" "output/001.png"
    ffmpeg -loglevel panic -y -i "output/001.png" -i "001.png" -qscale:v 2 -filter_complex "[0:v]crop=200:280:600:0[t];[0:v][t]overlay=0:840" "output/001.png"
    ffmpeg -loglevel panic -y -i "output/001.png" -i "001.png" -qscale:v 2 -filter_complex "[1:v]crop=200:280:0:280[t];[0:v][t]overlay=200:0" "output/001.png"
    ffmpeg -loglevel panic -y -i "output/001.png" -i "001.png" -qscale:v 2 -filter_complex "[0:v]crop=200:280:400:280[t];[0:v][t]overlay=200:560" "output/001.png"
    ffmpeg -loglevel panic -y -i "output/001.png" -i "001.png" -qscale:v 2 -filter_complex "[0:v]crop=200:280:600:280[t];[0:v][t]overlay=200:840" "output/001.png"
    ffmpeg -loglevel panic -y -i "output/001.png" -i "001.png" -qscale:v 2 -filter_complex "[1:v]crop=200:280:0:560[t];[0:v][t]overlay=400:0" "output/001.png"
    ffmpeg -loglevel panic -y -i "output/001.png" -i "001.png" -qscale:v 2 -filter_complex "[1:v]crop=200:280:200:560[t];[0:v][t]overlay=400:280" "output/001.png"
    ffmpeg -loglevel panic -y -i "output/001.png" -i "001.png" -qscale:v 2 -filter_complex "[0:v]crop=200:280:600:560[t];[0:v][t]overlay=400:840" "output/001.png"
    ffmpeg -loglevel panic -y -i "output/001.png" -i "001.png" -qscale:v 2 -filter_complex "[1:v]crop=200:280:0:840[t];[0:v][t]overlay=600:0" "output/001.png"
    ffmpeg -loglevel panic -y -i "output/001.png" -i "001.png" -qscale:v 2 -filter_complex "[1:v]crop=200:280:200:840[t];[0:v][t]overlay=600:280" "output/001.png"
    ffmpeg -loglevel panic -y -i "output/001.png" -i "001.png" -qscale:v 2 -filter_complex "[1:v]crop=200:280:400:840[t];[0:v][t]overlay=600:560" "output/001.png"

    Does anyone have any idea why there is this quality loss?

    Strangely enough, there is no quality loss when I do it in an entirely different way (crop each square into an individual file, then put each of them into a 1x2 vstack with the next one, then vstack each of the resulting 1x2 files with a second one to make a 1x4 file, then hstack each of those to make a 2x4 file, and finally hstack the two resulting files for the final 4x4 output), even though there are more than double the number of encodes.
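
    A possible cause, offered tentatively rather than as a definitive diagnosis: the overlay filter converts its inputs to yuv420 by default, so even with PNG in and PNG out each pass goes through an RGB-to-YUV 4:2:0 round trip with chroma subsampling, which is exactly the kind of damage that shows up on sharp coloured text. Forcing the overlay to stay in RGB should keep each pass lossless. A sketch of the first command with format=rgb added to the overlay (-qscale:v is dropped, since it should have no effect on PNG output):

    # Same as the first command above, but the overlay works in RGB so no chroma subsampling occurs.
    ffmpeg -loglevel panic -y -i "output/001.png" -i "001.png" \
      -filter_complex "[0:v]crop=200:280:200:0[t];[0:v][t]overlay=0:280:format=rgb" \
      "output/001.png"

    The same change would apply to the other eleven commands. It would also be consistent with the vstack/hstack variant showing no loss: presumably those filters operate directly on the PNG's RGB frames, so no lossy colour-space conversion ever happens.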