Advanced search

Media (0)

Keyword: - Tags - /xmlrpc

No media matching your criteria is available on this site.

Other articles (58)

  • Updating from version 0.1 to 0.2

    24 June 2013

    An explanation of the notable changes involved in upgrading MediaSPIP from version 0.1 to version 0.2. What's new:
    Software dependencies: the latest versions of FFmpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customizing by adding your logo, banner, or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the form used to create a news item.
    News item creation form: for a document of the news type, the fields offered by default are: publication date (customize the publication date) (...)

On other sites (6074)

  • With ffmpeg's image-to-movie feature, is it possible to pass in frames over a period of time versus all at once?

    2 November 2013, by Zack Yoshyaro

    For the purpose of making a time-lapse recording of desktop activity, is it possible to "stream" the frame list to ffmpeg over time, rather than passing it all at once at the beginning?

    Currently, it is a two-step process.

    1. Save individual snapshots to disk:

      from PIL import ImageGrab

      # capture the screen; "count" comes from the surrounding capture loop
      im = ImageGrab.grab()
      im.save("frame_%s.jpg" % count, "JPEG")  # Pillow's format name is "JPEG", not "jpg"

    2. Compile those snapshots into a video with ffmpeg:

      ffmpeg -r 1 -pattern_type glob -i '*.jpg' -c:v libx264 out.mp4

    It would be nice if there were a way to merge the two steps so that I'm not flooding my hard drive with thousands of individual snapshots. Is it possible to do this?
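
    One way to merge the two steps (a sketch, not from the original post; it assumes Pillow and a standard ffmpeg build with the image2pipe demuxer, and the frame rate, duration and output name are illustrative) is to write each JPEG frame straight into ffmpeg's stdin, so nothing is ever written to disk:

      import subprocess
      import time
      from PIL import ImageGrab

      # ffmpeg reads a stream of JPEG images from stdin and encodes them to H.264
      ffmpeg = subprocess.Popen(
          ["ffmpeg", "-y", "-f", "image2pipe", "-c:v", "mjpeg", "-framerate", "1",
           "-i", "-", "-c:v", "libx264", "-pix_fmt", "yuv420p", "out.mp4"],
          stdin=subprocess.PIPE,
      )

      for _ in range(60):                              # capture roughly one minute
          ImageGrab.grab().save(ffmpeg.stdin, "JPEG")  # frame goes into the pipe, not onto disk
          time.sleep(1)

      ffmpeg.stdin.close()                             # close stdin so ffmpeg finalizes out.mp4
      ffmpeg.wait()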

  • Trim H264 Video and wrap in MP4 without re-encoding

    28 January 2018, by dompardoe

    So I have a Raspberry Pi app that records output from the on-board camera. These files are recorded as H264. After a user presses a button I want to display a portion of that video with OMXPlayer. OMXPlayer always needs an MP4 container (it always ignores FPS).

    I don’t want to wrap the entire H264 into an MP4 as that takes too much time.

    My solution would be to trim the last 30 seconds and place them into an MP4 container. Can I do this in one step, without copying the entire content of the H264 into the MP4 first?

    I don’t want to re-encode this and I’m looking for the fastest operation possible.
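
    A minimal sketch of one possible approach (not from the post; it assumes a reasonably recent ffmpeg and that the demuxer can determine the recording's duration, and the file names are illustrative) is to seek 30 seconds before end-of-file and stream-copy into the MP4 container, so nothing is re-encoded; for a raw .h264 elementary stream with no timestamps you may also need to pass -framerate to match the camera's rate:

      # remux (no re-encode) only the final 30 seconds into an MP4 container
      ffmpeg -sseof -30 -i recording.h264 -c copy last30.mp4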

  • ffmpeg: use vidstabtransform to overlay it over a blurred background

    5 November 2023, by konewka

    I am using ffmpeg to concatenate multiple video clips taken of the same object over multiple timeframes. To make sure the videos are properly aligned (and therefore show the object in roughly the same position), I manually identify two points in the first frame of each clip, and use those to calculate the scaling and positioning necessary for proper alignment. I'm using Python for this, and it also generates the ffmpeg command for me. When it has calculated that the appropriate scale of the video is less than 100%, some parts of the frame will become black. To counter that, I overlay the scaled and positioned video over a blurred version of the original video (like this effect).

    Now, additionally, some of the video clips are a bit shaky, so my flow now first applies the vidstabdetect and vidstabtransform filters, and uses the transformed, stabilized version as input for my final command. However, if the shaking is significant, vidstabtransform will zoom in, and therefore I will either lose some of the detail around the edges, or a black border is created around the edge. As I am later including the stabilized version of the video in the concatenation, with the possibility of it shrinking, I would rather perform the vidstabtransform step inside my command and feed its output directly into the overlay over the blurred version. That way, the clip would move around the frame as it is stabilized, while being shown over the blurred background. Is it possible to achieve this using ffmpeg, or am I trying to stretch it too far?

    As a minimal example, these are my commands (in the filter graph, each input is split into a blurred full-frame background and a scaled copy, the scaled copy is overlaid on the blurred copy at a given position, and the two results are concatenated):

    ffmpeg -i video1.mp4 -vf vidstabdetect=output=transform.trf -f null -

    ffmpeg -i video1.mp4 -vf vidstabtransform=input=transform.trf video1_stabilized.mp4

    # same for video2.mp4

    ffmpeg -i video1_stabilized.mp4 -i video2_stabilized.mp4 -filter_complex "
        [0:v]split=2[v0blur][v0scale];
        [v0blur]gblur=sigma=50[v0blur];
        [v0scale]scale=round(iw*0.8/2)*2:round(ih*0.8/2)*2[v0scale];
        [v0blur][v0scale]overlay=x=100:y=200[v0];
        [1:v]split=2[v1blur][v1scale];
        [v1blur]gblur=sigma=50[v1blur];
        [v1scale]scale=round(iw*0.9/2)*2:round(ih*0.9/2)*2[v1scale];
        [v1blur][v1scale]overlay=x=150:y=150[v1];
        [v0][v1]concat=n=2" \
    -c:v libx264 -r 30 out.mp4

    So, I know I can put the vidstabtransform step into the filter_complex graph (I'll still do the detection in a separate step), but can I also use it in such a way that I achieve the stabilization over the blurred background and have the clip move around the frame as it is stabilized?

    EDIT: to include vidstabtransform in the filter graph, it would then look like this:

    ffmpeg -i video1.mp4 -i video2.mp4 -filter_complex "
        [0:v]vidstabtransform=input=transform1.trf[v0stab];
        [v0stab]split=2[v0blur][v0scale];
        [v0blur]gblur=sigma=50[v0blur];
        [v0scale]scale=round(iw*0.8/2)*2:round(ih*0.8/2)*2[v0scale];
        [v0blur][v0scale]overlay=x=100:y=200[v0];
        [1:v]vidstabtransform=input=transform2.trf[v1stab];
        [v1stab]split=2[v1blur][v1scale];
        [v1blur]gblur=sigma=50[v1blur];
        [v1scale]scale=round(iw*0.9/2)*2:round(ih*0.9/2)*2[v1scale];
        [v1blur][v1scale]overlay=x=150:y=150[v1];
        [v0][v1]concat=n=2" \
    -c:v libx264 -r 30 out.mp4
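
    One note worth adding (an assumption about the intent, not something stated in the post): vidstabtransform zooms in by default precisely to hide the borders that stabilization exposes; its optzoom and crop options, e.g. optzoom=0:crop=black, disable that zoom so no detail is lost to cropping, although the exposed border pixels are then black and, being part of the scaled foreground, still cover the blurred background rather than revealing it. The first transform line would then read:

      [0:v]vidstabtransform=input=transform1.trf:optzoom=0:crop=black[v0stab];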