Other articles (108)

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
    The user can also reach profile editing from their author page; a "Modifier votre profil" link in the navigation is (...)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Adding user-specific information and other author-related behaviour changes

    12 April 2011, by

    The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviours (see its documentation for more information).
    It is also possible to add fields to authors by installing the "champs extras 2" and "Interface pour champs extras" plugins.

On other sites (15097)

  • ffmpeg Get time of frames from trimmed video

    17 November 2017, by TheOtherguyz4kj

    I am using FFmpeg in my application to extract frames from a video. The frames are added to a trim view that illustrates what is happening in the video at specific points in time, so each frame needs to represent a specific time within the video.

    I don't quite understand how FFmpeg is producing the frames. Here is my code:

    "-i",
    videoCroppedFile.getAbsolutePath(),
    "-vf",
    "fps=1/" + frameSeperation,
    mediaStorageDir.getAbsolutePath() +
    "/%d.jpg"

    My app allows you to record a video with a maximum length of 20 s. The number of frames extracted from the video depends on how long the captured video is. frameSeperation is calculated with the code below.

    String time = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
    long videoLength = Long.parseLong(time) / 1000; // duration in whole seconds (truncated)
    double frameSeperationDouble = (double) videoLength;
    // Divide by 11 because there is a maximum of 11 frames on the trim video view
    frameSeperationDouble /= 11;
    frameSeperationDouble = Math.ceil(frameSeperationDouble); // round up to a whole second
    int frameSeperation = (int) frameSeperationDouble;

    Maybe the above logic is bad; if there is a better way, please tell me.

    Anyway, I ran the code, and below are a few test cases:

    • A video captured with a length of 6 seconds has 7 frames.
    • A video captured with a length of 2 seconds has 3 frames.
    • A video captured with a length of 10 seconds has 12 frames.
    • A video captured with a length of 15 seconds has 9 frames.
    • A video captured with a length of 20 seconds has 11 frames.

    There is no consistency, and because of this I find it hard to put timestamps against each frame. I feel like my logic is wrong or I'm not understanding something. Any help is much appreciated.
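
    The counts look less mysterious if you assume ffmpeg's fps filter simply emits one frame per 1/fps interval starting at t = 0, giving roughly floor(duration × fps) + 1 frames. A minimal sketch of that model (an assumption about the filter's behaviour, not its actual implementation):

    public class FrameCountModel {
        // Model: one frame per 1/fps interval starting at t = 0.
        static int expectedFrames(double durationSeconds) {
            long videoLength = (long) durationSeconds;                 // Long.parseLong(time) / 1000 truncates
            int frameSeperation = (int) Math.ceil(videoLength / 11.0); // same rounding as the code above
            return (int) Math.floor(durationSeconds / frameSeperation) + 1;
        }

        public static void main(String[] args) {
            // Prints 7, 3, 11, 8 and 11 for 6, 2, 10, 15 and 20 seconds: the
            // 6 s, 2 s and 20 s cases match the observed counts exactly; the
            // 10 s and 15 s cases come out one lower, which suggests those
            // clips actually run slightly longer than their nominal whole-second
            // durations.
            for (double d : new double[] {6, 2, 10, 15, 20}) {
                System.out.println(d + " s -> " + expectedFrames(d) + " frames");
            }
        }
    }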

    Update 1

    So I did what you said in the comments:

    final FFmpeg ffmpeg = FFmpeg.getInstance(mContext);
    final File mediaStorageDir = new File(Environment.getExternalStorageDirectory()
            + "/Android/data/"
            + mContext.getPackageName()
            + "/vFrames");

    if (!mediaStorageDir.exists()) {
        mediaStorageDir.mkdirs();
    }

    MediaMetadataRetriever retriever = new MediaMetadataRetriever();
    retriever.setDataSource(mContext, Uri.fromFile(videoCroppedFile));
    String time = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
    long videoLength = Long.parseLong(time) / 1000;
    double frameSeperationDouble = (double) videoLength / 8;

    retriever.release();

    final String cmd[] = {
            "-i",
            videoCroppedFile.getAbsolutePath(),
            "-vf",
            "fps=1/" + frameSeperationDouble,
            "-vframes," + 8,
            mediaStorageDir.getAbsolutePath() +
            "/%d.jpg"
    };

    I also tried "-vframes=" + 8 at the same point where I put -vframes in cmd. It doesn't seem to work at all now; no frames are being extracted from the video.
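
    For reference, each flag and its value must be separate elements of the argument array: "-vframes," + 8 collapses into the single token -vframes,8, which ffmpeg does not recognise (and -vframes= is not valid option syntax either), so the command aborts before writing any frame. A sketch of the corrected array, reusing the variables above:

    final String cmd[] = {
            "-i",
            videoCroppedFile.getAbsolutePath(),
            "-vf",
            "fps=1/" + frameSeperationDouble,             // e.g. "fps=1/2.5"
            "-vframes",                                   // flag and value as
            String.valueOf(8),                            // separate elements
            mediaStorageDir.getAbsolutePath() + "/%d.jpg"
    };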

  • Bug #4430: image_reduire mishandles rounding

    4 February 2020, by jluc

    The initial image is 640 × 427 pixels.
    |image_proportions{1,1,focus} correctly produces a 427x427 image, so the problem can probably be reproduced directly from a 427x427 image.

    In any case, the final result really is 200x201.
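
    A hedged illustration of where the stray pixel could come from (image_reduire itself is PHP inside SPIP; this is only the arithmetic, with made-up rounding): if the scale ratio is rounded before being applied and the result is then rounded up, 427 px lands on 201 instead of 200.

    // Hypothetical arithmetic, not SPIP's actual implementation.
    double ratio = Math.round((200.0 / 427.0) * 10000) / 10000.0; // 0.4684 instead of 0.46838...
    int side = (int) Math.ceil(427 * ratio);                      // ceil(200.0068) = 201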

  • Optimize ffmpeg overlay and loop filters

    5 November 2020, by Miro Barsocchi

    I have a 30-second video, video.mp4, and an audio file whose length can vary, audio.mp3.

    The goal is an output video that loops video.mp4 for the total length of audio.mp3, with the waveform of audio.mp3 overlaid. What I've done so far is this, in a bash script:

    # calculate the length of the audio and of the video (in seconds)
    tot=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 audio.mp3)
    vid=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 video.mp4)
    # how many times must the base video loop to cover the audio?
    # ($tot+$vid-1)/$vid rounds up, since bc truncates division at scale=0
    repeattime=`echo "scale=0; ($tot+$vid-1)/$vid" | bc`

    # ffmpeg final command
    ffmpeg -stream_loop $repeattime -i video.mp4 -i audio.mp3 -filter_complex "[1:a]showwaves=s=1280x100:colors=Red:mode=cline:rate=25:scale=sqrt[outputwave]; [0:v][outputwave] overlay=0:main_h-overlay_h [out]" -map '[out]' -map '1:a' -c:a copy -y output.mp4

    Is there a better way to do this in a single ffmpeg command? I know ffmpeg has a loop filter, but it loops frames, and I don't know the number of frames in video.mp4. Also, using $repeattime can result in more loops than needed (because the calculation rounds up).
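
    One possible single-command approach (a sketch, untested against this exact graph): loop the video input indefinitely with -stream_loop -1 and let the overlay filter's shortest=1 option end the video when the waveform, and therefore the audio, runs out, so no loop count has to be computed at all.

    # Loop video.mp4 indefinitely; the overlay stops when the waveform ends.
    ffmpeg -stream_loop -1 -i video.mp4 -i audio.mp3 \
      -filter_complex "[1:a]showwaves=s=1280x100:colors=Red:mode=cline:rate=25:scale=sqrt[outputwave];[0:v][outputwave]overlay=0:main_h-overlay_h:shortest=1[out]" \
      -map '[out]' -map '1:a' -c:a copy -y output.mp4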