Other articles (41)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (5523)

  • Optimize ffmpeg overlay and loop filters

    5 November 2020, by Miro Barsocchi

    I have a 30-second video, video.mp4, and an audio file whose length can vary, audio.mp3.

    My goal is an output video that loops video.mp4 for the total length of audio.mp3, with the waveform of audio.mp3 overlaid on top. Here is what I've done so far, in a bash script:

# calculate the length of the audio and of the video
tot=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 audio.mp3)
vid=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 video.mp4)
# how many times does the base video need to repeat to cover the audio? (rounded up)
repeattime=$(echo "scale=0; ($tot+$vid-1)/$vid" | bc)

# final ffmpeg command: overlay the audio waveform at the bottom of the looped video
ffmpeg -stream_loop $repeattime -i video.mp4 -i audio.mp3 -filter_complex "[1:a]showwaves=s=1280x100:colors=Red:mode=cline:rate=25:scale=sqrt[outputwave]; [0:v][outputwave] overlay=0:main_h-overlay_h [out]" -map '[out]' -map '1:a' -c:a copy -y output.mp4

    Is there a better way to do this in a single ffmpeg command? I know ffmpeg has a loop filter, but it loops frames and I don't know the number of frames in video.mp4. Also, using $repeattime can result in more loops than needed (because the calculation rounds up).
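
    One possible simplification (a sketch added for reference, not part of the original post): instead of pre-computing $repeattime, let ffmpeg loop the video input indefinitely with -stream_loop -1 and stop encoding when the audio ends via -shortest. The filter graph stays the same; how -shortest interacts with an infinitely looped input can vary between ffmpeg versions, so treat this as a starting point rather than a drop-in answer.

# Sketch: loop video.mp4 indefinitely; -shortest ends the output when audio.mp3 ends
ffmpeg -stream_loop -1 -i video.mp4 -i audio.mp3 -filter_complex "[1:a]showwaves=s=1280x100:colors=Red:mode=cline:rate=25:scale=sqrt[outputwave]; [0:v][outputwave] overlay=0:main_h-overlay_h [out]" -map '[out]' -map '1:a' -c:a copy -shortest -y output.mp4

    This also avoids the extra looped tail, since the output length is driven by the audio stream rather than by a rounded-up repeat count.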

  • Anomalie #4430: image_reduire handles rounding poorly

    4 February 2020, by jluc -

    The initial image is 640 × 427 pixels.
    |image_proportions{1,1,focus} correctly produces a 427x427 image. So the problem can probably be reproduced directly from a 427x427 image.

    In any case, the final result really is 200x201.

  • ffmpeg Get time of frames from trimmed video

    17 November 2017, by TheOtherguyz4kj

    I am using FFmpeg in my application to extract frames from a video. The frames will be added to a trim-video view that illustrates what is happening in the video at specific points in time, so each frame needs to represent some time within the video.

    I don't quite understand how FFmpeg is producing the frames. Here is my code:

    "-i",
    videoCroppedFile.getAbsolutePath(),
    "-vf",
    "fps=1/" + frameSeperation,
    mediaStorageDir.getAbsolutePath() +
    "/%d.jpg"

    My app allows you to record a video with a maximum length of 20s. The number of frames extracted from the video depends on how long the captured video is. frameSeperation is calculated with the code below.

    String time = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
    long videoLength = Long.parseLong(time) / 1000;
    double frameSeperationDouble = (double) videoLength;
    // Divide by 11 because there is a maximum of 11 frames on trim video view
    frameSeperationDouble /= 11;
    frameSeperationDouble = Math.ceil(frameSeperationDouble);
    int frameSeperation = (int) frameSeperationDouble;

    Maybe the above logic is flawed; if there is a better way, can somebody please tell me.

    Anyway, I ran the code and below are a few test cases:

    • A video captured with a length of 6 seconds has 7 frames.
    • A video captured with a length of 2 seconds has 3 frames.
    • A video captured with a length of 10 seconds has 12 frames.
    • A video captured with a length of 15 seconds has 9 frames.
    • A video captured with a length of 20 seconds has 11 frames.

    There is no consistency, and because of this I find it hard to put timestamps against each frame. I feel like my logic is wrong or I'm not understanding something. Any help is much appreciated.
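
    For reference (an explanatory note added here, not part of the original question): with fps=1/N, ffmpeg samples frames at t = 0, N, 2N, ..., so the output contains roughly duration/N + 1 frames, and rounding frameSeperation up to an integer makes the count drift from one recording to the next. One way to get a fixed number of thumbnails is to derive the sampling rate from the measured duration and cap the output with -frames:v. A sketch, assuming a hypothetical input path and an 11-slot trim view:

# Sketch (hypothetical paths, 11 thumbnails): derive the sampling rate from the
# clip duration, then cap the number of output images so rounding cannot add extras.
mkdir -p frames
dur=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 input.mp4)
rate=$(awk -v d="$dur" 'BEGIN { printf "%.6f", 11 / d }')
ffmpeg -i input.mp4 -vf "fps=$rate" -frames:v 11 -y frames/%d.jpg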

    Update 1

    So I did what you said in the comments:

    final FFmpeg ffmpeg = FFmpeg.getInstance(mContext);
    final File mediaStorageDir = new File(Environment.getExternalStorageDirectory()
            + "/Android/data/"
            + mContext.getPackageName()
            + "/vFrames");

    if (!mediaStorageDir.exists()){
        mediaStorageDir.mkdirs();
    }

    MediaMetadataRetriever retriever = new MediaMetadataRetriever();
    retriever.setDataSource(mContext, Uri.fromFile(videoCroppedFile));
    String time = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
    long videoLength = Long.parseLong(time) / 1000;
    double frameSeperationDouble = (double) videoLength / 8;

    retriever.release();

    final String cmd[] = {
            "-i",
            videoCroppedFile.getAbsolutePath(),
            "-vf",
            "fps=1/" + frameSeperationDouble,
            "-vframes," + 8,
            mediaStorageDir.getAbsolutePath() +
            "/%d.jpg"
    };

    I also tried "-vframes=" + 8 at the same point where I put -vframes in cmd. It doesn't seem to work at all now; no frames are being extracted from the video.
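
    As a side note (added here, not part of the original post): with this style of argument array, each flag and its value normally have to be separate array elements, so "-vframes," + 8 and "-vframes=" + 8 both produce a single malformed token that ffmpeg does not recognise as the -vframes option, which would explain why no frames are extracted. In the Java array that would be "-vframes", "8" as two consecutive entries; the equivalent command line, with hypothetical paths and values, looks like this:

# Sketch (hypothetical paths/values): -vframes and its value are separate tokens,
# and -vframes is an output option, so it comes before the output pattern.
ffmpeg -i /path/to/videoCropped.mp4 -vf "fps=1/2.5" -vframes 8 /path/to/vFrames/%d.jpg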