Advanced search

Media (0)

Word: - Tags -/interaction

No media matching your criteria is available on the site.

Other articles (45)

  • Regular Cron tasks of the farm

    1 December 2010, by

    Managing the farm involves running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance in the shared hosting at regular intervals. Combined with a system Cron on the central site of the shared hosting, this makes it possible to generate regular visits to the various sites and to keep the tasks of rarely visited sites from being too (...)

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Adding notes and captions to images

    7 February 2011, by

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

On other sites (4820)

  • How to pass custom values into this Android ffmpeg method

    28 August 2014, by kc ochibili

    I am trying to call this method, but I don't know how to pass my own values into it, such as the output length or the frame rate.

    The code uses flags like "-y" and "-i" that are listed in the comments without really being explained.

    Here is the method, from https://github.com/guardianproject/android-ffmpeg-java/blob/master/src/org/ffmpeg/android/FfmpegController.java#L488

    /*
     * ffmpeg -y -loop 0 -f image2 -r 0.5 -i image-%03d.jpg -s:v 1280x720 -b:v 1M \
     *   -i soundtrack.mp3 -t 01:05:00 -map 0:0 -map 1:0 out.avi
     *
     * -loop_input – loops the images. Disable this if you want to stop the encoding when all images are used or the soundtrack is finished.
     *
     * -r 0.5 – sets the framerate to 0.5, which means that each image will be shown for 2 seconds. Just take the inverse, for example if you want each image to last for 3 seconds, set it to 0.33.
     *
     * -i image-%03d.jpg – use these input files. %03d means that there will be three digit numbers for the images.
     *
     * -s 1280x720 – sets the output frame size.
     *
     * -b 1M – sets the bitrate. You want 500MB for one hour, which equals to 4000MBit in 3600 seconds, thus a bitrate of approximately 1MBit/s should be sufficient.
     *
     * -i soundtrack.mp3 – use this soundtrack file. Can be any format.
     *
     * -t 01:05:00 – set the output length in hh:mm:ss format.
     *
     * out.avi – create this output file. Change it as you like, for example using another container like MP4.
     */

    public Clip createSlideshowFromImagesAndAudio (ArrayList<Clip> images, Clip audio, Clip out, int durationPerSlide, ShellCallback sc) throws Exception
       {

           final String imageBasePath = new File(mFileTemp,"image-").getCanonicalPath();
           final String imageBaseVariablePath = imageBasePath + "%03d.jpg";


           ArrayList<String> cmd = new ArrayList<String>();


           String newImagePath = null;
           int imageCounter = 0;

           Clip imageCover = images.get(0); //add the first image twice

           cmd = new ArrayList<String>();
           cmd.add(mFfmpegBin);
           cmd.add("-y");

           cmd.add("-i");
           cmd.add(new File(imageCover.path).getCanonicalPath());

           if (out.width != -1 && out.height != -1)
           {
               cmd.add("-s");
               cmd.add(out.width + "x" + out.height);
           }

           newImagePath = imageBasePath + String.format(Locale.US, "%03d", imageCounter++) + ".jpg";
           cmd.add(newImagePath);

           execFFMPEG(cmd, sc);

           for (Clip image : images)
           {
               cmd = new ArrayList<String>();
               cmd.add(mFfmpegBin);
               cmd.add("-y");

               cmd.add("-i");
               cmd.add(new File(image.path).getCanonicalPath());

               if (out.width != -1 && out.height != -1)
               {
                   cmd.add("-s");
                   cmd.add(out.width + "x" + out.height);
               }

               newImagePath = imageBasePath + String.format(Locale.US, "%03d", imageCounter++) + ".jpg";
               cmd.add(newImagePath);

               execFFMPEG(cmd, sc);


           }

           //then combine them
           cmd = new ArrayList<String>();

           cmd.add(mFfmpegBin);
           cmd.add("-y");

           cmd.add("-loop");
           cmd.add("0");

           cmd.add("-f");
           cmd.add("image2");

           cmd.add("-r");
           cmd.add("1/" + durationPerSlide);

           cmd.add("-i");
           cmd.add(imageBaseVariablePath);

           cmd.add("-strict");
           cmd.add("-2");//experimental

           String fileTempMpg = new File(mFileTemp,"tmp.mpg").getCanonicalPath();

           cmd.add(fileTempMpg);

           execFFMPEG(cmd, sc);

           //now combine and encode
           cmd = new ArrayList<String>();

           cmd.add(mFfmpegBin);
           cmd.add("-y");

           cmd.add("-i");
           cmd.add(fileTempMpg);

           if (audio != null && audio.path != null)
           {
               cmd.add("-i");
               cmd.add(new File(audio.path).getCanonicalPath());

               cmd.add("-map");
               cmd.add("0:0");

               cmd.add("-map");
               cmd.add("1:0");

               cmd.add(Argument.AUDIOCODEC);
               cmd.add("aac");

               cmd.add(Argument.BITRATE_AUDIO);
               cmd.add("128k");

           }

           cmd.add("-strict");
           cmd.add("-2");//experimental

           cmd.add(Argument.VIDEOCODEC);


           if (out.videoCodec != null)
               cmd.add(out.videoCodec);
           else
               cmd.add("mpeg4");

           if (out.videoBitrate != -1)
           {
               cmd.add(Argument.BITRATE_VIDEO);
               cmd.add(out.videoBitrate + "k");
           }

           cmd.add(new File(out.path).getCanonicalPath());


           execFFMPEG(cmd, sc);

           return out;
       }

    So, say I want an output video that has:

    frame rate -->          one image every 2 seconds
    output frame size -->   480 x 480
    output length -->       02:08:00
    output file type -->    .mp4

    How can I call this method with these values, and how do they relate to the flags used above?
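
    A minimal sketch of one way to wire those values in, based on reading the method above rather than on anything from the original post. It assumes code that already imports java.util.ArrayList and the library's classes, that Clip has a no-argument constructor and exposes the public fields the method reads (path, width, height, videoBitrate), and that ffmpegController, shellCallback and slideImagePaths are placeholders for objects you already have:

    ArrayList<Clip> images = new ArrayList<Clip>();
    for (String imagePath : slideImagePaths) {    // hypothetical list of jpg paths, in order
        Clip slide = new Clip();
        slide.path = imagePath;
        images.add(slide);
    }

    Clip audio = new Clip();
    audio.path = "/sdcard/soundtrack.mp3";        // hypothetical soundtrack file

    Clip out = new Clip();
    out.path = "/sdcard/slideshow.mp4";           // the extension selects the container (.mp4)
    out.width = 480;                              // together with height, becomes "-s 480x480"
    out.height = 480;
    out.videoBitrate = 1000;                      // becomes "-b:v 1000k"; leave at -1 to omit
    // out.videoCodec left null -> the method falls back to "mpeg4"

    int durationPerSlide = 2;                     // becomes "-r 1/2", i.e. 2 seconds per image

    // There is no parameter mapping to "-t": the method never sets an output
    // length. The duration is (number of images) * durationPerSlide (plus the
    // first image, duplicated as a cover), so a 02:08:00 output at 2 s per
    // slide needs about 7680 / 2 = 3840 images.
    // The method throws Exception, so call it from a try/catch.
    Clip result = ffmpegController.createSlideshowFromImagesAndAudio(
            images, audio, out, durationPerSlide, shellCallback);

    In other words, "-y" (overwrite output) and "-i" (input file) are fixed by the method itself; "-r", "-s" and "-b:v" are the ones you control, through durationPerSlide and the out Clip.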

  • Trimming Video From HTML Input Tag

    19 janvier 2020, par Jordan Epstein

    I'm trying to have a user upload a video file of type mp4 and then be able to change its length (to trim out certain sections of the clip) within the browser. Do any of you have a good way of going about doing this? I looked into FFMPEG from JavaScript, but I'm not sure how I can pass in a File object.

    Thanks!

  • ffmpeg HLS stream audio is out of sync when the input video is multiple clips joined using AVComposition

    11 December 2019, by Tomáš Kohout

    I am having problems with HLS stream creation that seem to be on either the FFmpeg or the iOS AVFoundation side.

    I am currently working on a video app which allows the user to record, cut and merge multiple clips together. That is done using AVMutableComposition and code like this:

    try videoTrack?.insertTimeRange(clip.range, of: videoAssetTrack, at: accumulatedTime)
    try audioTrack?.insertTimeRange(clip.range, of: audioAssetTrack, at: accumulatedTime)

    and then it gets exported using

    AVAssetExportSession(asset: composition, presetName: preferredPreset)

    Then it gets uploaded to a server, which creates an HLS stream out of the video using the following command:

    ffmpeg -i bunny.mov -y -g 48 -sc_threshold 0 \
    -map 0:0 -map 0:1 -s:v:0 1280x720 -c:v:0 h264 -b:v:0 4000k \
    -map 0:0 -map 0:1 -s:v:1 854x480 -c:v:1 h264 -b:v:1 2000k \
    -map 0:0 -map 0:1 -s:v:2 640x360 -c:v:2 h264 -b:v:2 1000k \
    -map 0:0 -map 0:1 -s:v:3 480x270 -c:v:3 h264 -b:v:3 500k \
    -c:a copy -var_stream_map "v:0,a:0 v:1,a:1 v:2,a:2 v:3,a:3" \
    -master_pl_name bunny.m3u8 -f hls -hls_time 1 -hls_list_size 0 \
    -hls_segment_filename ~/Downloads/bunny/bunny.%v.%03d.ts \
    -hls_base_url http://tomaskohout.cz/bunny/ ~/Downloads/bunny/bunny.%v.m3u8

    Inputting a video that was cut together from two clips (first a bird, then a bunny) results in the following m3u8 file (open in Safari to see it play): http://tomaskohout.cz/bunny/bunny.m3u8

    The problem is that the audio is overlapping and out of sync after the first cut (notice the sound of the nut hitting the bird playing twice), and the last second is just a still frame.

    The original video for comparison is here:
    http://tomaskohout.cz/bunny/original.mov

    It seems to be a problem with AVFoundation in general. If you join two clips in QuickTime (with Edit > Add Clip to End) and use the resulting video as the input for ffmpeg, the audio and video are also out of sync. AVFoundation seems to generate strangely timed videos that only QuickTime knows how to play. If you take the original video and try to play it in VLC, it is also out of sync and even shows parts of the clips that were not meant to be in the original cut. It might be related to Encoder Delay and Synchronization, but I am really out of my element here.

    I have found a workaround, which is to first convert the video to mp4 (in my case it's just converting mp4 to mp4) like this:

    ffmpeg -i input.mp4 -c:v:0 h264 input.preexport.mp4

    and then use the pre-exported version to generate the m3u8 file. This seems to retime the video correctly.

    That is of course not ideal because I have to run through the video twice.

    So I would like to ask whether there is an option that lets me retime the video while I am generating the HLS, or, even better, whether there is some other solution to the problem.

    Thank you!

    Tomas
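
    A speculative single-pass variant of the HLS command above, not from the original post and untested against this input: instead of "-c:a copy", re-encode the audio through ffmpeg's aresample filter with async=1, and force a constant video frame rate with -vsync cfr, so that ffmpeg rebuilds the timestamps while it generates the HLS renditions:

    ffmpeg -i bunny.mov -y -g 48 -sc_threshold 0 -vsync cfr \
    -map 0:0 -map 0:1 -s:v:0 1280x720 -c:v:0 h264 -b:v:0 4000k \
    -map 0:0 -map 0:1 -s:v:1 854x480 -c:v:1 h264 -b:v:1 2000k \
    -map 0:0 -map 0:1 -s:v:2 640x360 -c:v:2 h264 -b:v:2 1000k \
    -map 0:0 -map 0:1 -s:v:3 480x270 -c:v:3 h264 -b:v:3 500k \
    -c:a aac -af aresample=async=1 -var_stream_map "v:0,a:0 v:1,a:1 v:2,a:2 v:3,a:3" \
    -master_pl_name bunny.m3u8 -f hls -hls_time 1 -hls_list_size 0 \
    -hls_segment_filename ~/Downloads/bunny/bunny.%v.%03d.ts \
    -hls_base_url http://tomaskohout.cz/bunny/ ~/Downloads/bunny/bunny.%v.m3u8

    Both aresample=async=1 and -vsync cfr are standard ffmpeg options; whether they fix this particular AVFoundation edit-list artifact would need testing, but they avoid the second encode pass.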