Other articles (72)

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in to the site.
    The user can access profile editing from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administrer" (administration) section of the site.
    From there, in the navigation menu, you can reach a "Gestion des langues" (language management) section that lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language. Once one has, the language becomes greyed out in the configuration and (...)

  • Automatic backup of SPIP channels

    1 April 2010, by

    When running an open platform, it is important for hosts to have fairly regular backups in order to guard against any potential problem.
    This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (the documents, the elements (...)

On other sites (10198)

  • ffmpeg Get time of frames from trimmed video

    17 November 2017, by TheOtherguyz4kj

    I am using FFmpeg in my application to extract frames from a video. The frames are added to a trim-video view that illustrates what is happening in the video at specific points in time, so each frame needs to represent a specific time within the video.

    I don't quite understand how FFmpeg is producing the frames. Here is my code:

    "-i",
    videoCroppedFile.getAbsolutePath(),
    "-vf",
    "fps=1/" + frameSeperation,
    mediaStorageDir.getAbsolutePath() +
    "/%d.jpg"

    My app allows you to record a video with a max length of 20s. The number of frames extracted from the video depends on how long the captured video is. frameSeperation is calculated with the code below.

    String time = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
    long videoLength = Long.parseLong(time) / 1000; // duration in seconds (metadata is in ms)
    double frameSeperationDouble = (double) videoLength;
    // Divide by 11 because there is a maximum of 11 frames on the trim video view
    frameSeperationDouble /= 11;
    frameSeperationDouble = Math.ceil(frameSeperationDouble); // round up to whole seconds
    int frameSeperation = (int) frameSeperationDouble;

    Maybe the above logic is very bad; if there is a better way, please can somebody tell me.

    Anyway, I ran the code; below are a few test cases (see the sketch after the list for the counts I would expect):

    • A video captured with a length of 6 seconds has 7 frames.
    • A video captured with a length of 2 seconds has 3 frames.
    • A video captured with a length of 10 seconds has 12 frames.
    • A video captured with a length of 15 seconds has 9 frames.
    • A video captured with a length of 20 seconds has 11 frames.
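
    For reference, "fps=1/frameSeperation" asks ffmpeg for one frame every frameSeperation seconds, so I would expect roughly ceil(videoLength / frameSeperation) frames, give or take one (the fps filter rounds frame timestamps, which tends to add a frame at the boundaries). A minimal sketch of that expectation, under this assumption (names here are illustrative only):

    // Illustrative only: expected frame counts under the assumption that
    // ffmpeg emits about ceil(videoLength / frameSeperation) frames.
    public class FrameCountCheck {
        public static void main(String[] args) {
            int[] durationsSeconds = {6, 2, 10, 15, 20}; // the test cases above
            for (int videoLength : durationsSeconds) {
                int frameSeperation = (int) Math.ceil(videoLength / 11.0); // same logic as above
                long expected = (long) Math.ceil((double) videoLength / frameSeperation);
                System.out.printf("%2ds video -> frameSeperation=%d, expect ~%d frames%n",
                        videoLength, frameSeperation, expected);
            }
        }
    }

    Compared with the actual counts above (7, 3, 12, 9, 11), the output runs one or two frames high each time, which looks like the filter's boundary behavior rather than a completely broken formula.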

    There is no consistency, and I find it hard to put timestamps against each frame because of this. I feel like my logic is wrong or I'm not understanding something. Any help is much appreciated.

    Update 1

    So I did what you said in the comments:

    final FFmpeg ffmpeg = FFmpeg.getInstance(mContext);
    final File mediaStorageDir = new File(Environment.getExternalStorageDirectory()
            + "/Android/data/"
            + mContext.getPackageName()
            + "/vFrames");

    if (!mediaStorageDir.exists()) {
        mediaStorageDir.mkdirs();
    }

    MediaMetadataRetriever retriever = new MediaMetadataRetriever();
    retriever.setDataSource(mContext, Uri.fromFile(videoCroppedFile));
    String time = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
    long videoLength = Long.parseLong(time) / 1000;
    double frameSeperationDouble = (double) videoLength / 8;

    retriever.release();

    final String[] cmd = {

            "-i",
            videoCroppedFile.getAbsolutePath(),
            "-vf",
            "fps=1/" + frameSeperationDouble,
            "-vframes," + 8,
            mediaStorageDir.getAbsolutePath() +
            "/%d.jpg"
    };

    I also tried "-vframes=" + 8 at the same point where I put vFrames in cmd. It doesnt seem to work at all now no frames are being extracted from the video

  • Why iFrame is a good idea

    15 October 2009

    I’ve seen some hilariously uninformed posts about the new Apple iFrame specification. Let me take a minute to explain what it actually is.

    First off, contrary to what the fellow at the Washington Post writes, it's not really a new format. iFrame is just a way of using formats that we already know and love. As the name suggests, iFrame is just an i-frame-only H.264 specification, using AAC audio. An intraframe version of H.264, eh? Sounds a lot like AVC-Intra, right? Exactly. And for exactly the same reason: edit-ability. Whereas AVC-Intra targets the high end, iFrame targets the low end.
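
    To make that concrete: a stock ffmpeg can approximate the idea by forcing a GOP size of 1, so every frame is coded as an I-frame. This is a sketch of the concept, not Apple's exact specification (paths are placeholders), written as the argument vector you would hand to the binary:

    // Rough approximation of an iFrame-style file with stock ffmpeg:
    // intraframe-only H.264 video plus AAC audio.
    final String[] cmd = {
            "-i", "input.mov",
            "-c:v", "libx264",
            "-g", "1",      // GOP size 1 => every frame is an I-frame
            "-c:a", "aac",
            "output.mp4"
    };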

    Even when used in intraframe mode, H.264 has some huge advantages over the older intraframe codecs like DV or DVCProHD: for example, significantly better entropy coding, adaptive quantization, and potentially variable bitrates. There are many others. Essentially, it's what happens when you take DV and spend another 10 years working on making it better. That's why Panasonic's AVC-Intra cameras can do DVCProHD-quality video at half (or less) the bitrate.

    Why does iFrame matter for editing? Anyone who's tried to edit video from one of the modern H.264 cameras without first transcoding to an intraframe format has experienced the huge CPU demands and sluggish performance. Behind the scenes it's even worse. Because interframe H.264 can have very long GOPs, displaying any single frame can rely on dozens or even hundreds of other frames. Because of the complexity of H.264, building these frames is very high-cost. And it's a variable cost. Decoding the first frame in a GOP is relatively trivial, while decoding the middle B-frame can be hugely expensive.

    Programs like iMovie mask that from the user in some cases, but at the expense of high overhead. But anyone who's imported AVC-HD video into Final Cut Pro or iMovie knows that there's a long "importing" step: behind the scenes, the applications are transcoding your video into an intraframe format, like Apple Intermediate or ProRes. It sort of defeats one of the main purposes of a file-based workflow.

    You've also probably noticed the amount of time it takes to export a video in an interframe format. Anyone who's edited HDV in Final Cut Pro has experienced this. With DV, doing an "export to quicktime" is simply a matter of Final Cut Pro rewriting all of the data to disk; it's essentially a file copy. With HDV, Final Cut Pro has to do a complete reencode of the whole timeline, to fit everything into the new GOP structure. Not only is this time-consuming, but it's essentially a generation loss.

    iFrame solves these issues by giving you an intraframe codec, with modern efficiency, which can be decoded by any of the H.264 decoders that we already know and love.

    Having this as an optional setting on cameras is a huge step forward for folks interested in editing video. Hopefully some of the manufacturers of AVC-HD cameras will adopt this format as well. I’ll gladly trade a little resolution for instant edit-ability.

  • MediaMetadataRetriever setDataSource failed: status = 0xFFFFFFED

    10 October 2019, by darja

    I need to fetch frames from a video located on the web. Here is how I do it:

    class PlayerWrapper {
        private static final MediaMetadataRetriever mMediaMetadataRetriever = new MediaMetadataRetriever();

        …

        public void initPlayback(final Context context, final VideoSurfaceView videoSurface, final String url) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    …

                    try {
                        // setDataSource(String, Map) fetches over the network, hence the worker thread
                        mMediaMetadataRetriever.setDataSource(url, new HashMap<String, String>());
                    } catch (Exception e) {
                        DPLog(e);
                    }

                }
            }).start();
        }

        public Bitmap getFrameAt(int positionMillis) {
            if (mMediaMetadataRetriever != null) {
                DPLog.d("Creating frame for position [%s]", positionMillis);
                try {
                    // getFrameAtTime expects microseconds, hence the * 1000
                    return mMediaMetadataRetriever.getFrameAtTime(positionMillis * 1000);
                } catch (Exception e) {
                    …
                    return null;
                }
            } else {
                return null;
            }
        }
    }

    This works fine on several devices with Android 4.4, 5 and 6. But on one device with Android 4.1.2, the setDataSource call fails with the following stack trace:

    java.lang.RuntimeException: setDataSource failed: status = 0xFFFFFFED
          at android.media.MediaMetadataRetriever._setDataSource(MediaMetadataRetriever.java)
          at android.media.MediaMetadataRetriever.setDataSource(MediaMetadataRetriever.java:99)
          at co.unreel.videoapp.playback.UnreelPlayer$3.run(UnreelPlayer.java:138)
          at java.lang.Thread.run(Thread.java:856)

    I also tried this code on an emulator and got almost the same failure, but with status 0x80000000.
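
    Interpreted as signed 32-bit values, 0xFFFFFFED is -19 (which happens to be -ENODEV on Linux) and 0x80000000 is Integer.MIN_VALUE. A two-line check, purely illustrative:

    System.out.println(0xFFFFFFED); // -19 (== -ENODEV on Linux)
    System.out.println(0x80000000); // -2147483648 (Integer.MIN_VALUE)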

    The url comes from the YouTube API and looks like this:

    https://r2---sn-n3toxu-axqe.googlevideo.com/videoplayback?gcr=ru&sver=3&mm=31&mn=sn-n3toxu-axqe&key=yt6&signature=D1CE7B8615F3D96A58A9D2679057D676E7777E05.5A318AD8C5F48FCED2D1A9149348CD269F466FA9&mt=1456173450&pl=24&mv=m&ms=au&lmt=1455616987173757&itag=22&requiressl=yes&ip=188.242.217.203&source=youtube&dur=848.248&id=o-ABIn_CbRwqIO6qpvlaWe_ekyTZPLVd0w_eM80awd6uRQ&fexp=9408087%2C9416126%2C9419451%2C9420452%2C9421340%2C9422596%2C9423661%2C9423662%2C9425078%2C9425963%2C9427767%2C9427801%2C9428013%2C9428432%2C9428660%2C9429602&mime=video%2Fmp4&upn=P5qxGGbPoc0&sparams=dur%2Cgcr%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cpl%2Cratebypass%2Crequiressl%2Csource%2Cupn%2Cexpire&expire=1456195127&initcwndbps=3140000&ratebypass=yes&ipbits=0

    Permissions:

    Why can this happen, and what can I do about it?

    I also tried to use FFmpegMediaMetadataRetriever with the same logic, but it doesn't work at all: setDataSource throws exceptions with status 0xFFFFFFFF on all devices. From what I found on SO, that means the url is invalid, but it is fine and is played by MediaPlayer without any issues. I also found that ffmpeg has issues with urls longer than 1024 characters, but that's not my case either.

    Maybe there is another way to get frames from a video?
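
    (One alternative sometimes worth trying, sketched below under the assumption that the shared static retriever is part of the problem: create a fresh MediaMetadataRetriever per request and release it when done. Needs android.media.MediaMetadataRetriever, android.graphics.Bitmap and java.util.HashMap.)

    // Sketch: fetch a single frame with a short-lived retriever instead of a
    // shared static one. Same url/position inputs as above.
    public static Bitmap fetchFrame(String url, int positionMillis) {
        MediaMetadataRetriever retriever = new MediaMetadataRetriever();
        try {
            retriever.setDataSource(url, new HashMap<String, String>());
            // getFrameAtTime expects microseconds
            return retriever.getFrameAtTime(positionMillis * 1000L);
        } catch (Exception e) {
            return null;
        } finally {
            retriever.release();
        }
    }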