Other articles (104)

  • Customising categories

    21 June 2013

    Category creation form
    For those who know SPIP well, a category can be thought of as a rubrique (section).
    For a document of type "category", the fields offered by default are: Texte
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type "media", the fields not displayed by default are: Descriptif rapide
    It is also in this configuration area that you can specify the (...)

  • Libraries and binaries specific to video and audio processing

    31 January 2010

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries:
    • FFmpeg: the main encoder; it can transcode almost any type of video or audio file into formats playable on the web (see this tutorial for its installation);
    • Oggz-tools: inspection tools for Ogg files;
    • MediaInfo: retrieves information from most video and audio formats.
    Complementary, optional binaries: flvtool2: (...)
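
    As a rough illustration of how an application might invoke the FFmpeg binary described above (a minimal sketch only; the file names and encoding flags are assumptions, not taken from SPIPmotion):

        import java.io.IOException;

        public class TranscodeSketch {
            public static void main(String[] args) throws IOException, InterruptedException {
                // Transcode an assumed input to H.264/AAC in an MP4 container,
                // a combination widely playable in web browsers.
                ProcessBuilder pb = new ProcessBuilder(
                        "ffmpeg",
                        "-i", "input.mov",   // hypothetical source file
                        "-c:v", "libx264",   // H.264 video
                        "-c:a", "aac",       // AAC audio
                        "output.mp4");       // hypothetical output file
                pb.inheritIO();              // surface ffmpeg's console output
                int exit = pb.start().waitFor();
                System.out.println("ffmpeg exited with code " + exit);
            }
        }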

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flash-based Flowplayer is used as a fallback.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and audio both to conventional computers (...)

On other sites (12667)

  • Why iFrame is a good idea

    15 October 2009

    I’ve seen some hilariously uninformed posts about the new Apple iFrame specification. Let me take a minute to explain what it actually is.

    First off, contrary to what the fellow at the Washington Post writes, it’s not really a new format. iFrame is just a way of using formats that we already know and love. As the name suggests, iFrame is simply an I-frame-only H.264 specification, using AAC audio. An intraframe version of H.264, eh? Sounds a lot like AVC-Intra, right? Exactly. And for exactly the same reason - edit-ability. Whereas AVC-Intra targets the high end, iFrame targets the low end.

    Even when used in intraframe mode, H.264 has some huge advantages over older intraframe codecs like DV or DVCProHD: significantly better entropy coding, adaptive quantization, and potentially variable bitrates, among many others. Essentially, it’s what happens when you take DV and spend another 10 years making it better. That’s why Panasonic’s AVC-Intra cameras can do DVCProHD-quality video at half the bitrate or less.

    Why does iFrame matter for editing? Anyone who has tried to edit video from one of the modern H.264 cameras without first transcoding to an intraframe format has experienced the huge CPU demands and sluggish performance. Behind the scenes it’s even worse. Because interframe H.264 can have very long GOPs, displaying any single frame can depend on dozens or even hundreds of other frames. And because of the complexity of H.264, building those frames is very expensive - and the cost is variable: decoding the first frame in a GOP is relatively trivial, while decoding a middle B-frame can be hugely expensive.
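
    To make the decode-cost point concrete, here is a toy model (mine, not from the post): with intraframe-only video every frame decodes on its own, while with long-GOP video the decoder must start at the GOP’s I-frame and work forward (B-frame reordering is ignored for simplicity):

        // Toy model: how many frames must be decoded to display frame n?
        static int framesToDecode(boolean intraOnly, int gopLength, int n) {
            if (intraOnly) {
                return 1;                  // every frame is self-contained
            }
            return (n % gopLength) + 1;    // decode from the GOP's I-frame onward
        }
        // With a 300-frame GOP, a frame near the GOP's end costs ~300 decodes;
        // with an I-frame-only stream it is always exactly 1.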

    Programs like iMovie mask that from the user in some cases, but at the expense of high overhead. Anyone who has imported AVC-HD video into Final Cut Pro or iMovie knows there’s a long "importing" step - behind the scenes, the application is transcoding your video into an intraframe format such as Apple Intermediate or ProRes. That rather defeats one of the main purposes of a file-based workflow.

    You’ve also probably noticed the amount of time it takes to export a video in an interframe format. Anyone who has edited HDV in Final Cut Pro has experienced this. With DV, doing an "export to QuickTime" is simply a matter of Final Cut Pro rewriting all of the data to disk - it’s essentially a file copy. With HDV, Final Cut Pro has to completely re-encode the whole timeline to fit everything into the new GOP structure. Not only is this time-consuming, it’s essentially a generation loss.

    iFrame solves these issues by giving you an intraframe codec, with modern efficiency, which can be decoded by any of the H.264 decoders that we already know and love.

    Having this as an optional setting on cameras is a huge step forward for folks interested in editing video. Hopefully some of the manufacturers of AVC-HD cameras will adopt this format as well. I’ll gladly trade a little resolution for instant edit-ability.

  • ffmpeg Get time of frames from trimmed video

    17 November 2017, by TheOtherguyz4kj

    I am using FFmpeg in my application to extract frames from a video. The frames are added to a trim view that illustrates what is happening in the video at specific points in time, so each frame needs to represent a particular moment within the video.

    I don’t quite understand how FFmpeg produces the frames. Here is my code:

    "-i",
    videoCroppedFile.getAbsolutePath(),
    "-vf",
    "fps=1/" + frameSeperation,
    mediaStorageDir.getAbsolutePath() +
    "/%d.jpg"

    My app allows you to record a video with a maximum length of 20 seconds. The number of frames extracted depends on how long the captured video is. frameSeperation is calculated with the code below.

    String time = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
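       // note: integer division truncates the duration to whole seconds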
       long videoLength = Long.parseLong(time) / 1000;
       double frameSeperationDouble = (double) videoLength;
       // Divide by 11 because there is a maximum of 11 frames on trim video view
       frameSeperationDouble /= 11;
       frameSeperationDouble = Math.ceil(frameSeperationDouble);
       int frameSeperation = (int) frameSeperationDouble;

    Maybe the above logic is bad; if there is a better way, can somebody please tell me?

    Anyway, I ran the code, and below are a few test cases:

    • A video captured with a length of 6 seconds has 7 frames.
    • A video captured with a length of 2 seconds has 3 frames.
    • A video captured with a length of 10 seconds has 12 frames.
    • A video captured with a length of 15 seconds has 9 frames.
    • A video captured with a length of 20 seconds has 11 frames.

    There is no consistency, which makes it hard to assign a timestamp to each frame. I feel like my logic is wrong, or I am not understanding something. Any help is much appreciated.
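
    As a side note (an editorial check, not part of the original question), the counts above are roughly what the formula predicts: fps=1/step emits one frame at t=0 and then one every step seconds, i.e. about floor(length/step) + 1 frames, and because step = ceil(length/11) jumps from 1 to 2 at 12 seconds, the counts swing instead of staying near 11:

        // Editorial check: frame counts implied by the poster's formula.
        for (int len : new int[]{2, 6, 10, 15, 20}) {
            int step = (int) Math.ceil(len / 11.0);
            System.out.println(len + "s -> step=" + step
                    + " -> ~" + (len / step + 1) + " frames");
        }
        // Prints ~3, 7, 11, 8 and 11; the observed 12 and 9 differ by one
        // frame because of how the fps filter rounds at the stream's end.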

    Update 1

    So I did what you said in the comments:

    final FFmpeg ffmpeg = FFmpeg.getInstance(mContext);
           final File mediaStorageDir = new File(Environment.getExternalStorageDirectory()
                   + "/Android/data/"
                   + mContext.getPackageName()
                   + "/vFrames");

       if (!mediaStorageDir.exists()){
           mediaStorageDir.mkdirs();
       }

       MediaMetadataRetriever retriever = new MediaMetadataRetriever();
       retriever.setDataSource(mContext, Uri.fromFile(videoCroppedFile));
       String time = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
       long videoLength = Long.parseLong(time) / 1000;
       double frameSeperationDouble = (double) videoLength / 8;

       retriever.release();

       final String cmd[] = {

               "-i",
               videoCroppedFile.getAbsolutePath(),
               "-vf",
               "fps=1/" + frameSeperationDouble,
               "-vframes," + 8,
               mediaStorageDir.getAbsolutePath() +
               "/%d.jpg"
       };

    I also tried "-vframes=" + 8 at the same point where I put -vframes in cmd. It doesn’t seem to work at all now; no frames are being extracted from the video.
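
    For reference (editorial note, not part of the original question): "-vframes," + 8 concatenates into the single token "-vframes,8", which ffmpeg does not recognise as an option, so nothing is written; ffmpeg expects the value of -vframes as a separate argument. A corrected sketch, using the same variables as above but otherwise untested:

        final String[] cmd = {
                "-i", videoCroppedFile.getAbsolutePath(),
                "-vf", "fps=1/" + frameSeperationDouble,
                "-vframes", "8",   // option and its value as separate tokens
                mediaStorageDir.getAbsolutePath() + "/%d.jpg"
        };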

  • Anomalie #4430: image_reduire mishandles rounding

    4 February 2020, by jluc

    The original image is 640 × 427 pixels.
    |image_proportions{1,1,focus} correctly produces a 427x427 image, so the problem can probably be reproduced directly from a 427x427 source.
    In any case, the final result really is 200x201.
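
    A minimal sketch of where such an off-by-one can come from (my illustration, not the actual image_reduire code): if a dimension is scaled with a floating-point ratio and then rounded up, it can land on 201 instead of 200, whereas integer arithmetic rounded to nearest stays consistent:

        // Illustrative only: scaling 427x427 down to a 200px bounding box.
        int w = 427, h = 427, target = 200;

        // Fragile: ceil on a floating-point product can overshoot by one
        // when the scale ratio is not exactly representable.
        double ratio = (double) target / w;
        int badH = (int) Math.ceil(h * ratio);   // 200 or 201, depending on FP rounding

        // Safer: integer arithmetic, rounded to nearest.
        int goodH = (int) Math.round((double) h * target / w);   // 200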