
Other articles (66)
-
MediaSPIP version 0.1 Beta
16 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for a "farm mode" installation, you will also need to make other modifications (...)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to make other manual (...)
-
Improving the base version
13 September 2013. Nicer multiple selection
The Chosen plugin improves the usability of multiple-select fields. See the following two images to compare.
To do this, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure the plugin (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-select lists (...)
On other sites (8256)
-
Why iFrame is a good idea
15 October 2009. I've seen some hilariously uninformed posts about the new Apple iFrame specification. Let me take a minute to explain what it actually is.
First off, as opposed to what the fellow in the Washington Post writes, it's not really a new format. iFrame is just a way of using formats that we already know and love. As the name suggests, iFrame is just an i-frame-only H.264 specification, using AAC audio. An intraframe version of H.264, eh? Sounds a lot like AVC-Intra, right? Exactly. And for exactly the same reason: edit-ability. Whereas AVC-Intra targets the high end, iFrame targets the low end.
Even when used in intraframe mode, H.264 has some huge advantages over older intraframe codecs like DV or DVCProHD: for example, significantly better entropy coding, adaptive quantization, and potentially variable bitrates. There are many others. Essentially, it's what happens when you take DV and spend another 10 years making it better. That's why Panasonic's AVC-Intra cameras can do DVCProHD-quality video at half (or less) the bitrate.
Why does iFrame matter for editing? Anyone who's tried to edit video from one of the modern H.264 cameras without first transcoding to an intraframe format has experienced the huge CPU demands and sluggish performance. Behind the scenes it's even worse. Because interframe H.264 can have very long GOPs, displaying any single frame can rely on dozens or even hundreds of other frames. Because of the complexity of H.264, building those frames is very expensive, and the cost is variable: decoding the first frame in a GOP is relatively trivial, while decoding a middle B-frame can be hugely expensive.
Programs like iMovie mask that from the user in some cases, but at the expense of high overhead. Anyone who's imported AVC-HD video into Final Cut Pro or iMovie knows that there's a long "importing" step: behind the scenes, the application is transcoding your video into an intraframe format like Apple Intermediate or ProRes. It sort of defeats one of the main purposes of a file-based workflow.
You've probably also noticed the amount of time it takes to export a video in an interframe format. Anyone who's edited HDV in Final Cut Pro has experienced this. With DV, doing an "export to QuickTime" is simply a matter of Final Cut Pro rewriting all of the data to disk; it's essentially a file copy. With HDV, Final Cut Pro has to do a complete re-encode of the whole timeline to fit everything into the new GOP structure. Not only is this time-consuming, it's essentially a generation loss.
iFrame solves these issues by giving you an intraframe codec, with modern efficiency, which can be decoded by any of the H.264 decoders that we already know and love.
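To make that concrete, here's a sketch of how you could produce an intra-only H.264 + AAC file with ffmpeg (my own illustration, not Apple's official iFrame preset; -g 1 forces a GOP size of 1, i.e. every frame is an I-frame):
ffmpeg -i input.mov -c:v libx264 -g 1 -c:a aac output.mov
Since every frame is self-contained, an editor can seek to any frame without decoding its neighbors, which is exactly the editing win described above.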
Having this as an optional setting on cameras is a huge step forward for folks interested in editing video. Hopefully some of the manufacturers of AVC-HD cameras will adopt this format as well. I’ll gladly trade a little resolution for instant edit-ability.
-
ffmpeg Get time of frames from trimmed video
17 November 2017, by TheOtherguyz4kj. I am using FFmpeg in my application to extract frames from a video. The frames will be added to a trim-video view, which gives an illustration of what is happening in the video at specific times, so each frame needs to represent some time within the video.
I don't quite understand how FFmpeg is producing the frames. Here is my code:
"-i",
videoCroppedFile.getAbsolutePath(),
"-vf",
"fps=1/" + frameSeperation,
mediaStorageDir.getAbsolutePath() +
"/%d.jpg"My app allows you to record a video at a max length of 20s. The number of frames extracted from the video depnds on how long the captured video is. frameSeperation is calculated doing the below code.
String time = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
long videoLength = Long.parseLong(time) / 1000; // duration in seconds
double frameSeperationDouble = (double) videoLength;
// Divide by 11 because there is a maximum of 11 frames on the trim video view
frameSeperationDouble /= 11;
// Round up, e.g. a 15 s video gives ceil(15 / 11) = 2 seconds between frames
frameSeperationDouble = Math.ceil(frameSeperationDouble);
int frameSeperation = (int) frameSeperationDouble;

Maybe the above logic is very bad; if there is a better way, please can somebody tell me.
Anyway, I run the code, and below are a few test cases:
- A video captured with a length of 6 seconds has 7 frames.
- A video captured with a length of 2 seconds has 3 frames.
- A video captured with a length of 10 seconds has 12 frames.
- A video captured with a length of 15 seconds has 9 frames.
- A video captured with a length of 20 seconds has 11 frames.
There is no consistency, and because of this I find it hard to put timestamps against each frame. I feel like my logic is wrong or I'm not understanding something. Any help is much appreciated.
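If I understand ffmpeg's fps filter correctly (an assumption on my part), fps=1/N emits one frame at t = 0 and then one every N seconds, so a video of duration d should produce roughly floor(d / N) + 1 frames, give or take one at the clip boundary. That fits the numbers above: 6 s gives N = ceil(6/11) = 1 and about 7 frames; 20 s gives N = ceil(20/11) = 2 and about 11 frames; 10 s and 15 s come out one frame above the formula (12 and 9). The jump of Math.ceil from N = 1 to N = 2 at 12 s would also explain why the counts don't grow smoothly with video length.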
Update 1
So I did what you said in the comments:
final FFmpeg ffmpeg = FFmpeg.getInstance(mContext);
final File mediaStorageDir = new File(Environment.getExternalStorageDirectory()
        + "/Android/data/"
        + mContext.getPackageName()
        + "/vFrames");
if (!mediaStorageDir.exists()) {
    mediaStorageDir.mkdirs();
}

MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(mContext, Uri.fromFile(videoCroppedFile));
String time = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
long videoLength = Long.parseLong(time) / 1000;
double frameSeperationDouble = (double) videoLength / 8;
retriever.release();

final String cmd[] = {
    "-i",
    videoCroppedFile.getAbsolutePath(),
    "-vf",
    "fps=1/" + frameSeperationDouble,
    "-vframes," + 8,
    mediaStorageDir.getAbsolutePath() + "/%d.jpg"
};

I also tried
"-vframes=" + 8
at the same point where I put -vframes in cmd. It doesn't seem to work at all now; no frames are being extracted from the video.
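For reference, here is a minimal sketch of how that argument array would normally need to be laid out (assuming the wrapper passes each array element to ffmpeg as a single argv token, which is how command-line parsing usually works; ffmpeg expects a flag and its value as two separate tokens, so "-vframes," + 8 arrives as the single malformed token -vframes,8):

final String[] cmd = {
    "-i", videoCroppedFile.getAbsolutePath(),
    "-vf", "fps=1/" + frameSeperationDouble,
    "-vframes", "8", // flag and value as two separate array elements
    mediaStorageDir.getAbsolutePath() + "/%d.jpg"
};

This reuses the variables from the update above; only the -vframes handling changes.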
-
Anomalie #4430: image_reduire mishandles rounding
4 February 2020, by jluc. The initial image is 640 × 427 pixels.
|image_proportions{1,1,focus} correctly produces a 427x427 image, so the problem can probably be reproduced starting directly from a 427x427 image. In any case, the final result really is 200x201.