
Other articles (77)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
Improving the base version
13 September 2013. A nicer multiple select
The Chosen plugin improves the ergonomics of multiple-select fields; see the two images below for a comparison.
To use it, enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-select lists (...)
On other sites (14099)
-
Crop video into a 4x4 grid/tiles/matrix efficiently via command-line ffmpeg?
22 April 2017, by Dylan. Hello Stack Overflow community!
I dread having to ask questions, but there seems to be no efficient way to take a single input video and apply a matrix transformation, i.e. split the video into equal-sized pieces, preferably 4x4 = 16 segments per input.
I tried libraries such as ffmpeg and mencoder, but producing 16 outputs can run as slowly as 0.15x. The goal of my project is to split the video into 16 segments, rearrange those segments, and combine them back into a final video; later reversing the process in an HTML5 canvas. Here is a picture to help you understand what I am talking about:
[Image: the source, but also the final destination after reorganizing the pieces]
I do not believe you can do this all in one command, so my goal is to crop into 16 mapped outputs quickly, then reassemble them in a different order; I can do the other parts myself. Ideally there would be a way to move pixel blocks, e.g. 100x100, and just move them around. My math is not strong enough.
I really appreciate the work you guys do!
admin@dr.com
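A minimal sketch of one way to do this in a single ffmpeg invocation, using the split and crop filters with one mapped output per tile. This is an illustration under assumptions, not the poster's setup: input.mp4 and the output names are placeholders, and a 2x2 grid is shown to keep it short; the 4x4 case is the same pattern with split=16 and sixteen crop/map pairs. In crop's position arguments, ow and oh refer to the crop's own output width and height (iw/2 and ih/2 here):

    ffmpeg -i input.mp4 -filter_complex \
      "[0:v]split=4[a][b][c][d]; \
       [a]crop=iw/2:ih/2:0:0[tl]; \
       [b]crop=iw/2:ih/2:ow:0[tr]; \
       [c]crop=iw/2:ih/2:0:oh[bl]; \
       [d]crop=iw/2:ih/2:ow:oh[br]" \
      -map "[tl]" tl.mp4 -map "[tr]" tr.mp4 \
      -map "[bl]" bl.mp4 -map "[br]" br.mp4

Because all tiles come from one decode pass, this avoids re-reading and re-decoding the input 16 times, which is the usual cause of the slowdown described above; reassembling the tiles in a different order can then be a second pass, for example with the xstack or overlay filters.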
-
fate/cbs: Add an SEI test
8 May 2018, by Mark Thompson
The artificial sample file sei-1.h264 contains five frames (IDR P B I B)
and the following SEI message types:
* Buffering period
* Picture timing
* Pan-scan rectangle (display as 4:3)
* User data registered, containing A/53 closed captions (captions match
frame content, including reordering)
* Recovery point (at the I frame)
* Display orientation (identity transformation)
* Mastering display (with arbitrary contents)
* Undefined SEI type 1234 (containing ascending bytes)
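As an aside (not part of the commit message): several of these SEI payloads are exposed to libavcodec users as side data on the decoded AVFrame, which is one way such content can be checked after decoding. A minimal sketch, assuming frame is a frame just returned by the decoder:

    #include <stdio.h>
    #include <libavutil/frame.h>
    #include <libavutil/display.h>

    /* Print which SEI-derived side data a decoded frame carries. */
    static void inspect_sei_side_data(const AVFrame *frame)
    {
        AVFrameSideData *sd;

        /* "User data registered" SEI carrying A/53 closed captions. */
        sd = av_frame_get_side_data(frame, AV_FRAME_DATA_A53_CC);
        if (sd)
            printf("A/53 closed captions: %d bytes\n", (int)sd->size);

        /* "Display orientation" SEI becomes a 3x3 display matrix. */
        sd = av_frame_get_side_data(frame, AV_FRAME_DATA_DISPLAYMATRIX);
        if (sd)
            printf("rotation: %.1f degrees\n",
                   av_display_rotation_get((const int32_t *)sd->data));

        /* "Mastering display colour volume" SEI. */
        if (av_frame_get_side_data(frame, AV_FRAME_DATA_MASTERING_DISPLAY_METADATA))
            printf("mastering display metadata present\n");
    }
-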
How to synchronize audio and video using ffmpeg libraries?
21 October 2013, by jsp99. I'm stuck writing a very basic media player in C, using the SDL and ffmpeg libraries. Initially, I followed the theory on this page to get an idea of the overall program and the usage of the libraries. After coding from scratch, thanks to that tutorial and many other resources, I finally made my code work with the latest ffmpeg and SDL (2.0) libraries. But my code lacks a proper synchronization mechanism (actually, it lacks any sync mechanism at all!).
I still don't have a clear idea of how to synchronize the audio and video, as the theory provided at the link is only partially correct (at least when it comes to using the latest development libraries).
For example, one sentence on that page reads: "However, ffmpeg reorders the packets so that the DTS of the packet being processed by avcodec_decode_video() will always be the same as the PTS of the frame it returns."
I am using avcodec_decode_video2(), and the DTS of the packet is definitely not the same as the PTS of the frame it decodes (in general).
I read this very informative BBC report and it makes complete sense. I have a clear idea about PTS and DTS. But the PTS and DTS values that ffmpeg uses for packets and decoded frames are confusing. I wish there were some documentation on that aspect.
Can someone explain the steps to synchronize audio and video? I only need the steps; I am quite comfortable implementing them. Any help is greatly appreciated. Thanks!
PS: Here's a screenshot of what I am talking about:
The huge negative value is, I assume, AV_NOPTS_VALUE.
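A minimal sketch of the usual approach (an illustration under stated assumptions, not the poster's code): treat the audio clock as the master, rescale each video frame's timestamp into seconds using the stream time base, and delay the video until the audio catches up. Here audio_clock_seconds() is a hypothetical placeholder for however the player tracks audio playback position, and best_effort_timestamp is the field libavcodec fills with a fallback value when the plain PTS is AV_NOPTS_VALUE:

    #include <libavformat/avformat.h>
    #include <libavutil/time.h>

    /* Hypothetical: current audio playback position in seconds, e.g. the
     * PTS of the last decoded audio frame minus audio still buffered. */
    extern double audio_clock_seconds(void);

    /* Wait until the audio clock reaches this video frame's display time. */
    static void schedule_video_frame(const AVFrame *frame, const AVStream *st)
    {
        /* best_effort_timestamp falls back to other heuristics (such as
         * the DTS) when the frame carries no PTS (AV_NOPTS_VALUE). */
        int64_t ts = frame->best_effort_timestamp;
        if (ts == AV_NOPTS_VALUE)
            ts = 0; /* or extrapolate from the previous frame */

        /* Rescale from the stream's time base to seconds. */
        double pts = ts * av_q2d(st->time_base);

        /* Audio is the master clock: sleep if the frame is early; if it
         * is late, display it immediately (or drop it to catch up). */
        double early = pts - audio_clock_seconds();
        if (early > 0)
            av_usleep((unsigned)(early * 1e6));
    }

In outline, the steps are: (1) keep an audio clock updated as audio is played out; (2) for every decoded video frame, compute its presentation time in seconds from its timestamp and the stream time base; (3) sleep or drop frames so that video presentation tracks the audio clock. This is broadly the scheme the linked tutorial describes, updated for the current API, which explains the PTS/DTS confusion above: the decoder, not the caller, now resolves reordering into best_effort_timestamp.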