
Other articles (90)
-
About documents
21 June 2013
What can you do when a document fails processing, or when its rendering does not match expectations?
Is a document stuck in the processing queue?
Here is an ordered, empirical list of actions you can try to unblock the situation: relaunch processing of the document that fails; try inserting the document into the MédiaSPIP site again; for a video or audio file, rework the produced media with an editor or a transcoder; convert the document to a format (...)
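For the "rework the media with an editor or a transcoder" step, here is a minimal sketch of what such a conversion could look like, assuming ffmpeg is installed and on the PATH. The file names and target settings are illustrative placeholders, not MediaSPIP's own pipeline.

```python
# Hypothetical sketch: re-encode a problematic file to a widely supported
# format (H.264 video + AAC audio in MP4) before trying the upload again.
# Assumes ffmpeg is on the PATH; file names are placeholders.
import subprocess

def reencode_for_upload(src: str, dst: str = "reencoded.mp4") -> None:
    """Transcode `src` to H.264/AAC so the platform can process it."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,                  # input file that failed processing
            "-c:v", "libx264",          # widely supported video codec
            "-preset", "medium",
            "-crf", "23",               # reasonable quality/size trade-off
            "-c:a", "aac",
            "-b:a", "128k",
            "-movflags", "+faststart",  # helps progressive playback on the web
            dst,
        ],
        check=True,
    )

if __name__ == "__main__":
    reencode_for_upload("document_that_fails.mov")
```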
-
User profiles
12 April 2011
Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)
-
XMP PHP
13 May 2011
According to Wikipedia, XMP stands for:
Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being based on XML, it handles a set of dynamic tags for use in the context of the Semantic Web.
XMP makes it possible to store, in the form of an XML document, information about a file: title, author, history (...)
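As a rough illustration of how that embedded metadata can be reached (sketched here in Python rather than PHP), one can scan a file for the XMP packet markers and pull out the XML payload. The file name below is a placeholder.

```python
# Minimal sketch: extract the raw XMP packet embedded in a file. XMP packets
# are delimited by <?xpacket begin=...?> and <?xpacket end=...?> markers, so
# a simple byte scan is enough to recover the XML for further parsing.
import re
from typing import Optional

def extract_xmp_packet(path: str) -> Optional[str]:
    """Return the XMP XML packet embedded in `path`, or None if absent."""
    with open(path, "rb") as handle:
        data = handle.read()
    match = re.search(
        rb"<\?xpacket begin=.*?\?>(.*?)<\?xpacket end=.*?\?>",
        data,
        re.DOTALL,
    )
    if match is None:
        return None
    return match.group(1).decode("utf-8", errors="replace")

if __name__ == "__main__":
    packet = extract_xmp_packet("photo.jpg")  # placeholder file name
    if packet:
        print(packet[:500])  # show the start of the embedded metadata
```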
On other sites (13594)
-
Is there any way to split a video into multiple segments using ffmpeg on Android?
22 January 2020, by Ayaz Qureshi
I want to split a video into multiple 30-second segments in Android Studio. I have used the following command to trim a video to 30 seconds, but that is not what I want; it only trims the video to 30 seconds. Is there an FFmpeg command to split a video into multiple segments of a specific duration, such as 30 seconds?
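One option is FFmpeg's segment muxer, which cuts the input into fixed-length pieces in a single pass. The sketch below shows the relevant options through Python's subprocess module; the same arguments can be passed to whichever FFmpeg wrapper is used on Android, and the file names are placeholders.

```python
# Minimal sketch of the ffmpeg segment muxer, which splits an input into
# fixed-length pieces without re-encoding.
import subprocess

def split_into_segments(src: str, seconds: int = 30) -> None:
    """Cut `src` into consecutive `seconds`-long files (out000.mp4, out001.mp4, ...)."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,
            "-c", "copy",               # copy streams, no re-encoding
            "-map", "0",                # keep all streams from the input
            "-f", "segment",            # use the segment muxer
            "-segment_time", str(seconds),
            "-reset_timestamps", "1",   # each segment starts at t=0
            "out%03d.mp4",
        ],
        check=True,
    )

if __name__ == "__main__":
    split_into_segments("input.mp4", 30)
```

Note that with `-c copy` the cuts can only fall on keyframes, so segments may not be exactly 30 seconds long; re-encoding (for example with `-c:v libx264`) gives precise cut points at the cost of speed.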
-
Tools to determine video orientation
6 April 2014, by JayLev
I receive videos from different devices and want to encode them using the correct orientation.
I've seen some examples of how to determine the orientation of a video from an iPhone.
With exiftool and mediainfo I can indeed tell whether an iPhone video has to be rotated.
However, for Android videos, portrait and landscape videos have the same rotation and matrix structure as each other.
Maybe this is just my phone; I'm trying to find videos taken with newer Android phones.
My question, however, is whether there are other tools or a different way to determine the orientation that will work with all devices.
EDIT:
I just checked a video from a Samsung Galaxy S II, and I can get the orientation from exiftool, so it's not a problem with all Android phones.
My Android phone is an HTC Desire running Android 2.2. And actually (I didn't even notice this before), a portrait video is not correctly oriented even when played on the phone itself. So I guess it's not about the tools; the orientation data just doesn't seem to be correct at all.
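For what it's worth, here is a hedged sketch of reading the stored rotation with ffprobe (part of FFmpeg), which reports it either as a `rotate` stream tag or as a Display Matrix side-data entry depending on the file. The file name is a placeholder, and as noted above this only helps when the device actually wrote the metadata.

```python
# Minimal sketch, assuming ffprobe is on the PATH: read the rotation metadata
# of the first video stream, checking both places it can appear.
import json
import subprocess

def video_rotation(path: str) -> int:
    """Return the stored rotation in degrees (0 if none is recorded)."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_streams", "-print_format", "json", path],
        capture_output=True, check=True, text=True,
    ).stdout
    stream = json.loads(out)["streams"][0]
    # Older files: a plain "rotate" tag on the stream.
    tag = stream.get("tags", {}).get("rotate")
    if tag is not None:
        return int(tag)
    # Newer files: a Display Matrix side-data entry with a "rotation" field.
    for side_data in stream.get("side_data_list", []):
        if "rotation" in side_data:
            return int(side_data["rotation"])
    return 0

if __name__ == "__main__":
    print(video_rotation("clip.mp4"))  # placeholder file name
```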
-
Connect external cameras to iOS and decompress to a usable form
27 September 2017, by Ping Chen
I want to create a two-camera setup that can send one of the camera views out as an RTMP stream, depending on the motion intensity detected. The chosen camera view can change if the motion intensity on the views changes.
I imagine that I could use an iPhone/iPad as the encoding/streaming hub as well as one of the cameras, and connect a WiFi camera to the iPad/iPhone to feed the second camera view.
My goals for the iOS side are:
- Connect with a WiFi camera on the local network
- Decode the data and run motion intensity detection on the WiFi camera feed AND the iPhone/iPad’s own camera feed with Brad Larson’s GPUImage framework https://github.com/BradLarson/GPUImage
- Stream out the chosen camera view, depending on the motion detected
Larson's GPUImage framework works with an AVCaptureSession subclass. I'm only familiar with AVFoundation objects, but am a complete noob when it comes to VideoToolbox and some of the lower-level iOS video stuff. Through googling, I kind of know that VTDecompressionSession is what I'd get from the WiFi camera. I have no clue how I can turn that into a usable form for my purposes.
I've dug through Stack Overflow answers such as: https://stackoverflow.com/a/29525001/7097455
Very informative, but maybe I don't even know how to ask the correct questions.
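Setting the iOS plumbing aside, the view-selection policy itself is small. Here is a hypothetical sketch, in Python purely to illustrate the switching logic rather than the VideoToolbox/GPUImage integration: the streamed camera changes only when the other feed's motion intensity is clearly higher, with a hold period to avoid rapid flapping between views.

```python
# Hypothetical sketch of the view-selection policy only: given per-frame
# motion-intensity scores for the two feeds, decide which camera to stream.
# A switch margin and a hold time avoid flapping between views. The actual
# decoding (VideoToolbox) and motion detection (GPUImage) are not modelled.

class CameraSelector:
    def __init__(self, switch_margin: float = 0.15, hold_frames: int = 30):
        self.switch_margin = switch_margin  # how much stronger the other feed must be
        self.hold_frames = hold_frames      # minimum frames between switches
        self.current = 0                    # index of the camera currently streamed
        self.frames_since_switch = 0

    def update(self, intensities: list) -> int:
        """Return the index of the camera to stream for this frame."""
        self.frames_since_switch += 1
        other = 1 - self.current
        clearly_stronger = (
            intensities[other] > intensities[self.current] + self.switch_margin
        )
        if clearly_stronger and self.frames_since_switch >= self.hold_frames:
            self.current = other
            self.frames_since_switch = 0
        return self.current

if __name__ == "__main__":
    selector = CameraSelector()
    # Fake scores: camera 1 becomes clearly more active partway through.
    for scores in [[0.5, 0.1]] * 40 + [[0.1, 0.6]] * 40:
        chosen = selector.update(scores)
    print("streaming camera", chosen)
```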