
Other articles (104)
-
XMP PHP
13 May 2011
As Wikipedia puts it, XMP stands for:
Extensible Metadata Platform (XMP), an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being based on XML, it handles a set of dynamic tags for use in the Semantic Web.
XMP makes it possible to store, as an XML document, information about a file: title, author, history (...)
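A minimal sketch in Go of how such an embedded XMP packet can be located, assuming the standard <?xpacket ... ?> wrapper that JPEG, PDF and TIFF files use; the input file name is only an example:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// Extract the raw XMP packet (an RDF/XML document) from any file that embeds
// it between the standard <?xpacket begin= ... ?> and <?xpacket end= ... ?>
// processing instructions.
func main() {
	data, err := os.ReadFile("photo.jpg") // example input file
	if err != nil {
		panic(err)
	}
	start := bytes.Index(data, []byte("<?xpacket begin="))
	if start < 0 {
		fmt.Println("no XMP packet found")
		return
	}
	end := bytes.Index(data[start:], []byte("<?xpacket end="))
	if end < 0 {
		fmt.Println("unterminated XMP packet")
		return
	}
	tail := bytes.Index(data[start+end:], []byte("?>")) // close of the end marker
	if tail < 0 {
		fmt.Println("malformed XMP packet")
		return
	}
	packet := data[start : start+end+tail+2]
	fmt.Println(string(packet)) // title, author, history, ... as RDF/XML
}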
Customising the categories
21 June 2013
Category creation form
For those who know SPIP well, a category can be thought of as a section (rubrique).
For a document of type category, the fields offered by default are: Text
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of type media, the fields not displayed by default are: Short description
It is also in this configuration part that you can indicate the (...)
Emballe médias: what is it for?
4 February 2011
This plugin is designed to manage sites that publish documents of all types.
It creates "media", that is: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether it is audio, video, image or text; only a single document can be linked to a "media" article;
On other sites (9782)
-
Is there any open source solution to display a remote stream inside a HoloLens 2 UWP Vuforia application?
19 April 2023, by T777
What do we need?


We are trying to develop an application for quality management in which we show a hologram on a metal part as an assistance marking (using HoloLens 2 + Vuforia + Model Targets). The employee uses a sensor to follow this assistance marking and the data is analyzed live by a test device. The results are output on a screen / are visible in a closed-source application from the manufacturer of the test device.


Capturing the video output:
The current plan is to capture the video stream of the test device via a capture card, add an MRTK2 video panel inside the Vuforia app, and stream the captured video to the HoloLens 2 using OBS or an OpenCV Python script for screen recording.


What we have tried so far


1) Sending a raw UDP stream
via RTMP, decoding and converting with a GStreamer server, and writing our own library in Unity for receiving.
Result: temporarily stopped, because receiving the UDP streams needs connection/session management (signalling), frame syncing and agreement on video size, color format, frame rate etc., and we have no solution for this.
Implementing any of this ourselves would be highly complex and would consume a lot of time.
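To illustrate the scale of the problem, a minimal raw sender can be sketched in Go as below (address and packet size are placeholders); everything it lacks (signalling, frame boundaries, parameter negotiation) is exactly what made us stop:

package main

import (
	"net"
	"os"
)

// Naive sender sketch: chop whatever arrives on stdin (e.g. an encoded
// bitstream piped out of ffmpeg) into UDP datagrams. There is no signalling,
// no frame syncing and no agreement on codec parameters.
func main() {
	conn, err := net.Dial("udp", "192.168.0.42:5000") // placeholder HoloLens 2 address
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	buf := make([]byte, 1200) // stay below a typical MTU
	for {
		n, err := os.Stdin.Read(buf)
		if n > 0 {
			if _, werr := conn.Write(buf[:n]); werr != nil {
				panic(werr)
			}
		}
		if err != nil { // io.EOF once the pipe closes
			return
		}
	}
}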


2) Using available protocols that I could find on the web
There are already some protocols developed for session creation and streaming:


- HTTP streaming (HLS) (Transport + Session)
- RTMP (Transport + Session)
- RTP (Transport) + RTSP (Session)
- WebRTC: possible with different protocol stacks, i.e. RTP over TCP/UDP (Transport) + SDP (a standardized format for the video parameters) + ICE (p2p), WHIP (HTTP, client-server) or WebSocket (client-server) as signalling protocols, with some good open source streaming servers available (GStreamer, mediamtx and SRS); a minimal WHIP signalling sketch follows below the list.
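As a rough illustration of how light the WHIP part of that stack is: signalling is a single HTTP POST of an SDP offer, answered with an SDP answer. In the Go sketch below, the offer string is assumed to come from a WebRTC library (for example pion/webrtc) and the endpoint URL is a placeholder for a server such as mediamtx or SRS:

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// WHIP signalling sketch: POST an SDP offer as application/sdp and read the
// SDP answer from the response body. A WHIP server replies 201 Created and
// puts the session resource URL in the Location header.
func main() {
	offer := "v=0\r\n..." // placeholder: a real SDP offer produced by a WebRTC stack

	resp, err := http.Post(
		"https://streaming.example:8889/mystream/whip", // placeholder WHIP endpoint
		"application/sdp",
		strings.NewReader(offer),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	answer, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Status)                 // expect "201 Created"
	fmt.Println(resp.Header.Get("Location")) // session resource, used to tear the session down later
	fmt.Println(string(answer))              // SDP answer to hand back to the WebRTC stack
}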

When using these, the video will typically be encoded with H.264 (x264) and needs to be decoded on the HoloLens 2. There are APIs to C/C++ native (hardware) decoding libraries, like unity-vlc and ffmpeg.NET, which need the ffmpeg media library. I could figure out (not tested) that there is a hardware H.264 decoder on the HoloLens 2, but I have no clue how to access it, since I could not discover any information about HoloLens 2 media libraries.


3) Using Unity packages


- The Unity package WebRTC (https://docs.unity3d.com/Packages/com.unity.webrtc@2.4/manual/index.html) supports multiple transport protocols but seems to have no signalling mechanism (a minimal WebSocket signalling relay is sketched after this section), and


- The Unity package Render Streaming (https://docs.unity3d.com/Packages/com.unity.renderstreaming@3.1/manual/index.html) is a fully integrated Unity-to-Unity and Unity-to-browser streaming package with an integrated streaming server and web GUI. It offers various streaming protocols (TCP, UDP, RTMP) and signalling mechanisms over WebSocket, HTTP (seemingly custom, not WHIP) or Furioos.
BUT it does not support UWP, as noted in the documentation. Implementing an example application, we could demonstrate a working example with Vuforia, but it fails on build for target UWP because of missing libraries.
Similar to: https://www.youtube.com/watch?v=nHRC0uGBnn8

We will be testing other compile options tomorrow.
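Since the piece missing from the Unity WebRTC package is signalling, the relay itself does not have to be big. The sketch below is a minimal two-peer WebSocket relay in Go that simply forwards SDP offers/answers and ICE candidates between whoever is connected; it uses the third-party github.com/gorilla/websocket package, and the endpoint path and the two-peer assumption are placeholders:

package main

import (
	"log"
	"net/http"
	"sync"

	"github.com/gorilla/websocket"
)

// Minimal signalling relay: every message received from one peer (an SDP
// offer/answer or an ICE candidate) is forwarded verbatim to all other peers.
// A real relay would also remove closed connections from the slice.
var (
	upgrader = websocket.Upgrader{CheckOrigin: func(r *http.Request) bool { return true }}
	mu       sync.Mutex
	peers    []*websocket.Conn
)

func handle(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println("upgrade:", err)
		return
	}
	mu.Lock()
	peers = append(peers, conn)
	mu.Unlock()

	for {
		mt, msg, err := conn.ReadMessage()
		if err != nil {
			return // peer disconnected
		}
		mu.Lock()
		for _, p := range peers {
			if p != conn {
				p.WriteMessage(mt, msg) // relay to the other peer(s)
			}
		}
		mu.Unlock()
	}
}

func main() {
	http.HandleFunc("/signal", handle)
	log.Fatal(http.ListenAndServe(":8080", nil))
}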


- Mixed Reality WebRTC (https://github.com/microsoft/MixedReality-WebRTC):
Various protocol support; Microsoft brought WebRTC specifically to the HoloLens.
It is deprecated, though, and as far as I can see it only supports HoloLens 1 and ARM32, so I cannot evaluate whether trying it is worth it.




What are the next options?


- Developing a raw UDP streaming library directly in Unity.
- Rebuilding the application to be VisionLib (ARM32) compatible and using MixedReality-WebRTC (ARM32).
- Porting ffmpeg + an API to UWP?
- There also seem to be some efforts to make WebRTC in general available on UWP platforms: https://github.com/microsoft/winrtc

The questions


- Does Vuforia support ARM32?
- How can the hardware decoder of the HoloLens 2 be accessed from Unity code?






-
How to test ffmpeg for streaming encoding at 1x? [closed]
7 May 2023, by Public Name
I would like to test ffmpeg for encoding a stream on my VM to see how much CPU % it uses and how many cores. I don't have any streams running yet, but I plan to use webcams to provide the stream in the future. How should I go about doing this?


I have test mp4 files I could provide.


Should I:

- Is there a way to tell ffmpeg to encode at only 1x speed (i.e. only process 30 fps worth of video per second)? See the sketch after this section.
- Or do I have to create a stream first and have ffmpeg encode that stream? I found SRS (Simple Realtime Server), https://github.com/ossrs/srs. I was going to start a stream from there and have ffmpeg ingest it, but that seems complicated, so I was wondering whether there is an easier way to do #1, or an easier way to do #2?






So far I have tried to get ffmpeg running, but have encountered some errors. SRS is complicated to set up, so I have not tried it yet.
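One hedged suggestion for option #1: ffmpeg's -re input flag reads the input at its native frame rate, so encoding a looping test mp4 with it approximates a live 30 fps source without any streaming server. Expressed as an argument list (file name, encoder and preset are placeholders; the null muxer discards the output because only the CPU load matters):

 testEncode := []string{
  "ffmpeg",
  "-re",                // read the input at its native frame rate (1x)
  "-stream_loop", "-1", // loop the test file so the load keeps running
  "-i", "test.mp4",     // placeholder test file
  "-c:v", "libx264",    // CPU encoder whose usage is being measured
  "-preset", "veryfast",
  "-f", "null", "-",    // discard the encoded output
 }

Newer ffmpeg builds also expose the same behaviour as -readrate 1; watching CPU % while this runs answers the question without setting up SRS.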


-
How to send encoded video (or audio) data from server to client in a way that's decodable by webcodecs API using minimal latency and data overhead
11 January 2023, by Tiger Yang
My question (read the entire post for context):


Given the unique circumstance of only ever decoding data from a specifically-configured encoder, what is the best way I can send the encoded bitstream along with the bare minimum extra bytes required to properly configure the decoder on the client's end (including only things that change per stream, and omitting things that don't, such as resolution)? I'm a sucker for zero compromises, and I think I am willing to design my own minimal container format to accomplish this.


Context and problem:


I'm working on a remote desktop implementation that consists of a server which captures and encodes the display and speakers using FFmpeg and forwards the output via a pipe to a Go program, which sends it on two unidirectional WebTransport streams to my client, which I plan to decode using the WebCodecs API (a small sketch of that pipe-reading side is included below). According to MDN, the video decoder needs to be fed via .configure() an object containing the following: https://developer.mozilla.org/en-US/docs/Web/API/VideoDecoder/configure before it's able to decode anything.


The same goes for the audio decoder: https://developer.mozilla.org/en-US/docs/Web/API/AudioDecoder/configure
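For context, the Go side of the pipe mentioned above can be as small as the sketch below: it launches ffmpeg, reads the raw bitstream from stdout in chunks, and hands each chunk to whatever writes the WebTransport stream. The ffmpeg arguments are shortened here and sendToClient is a placeholder, not the actual implementation:

package main

import (
	"log"
	"os/exec"
)

// Placeholder for the function that writes a chunk to the open
// unidirectional WebTransport stream.
func sendToClient(chunk []byte) {
	_ = chunk
}

func main() {
	// Shortened version of the capture command; the full argument list
	// is shown further down in this post.
	cmd := exec.Command("ffmpeg",
		"-init_hw_device", "d3d11va",
		"-filter_complex", "ddagrab=video_size=1920x1080:framerate=60",
		"-vcodec", "hevc_nvenc",
		"-f", "hevc", "-", // raw HEVC bitstream on stdout
	)
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	buf := make([]byte, 64*1024)
	for {
		n, err := stdout.Read(buf)
		if n > 0 {
			sendToClient(buf[:n])
		}
		if err != nil { // io.EOF once ffmpeg exits
			break
		}
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}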


What I've tried so far:


Because this remote desktop will be for my personal use only, it will only ever receive streams from a specific encoder configured in a specific way, encoding video at a specific resolution, framerate, color space, etc. Therefore, I took my video capture FFmpeg command...


videoString := []string{
 "ffmpeg",
 "-init_hw_device", "d3d11va",                                   // create a Direct3D 11 device for capture
 "-filter_complex", "ddagrab=video_size=1920x1080:framerate=60", // desktop duplication grab, 1080p at 60 fps
 "-vcodec", "hevc_nvenc",                                        // NVIDIA NVENC HEVC encoder
 "-tune", "ll",                                                  // low-latency tuning
 "-preset", "p7",                                                // slowest / highest-quality NVENC preset
 "-spatial_aq", "1",                                             // spatial adaptive quantization
 "-temporal_aq", "1",                                            // temporal adaptive quantization
 "-forced-idr", "1",                                             // force keyframes to be IDR frames
 "-rc", "cbr",                                                   // constant bitrate mode
 "-b:v", "500K",                                                 // 500 kb/s video bitrate
 "-no-scenecut", "1",                                            // no extra keyframes on scene changes
 "-g", "216000",                                                 // GOP size: one keyframe per hour at 60 fps
 "-f", "hevc", "-",                                              // raw HEVC (Annex B) bitstream to stdout
 }



...and instructed it to write to an mp4 file instead of outputting to the pipe, and then I had this WebCodecs demo, https://w3c.github.io/webcodecs/samples/video-decode-display/, demux it using mp4box.js. Knowing that the demo produces a proper .configure() object, I blindly copied it and had my client configure with that every time. Sadly, it didn't work, and I have since noticed that the "description" part of the configure object changes despite the encoder and parameters being the same.


I knew that mp4 files worked via mp4box, but they can't be streamed with low latency over a network; additionally, ffmpeg's -f parameter specifies the muxer to use, and there are so many different types.


At this point, I think I'm completely out of my depth, so:


Given the unique circumstance of only ever decoding data from a specifically-configured encoder, what is the best way I can send the encoded bitstream along with the bare minimum extra bytes required to properly configure the decoder on the client's end (including only things that change per stream, and omitting things that don't, such as resolution)? I'm a sucker for zero compromises, and I think I am willing to design my own minimal container format to accomplish this. (copied above)
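As a hedged starting point for such a minimal container: the stream that ffmpeg writes with -f hevc is an Annex B elementary stream (NAL units separated by start codes), so a first framing step is to split it on start codes into NAL units, which a custom format could then length-prefix and write to the WebTransport stream. The sketch below only does the splitting; buffer sizes are arbitrary and the handling of trailing zero bytes is simplified:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// Split the raw Annex B HEVC stream arriving on stdin (e.g. the ffmpeg pipe)
// into NAL units on 0x000001 / 0x00000001 start codes.
var startCode = []byte{0, 0, 1}

func main() {
	buf := make([]byte, 0, 1<<20)
	tmp := make([]byte, 64*1024)
	for {
		n, err := os.Stdin.Read(tmp)
		buf = append(buf, tmp[:n]...)

		// Emit every complete NAL unit currently buffered.
		for {
			first := bytes.Index(buf, startCode)
			if first < 0 {
				break
			}
			next := bytes.Index(buf[first+3:], startCode)
			if next < 0 {
				break // the last NAL unit is still incomplete
			}
			nal := buf[first+3 : first+3+next]
			nal = bytes.TrimSuffix(nal, []byte{0}) // a 4-byte start code leaves one trailing zero
			if len(nal) > 0 {
				// HEVC NAL header: 1 forbidden bit, then 6 bits of nal_unit_type.
				fmt.Printf("NAL unit: %d bytes, type %d\n", len(nal), (nal[0]>>1)&0x3f)
			}
			buf = buf[first+3+next:]
		}

		if err != nil { // io.EOF once the pipe closes
			return
		}
	}
}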