Other articles (103)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own via the form at the bottom of the page.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used as a fallback.
    The HTML5 player in use was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

  • Libraries and binaries specific to video and audio processing

    31 January 2010

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries: FFmpeg, the main encoder, which transcodes almost every type of video or audio file into formats readable on the Internet (see this tutorial for its installation); Oggz-tools, inspection tools for Ogg files; MediaInfo, which extracts metadata from most video and audio formats.
    Complementary, optional binaries: flvtool2 (...)
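
    As a rough illustration of how such a toolchain is typically driven (the file names and the codec choice below are assumptions for the example, not SPIPmotion's actual settings), a small Python sketch that probes a file and transcodes it to one web-readable format:

        import json
        import subprocess

        SRC = "upload.avi"  # hypothetical input file

        # Probe the file first (ffprobe ships with FFmpeg; MediaInfo plays a
        # similar role in the stack described above).
        probe = subprocess.run(
            ["ffprobe", "-v", "error", "-show_format", "-show_streams",
             "-of", "json", SRC],
            capture_output=True, text=True, check=True)
        info = json.loads(probe.stdout)
        print("container:", info["format"]["format_name"],
              "duration (s):", info["format"]["duration"])

        # Transcode to H.264 video + AAC audio in MP4, one common web format.
        subprocess.run(
            ["ffmpeg", "-y", "-i", SRC, "-c:v", "libx264", "-c:a", "aac",
             "out.mp4"],
            check=True)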

On other sites (16893)

  • asm SIMD sniffer

    1 August 2023, by Андрей Тернити

    There is x264. It uses a lot of x86 asm files, for example pixel-32.asm. These files can use different SIMD instruction sets: MMX, 3DNow!, the SSE family, and others.

    I need a simple way to analyze every file automatically and find out which SIMD family is used in which file. How?

    I think every asm file should state which SIMD family it uses (or that it uses none). Without that information it is a very bad idea to try to use these files.
    I am angry: my x86 CPU supports only MMX and 3DNow!, but x264 tries to call SSE, so I sometimes get "Illegal instruction". I plan to write a patch for x264.
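
    One workable approach, sketched below as a heuristic rather than a definitive tool: grep each .asm file for instruction mnemonics that belong to a given SIMD family. The mnemonic lists are deliberately small samples (extend them for real use), and macro-generated code such as x264's x86inc.asm templates can evade or confuse the scan:

        import re
        import sys
        from pathlib import Path

        # Small, incomplete mnemonic samples per family -- extend as needed.
        FAMILIES = {
            "MMX":    r"movq|paddb|paddw|paddd|pmullw|packuswb|emms",
            "3DNow!": r"pfadd|pfmul|pfrcp|pavgusb|femms",
            "SSE":    r"movaps|movups|addps|mulps|shufps|rcpps",
            "SSE2":   r"movdqa|movdqu|paddq|pmuludq|punpcklqdq|addpd",
            "SSSE3":  r"pshufb|pmaddubsw|palignr|pabsw",
            "SSE4":   r"pblendw|pextrd|pinsrd|pmovzxbw|ptest",
            "AVX+":   r"v[a-z][a-z0-9]{2,}",  # VEX mnemonics start with 'v'
        }
        PATTERNS = {name: re.compile(rf"\b(?:{rx})\b", re.IGNORECASE)
                    for name, rx in FAMILIES.items()}

        def sniff(path):
            # Strip NASM ";" comments so mnemonics in comments don't count.
            text = re.sub(r";.*", "", path.read_text(errors="replace"))
            return [name for name, pat in PATTERNS.items() if pat.search(text)]

        if __name__ == "__main__":
            for arg in sys.argv[1:]:
                for f in sorted(Path(arg).rglob("*.asm")):
                    print(f"{f}: {', '.join(sniff(f)) or 'no SIMD detected'}")

    Run as, e.g., python simd_sniff.py x264/common/x86 (script name and path assumed); files reported as SSE or above would be the candidates to exclude on an MMX/3DNow!-only CPU.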

    P.S. If you can file issues in the official repo, let me know.

    P.P.S. See this thread on Doom9 (mirror).

  • Is there any open-source solution to display a remote stream inside a HoloLens 2 UWP Vuforia application?

    19 April 2023, by T777

    What do we need?

    We are trying to develop an application for quality management in which we show a hologram on a metal part as an assistance marking (using HoloLens 2 + Vuforia + Model Targets). The employee uses a sensor to follow this assistance marking, and the data is analyzed live by a test device. The results are output on a screen and are visible in a closed-source application from the manufacturer of the test device.

    Capturing the video output:
    The current plan is to capture the video stream of the test device via a capture card, add a video panel (MRTK2) inside the Vuforia app, and stream the captured video to the HoloLens 2 using OBS or an OpenCV Python script for screen recording.
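
    A minimal sketch of the OpenCV variant of that plan (the device index and the RTMP endpoint are assumptions, and something like SRS or mediamtx would have to be listening at that URL):

        import subprocess
        import cv2  # pip install opencv-python

        DEVICE_INDEX = 0                          # assumed capture-card index
        RTMP_URL = "rtmp://127.0.0.1/live/test"   # assumed ingest endpoint

        cap = cv2.VideoCapture(DEVICE_INDEX)
        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        fps = int(cap.get(cv2.CAP_PROP_FPS)) or 30

        # Pipe raw BGR frames into ffmpeg, which encodes H.264 and pushes RTMP.
        ffmpeg = subprocess.Popen(
            ["ffmpeg", "-y",
             "-f", "rawvideo", "-pix_fmt", "bgr24",
             "-s", f"{width}x{height}", "-r", str(fps),
             "-i", "-",
             "-c:v", "libx264", "-preset", "veryfast", "-tune", "zerolatency",
             "-f", "flv", RTMP_URL],
            stdin=subprocess.PIPE)

        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                ffmpeg.stdin.write(frame.tobytes())
        finally:
            cap.release()
            ffmpeg.stdin.close()
            ffmpeg.wait()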

    What we have tried so far

    1) Sending a raw UDP stream
    via RTMP, decoding and converting with a GStreamer server, and writing our own library in Unity for receiving.
    Result: temporarily stopped, because receiving the UDP streams needs connection/session management (signalling), frame syncing, and agreement on video size, color format, frame rate, etc., and we have no solution. Implementing any of this ourselves would be highly complex and would consume a lot of time.
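
    To illustrate the problem: even the most naive hand-rolled scheme ends up inventing a header that carries what RTP and SDP normally negotiate. A toy sketch (the field layout is entirely made up):

        import socket
        import struct

        # Made-up header: frame id, width, height, pixel-format id, fps.
        # Sender and receiver must agree on all of this out of band -- exactly
        # the session/signalling problem described above.
        HEADER = struct.Struct("!IHHBB")

        def send_frame(sock, addr, frame_id, payload, w, h, pix_fmt=0, fps=30):
            # Real code must also split the payload into MTU-sized chunks and
            # handle loss and reordering; deliberately omitted here.
            sock.sendto(HEADER.pack(frame_id, w, h, pix_fmt, fps) + payload, addr)

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        send_frame(sock, ("127.0.0.1", 5000), 0, b"\x00" * 1024, 640, 480)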

    2) Using available protocols that I could find on the web
    There are already several protocols developed for session creation and streaming:

    • HTTP streaming (HLS) (transport + session)
    • RTMP (transport + session)
    • RTP (transport) + RTSP (session)
    • WebRTC: possible with different protocol stacks, RTP/TCP/UDP (transport) + SDP (a standardized format for video parameters) + ICE (p2p), WHIP (HTTP, client-server), or WebSocket (client-server) as the signalling protocol; there are also some good open-source streaming servers (GStreamer, mediamtx, and SRS)

    When using these, the video will typically be encoded with H.264 (e.g. by x264) and needs to be decoded on the HoloLens 2. There are APIs to native C/C++ (hardware) decoding libraries, such as unity-vlc and ffmpeg.NET, which require the ffmpeg media library. I found out (though untested) that there is a hardware H.264 decoder on the HoloLens 2, but I have no clue how to access it, since I could not discover any information about HoloLens 2 media libraries.
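
    The decode path can at least be sanity-checked on a desktop before fighting the UWP side. A sketch using PyAV (ffmpeg bindings; the URL is an assumed test endpoint, and this is desktop Python, not HoloLens code):

        import av  # pip install av (PyAV, ffmpeg bindings)

        URL = "rtmp://127.0.0.1/live/test"  # assumed test endpoint

        container = av.open(URL)
        stream = container.streams.video[0]
        print("codec:", stream.codec_context.name,
              f"{stream.codec_context.width}x{stream.codec_context.height}")

        # Decode roughly one second of frames to confirm the stream is readable.
        for i, frame in enumerate(container.decode(stream)):
            print(f"frame {i}: pts={frame.pts} {frame.width}x{frame.height}")
            if i >= 29:
                break
        container.close()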

    3) Using Unity packages

    Will be testing other compile options tomorrow.

    • Mixed Reality WebRTC (https://github.com/microsoft/MixedReality-WebRTC):
    Various protocol support; Microsoft brought WebRTC specifically to the HoloLens.
    Deprecated; as far as I can see it only supports HoloLens 1 and ARM32, so I cannot evaluate whether trying it is worth the effort.

    What are the next options?

    • Developing a raw UDP streaming library with Unity directly.
    • Rebuilding the application to be compatible with visionlib (ARM32) and MixedReality-WebRTC (ARM32).
    • Porting ffmpeg + an API to UWP?
    • There also seem to be some efforts to make WebRTC available to UWP platforms in general: https://github.com/microsoft/winrtc

    The questions

    • Does Vuforia support ARM32?
    • How can the hardware decoder of the HoloLens 2 be accessed via Unity code?

  • How to test ffmpeg for streaming encoding at 1x? [closed]

    7 May 2023, by Public Name

    I would like to test ffmpeg encoding a stream on my VM to see what CPU % it uses and how many cores. I don't have any streams going yet, but I plan to use webcams to provide the stream in the future. How should I go about doing this?

    I have test MP4 files I could provide.

    Should I:

    1. Is there a way to tell ffmpeg to encode at only 1x speed, i.e. process only 30 frames each second for a 30 fps input? (A sketch follows at the end of this question.)
    2. Or do I have to create a stream first and have ffmpeg encode that stream? I found SRS (Simple Realtime Server), https://github.com/ossrs/srs; I was going to start a stream from there and have ffmpeg ingest it, but that seems complicated, so I was wondering whether there is an easier way via #1, or an easier way to do #2.

    So far I have tried to get ffmpeg running but have encountered some errors. SRS is complicated to set up, so I have not tried it yet.
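
    Regarding #1: ffmpeg's -re input option reads the input at its native frame rate, which approximates a live 1x source without any streaming server. A sketch (the file name is an assumption; watch CPU usage with top or htop while it runs):

        import subprocess

        SRC = "test.mp4"  # any local test file

        # "-re" throttles reading to the input's native frame rate, so a
        # 60-second file takes about 60 seconds to encode -- i.e. 1x speed.
        # "-f null -" discards the output; swap in a real muxer/URL (e.g. an
        # SRS ingest address) to also test network delivery.
        subprocess.run(
            ["ffmpeg", "-re", "-i", SRC,
             "-c:v", "libx264", "-preset", "veryfast",
             "-f", "null", "-"],
            check=True)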