
Media (3)


Other articles (41)

  • XMP PHP

    13 May 2011

    According to Wikipedia, XMP stands for:
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography, and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it manages a set of dynamic tags for use in the Semantic Web.
    XMP makes it possible to record information about a file as an XML document: title, author, history (...)
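
    As a minimal sketch of what this looks like in practice (assuming the exiftool utility is installed and a hypothetical file photo.jpg; the article's own PHP approach is not shown in this excerpt), the embedded XMP packet can be written and read from the command line:

    # Write Dublin Core title and creator fields into the file's XMP packet
    $ exiftool -XMP-dc:Title="Sunset" -XMP-dc:Creator="Jane Doe" photo.jpg

    # Dump the raw XMP packet (an XML document) embedded in the file
    $ exiftool -xmp -b photo.jpg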

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • APPENDIX: Extensions, the SPIP plugins of the channels

    11 February 2010

    A plugin is a functional addition to the SPIP core. MediaSPIP consists of a deliberate selection of plugins, some of which existed beforehand in the SPIP community and some of which did not, and which in some cases required either creation from scratch or the addition of new features.
    The extensions that MediaSPIP requires in order to work
    Since version 2.1.0, SPIP allows plugins to be added in the extensions/ directory.
    "Extensions" are nothing more and nothing less than plugins whose particularity is that they (...)
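
    As an illustrative sketch only (my_plugin is a hypothetical plugin archive), installing such a plugin on a SPIP >= 2.1.0 site amounts to unpacking it into that directory:

    # From the root of the SPIP installation: a plugin unpacked
    # into extensions/ is picked up as an "extension"
    $ unzip my_plugin.zip -d extensions/
    $ ls extensions/
    my_plugin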

On other sites (4981)

  • What transcoding services can people recommend? [closed]

    3 May 2013, by Adrian Lynch

    A client of mine needs to accept a bunch of different video files and convert them to FLV. My experience with FFmpeg on a previous project has highlighted that there will be some troublesome files.

    Depending on the price, my client will pay for a professional service.

    What are people using and how are you finding the service?

    Thanks.
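
    For context, a baseline FFmpeg command for the kind of FLV conversion described above might look like this (a sketch only; the input file name and bitrates are assumptions, not the poster's settings):

    # Transcode an arbitrary input to Flash video (FLV1 video, MP3 audio)
    $ ffmpeg -i input.mov -c:v flv -b:v 800k -c:a libmp3lame -b:a 128k -ar 44100 output.flv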

  • Real-time livestreaming - RPI

    24 April 2022, by Victor

    I work at a telehealth company and we are using connected medical devices in order to provide the doctor with real-time information from this equipment. The equipment is operated by a trained health professional.

    Those devices work with video and audio. Right now we are using them with peerjs (a peer-to-peer connection), but we are trying to move away from that and use a RPI whose only job is to stream the data (audio and video).

    Because the equipment is meant to be used under a doctor's instructions, we need the doctor to receive the data in real time.

    But we also need the trained health professional to see what he is doing (so we need a local feed from the equipment).

    How do we capture audio and video

    We are using ffmpeg with a Go client that is in charge of managing the ffmpeg processes and streaming them to an SRS server. This works, but we are seeing a 2-3 second delay when streaming the data (RTMP out of ffmpeg, FLV on the front end).

    ffmpeg settings:

    ("ffmpeg", "-f", "v4l2", "-i", "*/video0", "-f", "flv", "-vcodec", "libx264", "-x264opts", "keyint=15", "-preset", "ultrafast", "-tune", "zerolatency", "-fflags", "nobuffer", "-b:a", "160k", "-threads", "0", "-g", "0", "rtmp://srs-url")

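    Written out as a single command line, that invocation is (a sketch: /dev/video0 is assumed as the capture device, since the "*/video0" above appears to be a formatting artifact, and rtmp://srs-url stands in for the real ingest URL):

    $ ffmpeg -f v4l2 -i /dev/video0 \
        -f flv -vcodec libx264 -x264opts keyint=15 \
        -preset ultrafast -tune zerolatency -fflags nobuffer \
        -b:a 160k -threads 0 -g 0 \
        rtmp://srs-url
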
    My questions

    • Is there a way for this setup to achieve low latency (<1 sec), for both the nurse and the doctor?
    • Is the way I want to achieve this good? Is there a better way?

    Flow schema

    [Diagram: data exchange and use case flow]

  • Cloaked Archive Wiki

    16 May 2011, by Multimedia Mike — General

    Google’s Chrome browser has made me phenomenally lazy. I don’t even attempt to type proper, complete URLs into the address bar anymore. I just type something vaguely related to the address and let the search engine take over. I saw something weird when I used this method to visit Archive Team’s site:

    There’s greater detail when you elect to view more results from the site:

    As the administrator of a MediaWiki installation like the one that archiveteam.org runs on, I was a little worried that they might have a spam problem. However, clicking through to any of those out-of-place pages does not indicate anything related to pharmaceuticals. Viewing source also reveals nothing amiss.

    I quickly deduced that this is a textbook example of website cloaking. This is when a website reports different content to a search engine than it reports to normal web browsers (humans, presumably). General pseudocode:

    C:
    /* serve one page to crawlers, another to human visitors */
    if (strcmp(web_request.user_agent_string, CRAWLER_USER_AGENT) == 0)
        return cloaked_data;
    else
        return real_data;

    You can verify this for yourself using the wget command-line utility:

    $ wget --quiet --user-agent="Mozilla/5.0" \
        http://www.archiveteam.org/index.php?title=Geocities -O - | grep \<title\>
    <title>GeoCities - Archiveteam</title>

    $ wget --quiet --user-agent="Googlebot/2.1" \
        http://www.archiveteam.org/index.php?title=Geocities -O - | grep \<title\>
    <title>Cheap xanax | Online Drug Store, Big Discounts</title>

    I guess the little web prank worked because the phaux-pharma stuff got indexed. It makes me wonder if there’s a MediaWiki plugin that does this automatically.

    For extra fun, here’s a site called the CloakingDetector which purports to be able to detect whether a page employs cloaking. This is just one humble observer’s opinion, but I don’t think the site works too well: