
Other articles (90)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    To obtain a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, further modifications are also required (...)

  • Videos

    21 April 2011, by

    As with "audio" documents, MediaSPIP displays videos, whenever possible, using the HTML5 <video> tag.
    One drawback of this tag is that it is not handled correctly by some browsers (Internet Explorer, to name one), and each browser natively supports only certain video formats.
    Its main advantage is native video support in the browser, which makes it possible to do without Flash and (...)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

On other sites (10848)

  • Count frames in H.264 bitstream

    12 November 2013, by user620297

    How do I count/detect frames (pictures) in a raw H.264 bitstream? I know there are 5 VCL NALU types, but I don't know how to recognize a sequence of them as an access unit. I assume that detecting a frame means detecting an access unit, since an access unit is

    A set of NAL units that are consecutive in decoding order and contain
    exactly one primary coded picture. In addition to the primary coded
    picture, an access unit may also contain one or more redundant coded
    pictures, one auxiliary coded picture, or other NAL units not
    containing slices or slice data partitions of a coded picture. The
    decoding of an access unit always results in a decoded picture.

    I want this in order to know the FPS of a live stream going out to a server.
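    The definition quoted above suggests a practical shortcut: each primary coded picture begins with a VCL NAL unit whose slice header has first_mb_in_slice == 0, and since first_mb_in_slice is the first ue(v) field after the one-byte NAL header, the value 0 is encoded as a single '1' bit. A minimal sketch of that heuristic, assuming a raw Annex B stream (field pairs count as two pictures here, and data partitions B/C and SVC/MVC extensions are not handled):

```python
# Count coded pictures in a raw Annex B H.264 bitstream.
# Heuristic: a VCL NAL unit (non-IDR slice, partition A, or IDR slice)
# whose slice header starts with first_mb_in_slice == 0 begins a new
# primary coded picture. first_mb_in_slice is the first ue(v) field
# after the one-byte NAL header, and ue(v) == 0 is coded as a single
# '1' bit, so testing the top bit of the next byte is enough.

def count_frames(data: bytes) -> int:
    count = 0
    i = 0
    n = len(data)
    while i + 3 < n:
        # look for a 00 00 01 start code (a 4-byte 00 00 00 01
        # start code matches one byte later, which is harmless)
        if data[i] == 0 and data[i + 1] == 0 and data[i + 2] == 1:
            nal_type = data[i + 3] & 0x1F
            # 1 = non-IDR slice, 2 = slice data partition A, 5 = IDR slice
            if nal_type in (1, 2, 5) and i + 4 < n:
                if data[i + 4] & 0x80:  # first_mb_in_slice == 0
                    count += 1
            i += 3
        else:
            i += 1
    return count
```

    With a known stream duration (or wall-clock timestamps between reads), frames / seconds gives the FPS estimate the question asks about.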

  • Merge commit ’46430fd47c6239ef8742d0a34f9412d5060fa798’

    15 May 2013, by Michael Niedermayer
    Merge commit ’46430fd47c6239ef8742d0a34f9412d5060fa798’
    

    * commit ’46430fd47c6239ef8742d0a34f9412d5060fa798’ :
    vc1dec : Don’t attempt error concealment on field pictures
    vc1dec : fieldtx is only valid for interlaced frame pictures
    aacenc : Fix erasure of surround channels
    aacenc : Fix target bitrate for twoloop quantiser search

    Conflicts :
    libavcodec/vc1dec.c

    Merged-by : Michael Niedermayer <michaelni@gmx.at>

    • [DH] libavcodec/vc1dec.c

  • What is the most efficient way to broadcast a live stream? [closed]

    3 August 2020, by Harsh

    I want to build a live streaming system for a classroom. The amount of information on this subject is confusing. These are the features/requirements that I want to have in my app:

    1. Room-type system.
    2. One teacher - N students (N<200).
    3. Broadcast video/audio. This needs to be one-way only. (1T ---> 200S)
    4. Audio chat should be possible if the teacher allows a student to speak.
    5. No need to record the session, though it would be a great feature to have.

    Now, from my research I have established that there are many ways to go about it. The best one to me seems to be WebRTC; in that case I do not have to worry about the platform that much. WebRTC needs a STUN/TURN server, which can easily be set up using the coturn project. I'll also need an SFU that forwards my stream to the clients, like Janus or Mediasoup. But that's where I'm getting confused.

    Could I not instead send the live stream directly to the server, transcode it in real time using ffmpeg to HLS/DASH, and publish it to an S3 bucket from where the users can access it? Wouldn't that be more efficient and able to handle many more students easily?

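    For the ffmpeg-to-HLS path described above, a hedged sketch of the kind of invocation involved (the input URL, output path, segment length, and bitrates are all placeholders, not recommendations):

```shell
# Pull a hypothetical live input, transcode once, and emit a rolling
# HLS playlist that a web server or an S3 sync job can publish.
ffmpeg -i rtmp://example.com/live/classroom \
    -c:v libx264 -preset veryfast -b:v 1500k \
    -c:a aac -b:a 128k \
    -f hls -hls_time 4 -hls_list_size 6 \
    -hls_flags delete_segments \
    /var/www/stream/index.m3u8
```

    One trade-off worth noting: segmented HLS of this kind typically adds latency on the order of several segment durations, whereas a WebRTC/SFU setup targets sub-second delay.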

    For the audio part I could just use the P2P functionality of WebRTC in the browser itself, so there is no need to route that through the server.

    That is how far I've come in understanding the system. I still don't completely understand how an SFU works, and I'm confused about how many live streams one server (say 4C/8GB) can handle. Is using ffmpeg on a VPS a bad idea, and should I use AWS services instead?

    Can someone please help me understand this?

    Thanks!