Other articles (56)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should be attached automatically; objet, the type of object to which (...)
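
    A minimal reconstruction of that table as SQL (not from the article: the column types and the database name spip are assumptions, and the excerpt truncates the field list, so only the columns it names appear):

       mysql spip -e "
         -- sketch of spip_spipmotion_attentes; types are guesses
         CREATE TABLE spip_spipmotion_attentes (
           id_spipmotion_attente BIGINT NOT NULL AUTO_INCREMENT,  -- unique id of the task to process
           id_document           BIGINT NOT NULL,                 -- original document to encode
           id_objet              BIGINT NOT NULL,                 -- object the encoded document attaches to
           objet                 VARCHAR(25) NOT NULL,            -- type of that object
           PRIMARY KEY (id_spipmotion_attente)
         );"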

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add yours using the form at the bottom of the page.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

On other sites (7773)

  • Is it possible to stream video over RTP without transcoding or compressing the input file before transmitting, using the FFmpeg command line?

    11 April 2017, by Souvik Das

    FFmpeg supports two RTP payload types: MPEG-TS/MP2T (PT 33) and dynamic (PT 96). A dynamic PT requires an explicit SDP file at the receiver, while MPEG-TS/MP2T does not.
    I used FFmpeg as both transmitter and receiver (over loopback/localhost) and compared the PSNR of the respective streams:

    Case 1: FFmpeg dynamic RTP

    Sender:

       ffmpeg -re -i 'sample.avi' -c:a copy -c:v copy -f rtp -y 'rtp://@225.0.0.1:5555' > sample.sdp

    Receiver:

       ffmpeg -protocol_whitelist file,udp,rtcp,rtp -i sample.sdp -y rec.ts

    Result:

       PSNR avg. = 38

    This means that even under ideal conditions we are still not getting a perfect stream. I suspect transcoding still takes place at the sender, degrading the video quality before transmission.
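
    One way to test that suspicion (not part of the original post) is to hash every decoded video frame on both ends and compare the outputs; with a true stream-copy pipeline the hashes should match:

       # hash each decoded frame of the source and of the received file
       ffmpeg -i sample.avi -map 0:v -f framemd5 source.framemd5
       ffmpeg -i rec.ts -map 0:v -f framemd5 received.framemd5
       diff source.framemd5 received.framemd5

    The timestamp columns may differ between containers, so the hash column is what matters: identical hashes mean the video path is bit-exact and the quality loss happened elsewhere (e.g. in the network).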

    Case 2: FFmpeg MPEG-TS RTP

    Sender:

       ffmpeg -re -i 'sample.avi' -c:a copy -c:v copy -f rtp_mpegts -y 'rtp://@225.0.0.1:5555'

    Receiver:

       ffmpeg -protocol_whitelist file,udp,rtcp,rtp -i sample.sdp -f mpegts -y rec.ts

    Result:

       Large number of frame losses!

    So, at the receiver, I used VLC to record the streams instead. Although there was little to no frame loss, the PSNR avg. = 18!

    Earlier, in a dedicated VLC streamer & recorder test with the same video, the PSNR avg. = infinity (no quality loss). I want to shift to an FFmpeg-based alternative for streaming because I want to introduce some programmability for sophisticated research work.

    Hence, it would be really great if somebody could provide some input on how I can achieve uncompressed and lossless video streaming using FFmpeg over RTP; one possible approach is sketched after the notes below.

    Notes:

    1. I must use RTP only. I can't use RTSP or other streaming methods, including direct UDP (udp://).
    2. VLC Media Player / libVLC, used in this case, also used RTP in all cases.
    3. It can be assumed that the streamer and recorder are on the same disk or have the same access to storage.
    4. Must support multicast!
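
    A minimal sketch of one approach under the constraints above (an editorial suggestion, not from the original post): if stream copy does not survive the RTP path intact, a mathematically lossless encode still avoids quality loss while keeping the SDP-free MPEG-TS payload (PT 33) and multicast support:

       # sender: libx264 with -qp 0 is lossless; rtp_mpegts needs no SDP at the receiver
       ffmpeg -re -i 'sample.avi' -c:a copy -c:v libx264 -qp 0 -f rtp_mpegts 'rtp://225.0.0.1:5555'

       # receiver: MPEG-TS over RTP can be opened directly, without an SDP file
       ffmpeg -i 'rtp://225.0.0.1:5555' -c copy -y rec.ts

    The trade-off is bitrate: lossless H.264 is large, so frame loss from an overloaded network becomes the failure mode to watch for.
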
  • Webcam serverless live stream

    23 July 2021, by curiouscoder

    I'm trying to live-stream my webcam in a serverless way with the following flow:

    webcam browser >> s3 bucket >> lambda/ffmpeg encoding >> s3 output bucket >> dash player

    This is working really well so far, but I'm facing the following problem:

    ffmpeg only encodes the seconds it has received (I upload the webcam stream to S3 every X seconds as a ~300 KB .webm file). So the .mpd manifest generated by the ffmpeg encoder ends up with type 'static' when ffmpeg finishes encoding, not the desired 'dynamic' type. As a result, the DASH player won't request the subsequent files from S3 and the streaming stops. For example, if I let the webcam stream run for 15 seconds, the viewer can watch those 15 seconds; but if I keep sending streams every 2 seconds, the viewer can only watch the first 2 seconds, because the browser never requests any further .m4s files.
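
    For context, a sketch of what the Lambda-side encode presumably looks like (chunk.webm and out.mpd are placeholder names): ffmpeg's dash muxer writes type="dynamic" into the manifest while running with live options, but rewrites it to type="static" when the process exits, which is exactly what happens each time a short chunk finishes encoding:

       # dash muxer with live options; the MPD still flips to 'static' on exit
       ffmpeg -i chunk.webm -c:v libx264 -c:a aac \
              -f dash -seg_duration 2 -streaming 1 -window_size 5 \
              -use_template 1 -use_timeline 1 out.mpd
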
    So, I have the following question:

    Is there a way to force the DASH player to reload the .mpd file stored in S3 even when its type is 'static' instead of 'dynamic'? One possible workaround is sketched below.
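
    One workaround to test (an assumption, not a documented fix): patch the finished manifest back to a live presentation before uploading it to S3, so the player keeps polling it; minimumUpdatePeriod tells DASH clients how often to re-fetch the MPD. Players differ in how strictly they validate dynamic manifests (some also expect availabilityStartTime), so this may need tuning:

       # flip the presentation type and ask players to re-poll every 2 s
       sed -i -e 's/type="static"/type="dynamic"/' \
              -e 's/<MPD /<MPD minimumUpdatePeriod="PT2S" /' out.mpd
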
    Thanks in advance!

  • Live streaming video on multiple platforms [closed]

    4 March 2019, by Nikolay Nikolov

    I want to build an application similar to Twitch/YouTube, which mainly offers two things (and a couple of others, but they are not related to the question): recording and sending live streams, and watching other people's live streams. Basically, if I wanted to build Twitch, where would I start in terms of protocols and back-end libraries for processing and sending/receiving video segments (more detailed questions follow)? I am new to video streaming software development and need a bit of guidance on where/how to start.

    Here are the details/requirements:

    • Video and audio
    • Scalability and low latency are more important than supreme quality
    • Adaptive bit-rate
    • No services like Wowza and such (I am willing to build the whole structure)
    • Has to work on iOS and Android (Desktop support is not as important)
    • Users should be able to watch every stream, and every user should be able to stream from their camera
    • VOD is not as important
    • Going back in the stream is not as important

    If I have it right, this is how the whole process should work:

    • The Android/iOS camera records video
    • Simultaneously, the app saves each x-second chunk as a single segment and sends it to the server
    • The server transcodes the video into different bit rates and saves them (a sketch of this step follows the list)
    • Another user requests the stream at a quality matching the bandwidth of their internet connection
    • The server responds with a playlist of segments and sends each new chunk of video to the user
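
    A minimal sketch of the transcoding step above, assuming FFmpeg on the server and a placeholder input.mp4 for one uploaded segment: the hls muxer can emit several renditions plus a master playlist that adaptive players (HLS is natively supported on iOS; ExoPlayer covers it on Android) pick from:

       # two renditions (720p/360p) plus a master playlist for adaptive bitrate
       ffmpeg -i input.mp4 \
              -map 0:v -map 0:a -map 0:v -map 0:a \
              -c:v libx264 -c:a aac \
              -b:v:0 2500k -s:v:0 1280x720 \
              -b:v:1 800k -s:v:1 640x360 \
              -f hls -hls_time 4 \
              -var_stream_map "v:0,a:0 v:1,a:1" \
              -master_pl_name master.m3u8 \
              stream_%v.m3u8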

    Questions:

    • What protocols should I be using (HLS, MPEG-DASH, WebRTC, RTSP, etc.), and do these protocols have implementations on Android/iOS, or do I have to implement them myself?
    • What books/other resources would you recommend ?

    Thank you very much for reading my question, and I look forward to reading your answers!