Advanced search

Media (1)

Word: - Tags -/Rennes

Other articles (40)

  • What is a form mask

    13 June 2013, by

    A form mask is a way of customizing the publication form for media, sections, news items, editorials, and links to external sites.
    Each object's publication form can therefore be customized.
    To customize form fields, go to your MediaSPIP administration area and select "Configuration des masques de formulaires" (form mask configuration).
    Then select the form to modify by clicking on its object type. (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is version 0.2 or later. If needed, contact your MédiaSpip administrator to find out.

  • Images

    15 May 2013

On other sites (8140)

  • Livestream WebVTT with HLS

    10 November 2022, by kltye

    I've implemented an HLS service with ffmpeg (which pulls a live stream from nginx-rtmp). That all works fine, but now I'm wondering what kind of programming pattern I should be using to get live captioning to work.

    I'm planning on using ffmpeg to output the incoming mp4 stream to multiple WAV chunks (i.e., the same way HLS fMP4 parts are created), and then sending those chunks over to Azure Cognitive Services for speech-to-text recognition. My question is, what do I do when I receive the speech results? Do I dump that vtt file into the same directory as my HLS chunks, and then serve that up using a single m3u8 file (with audio/video tracks along with the text track)?
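
    For what it's worth, that layout maps onto standard HLS: the master playlist advertises a subtitles rendition (a separate media playlist of WebVTT segments) alongside the audio/video variant, so the vtt files can indeed live next to the media chunks. A minimal sketch of such a master playlist, with hypothetical names, bitrate, and codecs:

        #EXTM3U
        # Subtitles rendition: points at its own media playlist of .vtt segments.
        #EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="English",LANGUAGE="en",DEFAULT=YES,AUTOSELECT=YES,URI="captions/playlist.m3u8"
        # Audio/video variant, linked to the subtitles group declared above.
        #EXT-X-STREAM-INF:BANDWIDTH=2000000,CODECS="avc1.64001f,mp4a.40.2",SUBTITLES="subs"
        video/playlist.m3u8

    The captions playlist itself lists .vtt segments with #EXTINF entries, exactly the way the video playlist lists media segments.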

    Currently ffmpeg is updating the m3u8 playlist for HLS clients; would it be possible for me to create the m3u8 playlist just for the vtt files, and serve that concurrently with the "regular" HLS playlist? Also, time synchronization would seem to be difficult, because I'll be sending discrete WAV files over to Azure, so the vtt timestamps are going to be relative to the chunk I'm sending.
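
    On the synchronization point: HLS anticipates chunk-relative cue times. Each WebVTT segment may carry an X-TIMESTAMP-MAP header that maps its local time base onto the stream's 90 kHz MPEG-TS clock, so cues produced relative to a single WAV chunk can be re-anchored when the segment is written. A sketch of one segment (the offset and cue text are made up):

        WEBVTT
        X-TIMESTAMP-MAP=LOCAL:00:00:00.000,MPEGTS:900000

        00:00:01.000 --> 00:00:03.500
        caption text recognized for this chunk

    Here MPEGTS:900000 declares that local time 00:00:00.000 corresponds to 10 seconds (900000 / 90000) on the stream clock.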

    Help! I've done searches online, and I grasp the various issues, but I'm not sure how to plumb them all together.

  • Is it possible to fetch some key-frames of a video by using the HTTP Range header

    9 December 2020, by pvd

    I've read the SO question, but it doesn't seem to apply to my specific case.

    Is it possible to fetch some key frames of a video from a web server using the HTTP Range header? For example, for a 30-second video, we'd like to analyze the I-frames around 00:00:02, 00:00:15, and 00:00:28.

    I need to analyze videos from an internal web server to detect whether they contain specific watermarks, plus some other analysis.

    Since the first I-frame is sometimes unusable (a logo, for example), we were planning to extract the I-frame around 00:00:02, the middle I-frame, and the I-frame around two seconds before the end.

    We can make it work by downloading the whole video, but then most of the data we download from the server is never used. I was wondering if we could instead use the HTTP Range header to download only partial data and analyze that?
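
    In principle yes, provided the server honours Range requests, though mapping a timestamp such as 00:00:02 to byte offsets requires the container index first (for MP4, the moov atom). A minimal Python sketch of the partial-download part only, with a hypothetical URL and placeholder offsets:

        # Fetch an arbitrary byte range of a remote file via the HTTP Range header.
        # The offsets are placeholders: real keyframe offsets must be derived from
        # the container index (e.g. the MP4 'moov' atom), fetched the same way.
        import urllib.request

        URL = "https://internal.example.com/video.mp4"  # hypothetical server

        def fetch_range(url, start, end):
            """Return bytes [start, end] (inclusive) of the remote resource."""
            req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
            with urllib.request.urlopen(req) as resp:
                if resp.status != 206:  # 206 = Partial Content
                    raise RuntimeError("server ignored the Range header")
                return resp.read()

        chunk = fetch_range(URL, 0, 64 * 1024 - 1)  # e.g. probe the file head
        print(len(chunk), "bytes fetched")

    Note that ffmpeg's HTTP input already issues Range requests when asked to seek (ffmpeg -ss 00:00:02 -i <url> -frames:v 1 out.jpg), so in many cases it reads only the index plus the data around each requested frame rather than the whole file.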

  • Get image from webcam, convert that image into something else, and return it back to the client

    29 January 2023, by immigration9

    I have some questions about choosing the right architecture to solve my problem.

    I am planning to create an app which:

    1. takes input from a client's (a browser) webcam,
    2. sends the input to the server (either frame by frame, or as a live video stream),
    3. gets each frame from the stream on the server (as an image),
    4. converts the image using some technology (let's say a TikTok-like filter),
    5. returns the image back to the client in real time.

    Except for phase 4, where the technology can only be applied to an image, everything else can be changed.

    I'm targeting 30fps (or at least 20) with 1080p quality.
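
    As a rough sanity check on that target (back-of-envelope arithmetic, not a benchmark): uncompressed 1080p RGB at 30fps is on the order of 187 MB/s, which is why shipping raw per-frame blobs tends to fall over and a compressed transport such as WebRTC is usually suggested.

        # Back-of-envelope bandwidth for raw 1080p RGB frames at 30 fps.
        width, height, bytes_per_pixel, fps = 1920, 1080, 3, 30
        bytes_per_second = width * height * bytes_per_pixel * fps
        print(f"{bytes_per_second / 1e6:.0f} MB/s uncompressed")  # ~187 MB/s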

    I am completely agnostic about the language or framework, as I don't have any preference. Right now I am thinking of using React with Node, but I'm open to other options as well (e.g. Python; the language doesn't matter).

    If anyone has prior experience with this, could you share the best way to do it?

    I've tried creating an image blob on the client and sending it to the server using socket.io, but it seemed too slow when targeting 30fps at 1080p.

    I'm currently looking at WebRTC with fluent-ffmpeg, but I'm not sure it's the right approach.

    Any kind of help will be appreciated.
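
    For the server-side conversion step (step 4 above), here is a minimal Python sketch, assuming frames arrive as encoded JPEG bytes (for example over a WebSocket or a WebRTC data channel) and that opencv-python and numpy are installed; the blur is only a stand-in for the real filter:

        # Decode one incoming JPEG frame, apply a placeholder filter, re-encode it.
        import cv2
        import numpy as np

        def process_frame(jpeg_bytes: bytes) -> bytes:
            frame = cv2.imdecode(np.frombuffer(jpeg_bytes, np.uint8), cv2.IMREAD_COLOR)
            if frame is None:
                raise ValueError("could not decode frame")
            filtered = cv2.GaussianBlur(frame, (9, 9), 0)  # stand-in for the real effect
            ok, encoded = cv2.imencode(".jpg", filtered, [int(cv2.IMWRITE_JPEG_QUALITY), 80])
            if not ok:
                raise RuntimeError("JPEG encode failed")
            return encoded.tobytes()

    Whether this sustains 30fps at 1080p depends far more on the transport and the filter itself than on the decode/encode step shown here.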