Advanced search

Media (0)

Word: - Tags -/serveur

No media matching your criteria is available on this site.

Other articles (79)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own via the form at the bottom of the page.

  • Encoding and transformation into formats readable on the Internet

    10 April 2011

    MediaSPIP transforms and re-encodes uploaded documents to make them readable on the Internet and automatically usable without any intervention from the content creator.
    Videos are automatically encoded into the formats supported by HTML5: MP4, Ogv and WebM. The "MP4" version is also used for the fallback Flash player needed by older browsers.
    Audio documents are likewise re-encoded into the two formats usable with HTML5: MP3 and Ogg. The "MP3" version (...)
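
    Purely as an illustration (nothing in the article specifies this): driven from a program, that re-encoding step could boil down to a few ffmpeg invocations along these lines. The source file name and codec settings here are hypothetical, sketched in Go.

        package main

        import (
            "log"
            "os/exec"
        )

        func main() {
            // Hypothetical upload; MediaSPIP's actual pipeline and settings may differ.
            src := "upload.mov"
            jobs := [][]string{
                {"-i", src, "-c:v", "libx264", "-c:a", "aac", "out.mp4"},         // MP4, also feeds the Flash fallback
                {"-i", src, "-c:v", "libtheora", "-c:a", "libvorbis", "out.ogv"}, // Ogv
                {"-i", src, "-c:v", "libvpx", "-c:a", "libvorbis", "out.webm"},   // WebM
            }
            for _, args := range jobs {
                if err := exec.Command("ffmpeg", args...).Run(); err != nil {
                    log.Fatal(err)
                }
            }
        }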

  • Retrieving information from the master site when installing an instance

    26 November 2010

    Purpose
    On the main site, a mutualisation instance is defined by several things: its data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, matching an id_auteur in the spip_auteurs table), who alone will be able to definitively create the mutualisation instance.
    It can therefore be quite sensible to retrieve some of this information in order to complete the installation of an instance - for example, to retrieve the (...)

On other sites (5521)

  • How to use ffmpeg to transcode many live streamed videos? [closed]

    21 September 2020, by user14258924

    PREMISE

    As a pet project, I am writing a live video streaming service, in Go, that can consume video streams from OBS via the SRT (TS -> h264/aac) and RTMP (FLV -> h264/aac) protocols; I am also planning to support streaming video from the web browser, captured from a webcam via JS. This ingress server will receive many video streams in various containers and codecs, and I need to normalize them into a single container and codec, then create multiple versions at various bitrates (i.e. 240p, 360p, 480p, 720p, 1080p...) to pass along where needed in the application. Each stream is split into 2-second GOP segments, with separate audio and video tracks, producing fragmented MP4 as the end result - which can be consumed by a web browser.

    The issue is that I am using Go, which has no libraries for transcoding video, so I need to use either ffmpeg or vlc, both of which are C code. I have decided to avoid the CGo route and use ffmpeg/vlc as standalone binaries.

    QUESTION

    My question is how to use either of these projects in the most efficient way - avoiding the use of files in favour of unix sockets/streams - while also handling hundreds of video segments at any one time, quickly enough to avoid creating too much lag between producer and consumer.

    So let's say I pick the most used one - ffmpeg. How should I actually use it to achieve what I have described? How would you set it up, and which flags/config would you use with it?

    Can this performance even be achieved, or is it just too much, so that I would need some sort of ffmpeg cluster to get anywhere near useful performance/low delay?
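
    Not part of the original question, but a minimal sketch of one possible shape for this, under assumptions: ffmpeg is on PATH, one process is run per output rendition, and stdin/stdout pipes replace files. The flag choices are a plausible starting point, not a tuned configuration.

        package main

        import (
            "io"
            "log"
            "os"
            "os/exec"
        )

        // transcode runs one ffmpeg process that reads the ingress stream (TS or FLV)
        // from r and writes one fragmented-MP4 rendition to w. Run it once per bitrate.
        func transcode(r io.Reader, w io.Writer, height, videoBitrate string) error {
            cmd := exec.Command("ffmpeg",
                "-i", "pipe:0", // ingress stream arrives on stdin, no temp files
                "-vf", "scale=-2:"+height, // e.g. "720" -> 720p, width kept even
                "-c:v", "libx264", "-preset", "veryfast", "-b:v", videoBitrate,
                "-g", "60", "-keyint_min", "60", // 2-second GOPs at 30 fps
                "-c:a", "aac", "-b:a", "128k",
                "-f", "mp4", "-movflags", "frag_keyframe+empty_moov+default_base_moof",
                "pipe:1", // browser-consumable fMP4 fragments leave on stdout
            )
            cmd.Stdin = r
            cmd.Stdout = w
            return cmd.Run()
        }

        func main() {
            // Demo wiring: one 720p rendition from our stdin to our stdout.
            if err := transcode(os.Stdin, os.Stdout, "720", "2500k"); err != nil {
                log.Fatal(err)
            }
        }

    On the performance point: software x264 across hundreds of simultaneous renditions saturates a single machine quickly, so the usual levers before reaching for a cluster are faster presets (ultrafast), fewer renditions, or a hardware encoder such as h264_nvenc or h264_vaapi where available.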

  • How can I check whether an RTMP live stream is working on the server side?

    10 June 2015, by Moyersy

    I have a site where users can broadcast their own live streams. They are provided with a URL to push their stream to over RTMP. Viewers have an embedded flash player to watch each stream.

    I need to be able to determine whether a particular stream is broadcasting. It doesn’t need to do any analysis, simply to check that there’s an actual stream at the RTMP URL.

    It seems that VLC doesn’t support RTMP.
    I’ve tried ffmpeg but haven’t managed to solve the problem yet.

    Server is running Ubuntu.

    Bonus: How can I update the database based on the results of this stream test?
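
    Not from the original post, but a minimal server-side sketch: ffprobe exits non-zero when it cannot open the URL or find a stream in it, so probing with a timeout answers "is anyone publishing?". The URL below is a placeholder.

        package main

        import (
            "context"
            "fmt"
            "os/exec"
            "time"
        )

        // isLive reports whether ffprobe can open the RTMP URL and find at least
        // one stream before the timeout, i.e. whether someone is publishing now.
        func isLive(url string, timeout time.Duration) bool {
            ctx, cancel := context.WithTimeout(context.Background(), timeout)
            defer cancel()
            cmd := exec.CommandContext(ctx, "ffprobe",
                "-v", "error",
                "-show_streams", // exits non-zero if the URL yields no streams
                url,
            )
            return cmd.Run() == nil
        }

        func main() {
            // Placeholder URL; substitute the per-user push URL your site issues.
            if isLive("rtmp://localhost/live/streamkey", 10*time.Second) {
                fmt.Println("broadcasting")
            } else {
                fmt.Println("offline")
            }
        }

    For the bonus question, the boolean result then drives a single UPDATE against whatever table tracks stream status, run from the same cron job or loop that performs the probe.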

  • How can I set a specific duration for my speech using Google Text to Speech

    2 April, by Alexandre Silkin

    I went through the documentation of Google Text to Speech SSML. https://developers.google.com/assistant/actions/reference/ssml#prosody

    So there is a tag called <prosody> which, as per the W3 specification, can accept an attribute called duration: a value in seconds or milliseconds for the desired time to take to read the contained text.

    So <speak><prosody duration="6s">Hello, How are you?</prosody></speak> should take 6 seconds for Google Text to Speech to speak! But when I try it here https://cloud.google.com/text-to-speech/ , it’s not working, and I also tried it in the REST API.

    How can I get the duration of each speech segment generated by Google Text to Speech SSML, which comes out a little different from the original .srt it was generated from? I was looking for a way to do this with ffmpeg, so that I could divide (tts_generated_speech / original_duration) to get the speech_rate_percentage to apply to each segment, making it match the original_duration.

    Original post: Is there a way to make Google Text to Speech speak text for a desired duration?
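
    Not from the original post, but a sketch of the measurement half of that idea: ffprobe reports the generated clip's duration, and its ratio to the .srt segment's duration gives the rate percentage to feed back into <prosody rate="...">. The file name and target duration below are hypothetical.

        package main

        import (
            "fmt"
            "log"
            "os/exec"
            "strconv"
            "strings"
        )

        // clipSeconds returns an audio file's duration in seconds, as reported by ffprobe.
        func clipSeconds(path string) (float64, error) {
            out, err := exec.Command("ffprobe",
                "-v", "error",
                "-show_entries", "format=duration",
                "-of", "default=noprint_wrappers=1:nokey=1",
                path,
            ).Output()
            if err != nil {
                return 0, err
            }
            return strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
        }

        func main() {
            // Hypothetical segment; srtSeconds would come from the .srt timestamps.
            ttsSeconds, err := clipSeconds("segment_001.mp3")
            if err != nil {
                log.Fatal(err)
            }
            srtSeconds := 6.0
            // A rate above 100% speaks faster (shorter clip), below 100% slower, so a
            // clip that came out too long gets a rate above 100% to compress it.
            rate := ttsSeconds / srtSeconds * 100
            fmt.Printf("<prosody rate=\"%.0f%%\">\n", rate)
        }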
