
Other articles (46)

  • Use, discuss, criticize

13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • MediaSPIP Player: potential problems

22 February 2011

    The player does not work on Internet Explorer
    On Internet Explorer (8 and 7 at least), the plugin uses the Flash player Flowplayer to play video and sound. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate.
    If the configuration of this Apache module contains a line that looks like the following, try removing it or commenting it out to see whether the player then works correctly: (...)
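    The excerpt truncates the actual directive, so it has to stay elided here; purely as an illustration, mod_deflate lines of the kind described usually look something like the following (hypothetical, not the line from the article):

```apache
# Illustrative mod_deflate directive; commenting out a line like this
# disables on-the-fly compression for the listed MIME types, which is
# the kind of change the article suggests testing.
AddOutputFilterByType DEFLATE text/html text/css application/javascript
```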

  • MediaSPIP Player: the controls

26 May 2010

    The player's mouse controls
    In addition to clicking the visible buttons of the player's interface, other actions can be performed with the mouse: Click: clicking on the video, or on the audio logo, toggles it between play and pause depending on its current state; Wheel (scrolling): when the mouse is placed over the area used by the media (hover), the mouse wheel no longer scrolls the page as usual, but instead decreases or (...)

On other sites (5311)

  • WebRTC predictions for 2016

17 February 2016, by silvia

    I wrote these predictions in the first week of January and meant to publish them as encouragement to think about where WebRTC still needs some work. I’d like to be able to compare the state of WebRTC in the browser a year from now. Therefore, without further ado, here are my thoughts.

    WebRTC Browser support

    I’m quite optimistic when it comes to browser support for WebRTC. We have seen Edge bring in initial support last year and Apple looking to hire engineers to implement WebRTC. My prediction is that we will see the following developments in 2016:

    • Edge will become interoperable with Chrome and Firefox, i.e. it will publish VP8/VP9 and H.264/H.265 support
    • Firefox of course continues to support both VP8/VP9 and H.264/H.265
    • Chrome will follow the spec and implement H.264/H.265 support (to add to their already existing VP8/VP9 support)
    • Safari will enter the WebRTC space but only with H.264/H.265 support

    Codec Observations

    With Edge and Safari entering the WebRTC space, there will be a larger focus on H.264/H.265. It will help with creating interoperability between the browsers.

    However, since there are so many flavours of H.264/H.265, I expect that when different browsers are used at different endpoints, we will get poor quality video calls because of having to negotiate a common denominator. Certainly, baseline will work interoperably, but better encoding quality and lower bandwidth will only be achieved if all endpoints use the same browser.

    Thus, we will get to the funny situation where we buy ourselves interoperability at the cost of video quality and bandwidth. I’d call that a “degree of interoperability” and not the best possible outcome.

    I’m going to go out on a limb and say that at this stage, Google is going to strongly consider improving the case for VP8/VP9 by improving its bandwidth adaptability: I think they will buy themselves some SVC capability and make VP9 the best quality codec for live video conferencing. Thus, when Safari eventually follows the standard and also implements VP8/VP9 support, the interoperability win of H.264/H.265 will prove only temporary, overshadowed by vastly better video quality when using VP9.

    The Enterprise Boundary

    Like all video conferencing technology, WebRTC is having a hard time dealing with the corporate boundary : firewalls and proxies get in the way of setting up video connections from within an enterprise to people outside.

    The telco world has come up with the concept of SBCs (session border controllers). SBCs come packed with functionality to deal with security, signalling protocol translation, Quality of Service policing, regulatory requirements, statistics, billing, and even media services like transcoding.

    SBCs are total overkill for a world where a large number of Web applications simply want to add a WebRTC feature – probably mostly to provide a video or audio customer support service, but it could be a live training session with call-in, or an interest group conference call.

    We cannot install a custom SBC solution for every WebRTC service provider in every enterprise. That’s like saying we need a custom Web proxy for every Web server. It doesn’t scale.

    Cloud services thrive on their ability to sell directly to an individual in an organisation on their credit card without that individual having to ask their IT department to put special rules in place. WebRTC will not make progress in the corporate environment unless this is fixed.

    We need a solution that allows all WebRTC services to get through an enterprise firewall and enterprise proxy. I think the WebRTC standards have done pretty well with firewalls and connecting to a TURN server on port 443 will do the trick most of the time. But enterprise proxies are the next frontier.

    What it takes is some kind of media packet forwarding service that sits on the firewall or in a proxy and allows WebRTC media packets through – maybe with some configuration that is necessary in the browsers or the Web app to add this service as another type of TURN server.

    I don’t have a full understanding of the problems involved, but I think such a solution is vital before WebRTC can go mainstream. I expect that this year we will see some clever people coming up with a solution for this and a new type of product will be born and rolled out to enterprises around the world.

    Summary

    So these are my predictions. In summary, they address the key areas where I think WebRTC still has to make progress : interoperability between browsers, video quality at low bitrates, and the enterprise boundary. I’m really curious to see where we stand with these a year from now.

    It’s worth mentioning Philipp Hancke’s tweet reply to my post:

    — we saw some clever people come up with a solution already. Now it needs to be implemented 🙂

    The post WebRTC predictions for 2016 first appeared on ginger’s thoughts.

  • How to best decide what VM to use on Google Cloud? Any best practices? [closed]

2 July 2024, by Prabhjot Kaur

    I have a script that reads a Google Sheet for URLs and then records those URL videos, then merges each with my "test" video. Both videos are about 3 minutes long. I am using an e2-standard-8 instance with Ubuntu on it, and running my script in Node using Puppeteer for recording and FFmpeg for merging videos. It takes 5 minutes for every video.

    My question is: should I run concurrent processes and use a stronger VM that will complete the work in less time, or should I use a slower one? It doesn’t have to run 24/7, because I only have to generate a certain number of videos every week.

    Please provide the guidance that I need. Thanks in advance.

    I tried creating an instance with more CPUs with free credits and ran out of them fairly quickly. I wonder if there is some other service I could use that will make the process faster?
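    On the concurrency half of the question: a common pattern is a fixed-size worker pool matched to the VM's vCPU count, so an e2-standard-8 stays busy without oversubscribing cores. The script in the question is Node, but the pattern is language-independent; here is a minimal Go sketch in which processVideo is a hypothetical stand-in for one record-and-merge job:

```go
package main

import (
	"fmt"
	"sync"
)

// processVideo is a hypothetical stand-in for one job from the question:
// record the URL's video with Puppeteer, then merge it with ffmpeg.
func processVideo(url string) string {
	return "merged:" + url
}

// processAll runs the jobs with at most `workers` goroutines in flight.
// Results are written by index, so the output order matches the input order.
func processAll(urls []string, workers int) []string {
	jobs := make(chan int)
	results := make([]string, len(urls))

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := range jobs {
				results[i] = processVideo(urls[i])
			}
		}()
	}

	for i := range urls {
		jobs <- i
	}
	close(jobs)
	wg.Wait()
	return results
}

func main() {
	fmt.Println(processAll([]string{"a", "b", "c", "d"}, 2))
}
```

    Since ffmpeg itself uses multiple threads, a pool smaller than the vCPU count may already saturate the machine; measuring a couple of settings on a spot instance is a cheap way to find the sweet spot.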

    


  • Google Speech API + Go - Transcribing Audio Stream of Unknown Length

14 February 2018, by Josh

    I have an RTMP stream of a video call and I want to transcribe it. I have created two services in Go and I’m getting results, but it’s not very accurate and a lot of data seems to get lost.

    Let me explain.

    I have a transcode service, I use ffmpeg to transcode the video to Linear16 audio and place the output bytes onto a PubSub queue for a transcribe service to handle. Obviously there is a limit to the size of the PubSub message, and I want to start transcribing before the end of the video call. So, I chunk the transcoded data into 3 second clips (not fixed length, just seems about right) and put them onto the queue.

    The data is transcoded quite simply:

    // Note: in real code, reading and resetting stdout while ffmpeg writes to
    // it concurrently needs synchronisation; this is the simplified version.
    var stdout bytes.Buffer

    cmd := exec.Command("ffmpeg", "-i", url, "-f", "s16le", "-acodec", "pcm_s16le", "-ar", "16000", "-ac", "1", "-")
    cmd.Stdout = &stdout

    if err := cmd.Start(); err != nil {
        log.Fatal(err)
    }

    ticker := time.NewTicker(3 * time.Second)

    for range ticker.C {
        bytesConverted := stdout.Len()
        log.Infof("Converted %d bytes", bytesConverted)

        // Copy the bytes before resetting the buffer: stdout.Bytes() aliases
        // the buffer's backing array, which later writes would overwrite.
        data := make([]byte, bytesConverted)
        copy(data, stdout.Bytes())

        // Send the data we converted, even if there are no bytes.
        topic.Publish(ctx, &pubsub.Message{Data: data})

        stdout.Reset()
    }

    The transcribe service pulls messages from the queue at a rate of one every 3 seconds, helping to process the audio data at about the same rate as it’s being created. There are limits on the Speech API stream: it can’t be longer than 60 seconds, so I stop the old stream and start a new one every 30 seconds, ensuring we never hit the limit no matter how long the video call lasts.

    This is how I’m transcribing it:

    stream := prepareNewStream()
    clipLengthTicker := time.NewTicker(30 * time.Second)
    chunkLengthTicker := time.NewTicker(3 * time.Second)

    cctx, cancel := context.WithCancel(context.TODO())
    err := subscription.Receive(cctx, func(ctx context.Context, msg *pubsub.Message) {

        select {
        case <-clipLengthTicker.C:
            log.Infof("Clip length reached.")
            log.Infof("Closing stream and starting over")

            if err := stream.CloseSend(); err != nil {
                log.Fatalf("Could not close stream: %v", err)
            }

            go getResult(stream)
            stream = prepareNewStream()
            // Note: the message received on this branch is neither sent nor
            // acked, so its audio is dropped from this clip and redelivered later.

        case <-chunkLengthTicker.C:
            log.Infof("Chunk length reached.")

            bytesConverted := len(msg.Data)
            log.Infof("Received %d bytes\n", bytesConverted)

            if bytesConverted > 0 {
                if err := stream.Send(&speechpb.StreamingRecognizeRequest{
                    StreamingRequest: &speechpb.StreamingRecognizeRequest_AudioContent{
                        AudioContent: msg.Data, // was transcodedChunk.Data, which is not defined here
                    },
                }); err != nil {
                    log.Errorf("Could not send audio: %v", err)
                }
            }

            msg.Ack()
        }
    })

    I think the problem is that my 3-second chunks don’t necessarily line up with the starts and ends of phrases or sentences. I suspect that the Speech API is a recurrent neural network which has been trained on full sentences rather than individual words, so starting a clip in the middle of a sentence loses some data because it can’t figure out the first few words up to the natural end of a phrase. I also lose some data when changing from an old stream to a new one, because some context is lost. I guess overlapping clips might help with this.
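    The overlapping-clips idea can be sketched as a chunker that prefixes each chunk with the tail of the previous one, so that a freshly started stream begins with some trailing context from the previous clip. This is a sketch of the idea only; the chunk and overlap sizes below are illustrative byte counts, not tuned values:

```go
package main

import "fmt"

// overlapChunks splits data into chunks of chunkSize bytes, prefixing every
// chunk after the first with the last `overlap` bytes of the preceding data,
// so a new recognition stream starts with a little context.
func overlapChunks(data []byte, chunkSize, overlap int) [][]byte {
	var chunks [][]byte
	for start := 0; start < len(data); start += chunkSize {
		end := start + chunkSize
		if end > len(data) {
			end = len(data)
		}
		from := start - overlap
		if from < 0 {
			from = 0
		}
		chunk := make([]byte, end-from)
		copy(chunk, data[from:end])
		chunks = append(chunks, chunk)
	}
	return chunks
}

func main() {
	for _, c := range overlapChunks([]byte("abcdefgh"), 3, 1) {
		fmt.Printf("%s ", c) // prints: abc cdef fgh
	}
	fmt.Println()
}
```

    The transcript for the overlapped region will appear in both clips and needs de-duplicating downstream; that is the price of giving each stream a warm start.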

    I have a couple of questions:

    1) Does this architecture seem appropriate for my constraints (unknown length of audio stream, etc.)?

    2) What can I do to improve accuracy and minimise lost data?

    (Note I’ve simplified the examples for readability. Point out if anything doesn’t make sense because I’ve been heavy handed in cutting the examples down.)