Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (53)

  • Use, discuss, criticize

    13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • MediaSPIP Player: potential problems

    22 February 2011

    The player does not work in Internet Explorer
    On Internet Explorer (at least versions 7 and 8), the plugin uses the Flash player Flowplayer to play video and sound. If the player does not seem to work, the problem may come from the configuration of Apache's mod_deflate module.
    If the configuration of that Apache module contains a line resembling the following one, try removing it or commenting it out to see whether the player then works correctly: (...)
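
    The exact directive was lost in the "(...)" truncation above; purely as an illustration, a mod_deflate line of the kind the article describes might look like this (an assumption, not the article's original snippet):

    # Hypothetical example of the kind of directive to try commenting out:
    AddOutputFilterByType DEFLATE text/html text/plain text/xml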

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or later. If necessary, contact the administrator of your MédiaSpip to find out.

On other sites (5810)

  • How to best decide what VM to use on Google Cloud? Any best practices? [closed]

    2 July 2024, by Prabhjot Kaur

    I have a script that reads a Google Sheet for URLs and then records those URL videos, then merges each recording with my "test" video. Both videos are about 3 minutes long. I am using an e2-standard-8 instance with Ubuntu on it, and I run my script in Node, using Puppeteer for recording and ffmpeg for merging the videos. It takes 5 minutes for every video.
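
    The merge command is not shown in the question; as a sketch only, a typical ffmpeg invocation that joins a recorded clip with the "test" video using the concat filter might look like this (file names are placeholders):

    # Hypothetical merge: re-encode both inputs and join them end to end.
    ffmpeg -i recording.mp4 -i test.mp4 \
      -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" \
      -map "[v]" -map "[a]" output.mp4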

    My question is: should I run concurrent processes on a stronger VM that will finish in less time, or should I use a slower one? It doesn't have to run 24/7, because I only have to generate a certain number of videos every week.

    Please provide the guidance that I need. Thanks in advance.

    I tried creating an instance with more CPUs using free credits, and I ran out of them fairly quickly. I wonder if there is some other service I could use that would make the process faster?

  • Google Speech API + Go - Transcribing Audio Stream of Unknown Length

    14 February 2018, by Josh

    I have an RTMP stream of a video call and I want to transcribe it. I have created two services in Go, and I'm getting results, but the transcription is not very accurate and a lot of data seems to get lost.

    Let me explain.

    I have a transcode service: I use ffmpeg to transcode the video to LINEAR16 audio and place the output bytes onto a Pub/Sub queue for a transcribe service to handle. Obviously there is a limit to the size of a Pub/Sub message, and I want to start transcribing before the end of the video call, so I chunk the transcoded data into 3-second clips (not a fixed length, it just seems about right) and put them onto the queue.

    The data is transcoded quite simply:

    var stdout bytes.Buffer

    // Transcode the input to raw 16 kHz mono LINEAR16 PCM and capture it on stdout.
    cmd := exec.Command("ffmpeg", "-i", url, "-f", "s16le", "-acodec", "pcm_s16le", "-ar", "16000", "-ac", "1", "-")
    cmd.Stdout = &stdout

    if err := cmd.Start(); err != nil {
       log.Fatal(err)
    }

    ticker := time.NewTicker(3 * time.Second)

    for range ticker.C {
       bytesConverted := stdout.Len()
       log.Infof("Converted %d bytes", bytesConverted)

       // Copy the buffer before resetting it: stdout.Bytes() aliases the
       // buffer's internal storage and Publish sends asynchronously.
       data := append([]byte(nil), stdout.Bytes()...)

       // Send the data we converted, even if there are no bytes.
       topic.Publish(ctx, &pubsub.Message{
           Data: data,
       })

       stdout.Reset()
    }

    The transcribe service pulls messages from the queue at a rate of one every 3 seconds, which keeps processing roughly in step with the rate at which the audio is created. The Speech API limits a stream to 60 seconds, so I stop the old stream and start a new one every 30 seconds; that way we never hit the limit, no matter how long the video call lasts.

    This is how I'm transcribing it:

    stream := prepareNewStream()
    clipLengthTicker := time.NewTicker(30 * time.Second)
    chunkLengthTicker := time.NewTicker(3 * time.Second)

    cctx, cancel := context.WithCancel(context.TODO())
    err := subscription.Receive(cctx, func(ctx context.Context, msg *pubsub.Message) {

       select {
       case <-clipLengthTicker.C:
           log.Infof("Clip length reached.")
           log.Infof("Closing stream and starting over")

           err := stream.CloseSend()
           if err != nil {
               log.Fatalf("Could not close stream: %v", err)
           }

           go getResult(stream)
           stream = prepareNewStream()

       case <-chunkLengthTicker.C:
           log.Infof("Chunk length reached.")

           bytesConverted := len(msg.Data)

           log.Infof("Received %d bytes\n", bytesConverted)

           if bytesConverted > 0 {
               if err := stream.Send(&speechpb.StreamingRecognizeRequest{
                   StreamingRequest: &speechpb.StreamingRecognizeRequest_AudioContent{
                   AudioContent: msg.Data,
                   },
               }); err != nil {
                   resp, _ := stream.Recv()
                   log.Errorf("Could not send audio: %v", resp.GetError())
               }
           }

           msg.Ack()
       }
    })
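
    Neither prepareNewStream nor getResult is shown in the question. A minimal sketch of what they might contain, assuming the cloud.google.com/go/speech/apiv1 client behind the speechpb types above, a package-level speechClient and ctx, and the same logger (all assumptions):

    // prepareNewStream opens a streaming-recognition session and sends the
    // mandatory first request, which carries the recognition config.
    func prepareNewStream() speechpb.Speech_StreamingRecognizeClient {
       stream, err := speechClient.StreamingRecognize(ctx)
       if err != nil {
           log.Fatalf("Could not create stream: %v", err)
       }
       if err := stream.Send(&speechpb.StreamingRecognizeRequest{
           StreamingRequest: &speechpb.StreamingRecognizeRequest_StreamingConfig{
               StreamingConfig: &speechpb.StreamingRecognitionConfig{
                   Config: &speechpb.RecognitionConfig{
                       Encoding:        speechpb.RecognitionConfig_LINEAR16,
                       SampleRateHertz: 16000, // matches the ffmpeg -ar flag
                       LanguageCode:    "en-US",
                   },
               },
           },
       }); err != nil {
           log.Fatalf("Could not send config: %v", err)
       }
       return stream
    }

    // getResult drains the responses of a finished stream and logs transcripts.
    func getResult(stream speechpb.Speech_StreamingRecognizeClient) {
       for {
           resp, err := stream.Recv()
           if err == io.EOF {
               return
           }
           if err != nil {
               log.Errorf("Could not receive results: %v", err)
               return
           }
           for _, result := range resp.Results {
               for _, alt := range result.Alternatives {
                   log.Infof("Transcript: %s", alt.Transcript)
               }
           }
       }
    }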

    I think the problem is that my 3-second chunks don't necessarily line up with the starts and ends of phrases or sentences. I suspect the Speech API is a recurrent neural network trained on full sentences rather than individual words, so starting a clip in the middle of a sentence loses some data: it can't figure out the first few words up to the natural end of the phrase. I also lose some data when switching from an old stream to a new one, since some context is lost. I guess overlapping clips might help with this, as sketched below.
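
    As a sketch of the overlap idea (an assumption, not part of the question's code), the publishing loop above could keep the tail of each chunk and prepend it to the next one, so a phrase cut by the 3-second ticker appears in both clips; the duplicated second would then need de-duplication downstream:

    // Keep the last second of audio (16 kHz mono s16le = 32,000 bytes/s)
    // and prepend it to the next published chunk.
    const overlapBytes = 16000 * 2

    var tail []byte
    for range ticker.C {
       chunk := append(append([]byte(nil), tail...), stdout.Bytes()...)
       if len(chunk) > overlapBytes {
           tail = append([]byte(nil), chunk[len(chunk)-overlapBytes:]...)
       }
       topic.Publish(ctx, &pubsub.Message{Data: chunk})
       stdout.Reset()
    }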

    I have a couple of questions:

    1) Does this architecture seem appropriate for my constraints (unknown length of audio stream, etc.)?

    2) What can I do to improve accuracy and minimise lost data?

    (Note: I've simplified the examples for readability. Point out if anything doesn't make sense because I've been heavy-handed in cutting the examples down.)

  • Instagram Live API using Graph API

    16 August 2020, by Deepak Sharma

    I see Facebook has a new Graph API for live video, but I am not sure whether it can be used to go live on Instagram as well. I see third-party software such as Yellow Duck being able to go live on Instagram. Not only that, a lot of software supports streaming to any destination just by using an RTMP link. So does that mean any service that can generate an RTMP stream can broadcast to Instagram (with or without logging in to Instagram)? How does Instagram Live work if one can generate an RTMP stream? Finally, if I can generate an RTMP/RTMPS stream locally on my desktop or phone using the ffmpeg libraries, can I stream to Instagram?
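
    For the last point, a generic RTMP/RTMPS push from ffmpeg looks like the following sketch; the ingest host and stream key are placeholders, and whether Instagram accepts such a stream (and how a key is obtained) is exactly what the question asks:

    # Hypothetical push of a local file to an RTMPS ingest at its native rate.
    ffmpeg -re -i input.mp4 -c:v libx264 -preset veryfast \
      -c:a aac -ar 44100 -f flv "rtmps://<ingest-host>/<app>/<stream-key>"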