Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (49)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will be attached automatically; objet, the type of object to which (...)

  • Videos

    21 April 2011, by

    Like "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 video tag.
    One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name one) and that each browser natively supports only certain video formats.
    Its main advantage is that video playback is handled natively by the browser, which removes the need for Flash and (...)

  • Customizing by adding your logo, banner, or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

On other sites (7036)

  • Google Speech - Streaming Request Returns EOF Error

    16 October 2017, by Josh

    Using Go, I’m taking an RTMP stream, transcoding it to FLAC (using ffmpeg), and attempting to stream it to Google’s Speech API to transcribe the audio. However, I keep getting EOF errors when sending the data. I can’t find any information on this error in the docs, so I’m not exactly sure what’s causing it.

    I’m chunking the received data into 3s clips (length isn’t relevant as long as it’s less than the maximum length of a streaming recognition request).

    Here is the core of my code:

    func main() {

       done := make(chan os.Signal)
       received := make(chan []byte)

       go receive(received)
       go transcribe(received)

       signal.Notify(done, os.Interrupt, syscall.SIGTERM)

       select {
       case <-done:
           os.Exit(0)
       }
    }

    func receive(received chan<- []byte) {
       var b bytes.Buffer
       stdout := bufio.NewWriter(&b)

       cmd := exec.Command("ffmpeg", "-i", "rtmp://127.0.0.1:1935/live/key", "-f", "flac", "-ar", "16000", "-")
       cmd.Stdout = stdout

       if err := cmd.Start(); err != nil {
           log.Fatal(err)
       }

       duration, _ := time.ParseDuration("3s")
       ticker := time.NewTicker(duration)

       for {
           select {
           case <-ticker.C:
               stdout.Flush()
               log.Printf("Received %d bytes", b.Len())
               received <- b.Bytes()
               b.Reset()
           }
       }
    }

    func transcribe(received <-chan []byte) {
       ctx := context.TODO()

       client, err := speech.NewClient(ctx)
       if err != nil {
           log.Fatal(err)
       }

       stream, err := client.StreamingRecognize(ctx)
       if err != nil {
           log.Fatal(err)
       }

       // Send the initial configuration message.
       if err = stream.Send(&speechpb.StreamingRecognizeRequest{
           StreamingRequest: &speechpb.StreamingRecognizeRequest_StreamingConfig{
               StreamingConfig: &speechpb.StreamingRecognitionConfig{
                   Config: &speechpb.RecognitionConfig{
                       Encoding:        speechpb.RecognitionConfig_FLAC,
                       LanguageCode:    "en-GB",
                       SampleRateHertz: 16000,
                   },
               },
           },
       }); err != nil {
           log.Fatal(err)
       }

       for {
           select {
           case data := <-received:
               if len(data) > 0 {
                   log.Printf("Sending %d bytes", len(data))
                   if err := stream.Send(&speechpb.StreamingRecognizeRequest{
                       StreamingRequest: &speechpb.StreamingRecognizeRequest_AudioContent{
                           AudioContent: data,
                       },
                   }); err != nil {
                       log.Printf("Could not send audio: %v", err)
                   }
               }
           }
       }
    }

    Running this code gives this output:

    2017/10/09 16:05:00 Received 191704 bytes
    2017/10/09 16:05:00 Saving 191704 bytes
    2017/10/09 16:05:00 Sending 191704 bytes
    2017/10/09 16:05:00 Could not send audio: EOF

    2017/10/09 16:05:03 Received 193192 bytes
    2017/10/09 16:05:03 Saving 193192 bytes
    2017/10/09 16:05:03 Sending 193192 bytes
    2017/10/09 16:05:03 Could not send audio: EOF

    2017/10/09 16:05:06 Received 193188 bytes
    2017/10/09 16:05:06 Saving 193188 bytes
    2017/10/09 16:05:06 Sending 193188 bytes // Notice that this doesn't error

    2017/10/09 16:05:09 Received 191704 bytes
    2017/10/09 16:05:09 Saving 191704 bytes
    2017/10/09 16:05:09 Sending 191704 bytes
    2017/10/09 16:05:09 Could not send audio: EOF

    Notice that not all of the Sends fail.

    Could anyone point me in the right direction here? Is it something to do with the FLAC headers? I also wonder whether resetting the buffer causes some of the data to be dropped (i.e. it’s a non-trivial operation that takes some time to complete) and the API doesn’t like the missing information.
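
    For what it’s worth, two details in the code above are worth checking. b.Bytes() returns a slice that aliases the buffer’s internal storage and is only valid until the next modification, so the b.Reset() in receive can indeed invalidate a chunk before transcribe reads it; copying first avoids that. Also, in grpc-go an io.EOF from Send only means the stream has already been torn down; the underlying status has to be read back with Recv. A minimal sketch of both fixes, reusing the b, received, and stream variables from the question’s code:

     // In receive: copy the chunk so Reset cannot invalidate it.
     // (b.Bytes() aliases b's internal storage.)
     chunk := make([]byte, b.Len())
     copy(chunk, b.Bytes())
     received <- chunk
     b.Reset()

     // In transcribe: drain responses on a separate goroutine so the
     // real gRPC status surfaces instead of a bare EOF from Send.
     go func() {
         for {
             resp, err := stream.Recv()
             if err != nil {
                 log.Printf("stream closed: %v", err) // the actual cause
                 return
             }
             log.Printf("response: %v", resp)
         }
     }()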

    Any help would be really appreciated.

  • Error -138 returns "Error number -138 occurred"

    29 April 2016, by bot1131357

    I am trying to create a program that listens for a period of time, then times out so that it can return to work on other tasks and retry later. Here is the code I am testing with:

    AVFormatContext *pFormatCtx = NULL;
    AVCodecContext *codecCtx = NULL;
    AVCodec *codec;
     int ret = 0;

     #define ERRBUFFLEN 256
     char errbuf[ERRBUFFLEN];   // used by av_strerror() below

    // Register all formats and codecs
    av_register_all();
    avformat_network_init(); // for network streaming

    AVDictionary *d = NULL;           // "create" an empty dictionary
    av_dict_set(&d, "timeout", "5", 0); // add an entry
    av_dict_set(&d, "rtsp_flags", "listen", 0); // add an entry

    char filename[100];
    sprintf_s(filename, sizeof(filename), "%s", "rtsp://127.0.0.1:8554/demo");


    //:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    printf_s("Open video file.\n");
    // Open video file
    ret = avformat_open_input(&pFormatCtx, filename, NULL, &d);   // Returns -138 here
     if (ret < 0)
    {
       printf_s("Failed: cannot open input.\n");
       av_strerror(ret, errbuf, ERRBUFFLEN);
       fprintf(stderr, "avformat_open_input() fail: %s\n", errbuf);
        continue;   // this block sits inside the program's retry loop
       //return -1; // Couldn't find stream information
    }

    In listening mode, avformat_open_input() returns -138. Using av_strerror() gives the following explanation: "Error number -138 occurred".

    Is this an Easter egg? What does -138 stand for?
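
    For what it’s worth, FFmpeg builds system error codes with AVERROR(errno), which simply negates a positive errno value, so -138 corresponds to errno 138. On Windows toolchains (which the sprintf_s/printf_s calls suggest) errno.h defines ETIMEDOUT as 138, which would make this an ordinary listen timeout rather than an Easter egg. A small sketch to verify that assumption on your platform:

     // Hedged check: does -138 equal AVERROR(ETIMEDOUT) here?
     #include <errno.h>
     #include <stdio.h>
     #include <libavutil/error.h>

     int main(void)
     {
         char errbuf[256];
         printf("AVERROR(ETIMEDOUT) = %d\n", AVERROR(ETIMEDOUT));
         if (av_strerror(AVERROR(ETIMEDOUT), errbuf, sizeof(errbuf)) == 0)
             printf("message: %s\n", errbuf);
         else
             printf("no message table entry for this code\n");
         return 0;
     }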

  • FFMPEG: sws_scale returns error: Slice parameters 0, 2160 are invalid

    21 January 2020, by Matthew Czarnek

    I’m trying to follow a tutorial to display ffmpeg AVFrame output in SDL. The tutorial (and all examples I’m seeing online) is still using ’sws_getContext’, which has been deprecated and removed from the newest version of ffmpeg. I’m trying to change the current pixel format from whatever it is to PIX_FMT_YUV420P so I can display it, and I believe I need the sws_scale function to accomplish this.

    However, sws_scale is the function that causes a command-line error of:
    Slice parameters 0, 2160 are invalid

    Here is all my code associated with SwsContext:

    struct SwsContext* av_sws_ctx = NULL;
    av_sws_ctx = sws_alloc_context();

    sws_init_context(av_sws_ctx, NULL, NULL);

    sws_scale(av_sws_ctx, (uint8_t const* const*)av_frame->data,
                           av_frame->linesize, 0, av_codec_context->height,
                           av_frame->data, av_frame->linesize);

    Further complicating the matter, SwsContext is only defined internally to ffmpeg; externally I can’t set or get any of its fields, or even view them in the debugger. For reference, sws_scale is declared as:

    int sws_scale(struct SwsContext *c, const uint8_t *const srcSlice[], const int srcStride[], int srcSliceY, int srcSliceH, uint8_t *const dst[], const int dstStride[]);

    The values of the other parameters, besides av_sws_ctx:

     srcSlice: av_frame->data =
         8 arrays; the first is filled with "\x10\x10\x10\x10\x10...",
         the second and third with "€€€€€€€€€€€€€€€€€...",
         the rest are NULL
     srcStride (av_frame->linesize) is an array:
         3840, 1920, 1920, 0, 0, 0, 0, 0
     srcSliceY: 0
     srcSliceH: 2160
     dst: same as srcSlice (av_frame->data)
     dstStride: av_frame->linesize again

    If I drill into the sws_scale source code, I find that this error is thrown by this chunk of code:

     if ((srcSliceY & (macro_height-1)) ||
         ((srcSliceH & (macro_height-1)) && srcSliceY + srcSliceH != c->srcH) ||
         srcSliceY + srcSliceH > c->srcH) {
         av_log(c, AV_LOG_ERROR, "Slice parameters %d, %d are invalid\n", srcSliceY, srcSliceH);
         return AVERROR(EINVAL);
     }

    I therefore assume the issue is that the height of my video (4K) is bigger than what the sws context expects. But I can’t figure out how to tell the context what its height should be, using sws_alloc_context, sws_init_context, or any other function.

    See something I’m missing? Thank you.
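
    A plausible reading of the check above, offered as a sketch rather than a definitive fix: sws_alloc_context() returns a context whose dimensions are all unset, so c->srcH is 0 and any slice of height 2160 fails the bounds test. The width, height, and pixel formats are AVOptions on the context and can be set before sws_init_context(); av_codec_context and the option names below follow the question’s code and libswscale/options.c:

     #include <libswscale/swscale.h>
     #include <libavutil/opt.h>

     // Configure the context via its AVOptions before initializing it.
     struct SwsContext* av_sws_ctx = sws_alloc_context();
     av_opt_set_int(av_sws_ctx, "srcw",       av_codec_context->width,   0);
     av_opt_set_int(av_sws_ctx, "srch",       av_codec_context->height,  0);
     av_opt_set_int(av_sws_ctx, "src_format", av_codec_context->pix_fmt, 0);
     av_opt_set_int(av_sws_ctx, "dstw",       av_codec_context->width,   0);
     av_opt_set_int(av_sws_ctx, "dsth",       av_codec_context->height,  0);
     av_opt_set_int(av_sws_ctx, "dst_format", AV_PIX_FMT_YUV420P,        0);
     av_opt_set_int(av_sws_ctx, "sws_flags",  SWS_BILINEAR,              0);

     if (sws_init_context(av_sws_ctx, NULL, NULL) < 0) {
         // handle initialization failure
     }

     // Note: for a real conversion the dst pointers and strides must
     // describe a separate YUV420P buffer (e.g. from av_image_alloc),
     // not av_frame itself as in the question's call.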