Other articles (62)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash fallback is used.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and audio both on conventional computers (...)

On other sites (10244)

  • Google Speech - Streaming Request Returns EOF Error

    16 October 2017, by Josh

    Using Go, I’m taking an RTMP stream, transcoding it to FLAC (using ffmpeg), and attempting to stream it to Google’s Speech API to transcribe the audio. However, I keep getting EOF errors when sending the data. I can’t find any information on this error in the docs, so I’m not exactly sure what’s causing it.

    I’m chunking the received data into 3s clips (length isn’t relevant as long as it’s less than the maximum length of a streaming recognition request).

    Here is the core of my code:

    package main

    import (
       "bufio"
       "bytes"
       "context"
       "log"
       "os"
       "os/exec"
       "os/signal"
       "syscall"
       "time"

       speech "cloud.google.com/go/speech/apiv1"
       speechpb "google.golang.org/genproto/googleapis/cloud/speech/v1"
    )

    func main() {
       done := make(chan os.Signal, 1) // buffered, so signal.Notify never drops a signal
       received := make(chan []byte)

       go receive(received)
       go transcribe(received)

       signal.Notify(done, os.Interrupt, syscall.SIGTERM)

       // Block until interrupted.
       <-done
    }

    func receive(received chan<- []byte) {
       var b bytes.Buffer
       stdout := bufio.NewWriter(&b)

       // ffmpeg pulls the RTMP stream and transcodes it to 16 kHz FLAC on stdout.
       cmd := exec.Command("ffmpeg", "-i", "rtmp://127.0.0.1:1935/live/key", "-f", "flac", "-ar", "16000", "-")
       cmd.Stdout = stdout

       if err := cmd.Start(); err != nil {
           log.Fatal(err)
       }

       // Hand off whatever audio has accumulated every three seconds.
       ticker := time.NewTicker(3 * time.Second)

       for range ticker.C {
           stdout.Flush()
           log.Printf("Received %d bytes", b.Len())
           received <- b.Bytes()
           b.Reset()
       }
    }

    func transcribe(received <-chan []byte) {
       ctx := context.TODO()

       client, err := speech.NewClient(ctx)
       if err != nil {
           log.Fatal(err)
       }

       stream, err := client.StreamingRecognize(ctx)
       if err != nil {
           log.Fatal(err)
       }

       // Send the initial configuration message.
       if err = stream.Send(&speechpb.StreamingRecognizeRequest{
           StreamingRequest: &speechpb.StreamingRecognizeRequest_StreamingConfig{
               StreamingConfig: &speechpb.StreamingRecognitionConfig{
                   Config: &speechpb.RecognitionConfig{
                       Encoding:        speechpb.RecognitionConfig_FLAC,
                       LanguageCode:    "en-GB",
                       SampleRateHertz: 16000,
                   },
               },
           },
       }); err != nil {
           log.Fatal(err)
       }

       // Forward each audio chunk to the streaming recognizer.
       for data := range received {
           if len(data) == 0 {
               continue
           }
           log.Printf("Sending %d bytes", len(data))
           if err := stream.Send(&speechpb.StreamingRecognizeRequest{
               StreamingRequest: &speechpb.StreamingRecognizeRequest_AudioContent{
                   AudioContent: data,
               },
           }); err != nil {
               log.Printf("Could not send audio: %v", err)
           }
       }
    }

    Running this code gives this output:

    2017/10/09 16:05:00 Received 191704 bytes
    2017/10/09 16:05:00 Saving 191704 bytes
    2017/10/09 16:05:00 Sending 191704 bytes
    2017/10/09 16:05:00 Could not send audio: EOF

    2017/10/09 16:05:03 Received 193192 bytes
    2017/10/09 16:05:03 Saving 193192 bytes
    2017/10/09 16:05:03 Sending 193192 bytes
    2017/10/09 16:05:03 Could not send audio: EOF

    2017/10/09 16:05:06 Received 193188 bytes
    2017/10/09 16:05:06 Saving 193188 bytes
    2017/10/09 16:05:06 Sending 193188 bytes // Notice that this doesn't error

    2017/10/09 16:05:09 Received 191704 bytes
    2017/10/09 16:05:09 Saving 191704 bytes
    2017/10/09 16:05:09 Sending 191704 bytes
    2017/10/09 16:05:09 Could not send audio: EOF

    Notice that not all of the Sends fail.

    Could anyone point me in the right direction here? Is it something to do with the FLAC headers? I also wonder whether resetting the buffer causes some of the data to be dropped (i.e. it’s a non-trivial operation that actually takes some time to complete) and the API doesn’t like this missing information.
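
    One way I might rule the buffer theory out (an untested sketch, not something I’ve run yet): copy the chunk out of the buffer before resetting it, since b.Bytes() returns a slice that aliases the buffer’s backing array:

    for range ticker.C {
       stdout.Flush()
       // Copy the chunk so that b.Reset() and ffmpeg's subsequent writes
       // can't clobber the slice while transcribe is still reading it.
       chunk := make([]byte, b.Len())
       copy(chunk, b.Bytes())
       b.Reset()
       received <- chunk
    }

    And if the io.EOF from Send actually means the server has already closed the stream (which is how gRPC reports a send on a dead stream), the real status error would only surface on the stream’s receive side, so perhaps I should be looking there too.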

    Any help would be really appreciated.

  • Error -138 returns "Error number -138 occurred"

    29 April 2016, by bot1131357

    I am trying to create a program that listens for a period of time and then times out, so that it can return to other tasks and retry later. Here is the code I am testing with:

    #include <stdio.h>
    #include <libavformat/avformat.h>

    #define ERRBUFFLEN 200
    char errbuf[ERRBUFFLEN];

    AVFormatContext *pFormatCtx = NULL;
    AVCodecContext *codecCtx = NULL;
    AVCodec *codec;
    int ret = 0;

    // Register all formats and codecs
    av_register_all();
    avformat_network_init(); // for network streaming

    AVDictionary *d = NULL;                     // "create" an empty dictionary
    av_dict_set(&d, "timeout", "5", 0);         // give up listening after 5 seconds
    av_dict_set(&d, "rtsp_flags", "listen", 0); // wait for an incoming connection

    char filename[100];
    sprintf_s(filename, sizeof(filename), "%s", "rtsp://127.0.0.1:8554/demo");

    //:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    printf_s("Open video file.\n");
    // Open video file
    ret = avformat_open_input(&pFormatCtx, filename, NULL, &d);   // Returns -138 here
    if (ret < 0)
    {
       printf_s("Failed: cannot open input.\n");
       av_strerror(ret, errbuf, ERRBUFFLEN);
       fprintf(stderr, "avformat_open_input() fail: %s\n", errbuf);
       continue;  // this snippet sits inside a retry loop
       //return -1; // Couldn't find stream information
    }

    In listening mode, avformat_open_input() returns -138. Using av_strerror() gives the following explanation: "Error number -138 occurred"

    Is this an Easter egg? What does -138 stand for?
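
    The only lead I can come up with (an untested guess on my part): AVERROR() negates positive system errno values, and this code builds with MSVC (hence sprintf_s/printf_s), so -138 might simply be a wrapped errno:

    #include <errno.h>

    // Untested guess: AVERROR(e) is just -e for system errno values.
    // MSVC's errno.h defines ETIMEDOUT as 138, so -138 would be
    // AVERROR(ETIMEDOUT), i.e. the 5-second listen window expiring
    // without any incoming connection.
    if (ret == AVERROR(ETIMEDOUT))
        printf_s("Timed out waiting for an incoming RTSP connection.\n");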

  • FFMPEG: sws_scale returns error: Slice parameters 0, 2160 are invalid

    21 January 2020, by Matthew Czarnek

    I’m trying to follow a tutorial on displaying ffmpeg AVFrame output in SDL. The tutorial (and every example I’m finding online) still uses ’sws_getContext’, which has been deprecated and removed from the newest version of ffmpeg. I’m trying to change the current pixel format, whatever it happens to be, to PIX_FMT_YUV420P so that I can display it, and I believe I need the sws_scale function to accomplish this.

    However, sws_scale is the call that produces this command-line error:
    Slice parameters 0, 2160 are invalid

    Here is all of my code associated with the SwsContext:

    struct SwsContext* av_sws_ctx = NULL;
    av_sws_ctx = sws_alloc_context();          // allocate an empty context
    sws_init_context(av_sws_ctx, NULL, NULL);  // initialise it (no dimensions or formats given)

    sws_scale(av_sws_ctx, (uint8_t const* const*)av_frame->data,
              av_frame->linesize, 0, av_codec_context->height,
              av_frame->data, av_frame->linesize);

    Further complicating the matter, SwsContext is only defined internally to ffmpeg; externally I can’t set or get any of its fields, or even view them in the debugger. For reference, here is the signature of sws_scale:

    int sws_scale(struct SwsContext *c, const uint8_t *const srcSlice[], const int srcStride[], int srcSliceY, int srcSliceH, uint8_t *const dst[], const int dstStride[]);

    The values of the other parameters, besides av_sws_ctx:

    srcSlice (av_frame->data):
       8 arrays; the first is filled with "\x10\x10\x10\x10\x10...",
       the second and third with "\x80\x80\x80\x80\x80...",
       and the rest are NULL
    srcStride (av_frame->linesize) is an array:
       3840, 1920, 1920, 0, 0, 0, 0, 0
    srcSliceY: 0
    srcSliceH: 2160
    dst: same as the second parameter (av_frame->data)
    dstStride: av_frame->linesize again

    If I drill into the sws_scale source code, I find that this error is thrown by this chunk of code:

    if ((srcSliceY & (macro_height - 1)) ||
        ((srcSliceH & (macro_height - 1)) && srcSliceY + srcSliceH != c->srcH) ||
        srcSliceY + srcSliceH > c->srcH) {
        av_log(c, AV_LOG_ERROR, "Slice parameters %d, %d are invalid\n", srcSliceY, srcSliceH);
        return AVERROR(EINVAL);
    }

    I assume the issue therefore is that srcSliceY + srcSliceH (0 + 2160) exceeds c->srcH, presumably because the context’s source height is still 0: I never set it, and my video is 4K (2160 lines). But I can’t figure out how to tell the SwsContext what its height should be using sws_alloc_context, sws_init_context, or any other function.
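
    The closest thing I’ve found, though this is an untested sketch on my part: SwsContext begins with an AVClass, so its dimensions and pixel formats appear to be settable through the AVOptions API ("srcw", "srch", "src_format" and friends) before sws_init_context is called:

    #include <libavutil/opt.h>

    struct SwsContext* av_sws_ctx = sws_alloc_context();

    // Untested sketch: describe the source and destination frames via
    // AVOptions before initialising the context.
    av_opt_set_int(av_sws_ctx, "srcw", av_codec_context->width, 0);
    av_opt_set_int(av_sws_ctx, "srch", av_codec_context->height, 0);
    av_opt_set_int(av_sws_ctx, "src_format", av_codec_context->pix_fmt, 0);
    av_opt_set_int(av_sws_ctx, "dstw", av_codec_context->width, 0);
    av_opt_set_int(av_sws_ctx, "dsth", av_codec_context->height, 0);
    av_opt_set_int(av_sws_ctx, "dst_format", AV_PIX_FMT_YUV420P, 0);

    if (sws_init_context(av_sws_ctx, NULL, NULL) < 0) {
        fprintf(stderr, "sws_init_context failed\n");
    }

    I also wonder whether passing av_frame->data as both the source and the destination is even allowed, or whether sws_scale needs a separate destination frame.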

    See something I’m missing? Thank you.