Advanced search

Media (0)

Keyword: - Tags - /xmlrpc

No media matching your criteria is available on this site.

Other articles (106)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • No talk of markets, clouds, etc.

    10 April 2011

    The vocabulary used on this site tries to avoid any reference to the fads that flourish so
    freely on web 2.0 and in the companies that live off them.
    You are therefore invited to banish the use of the terms "Brand", "Cloud", "Market", etc.
    Our motivation is above all to create a simple tool, accessible to everyone, that encourages
    the sharing of creative works on the Internet and allows authors to retain optimal autonomy.
    No "Gold or Premium contract" is therefore planned, no (...)

On other sites (12864)

  • Output black when I decode h264 720p with ffmpeg

    6 December 2017, by José Marqueses Saxo

    First, sorry for my English. When I decode H.264 720p video from the AR.Drone 2.0, my output is black and I can’t see anything.

    I have tried changing pCodecCtx->pix_fmt = AV_PIX_FMT_BGR24; to pCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;, and pCodecCtxH264->pix_fmt = AV_PIX_FMT_BGR24; to pCodecCtxH264->pix_fmt = AV_PIX_FMT_YUV420P;, but then my program crashes. What am I doing wrong? Thank you; here is part of my code:

    av_register_all();
    avcodec_register_all();
    avformat_network_init();

    // 1.2. Open video file
    if(avformat_open_input(&pFormatCtx, drone_addr, NULL, NULL) != 0) {
     mexPrintf("Could not connect to drone");
     EndVideo();
     return;
    }

    pCodec    = avcodec_find_decoder(AV_CODEC_ID_H264);

    pCodecCtx = avcodec_alloc_context3(pCodec);
    pCodecCtx->pix_fmt = AV_PIX_FMT_BGR24;
    pCodecCtx->skip_frame = AVDISCARD_DEFAULT;
    pCodecCtx->error_concealment = FF_EC_GUESS_MVS | FF_EC_DEBLOCK;
    pCodecCtx->err_recognition = AV_EF_CAREFUL;
    pCodecCtx->skip_loop_filter = AVDISCARD_DEFAULT;
    pCodecCtx->workaround_bugs = FF_BUG_AUTODETECT;
    pCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
    pCodecCtx->codec_id = AV_CODEC_ID_H264;
    pCodecCtx->skip_idct = AVDISCARD_DEFAULT;
    pCodecCtx->width = 1280;
    pCodecCtx->height = 720;

    pCodecH264 = avcodec_find_decoder(AV_CODEC_ID_H264);
    pCodecCtxH264 = avcodec_alloc_context3(pCodecH264);


    pCodecCtxH264->pix_fmt = AV_PIX_FMT_BGR24;
    pCodecCtxH264->skip_frame = AVDISCARD_DEFAULT;
    pCodecCtxH264->error_concealment = FF_EC_GUESS_MVS | FF_EC_DEBLOCK;
    pCodecCtxH264->err_recognition = AV_EF_CAREFUL;
    pCodecCtxH264->skip_loop_filter = AVDISCARD_DEFAULT;
    pCodecCtxH264->workaround_bugs = FF_BUG_AUTODETECT;
    pCodecCtxH264->codec_type = AVMEDIA_TYPE_VIDEO;
    pCodecCtxH264->codec_id = AV_CODEC_ID_H264;
    pCodecCtxH264->skip_idct = AVDISCARD_DEFAULT;

    if(avcodec_open2(pCodecCtxH264, pCodecH264, &optionsDict) < 0)
    {
      mexPrintf("Error opening H264 codec");
      return ;
    }

    pFrame_BGR24 = av_frame_alloc();


    if(pFrame_BGR24 == NULL) {
      mexPrintf("Could not allocate pFrame_BGR24\n");
      return ;
    }

    // Determine required buffer size and allocate buffer

    buffer_BGR24 = (uint8_t *)av_mallocz(
        av_image_get_buffer_size(AV_PIX_FMT_BGR24, pCodecCtx->width,
            ((pCodecCtx->height == 720) ? 720 : pCodecCtx->height) *
            sizeof(uint8_t) * 3, 1));

    // Assign buffer to image planes

    av_image_fill_arrays(pFrame_BGR24->data, pFrame_BGR24->linesize,
    buffer_BGR24,AV_PIX_FMT_BGR24, pCodecCtx->width, pCodecCtx->height,1);

    // format conversion context
    pConvertCtx_BGR24 = sws_getContext(pCodecCtx->width, pCodecCtx->height,
    pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,  AV_PIX_FMT_BGR24,
                                    SWS_BILINEAR | SWS_ACCURATE_RND, 0, 0, 0);

    // 1.6. get video frames
    pFrame = av_frame_alloc();

    av_init_packet(&packet);
    packet.data = NULL;
    packet.size = 0;
    } // end of initialization

    // Capture one frame
    void video::capture(mxArray *plhs[]) {

     if(av_read_frame(pFormatCtx, &packet) < 0){
         mexPrintf("Error reading frame");
         return;
     }
      do {
          do {
             rest = avcodec_send_packet(pCodecCtxH264, &packet);
          } while(rest == AVERROR(EAGAIN));

          if(rest == AVERROR_EOF || rest == AVERROR(EINVAL)) {
               printf("AVERROR(EAGAIN): %d, AVERROR_EOF: %d, AVERROR(EINVAL): %d\n",
                      AVERROR(EAGAIN), AVERROR_EOF, AVERROR(EINVAL));
               printf("fe_read_frame: Frame getting error (%d)!\n", rest);
               return;
          }

          rest = avcodec_receive_frame(pCodecCtxH264, pFrame);
      } while(rest == AVERROR(EAGAIN));

      if(rest == AVERROR_EOF || rest == AVERROR(EINVAL)) {
          // An error or EOF occurred; break out and return what
          // we have so far.
          printf("AVERROR(EAGAIN): %d, AVERROR_EOF: %d, AVERROR(EINVAL): %d\n",
                 AVERROR(EAGAIN), AVERROR_EOF, AVERROR(EINVAL));
          printf("fe_read_frame: EOF or some other decoding error (%d)!\n", rest);
          return;
      }


      // 2.1.1. convert frame to GRAYSCALE [or BGR] for OpenCV
      sws_scale(pConvertCtx_BGR24, (const uint8_t* const*)pFrame->data,
                pFrame->linesize, 0, pCodecCtx->height,
                pFrame_BGR24->data, pFrame_BGR24->linesize);

      av_packet_unref(&packet);
      av_init_packet(&packet);
      mwSize dims[] = {(pCodecCtx->width) * ((pCodecCtx->height == 720) ? 720 :
                       pCodecCtx->height) * sizeof(uint8_t) * 3};
      plhs[0] = mxCreateNumericArray(1, dims, mxUINT8_CLASS, mxREAL);
      //plhs[0] = mxCreateDoubleMatrix(pCodecCtx->height, pCodecCtx->width, mxREAL);
      point = mxGetPr(plhs[0]);
      memcpy(point, pFrame_BGR24->data[0],
             (pCodecCtx->width) * (pCodecCtx->height) * sizeof(uint8_t) * 3);
    }
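
    As a side note on the code above: a likely culprit is that an H.264 decoder cannot be
    forced to output BGR24; after decoding, pFrame->format carries the decoder’s native
    format (typically AV_PIX_FMT_YUV420P), and the conversion to BGR24 belongs in sws_scale.
    A minimal sketch under that assumption, reusing the variable names from the question:

    // Open the decoder without forcing a pixel format on it.
    pCodecH264    = avcodec_find_decoder(AV_CODEC_ID_H264);
    pCodecCtxH264 = avcodec_alloc_context3(pCodecH264);
    pCodecCtxH264->width  = 1280;
    pCodecCtxH264->height = 720;
    if(avcodec_open2(pCodecCtxH264, pCodecH264, NULL) < 0)
        return;

    // av_image_get_buffer_size already accounts for the 3 bytes per BGR24
    // pixel, so the height argument must not be multiplied by 3.
    buffer_BGR24 = (uint8_t *)av_mallocz(
        av_image_get_buffer_size(AV_PIX_FMT_BGR24,
                                 pCodecCtxH264->width,
                                 pCodecCtxH264->height, 1));

    // After avcodec_receive_frame() succeeds, build the scaler from the
    // format the decoder actually produced and convert to BGR24.
    pConvertCtx_BGR24 = sws_getContext(
        pCodecCtxH264->width, pCodecCtxH264->height,
        (enum AVPixelFormat)pFrame->format,
        pCodecCtxH264->width, pCodecCtxH264->height, AV_PIX_FMT_BGR24,
        SWS_BILINEAR | SWS_ACCURATE_RND, NULL, NULL, NULL);
    sws_scale(pConvertCtx_BGR24, (const uint8_t* const*)pFrame->data,
              pFrame->linesize, 0, pCodecCtxH264->height,
              pFrame_BGR24->data, pFrame_BGR24->linesize);
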
  • How to use Google's Cloud Speech-to-Text API to transcribe a video using the REST API

    8 June 2018, by mrb

    I’d like to have the transcript of two people speaking in a video, but I get an empty response from the Cloud Speech-to-Text API.

    Approach:

    I have a 56-minute video file containing a conversation between two people. I would like a transcript of that conversation, and I would like to use Google’s Cloud Speech-to-Text API to get it.

    To save a little on my Google Cloud Storage, I converted the video to audio first using ffmpeg.

    First I tried to figure out the audio codec using the command below; it looks like AAC.
    ffmpeg -i video.mp4

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'videoplayback.mp4':
     Metadata:
       major_brand     : mp42
       minor_version   : 0
       compatible_brands: isommp42
       creation_time   : 2015-12-30T08:17:14.000000Z
     Duration: 00:56:03.99, start: 0.000000, bitrate: 362 kb/s
       Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 490x360 [SAR 1:1 DAR 49:36], 264 kb/s,     29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 96 kb/s (default)
       Metadata:
         creation_time   : 2015-12-30T08:17:31.000000Z
         handler_name    : IsoMedia File Produced by Google, 5-11-2011    

    So I extracted it from the video using:
    ffmpeg -i video.mp4 -vn -acodec copy myaudio.aac

    Details so far:
    ffmpeg -i myaudio.aac
    Outputs:

    Input #0, aac, from 'myaudio.aac':
     Duration: 00:56:47.49, bitrate: 97 kb/s
       Stream #0:0: Audio: aac (LC), 44100 Hz, stereo, fltp, 97 kb/s

    After that I converted it to Opus, because I’m told that Opus is better:
    ffmpeg -i myaudio.aac -acodec libopus -b:a 97k -vbr on -compression_level 10 myaudio.opus

    Info so far:
    opusinfo myaudio.opus

    User comments section follows...
       encoder=Lavc58.18.100 libopus
    Opus stream 1:
       Pre-skip: 312
       Playback gain: 0 dB
       Channels: 2
       Original sample rate: 48000Hz
       Packet duration:   20.0ms (max),   20.0ms (avg),   20.0ms (min)
       Page duration:   1000.0ms (max), 1000.0ms (avg), 1000.0ms (min)
       Total data length: 29956714 bytes (overhead: 0.872%)
       Playback length: 56m:03.990s
       Average bitrate: 71.24 kb/s, w/o overhead: 70.62 kb/s

    At this point I uploaded myaudio.opus to Google Cloud Storage.
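
    The upload step itself isn’t shown in the post; with the Cloud SDK it would presumably be
    something like this (the bucket name is a placeholder):

    gsutil cp myaudio.opus gs://{MY_BUCKET}/myaudio.opus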

    curl POST 1
    I started the speech recognition by doing a POST with curl:

    curl --request POST  --header "Content-Type: application/json" --url 'https://speech.googleapis.com/v1/speech:longrunningrecognize?fields=done%2Cerror%2Cmetadata%2Cname%2Cresponse&key={MY_API_KEY}' --data '{"audio": {"uri": "gs://{MY_BUCKET}/myaudio.opus"},"config": {"encoding": "OGG_OPUS", "sampleRateHertz": 48000, "languageCode": "en-US"}}'

    Response: {"name": "123456789"}
    123456789 is not the actual value.

    curl GET 1
    Now I wanted to get the results:

    curl --request GET --url 'https://speech.googleapis.com/v1/operations/123456789?fields=done%2Cerror%2Cmetadata%2Cname%2Cresponse&key={MY_API_KEY}'

    This gave me the error: Unable to recognize speech, possible error in encoding or channel config. Please correct the config and retry the request.

    So I updated the encoding configuration from OGG_OPUS to LINEAR16.
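
    Worth noting: LINEAR16 denotes uncompressed 16-bit PCM, while the request below still
    points at the Opus file, so the declared encoding cannot match the actual bytes. A
    minimal sketch of a conversion that a LINEAR16 config would describe, assuming mono
    16 kHz (a common choice for speech):

    ffmpeg -i myaudio.opus -ac 1 -ar 16000 -c:a pcm_s16le myaudio.wav

    The request would then use "encoding": "LINEAR16", "sampleRateHertz": 16000 and a
    gs:// URI pointing at myaudio.wav.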

    curl POST 2
    I did the POST again:

    curl --request POST  --header "Content-Type: application/json" --url 'https://speech.googleapis.com/v1/speech:longrunningrecognize?fields=done%2Cerror%2Cmetadata%2Cname%2Cresponse&key={MY_API_KEY}' --data '{"audio": {"uri": "gs://{MY_BUCKET}/myaudio.opus"},"config": {"encoding": "LINEAR16", "sampleRateHertz": 48000, "languageCode": "en-US"}}'

    Response: {"name": "987654321"}

    curl GET 2

    curl --request GET --url 'https://speech.googleapis.com/v1/operations/987654321?fields=done%2Cerror%2Cmetadata%2Cname%2Cresponse&key={MY_API_KEY}'

    Response:

    {
     "name": "987654321",
     "metadata": {
       "@type": "type.googleapis.com/google.cloud.speech.v1.LongRunningRecognizeMetadata",
       "progressPercent": 100,
       "startTime": "2018-06-08T11:01:24.596504Z",
       "lastUpdateTime": "2018-06-08T11:01:51.825882Z"
     },
     "done": true
    }

    The problem is that I don’t get the actual transcription. According to the documentation, there should be a response key in the response containing the data.
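
    For reference, a completed operation that did recognize speech should look roughly like
    this, per the v1 longrunningrecognize documentation (the transcript and confidence
    values here are invented):

    {
     "name": "987654321",
     "done": true,
     "response": {
       "@type": "type.googleapis.com/google.cloud.speech.v1.LongRunningRecognizeResponse",
       "results": [
         {
           "alternatives": [
             { "transcript": "hello and welcome", "confidence": 0.92 }
           ]
         }
       ]
     }
    }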

    Since I’m kinda stuck here, I’d like to know if I’m doing something completely wrong. I don’t have any technical or resource limitations, so all suggestions are very welcome! I’m also happy to change my approach.

    Thanks in advance! Cheers

  • How to use Google's Cloud Speech-to-Text REST API to transcribe a video

    24 July 2018, by mrb

    I’d like to have the transcript of two people speaking in a video, but I get an empty response from the Cloud Speech-to-Text API (...)