
Media (91)

Other articles (33)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses the HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player was created specifically for MediaSPIP and can easily be adapted to fit a chosen theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash fallback is used.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and audio both to conventional computers (...)

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries: FFMpeg: the main encoder, able to transcode almost every type of video and audio file into formats readable on the Internet (see this tutorial for its installation); Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
    Complementary, optional binaries: flvtool2: (...)

On other sites (3454)

  • Bug #2989: Problem with the file tmp/cache/charger_plugins_fonctions.php

    29 May 2013, by Equipement

    Hello,

    Some additional information:
    - just after an error identical to the one quoted in my previous message, it was confirmed that the file charger_plugins_fonctions.php was indeed missing;
    - the Apache access logs show that just before this error occurred, a SPIP cache flush was performed while the site was being crawled by several indexing robots.

    Regards
    Equipement

  • How to encode resampled PCM audio to AAC using the ffmpeg API when the input PCM sample count is not equal to 1024

    26 September 2016, by Aleksei2414904

    I am currently working on capturing and streaming audio to an RTMP server. I work on macOS (in Xcode), so for capturing the audio sample buffer I use the AVFoundation framework. But for encoding and streaming I need to use the ffmpeg API and the libfaac encoder, so the output format must be AAC (to support stream playback on iOS devices).

    I ran into the following problem: the audio capture device (in my case a Logitech camera) gives me a sample buffer with 512 LPCM samples, and I can select an input sample rate of 16000, 24000, 36000 or 48000 Hz. When I feed these 512 samples to the AAC encoder (configured for the appropriate sample rate), I hear slow and jerky audio (it sounds as if there were a piece of silence after each frame).

    I figured out (maybe I am wrong) that the libfaac encoder only accepts audio frames with 1024 samples. When I set the input sample rate to 24000 and resample the input sample buffer to 48000 before encoding, I obtain 1024 resampled samples, and after encoding these 1024 samples to AAC I hear proper sound on output. But my webcam produces 512 samples per buffer for any input sample rate, while the output sample rate must be 48000 Hz. So I need to resample in any case, and after resampling I will not obtain exactly 1024 samples per buffer.

    Is there a way to solve this problem within the ffmpeg API?

    I would be grateful for any help.

    PS:
    I guess that I could accumulate resampled buffers until the sample count reaches 1024 and then encode them, but since this is a stream there would be trouble with the resulting timestamps and with other input devices, so such a solution is not suitable.
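    That accumulation is, in fact, the usual approach: ffmpeg's AVAudioFifo (av_audio_fifo_write / av_audio_fifo_read) buffers resampled samples so the encoder can be fed exactly 1024 at a time, and timestamps stay consistent if each frame's pts is derived from a running count of emitted samples rather than from the capture timestamps. A minimal, self-contained mono sketch of that bookkeeping (the Accumulator type here is illustrative, not an ffmpeg API):

    ```c
    #include <stdint.h>
    #include <string.h>

    #define AAC_FRAME_SIZE 1024  /* the AAC encoder consumes exactly this many samples per frame */

    typedef struct {
        int16_t buf[AAC_FRAME_SIZE]; /* staging buffer for one encoder frame (mono S16) */
        int filled;                  /* samples currently staged */
        int64_t next_pts;            /* pts, counted in samples, of the frame being filled */
    } Accumulator;

    /* Append up to `n` resampled mono samples. Returns 1 when acc->buf holds a
     * complete 1024-sample frame ready to encode (its pts is stored in *frame_pts);
     * *consumed tells the caller how many input samples were taken, so it can
     * call again with the remainder. */
    int acc_push(Accumulator *acc, const int16_t *in, int n,
                 int *consumed, int64_t *frame_pts)
    {
        int take = AAC_FRAME_SIZE - acc->filled;
        if (take > n)
            take = n;
        memcpy(acc->buf + acc->filled, in, (size_t)take * sizeof(int16_t));
        acc->filled += take;
        *consumed = take;
        if (acc->filled == AAC_FRAME_SIZE) {
            *frame_pts = acc->next_pts;       /* in samples; rescale via av_rescale_q for the stream */
            acc->next_pts += AAC_FRAME_SIZE;  /* pts advances by exactly one frame of samples */
            acc->filled = 0;
            return 1;
        }
        return 0;
    }
    ```

    With 512-sample chunks resampled to 48000 Hz, the buffer simply emits one frame every time 1024 samples have arrived, and each emitted frame's pts is exact regardless of how the input was chunked.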

    This issue grew out of the problem described in the question: How to fill an audio AVFrame (ffmpeg) with the data obtained from CMSampleBufferRef (AVFoundation)?

    Here is the code with the audio codec configuration (there is also a video stream, but video works fine):

    /* global variables */
    static AVFrame *aframe;
    static AVFrame *frame;
    AVOutputFormat *fmt;
    AVFormatContext *oc;
    AVStream *audio_st, *video_st;

    void Init()
    {
        AVCodec *audio_codec, *video_codec;
        int ret;

        avcodec_register_all();
        av_register_all();
        avformat_network_init();
        avformat_alloc_output_context2(&oc, NULL, "flv", filename);
        fmt = oc->oformat;
        oc->oformat->video_codec = AV_CODEC_ID_H264;
        oc->oformat->audio_codec = AV_CODEC_ID_AAC;
        video_st = NULL;
        audio_st = NULL;
        if (fmt->video_codec != AV_CODEC_ID_NONE) {
            // … /* init video codec */
        }
        if (fmt->audio_codec != AV_CODEC_ID_NONE) {
            audio_codec = avcodec_find_encoder(fmt->audio_codec);
            if (!audio_codec) {
                fprintf(stderr, "Could not find encoder for '%s'\n",
                        avcodec_get_name(fmt->audio_codec));
                exit(1);
            }
            audio_st = avformat_new_stream(oc, audio_codec);
            if (!audio_st) {
                fprintf(stderr, "Could not allocate stream\n");
                exit(1);
            }
            audio_st->id = oc->nb_streams - 1;

            // AAC:
            audio_st->codec->sample_fmt     = AV_SAMPLE_FMT_S16;
            audio_st->codec->bit_rate       = 32000;
            audio_st->codec->sample_rate    = 48000;
            audio_st->codec->profile        = FF_PROFILE_AAC_LOW;
            audio_st->time_base             = (AVRational){1, audio_st->codec->sample_rate};
            audio_st->codec->channels       = 1;
            audio_st->codec->channel_layout = AV_CH_LAYOUT_MONO;

            if (oc->oformat->flags & AVFMT_GLOBALHEADER)
                audio_st->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
        }

        if (video_st) {
            // …
            /* prepare video */
        }
        if (audio_st) {
            aframe = avcodec_alloc_frame();
            if (!aframe) {
                fprintf(stderr, "Could not allocate audio frame\n");
                exit(1);
            }
            AVCodecContext *c = audio_st->codec;

            ret = avcodec_open2(c, audio_codec, NULL);
            if (ret < 0) {
                fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));
                exit(1);
            }
            // …
        }
    }

    And here is the resampling and encoding of the audio:

    if (mType == kCMMediaType_Audio)
    {
        CMSampleTimingInfo timing_info;
        CMSampleBufferGetSampleTimingInfo(sampleBuffer, 0, &timing_info);
        double pts = 0;
        double dts = 0;
        AVCodecContext *c;
        AVPacket pkt = { 0 }; // data and size must be 0
        int got_packet, ret;
        av_init_packet(&pkt);
        c = audio_st->codec;
        CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);

        NSUInteger channelIndex = 0;

        CMBlockBufferRef audioBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
        size_t audioBlockBufferOffset = (channelIndex * numSamples * sizeof(SInt16));
        size_t lengthAtOffset = 0;
        size_t totalLength = 0;
        SInt16 *samples = NULL;
        CMBlockBufferGetDataPointer(audioBlockBuffer, audioBlockBufferOffset, &lengthAtOffset, &totalLength, (char **)(&samples));

        const AudioStreamBasicDescription *audioDescription = CMAudioFormatDescriptionGetStreamBasicDescription(CMSampleBufferGetFormatDescription(sampleBuffer));

        SwrContext *swr = swr_alloc();

        int in_smprt = (int)audioDescription->mSampleRate;
        av_opt_set_int(swr, "in_channel_layout",  AV_CH_LAYOUT_MONO, 0);
        av_opt_set_int(swr, "out_channel_layout", audio_st->codec->channel_layout, 0);
        av_opt_set_int(swr, "in_channel_count",  audioDescription->mChannelsPerFrame, 0);
        av_opt_set_int(swr, "out_channel_count", audio_st->codec->channels, 0);
        av_opt_set_int(swr, "in_sample_rate",  audioDescription->mSampleRate, 0);
        av_opt_set_int(swr, "out_sample_rate", audio_st->codec->sample_rate, 0);
        av_opt_set_sample_fmt(swr, "in_sample_fmt",  AV_SAMPLE_FMT_S16, 0);
        av_opt_set_sample_fmt(swr, "out_sample_fmt", audio_st->codec->sample_fmt, 0);

        swr_init(swr);
        uint8_t **input = NULL;
        int src_linesize;
        int in_samples = (int)numSamples;
        ret = av_samples_alloc_array_and_samples(&input, &src_linesize, audioDescription->mChannelsPerFrame,
                                                 in_samples, AV_SAMPLE_FMT_S16P, 0);

        *input = (uint8_t *)samples;
        uint8_t *output = NULL;

        int out_samples = av_rescale_rnd(swr_get_delay(swr, in_smprt) + in_samples,
                                         (int)audio_st->codec->sample_rate, in_smprt, AV_ROUND_UP);

        av_samples_alloc(&output, NULL, audio_st->codec->channels, out_samples, audio_st->codec->sample_fmt, 0);
        out_samples = swr_convert(swr, &output, out_samples, (const uint8_t **)input, in_samples);

        aframe->nb_samples = (int)out_samples;

        ret = avcodec_fill_audio_frame(aframe, audio_st->codec->channels, audio_st->codec->sample_fmt,
                                       (uint8_t *)output,
                                       (int)out_samples *
                                       av_get_bytes_per_sample(audio_st->codec->sample_fmt) *
                                       audio_st->codec->channels, 1);

        aframe->channel_layout = audio_st->codec->channel_layout;
        aframe->channels = audio_st->codec->channels;
        aframe->sample_rate = audio_st->codec->sample_rate;

        if (timing_info.presentationTimeStamp.timescale != 0)
            pts = (double)timing_info.presentationTimeStamp.value / timing_info.presentationTimeStamp.timescale;

        aframe->pts = pts * audio_st->time_base.den;
        aframe->pts = av_rescale_q(aframe->pts, audio_st->time_base, audio_st->codec->time_base);

        ret = avcodec_encode_audio2(c, &pkt, aframe, &got_packet);
        if (ret < 0) {
            fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
            exit(1);
        }
        swr_free(&swr);
        if (got_packet)
        {
            pkt.stream_index = audio_st->index;

            pkt.pts = av_rescale_q(pkt.pts, audio_st->codec->time_base, audio_st->time_base);
            pkt.dts = av_rescale_q(pkt.dts, audio_st->codec->time_base, audio_st->time_base);

            // Write the compressed frame to the media file.
            ret = av_interleaved_write_frame(oc, &pkt);
            if (ret != 0) {
                fprintf(stderr, "Error while writing audio frame: %s\n",
                        av_err2str(ret));
                exit(1);
            }
        }
    }
  • How to stop ffmpeg that runs through a Java process

    9 January 2014, by Ruben

    I am running ffmpeg in Java, using p = Runtime.getRuntime().exec(command);. It is used to stream video through a Red5 server.

    My problem is that ffmpeg requires "q" to be pressed in order to stop. How can I do that? How can I send the q character to the running process, so that it stops, before calling p.destroy() or something similar? At the moment it runs forever until I kill the process in the Task Manager. I am using Windows 7.
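
    One way to do this (a sketch assuming Java 8+; FfmpegStopper and stopGracefully are illustrative names, not a standard API) is to write "q" to the child process's stdin, which Process exposes via getOutputStream(), and fall back to destroy() only if ffmpeg does not exit in time:

    ```java
    import java.io.OutputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.TimeUnit;

    public class FfmpegStopper {
        /**
         * Ask an ffmpeg child process to quit by writing "q" to its stdin
         * (the same keypress ffmpeg expects interactively), then wait;
         * if it still has not exited after the timeout, force-kill it.
         */
        public static int stopGracefully(Process p, long timeoutMillis) throws Exception {
            OutputStream stdin = p.getOutputStream(); // the child's standard input
            stdin.write("q\n".getBytes(StandardCharsets.UTF_8));
            stdin.flush();
            if (!p.waitFor(timeoutMillis, TimeUnit.MILLISECONDS)) {
                p.destroy(); // "q" was ignored; fall back to killing the process
                p.waitFor();
            }
            return p.exitValue();
        }
    }
    ```

    The Process returned by Runtime.getRuntime().exec(command) works the same way. Note that waitFor(long, TimeUnit) requires Java 8; on Java 7 you would poll exitValue() in a loop instead.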