Advanced search

Media (1)

Keyword: - Tags - / censure

Other articles (10)

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance in the shared-hosting farm on a regular basis. Combined with a system Cron on the farm's central site, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)

  • The plugin: Podcasts.

    14 July 2010, by

    The podcasting problem is, once again, a problem that highlights the standardization of data transport on the Internet.
    Two interesting formats exist: the one developed by Apple, heavily oriented toward iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "open" and notably backed by Yahoo and the Miro software;
    File types supported in the feeds
    Apple's format only allows the following types in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

On other sites (3002)

  • How to encode resampled PCM audio to AAC using the ffmpeg API when the input PCM sample count is not 1024

    26 September 2016, by Aleksei2414904

    I am currently working on capturing audio and streaming it to an RTMP server. I work under macOS (in Xcode), so I use the AVFoundation framework to capture the audio sample buffers. For encoding and streaming, however, I need to use the ffmpeg API and the libfaac encoder, so the output format must be AAC (to support stream playback on iOS devices).

    I have run into the following problem: the audio-capture device (in my case a Logitech camera) gives me sample buffers of 512 LPCM samples, and I can choose an input sample rate of 16000, 24000, 36000 or 48000 Hz. When I feed these 512 samples to the AAC encoder (configured for the appropriate sample rate), I hear slow, jerky audio (it sounds like a piece of silence after each frame).

    I figured out (maybe I am wrong) that the libfaac encoder only accepts audio frames of 1024 samples. When I set the input sample rate to 24000 and resample the input buffer to 48000 before encoding, I obtain 1024 resampled samples, and after encoding those 1024 samples to AAC I hear correct sound on the output. But my webcam produces 512 samples per buffer regardless of the input sample rate, while the output sample rate must be 48000 Hz. So I need to resample in any case, and I will not obtain exactly 1024 samples per buffer after resampling.
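
    As a rough illustration of that arithmetic, here is a minimal sketch (not code from the post) using av_rescale_rnd, with the resampler delay ignored for simplicity:

       #include <libavutil/mathematics.h>
       #include <stdio.h>

       int main(void)
       {
           /* 512 captured samples resampled from 24000 Hz to 48000 Hz give exactly
              the 1024 samples one AAC frame needs ... */
           int64_t from_24k = av_rescale_rnd(512, 48000, 24000, AV_ROUND_UP);  /* 1024 */
           /* ... but capturing directly at 48000 Hz leaves only 512 of them. */
           int64_t from_48k = av_rescale_rnd(512, 48000, 48000, AV_ROUND_UP);  /*  512 */
           printf("%lld %lld\n", (long long)from_24k, (long long)from_48k);
           return 0;
       }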

    Is there a way to solve this problem within the ffmpeg API's functionality?

    I would be grateful for any help.

    PS:
    I guess I could accumulate resampled buffers until the sample count reaches 1024 and then encode that, but this is a stream, so there would be trouble with the resulting timestamps and with other input devices, and such a solution does not seem suitable.
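
    For reference, a minimal sketch of that accumulation idea using libavutil's AVAudioFifo (encode_and_write_frame is a hypothetical helper, and the timestamps would have to come from a running count of samples fed to the encoder rather than from the capture timestamps):

       #include <libavutil/audio_fifo.h>

       /* Allocated once after avcodec_open2(), e.g.:
          fifo = av_audio_fifo_alloc(c->sample_fmt, c->channels, c->frame_size); */
       static AVAudioFifo *fifo;

       /* Queue each resampled buffer, then drain the FIFO in frame_size (1024) chunks. */
       static int queue_and_encode(AVCodecContext *c, uint8_t **resampled, int nb_samples)
       {
           int ret = av_audio_fifo_write(fifo, (void **)resampled, nb_samples);
           if (ret < 0)
               return ret;

           while (av_audio_fifo_size(fifo) >= c->frame_size) {
               /* aframe->data must have been allocated with room for frame_size samples. */
               av_audio_fifo_read(fifo, (void **)aframe->data, c->frame_size);
               aframe->nb_samples = c->frame_size;
               /* Hypothetical helper: sets aframe->pts from a running sample counter,
                  calls avcodec_encode_audio2() and writes the resulting packet. */
               ret = encode_and_write_frame(c, aframe);
               if (ret < 0)
                   return ret;
           }
           return 0;
       }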

    The current issue grew out of the problem described in [question]: How to fill audio AVFrame (ffmpeg) with the data obtained from CMSampleBufferRef (AVFoundation)?

    Here is the code with the audio codec configuration (there is also a video stream, but the video works fine):

    /* global variables */
    static AVFrame *aframe;
    static AVFrame *frame;
    AVOutputFormat *fmt;
    AVFormatContext *oc;
    AVStream *audio_st, *video_st;

    void Init()
    {
        AVCodec *audio_codec, *video_codec;
        int ret;

        avcodec_register_all();
        av_register_all();
        avformat_network_init();
        avformat_alloc_output_context2(&oc, NULL, "flv", filename);
        fmt = oc->oformat;
        oc->oformat->video_codec = AV_CODEC_ID_H264;
        oc->oformat->audio_codec = AV_CODEC_ID_AAC;
        video_st = NULL;
        audio_st = NULL;

        if (fmt->video_codec != AV_CODEC_ID_NONE) {
            // …  /* init video codec */
        }

        if (fmt->audio_codec != AV_CODEC_ID_NONE) {
            audio_codec = avcodec_find_encoder(fmt->audio_codec);
            if (!audio_codec) {
                fprintf(stderr, "Could not find encoder for '%s'\n",
                        avcodec_get_name(fmt->audio_codec));
                exit(1);
            }
            audio_st = avformat_new_stream(oc, audio_codec);
            if (!audio_st) {
                fprintf(stderr, "Could not allocate stream\n");
                exit(1);
            }
            audio_st->id = oc->nb_streams - 1;

            // AAC:
            audio_st->codec->sample_fmt     = AV_SAMPLE_FMT_S16;
            audio_st->codec->bit_rate       = 32000;
            audio_st->codec->sample_rate    = 48000;
            audio_st->codec->profile        = FF_PROFILE_AAC_LOW;
            audio_st->time_base             = (AVRational){ 1, audio_st->codec->sample_rate };
            audio_st->codec->channels       = 1;
            audio_st->codec->channel_layout = AV_CH_LAYOUT_MONO;

            if (oc->oformat->flags & AVFMT_GLOBALHEADER)
                audio_st->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
        }

        if (video_st) {
            // …
            /* prepare video */
        }

        if (audio_st) {
            aframe = avcodec_alloc_frame();
            if (!aframe) {
                fprintf(stderr, "Could not allocate audio frame\n");
                exit(1);
            }
            AVCodecContext *c;
            int ret;

            c = audio_st->codec;

            ret = avcodec_open2(c, audio_codec, 0);
            if (ret < 0) {
                fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));
                exit(1);
            }

            // …
        }
    }
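
    As a side note, once avcodec_open2() has succeeded, fixed-frame-size encoders such as AAC report the number of samples they expect per frame in the codec context, so the 1024-sample requirement can be confirmed at runtime, e.g.:

       /* frame_size is filled in by avcodec_open2() for encoders with a fixed
          frame length; for AAC it is 1024 samples per channel. */
       printf("audio encoder frame_size = %d samples\n", c->frame_size);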

    And the resampling and encoding of the audio:

    if (mType == kCMMediaType_Audio)
    {
        CMSampleTimingInfo timing_info;
        CMSampleBufferGetSampleTimingInfo(sampleBuffer, 0, &timing_info);
        double pts = 0;
        double dts = 0;
        AVCodecContext *c;
        AVPacket pkt = { 0 }; // data and size must be 0
        int got_packet, ret;

        av_init_packet(&pkt);
        c = audio_st->codec;

        // Extract the raw PCM samples from the CMSampleBuffer.
        CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);
        NSUInteger channelIndex = 0;

        CMBlockBufferRef audioBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
        size_t audioBlockBufferOffset = (channelIndex * numSamples * sizeof(SInt16));
        size_t lengthAtOffset = 0;
        size_t totalLength = 0;
        SInt16 *samples = NULL;
        CMBlockBufferGetDataPointer(audioBlockBuffer, audioBlockBufferOffset, &lengthAtOffset, &totalLength, (char **)(&samples));

        const AudioStreamBasicDescription *audioDescription = CMAudioFormatDescriptionGetStreamBasicDescription(CMSampleBufferGetFormatDescription(sampleBuffer));

        // Configure the resampler: capture format in, encoder format out.
        SwrContext *swr = swr_alloc();

        int in_smprt = (int)audioDescription->mSampleRate;
        av_opt_set_int(swr, "in_channel_layout",  AV_CH_LAYOUT_MONO, 0);
        av_opt_set_int(swr, "out_channel_layout", audio_st->codec->channel_layout, 0);
        av_opt_set_int(swr, "in_channel_count",   audioDescription->mChannelsPerFrame, 0);
        av_opt_set_int(swr, "out_channel_count",  audio_st->codec->channels, 0);
        av_opt_set_int(swr, "out_channel_layout", audio_st->codec->channel_layout, 0);
        av_opt_set_int(swr, "in_sample_rate",     audioDescription->mSampleRate, 0);
        av_opt_set_int(swr, "out_sample_rate",    audio_st->codec->sample_rate, 0);
        av_opt_set_sample_fmt(swr, "in_sample_fmt",  AV_SAMPLE_FMT_S16, 0);
        av_opt_set_sample_fmt(swr, "out_sample_fmt", audio_st->codec->sample_fmt, 0);
        swr_init(swr);

        uint8_t **input = NULL;
        int src_linesize;
        int in_samples = (int)numSamples;
        ret = av_samples_alloc_array_and_samples(&input, &src_linesize, audioDescription->mChannelsPerFrame,
                                                 in_samples, AV_SAMPLE_FMT_S16P, 0);

        *input = (uint8_t *)samples;
        uint8_t *output = NULL;

        // Resample; with 512 input samples this only reaches 1024 for a 2:1 rate ratio.
        int out_samples = av_rescale_rnd(swr_get_delay(swr, in_smprt) + in_samples,
                                         (int)audio_st->codec->sample_rate, in_smprt, AV_ROUND_UP);
        av_samples_alloc(&output, NULL, audio_st->codec->channels, out_samples, audio_st->codec->sample_fmt, 0);
        in_samples = (int)numSamples;
        out_samples = swr_convert(swr, &output, out_samples, (const uint8_t **)input, in_samples);

        // Fill the AVFrame with the resampled data and encode it.
        aframe->nb_samples = (int)out_samples;

        ret = avcodec_fill_audio_frame(aframe, audio_st->codec->channels, audio_st->codec->sample_fmt,
                                       (uint8_t *)output,
                                       (int)out_samples *
                                       av_get_bytes_per_sample(audio_st->codec->sample_fmt) *
                                       audio_st->codec->channels, 1);

        aframe->channel_layout = audio_st->codec->channel_layout;
        aframe->channels = audio_st->codec->channels;
        aframe->sample_rate = audio_st->codec->sample_rate;

        if (timing_info.presentationTimeStamp.timescale != 0)
            pts = (double)timing_info.presentationTimeStamp.value / timing_info.presentationTimeStamp.timescale;

        aframe->pts = pts * audio_st->time_base.den;
        aframe->pts = av_rescale_q(aframe->pts, audio_st->time_base, audio_st->codec->time_base);

        ret = avcodec_encode_audio2(c, &pkt, aframe, &got_packet);
        if (ret < 0) {
            fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
            exit(1);
        }
        swr_free(&swr);

        if (got_packet)
        {
            pkt.stream_index = audio_st->index;

            pkt.pts = av_rescale_q(pkt.pts, audio_st->codec->time_base, audio_st->time_base);
            pkt.dts = av_rescale_q(pkt.dts, audio_st->codec->time_base, audio_st->time_base);

            // Write the compressed frame to the media file.
            ret = av_interleaved_write_frame(oc, &pkt);
            if (ret != 0) {
                fprintf(stderr, "Error while writing audio frame: %s\n",
                        av_err2str(ret));
                exit(1);
            }
        }
    }
  • h263: make default color black, like flv

    20 November 2011, by Michael Niedermayer

    h263: make default color black, like flv

  • iPad Doesn't Render H.264 Video with HTML5

    5 October 2011, by jgoldberg

    I have some H.264-encoded videos that render correctly in HTML5 in the web browser, but do not render correctly on the iPad. When I use an H.264 video I downloaded off the internet, it renders correctly on the iPad, so it is not an HTML problem.

    Here is the ffmpeg info about my videos:

    My original .mov video:

    Seems stream 1 codec frame rate differs from container frame rate: 6000.00 (6000/1) -> 30.00 (30/1)

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'a_video.mp4':

    Metadata:

    major_brand     : qt
    minor_version   : 537199360
    compatible_brands: qt

    Duration: 00:00:42.74, start: 0.000000, bitrate: 220 kb/s

    Stream #0.0(eng): Audio: aac, 44100 Hz, stereo, s16, 94 kb/s
    Stream #0.1(eng): Video: h264, yuv420p, 762x464, 122 kb/s, 30 fps, 30 tbr, 3k tbn, 6k tbc

    After using Handbrake to convert my .mov to an mp4, it still doesn't render on the iPad:

    Seems stream 0 codec frame rate differs from container frame rate: 180000.00 (180000/1) -> 29.97 (30000/1001)

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'a_video.m4v':

    Metadata:

    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42isomavc1
    encoder         : HandBrake 0.9.5 2011010300

    Duration: 00:00:42.77, start: 0.000000, bitrate: 169 kb/s

    Stream #0.0(und): Video: h264, yuv420p, 752x464 [PAR 381:376 DAR 381:232], 35 kb/s, PAR 145161:141376 DAR 145161:87232, 29.97 fps, 29.97 tbr, 90k tbn, 180k tbc
    Stream #0.1(eng): Audio: aac, 44100 Hz, stereo, s16, 128 kb/s

    Here is a .mp4 I found online which does render on the iPad:

    Seems stream 1 codec frame rate differs from container frame rate: 180000.00 (180000/1) -> 25.00 (25/1)

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'a_video_3_emu.mp4':

    Metadata:
    major_brand: M4VP
    minor_version: 1
    compatible_brands: M4VPM4A mp42isom
    encoder: CoreMediaAuthoring 677, CoreMedia 420.17, i386

    Duration: 00:01:38.01, start: 0.000000, bitrate: 1023 kb/s

    Stream #0.0(und): Audio: aac, 32000 Hz, mono, s16, 97 kb/s
    Stream #0.1(und): Video: h264, yuv420p, 480x360 [PAR 1:1 DAR 4:3], 914 kb/s, 25 fps, 25 tbr, 90k tbn, 180k tbc

    Does anyone see something wrong with the way I encoded my videos?

    Edit

    At first my theory was that the iPad was sensitive to different container formats, but that appears not to be the case. I took a video that does render correctly on the iPad and converted it to a .mov, and it still played correctly. So there must be a problem with how the iPad deals with the underlying H.264 stream.
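
    Since the reasoning points at the H.264 stream itself, one thing worth inspecting is the profile and level the encoder produced, as iPads of that generation only play certain H.264 profile/level combinations. A minimal diagnostic sketch (not from the question) using the ffmpeg API of that era, with the file path passed on the command line:

       #include <libavformat/avformat.h>
       #include <stdio.h>

       /* Print the codec id, profile and level of every video stream in a file. */
       int main(int argc, char **argv)
       {
           AVFormatContext *ic = NULL;
           av_register_all();
           if (avformat_open_input(&ic, argv[1], NULL, NULL) < 0 ||
               avformat_find_stream_info(ic, NULL) < 0) {
               fprintf(stderr, "Could not open %s\n", argv[1]);
               return 1;
           }
           for (unsigned i = 0; i < ic->nb_streams; i++) {
               AVCodecContext *c = ic->streams[i]->codec;
               if (c->codec_type == AVMEDIA_TYPE_VIDEO)
                   printf("stream %u: codec_id=%d profile=%d level=%d\n",
                          i, c->codec_id, c->profile, c->level);
           }
           avformat_close_input(&ic);
           return 0;
       }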