
Other articles (91)

  • Customise by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013 and it is announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)

  • User profiles

    12 April 2011, by

    Each user has a profile page for editing their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
    Users can edit their profile from their author page; an "Edit your profile" link in the navigation is (...)

On other sites (10839)

  • Win32: Only use large buffers when writing to disk

    11 December 2015, by Erik de Castro Lopo
    Win32: Only use large buffers when writing to disk
    

    Windows can suffer quite badly from disk fragmentation. To avoid
    this, on Windows, the FILE* buffer size was set to 10 MB. However,
    this huge buffer is undesirable when writing to e.g. a pipe.

    This patch updates the behaviour to only use the huge buffer when
    writing to disk.

    Patch-from: lvqcl <lvqcl.mail@gmail.com>
    Closes: https://sourceforge.net/p/flac/feature-requests/114/

    • [DH] src/libFLAC/stream_encoder.c
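    The change boils down to probing the handle type before installing the big stdio buffer. A minimal sketch of the idea (the helper name and structure are illustrative, not FLAC's actual code; on non-Windows platforms it simply applies the buffer, since the patch is Win32-specific):

    ```c
    #include <stdio.h>
    #ifdef _WIN32
    #include <windows.h>
    #include <io.h>
    #endif

    /* Hypothetical helper mirroring the commit's idea: only install a large
     * FILE* buffer when the stream refers to an actual disk file, so that
     * pipes and consoles keep the small default buffer. */
    static int maybe_set_large_buffer(FILE *f, char *buf, size_t bufsize)
    {
    #ifdef _WIN32
        HANDLE h = (HANDLE)_get_osfhandle(_fileno(f));
        if (GetFileType(h) != FILE_TYPE_DISK)
            return 0; /* pipe, console, etc.: leave default buffering alone */
    #endif
        return setvbuf(f, buf, _IOFBF, bufsize); /* must run before any I/O */
    }

    int main(void)
    {
        static char big[10 * 1024 * 1024]; /* the 10 MB buffer the commit mentions */
        FILE *f = tmpfile();
        if (!f)
            return 1;
        int rc = maybe_set_large_buffer(f, big, sizeof big);
        fputs("hello", f); /* writes go through the large buffer on disk files */
        fclose(f);
        printf("setvbuf rc=%d\n", rc);
        return 0;
    }
    ```

    The key point is that `setvbuf` must be called before the first read or write on the stream, which is why the check happens right after the file is opened.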
  • Prepending generated audio silence when merging audio w/ non-zero starting PTS and video with zero-based PTS for equal duration, aligned streams

    20 July 2021, by hedgehog90

    When extracting segments from a media file with video and audio streams without re-encoding (-c copy), the requested seek and end times will most likely not land precisely on a keyframe in the source.

    In this case, ffmpeg grabs the nearest keyframe of each track and positions the tracks with differing starting PTS values so that they remain in sync.

    Video keyframes tend to be much more widely spaced, so you can often end up with something like this:

    [screenshot: the video stream starts at PTS 0 while the audio stream starts several seconds later]

    Viewing the clip in VLC, the audio starts 5 seconds in.

    However, in other video players or video editors I've noticed this can lead to playback issues or A/V desync.

    One solution would be to re-encode both streams when extracting the clip, allowing ffmpeg to seek precisely to the specified seek time and to generate equal-length, synced audio and video tracks.

    However, in my case I do not want to re-encode the video: it is costly and produces lower-quality video and/or larger files. I would prefer to re-encode only the audio, filling the initial gap with generated silence.

    This should be simple, but everything I've tried has failed to generate silence before the audio stream begins.

    I've tried apad, aresample=sync=1, and using amerge to combine the audio with anullsrc. None of it works.

    All I can think of to get around this is to use ffprobe on the misaligned source to retrieve the first audio PTS, and in a second ffmpeg process apply this value as a negative -itsoffset, then concat the audio track with generated silence lasting the duration of the gap... But surely there's a better way, with a single ffmpeg invocation?

    Any ideas?
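    For the two-pass workaround described in the question, the amount of silence to generate follows directly from the first audio PTS and its time base. A small sketch of that arithmetic (the helper is hypothetical, not an ffmpeg API; the PTS value would come from ffprobe):

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper: given the first audio PTS (as reported by ffprobe)
     * and its time base num/den, return how many samples of silence at
     * sample_rate are needed to pad the audio back to t = 0 (rounded). */
    static int64_t silence_samples(int64_t first_pts, int tb_num, int tb_den,
                                   int sample_rate)
    {
        /* seconds = first_pts * tb_num / tb_den; samples = seconds * rate */
        return (first_pts * tb_num * (int64_t)sample_rate + tb_den / 2) / tb_den;
    }

    int main(void)
    {
        /* audio starts 5 s in: PTS 240000 in a 1/48000 time base */
        int64_t n = silence_samples(240000, 1, 48000, 48000);
        printf("%lld\n", (long long)n); /* 5 s of 48 kHz audio */
        return 0;
    }
    ```

    The resulting count (or its duration in seconds) is what the generated-silence segment would need to last before concatenation.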

  • How to encode resampled PCM audio to AAC using the ffmpeg API when the input PCM sample count is not 1024

    22 February 2023, by Aleksei2414904

    I am working on capturing and streaming audio to an RTMP server. I work under MacOS (in Xcode), so for capturing the audio sample buffer I use the AVFoundation framework. But for encoding and streaming I need to use the ffmpeg API and the libfaac encoder, so the output format must be AAC (to support stream playback on iOS devices).

    And I have run into the following problem: the audio-capturing device (in my case a Logitech camera) gives me a sample buffer with 512 LPCM samples, and I can select an input sample rate of 16000, 24000, 36000 or 48000 Hz. When I feed these 512 samples to the AAC encoder (configured for the appropriate sample rate), I hear slow and jerky audio (it sounds like a piece of silence after each frame).

    I figured out (maybe I am wrong) that the libfaac encoder accepts audio frames only with 1024 samples. When I set the input sample rate to 24000 and resample the input buffer to 48000 before encoding, I obtain 1024 resampled samples. After encoding these 1024 samples to AAC, I hear proper sound on output. But my webcam produces 512 samples per buffer for any input sample rate, while the output sample rate must be 48000 Hz. So I need to resample in any case, and I will not obtain exactly 1024 samples per buffer after resampling.

    Is there a way to solve this problem within the ffmpeg API?

    I would be grateful for any help.

    PS: I guess that I could accumulate resampled buffers until the sample count reaches 1024 and then encode them, but this is a stream, so there would be trouble with the resulting timestamps and with other input devices; such a solution is not suitable.

    The current issue came out of the problem described in [question]: How to fill audio AVFrame (ffmpeg) with the data obtained from CMSampleBufferRef (AVFoundation)?

    Here is the code with the audio codec configuration (there is also a video stream, but the video works fine):

    /*global variables*/
    static AVFrame *aframe;
    static AVFrame *frame;
    AVOutputFormat *fmt;
    AVFormatContext *oc;
    AVStream *audio_st, *video_st;
    Init ()
    {
        AVCodec *audio_codec, *video_codec;
        int ret;

        avcodec_register_all();
        av_register_all();
        avformat_network_init();
        avformat_alloc_output_context2(&oc, NULL, "flv", filename);
        fmt = oc->oformat;
        oc->oformat->video_codec = AV_CODEC_ID_H264;
        oc->oformat->audio_codec = AV_CODEC_ID_AAC;
        video_st = NULL;
        audio_st = NULL;
        if (fmt->video_codec != AV_CODEC_ID_NONE)
          { //…  /*init video codec*/ }
        if (fmt->audio_codec != AV_CODEC_ID_NONE) {
            audio_codec = avcodec_find_encoder(fmt->audio_codec);

            if (!(audio_codec)) {
                fprintf(stderr, "Could not find encoder for '%s'\n",
                        avcodec_get_name(fmt->audio_codec));
                exit(1);
            }
            audio_st = avformat_new_stream(oc, audio_codec);
            if (!audio_st) {
                fprintf(stderr, "Could not allocate stream\n");
                exit(1);
            }
            audio_st->id = oc->nb_streams-1;

            //AAC:
            audio_st->codec->sample_fmt  = AV_SAMPLE_FMT_S16;
            audio_st->codec->bit_rate    = 32000;
            audio_st->codec->sample_rate = 48000;
            audio_st->codec->profile = FF_PROFILE_AAC_LOW;
            audio_st->time_base = (AVRational){1, audio_st->codec->sample_rate };
            audio_st->codec->channels    = 1;
            audio_st->codec->channel_layout = AV_CH_LAYOUT_MONO;

            if (oc->oformat->flags & AVFMT_GLOBALHEADER)
                audio_st->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
        }

        if (video_st)
        {
            //   …
            /*prepare video*/
        }
        if (audio_st)
        {
            aframe = avcodec_alloc_frame();
            if (!aframe) {
                fprintf(stderr, "Could not allocate audio frame\n");
                exit(1);
            }
            AVCodecContext *c;
            int ret;

            c = audio_st->codec;

            ret = avcodec_open2(c, audio_codec, 0);
            if (ret < 0) {
                fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));
                exit(1);
            }

            //…
        }
    }

    And the resampling and encoding of the audio:

    if (mType == kCMMediaType_Audio)
    {
        CMSampleTimingInfo timing_info;
        CMSampleBufferGetSampleTimingInfo(sampleBuffer, 0, &timing_info);
        double  pts = 0;
        double  dts = 0;
        AVCodecContext *c;
        AVPacket pkt = { 0 }; // data and size must be 0;
        int got_packet, ret;
        av_init_packet(&pkt);
        c = audio_st->codec;
        CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);

        NSUInteger channelIndex = 0;

        CMBlockBufferRef audioBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
        size_t audioBlockBufferOffset = (channelIndex * numSamples * sizeof(SInt16));
        size_t lengthAtOffset = 0;
        size_t totalLength = 0;
        SInt16 *samples = NULL;
        CMBlockBufferGetDataPointer(audioBlockBuffer, audioBlockBufferOffset, &lengthAtOffset, &totalLength, (char **)(&samples));

        const AudioStreamBasicDescription *audioDescription = CMAudioFormatDescriptionGetStreamBasicDescription(CMSampleBufferGetFormatDescription(sampleBuffer));

        SwrContext *swr = swr_alloc();

        int in_smprt = (int)audioDescription->mSampleRate;
        av_opt_set_int(swr, "in_channel_layout",  AV_CH_LAYOUT_MONO, 0);
        av_opt_set_int(swr, "out_channel_layout", audio_st->codec->channel_layout, 0);
        av_opt_set_int(swr, "in_channel_count",   audioDescription->mChannelsPerFrame, 0);
        av_opt_set_int(swr, "out_channel_count",  audio_st->codec->channels, 0);
        av_opt_set_int(swr, "in_sample_rate",     audioDescription->mSampleRate, 0);
        av_opt_set_int(swr, "out_sample_rate",    audio_st->codec->sample_rate, 0);
        av_opt_set_sample_fmt(swr, "in_sample_fmt",  AV_SAMPLE_FMT_S16, 0);
        av_opt_set_sample_fmt(swr, "out_sample_fmt", audio_st->codec->sample_fmt, 0);

        swr_init(swr);
        uint8_t **input = NULL;
        int src_linesize;
        int in_samples = (int)numSamples;
        ret = av_samples_alloc_array_and_samples(&input, &src_linesize, audioDescription->mChannelsPerFrame,
                                                 in_samples, AV_SAMPLE_FMT_S16P, 0);

        *input = (uint8_t*)samples;
        uint8_t *output = NULL;

        int out_samples = av_rescale_rnd(swr_get_delay(swr, in_smprt) + in_samples, (int)audio_st->codec->sample_rate, in_smprt, AV_ROUND_UP);

        av_samples_alloc(&output, NULL, audio_st->codec->channels, out_samples, audio_st->codec->sample_fmt, 0);
        in_samples = (int)numSamples;
        out_samples = swr_convert(swr, &output, out_samples, (const uint8_t **)input, in_samples);

        aframe->nb_samples = (int)out_samples;

        ret = avcodec_fill_audio_frame(aframe, audio_st->codec->channels, audio_st->codec->sample_fmt,
                                 (uint8_t *)output,
                                 (int) out_samples *
                                 av_get_bytes_per_sample(audio_st->codec->sample_fmt) *
                                 audio_st->codec->channels, 1);

        aframe->channel_layout = audio_st->codec->channel_layout;
        aframe->channels = audio_st->codec->channels;
        aframe->sample_rate = audio_st->codec->sample_rate;

        if (timing_info.presentationTimeStamp.timescale != 0)
            pts = (double)timing_info.presentationTimeStamp.value / timing_info.presentationTimeStamp.timescale;

        aframe->pts = pts * audio_st->time_base.den;
        aframe->pts = av_rescale_q(aframe->pts, audio_st->time_base, audio_st->codec->time_base);

        ret = avcodec_encode_audio2(c, &pkt, aframe, &got_packet);

        if (ret < 0) {
            fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
            exit(1);
        }

        swr_free(&swr);
        if (got_packet)
        {
            pkt.stream_index = audio_st->index;

            pkt.pts = av_rescale_q(pkt.pts, audio_st->codec->time_base, audio_st->time_base);
            pkt.dts = av_rescale_q(pkt.dts, audio_st->codec->time_base, audio_st->time_base);

            // Write the compressed frame to the media file.
            ret = av_interleaved_write_frame(oc, &pkt);
            if (ret != 0) {
                fprintf(stderr, "Error while writing audio frame: %s\n",
                        av_err2str(ret));
                exit(1);
            }
        }
    }
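    For what it's worth, the accumulation approach dismissed in the question's PS is how this is usually solved: buffer the resampled samples in a FIFO (ffmpeg provides AVAudioFifo for exactly this) and emit fixed 1024-sample frames, deriving each frame's PTS from the running count of samples encoded. A plain-C sketch of just the buffering logic, with illustrative names (mono S16 samples to keep it short):

    ```c
    #include <stdio.h>
    #include <string.h>

    #define FRAME 1024                    /* AAC frame size in samples */

    static short fifo[8192];              /* simple mono S16 sample FIFO */
    static int fifo_len = 0;

    /* append n samples to the FIFO */
    static void fifo_push(const short *s, int n) {
        memcpy(fifo + fifo_len, s, n * sizeof *s);
        fifo_len += n;
    }

    /* pop exactly one 1024-sample frame; returns 1 if a full frame was emitted */
    static int fifo_pop_frame(short *out) {
        if (fifo_len < FRAME) return 0;
        memcpy(out, fifo, FRAME * sizeof *out);
        fifo_len -= FRAME;
        memmove(fifo, fifo + FRAME, fifo_len * sizeof *fifo);
        return 1;
    }

    int main(void) {
        short in[512] = {0}, frame[FRAME];
        int emitted = 0;
        /* the camera delivers 512-sample buffers; after resampling their size
         * varies, so buffer them and encode only full 1024-sample frames */
        for (int i = 0; i < 5; i++) {
            fifo_push(in, 512);
            while (fifo_pop_frame(frame)) emitted++;
        }
        printf("frames=%d leftover=%d\n", emitted, fifo_len);
        return 0;
    }
    ```

    Timestamps stay consistent despite the buffering: frame i's PTS is simply i * 1024 in a 1/sample_rate time base, regardless of how the input buffers were sized.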
