
FFmpeg Opus choppy sound UPDATED DESCRIPTION
2 June 2020, by easy_breezy

I'm using FFmpeg and trying to encode and decode raw PCM audio to Opus with the built-in FFmpeg "opus" codec. My input samples are raw PCM, 8000 Hz, 16-bit mono, in AV_SAMPLE_FMT_S16 format. Since Opus only accepts the AV_SAMPLE_FMT_FLTP sample format and a 48000 Hz sample rate, I resample my samples before encoding them.
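The question does not show how contextEncoder is created; a minimal sketch of what setting up libavcodec's built-in "opus" encoder for this input might look like (the bit rate and everything else not mentioned in the question are assumptions):

const AVCodec *codec = avcodec_find_encoder_by_name("opus"); // native encoder, not the "libopus" wrapper
AVCodecContext *contextEncoder = avcodec_alloc_context3(codec);
contextEncoder->sample_fmt = AV_SAMPLE_FMT_FLTP;   // the only format the native encoder accepts
contextEncoder->sample_rate = 48000;               // the only rate it accepts
contextEncoder->channel_layout = AV_CH_LAYOUT_MONO;
contextEncoder->channels = 1;
contextEncoder->bit_rate = 24000;                  // arbitrary example value
contextEncoder->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL; // the native Opus encoder is flagged experimental

if (avcodec_open2(contextEncoder, codec, NULL) < 0)
{
    // handle error
}
// after opening, contextEncoder->frame_size tells how many samples each AVFrame must carry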



I have two instances of a ResamplerAudio class that does the resampling of audio samples and owns a SwrContext member. I use the first instance of ResamplerAudio to resample the raw PCM input audio before encoding, and the second one to resample the decoded audio back to the same sample format and sample rate as the original raw input.


The ResamplerAudio class has a function that initializes its SwrContext member like this:



void ResamplerAudio::init(AVCodecContext *codecContext, int inSampleRate, int outSampleRate, AVSampleFormat inSampleFmt, AVSampleFormat outSampleFmt)
{
    swrContext = swr_alloc();
    if (!swrContext)
    {
        LOGE(TAG, "[init] Couldn't allocate swr context");
        return;
    }

    // channel layout and channel count come from the codec context on both sides
    av_opt_set_int(swrContext, "in_channel_layout", (int64_t) codecContext->channel_layout, 0);
    av_opt_set_int(swrContext, "out_channel_layout", (int64_t) codecContext->channel_layout, 0);

    av_opt_set_int(swrContext, "in_channel_count", codecContext->channels, 0);
    av_opt_set_int(swrContext, "out_channel_count", codecContext->channels, 0);

    // only the sample rate and sample format differ between input and output
    av_opt_set_int(swrContext, "in_sample_rate", inSampleRate, 0);
    av_opt_set_int(swrContext, "out_sample_rate", outSampleRate, 0);

    av_opt_set_sample_fmt(swrContext, "in_sample_fmt", inSampleFmt, 0);
    av_opt_set_sample_fmt(swrContext, "out_sample_fmt", outSampleFmt, 0);

    int ret = swr_init(swrContext);
    if (ret < 0)
    {
        LOGE(TAG, "[init] swr_init error: %s", av_err2str(ret));
        return;
    }

    LOGD(TAG, "[init] success codecContext->channel_layout: %d; inSampleRate: %d; outSampleRate: %d; inSampleFmt: %d; outSampleFmt: %d", (int) codecContext->channel_layout, inSampleRate, outSampleRate, inSampleFmt, outSampleFmt);
}
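As a side note (equivalent behaviour, just a more compact sketch, not a fix for the problem): the same context could be built with swr_alloc_set_opts instead of the individual av_opt_set_* calls:

swrContext = swr_alloc_set_opts(NULL,
                                (int64_t) codecContext->channel_layout, outSampleFmt, outSampleRate,  // output side
                                (int64_t) codecContext->channel_layout, inSampleFmt, inSampleRate,    // input side
                                0, NULL);
if (!swrContext || swr_init(swrContext) < 0)
{
    // handle error
}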




And I call the ResamplerAudio::init function on the first instance of ResamplerAudio (this instance resamples the raw PCM input audio before encoding, and I call it resamplerEncoder) with the following args:


resamplerEncoder->init(contextEncoder, 8000, 48000, AV_SAMPLE_FMT_S16, AV_SAMPLE_FMT_FLTP);




The second instance of ResamplerAudio (this one resamples the audio decoded from Opus, and I call it resamplerDecoder) is initialized with the following args:


resamplerDecoder->init(contextDecoder, 48000, 8000, AV_SAMPLE_FMT_FLTP, AV_SAMPLE_FMT_S16);




The function of ResamplerAudio that does the resampling looks like this:


std::vector<uint8_t> ResamplerAudio::convert(uint8_t **inData, int inSamplesCount, int outChannels, int outFormat)
{
    std::vector<uint8_t> result;
    uint8_t *dstData = NULL;

    // upper bound of output samples for this input, given the context's rate conversion
    const int dstNbSamples = swr_get_out_samples(swrContext, inSamplesCount);
    av_samples_alloc(&dstData, NULL, outChannels, dstNbSamples, AVSampleFormat(outFormat), 1);

    int resampledSize = swr_convert(swrContext, &dstData, dstNbSamples, (const uint8_t **) inData, inSamplesCount);
    int dstBufSize = av_samples_get_buffer_size(NULL, outChannels, resampledSize, AVSampleFormat(outFormat), 1);

    if (dstBufSize <= 0)
    {
        av_freep(&dstData);
        return result;
    }

    std::copy(&dstData[0], &dstData[dstBufSize], std::back_inserter(result));
    av_freep(&dstData);

    return result;
}
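One general libswresample detail worth keeping in mind with this helper (background information, not part of the question's code): when the input and output rates differ, swr_convert can keep a few samples buffered inside the context, and swr_get_out_samples only gives an upper bound. The remainder can be drained by calling swr_convert with a NULL input, roughly like this:

// hypothetical flush call, reusing a dstData/dstNbSamples buffer like the one above
int remaining = swr_convert(swrContext, &dstData, dstNbSamples, NULL, 0);
// 'remaining' is the number of leftover samples per channel that were still buffered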




And I call the ResamplerAudio::convert function before encoding with the following args:


// data - an array of raw PCM audio
// dataLength - the length of the data array in bytes
// getSamplesCount() - function that calculates the samples count
// frameEncode - AVFrame used for encoding audio
std::vector<uint8_t> resampledData = resamplerEncoder->convert(&data, getSamplesCount(dataLength, frameEncode->channels, AV_SAMPLE_FMT_S16), frameEncode->channels, frameEncode->format);




The getSamplesCount() function looks like this:


int getSamplesCount(int bytesCount, int channels, AVSampleFormat format)
{
    return bytesCount / av_get_bytes_per_sample(format) / channels;
}
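As a worked example (assuming the input arrives in 20 ms chunks, which the question does not state explicitly): 20 ms of 8000 Hz, 16-bit mono PCM is 320 bytes, so getSamplesCount(320, 1, AV_SAMPLE_FMT_S16) = 320 / 2 / 1 = 160 samples, which becomes 960 samples after resampling to 48000 Hz — the same figure that shows up as nb_samples after decoding later on.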




After that I fill my frameEncode with the resampled samples:


// resampledDataLength - the number of bytes in resampledData (i.e. resampledData.size())
memcpy(&frameEncode->data[0][0], &resampledData[0], sizeof(uint8_t) * resampledDataLength);
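The allocation of frameEncode is not shown in the question; for a mono planar-float frame the usual pattern (a sketch with assumed values, not the author's actual code) is:

frameEncode = av_frame_alloc();
frameEncode->format         = AV_SAMPLE_FMT_FLTP;
frameEncode->channel_layout = AV_CH_LAYOUT_MONO;
frameEncode->sample_rate    = 48000;
frameEncode->nb_samples     = contextEncoder->frame_size; // 960 samples for the default 20 ms Opus frame
if (av_frame_get_buffer(frameEncode, 0) < 0)
{
    // handle allocation error
}
// mono FLTP has a single plane, so data[0] is the destination used by the memcpy above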




And I pass frameEncode to the encoder by calling encodeFrame(resampledDataLength):


void encodeFrame(int dataLength)
{
    /* send the frame for encoding */
    int ret = avcodec_send_frame(contextEncoder, frameEncode);
    if (ret < 0)
    {
        LOGE(TAG, "[encodeFrame] avcodec_send_frame error: %s", av_err2str(ret));
        return;
    }

    /* read all the available output packets (in general there may be any number of them) */
    while (ret >= 0)
    {
        ret = avcodec_receive_packet(contextEncoder, packetEncode);
        if (ret < 0 && ret != AVERROR(EAGAIN)) LOGE(TAG, "[encodeFrame] error in avcodec_receive_packet: %s", av_err2str(ret));
        if (ret < 0) break;

        // encodedData - std::vector<uint8_t> that stores encoded data
        std::copy(&packetEncode->data[0], &packetEncode->data[dataLength], std::back_inserter(encodedData));
        av_packet_unref(packetEncode);
    }
}
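One detail not shown in the question: at the end of the stream the encoder should be drained by sending a NULL frame once. A sketch using the same contextEncoder/packetEncode/encodedData names (everything else assumed):

// flush the encoder at end of stream
avcodec_send_frame(contextEncoder, NULL);
while (avcodec_receive_packet(contextEncoder, packetEncode) >= 0)
{
    std::copy(&packetEncode->data[0], &packetEncode->data[packetEncode->size], std::back_inserter(encodedData));
    av_packet_unref(packetEncode);
}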




Then I decode my encoded samples and resample them to get back the source sample format and sample rate, so I call the ResamplerAudio::convert function on resamplerDecoder with the following args:


// frameDecode - AVFrame that holds the decoded audio
std::vector<uint8_t> resampledData = resamplerDecoder->convert(frameDecode->data, frameDecode->nb_samples, frameDecode->channels, AV_SAMPLE_FMT_S16);




And the resulting sound is choppy; I also noticed that the decoded array is bigger than the source array of raw PCM audio.



Any ideas what I'm doing wrong?



UPD 18.05.2020



I tested my resampling logic on its own, without any encoding or decoding routines. First I converted the sample rate of the input sound from 8000 Hz to 48000 Hz, then took the resampled samples from that step and converted them back from 48000 Hz to 8000 Hz, and the resulting sound was perfect and clean. I did the same steps converting only the sample format, from AV_SAMPLE_FMT_S16 to AV_SAMPLE_FMT_FLTP and back, and again the result was perfect and clean, and I got the same result when I converted both the sample rate and the sample format.
So I assume the distorted and choppy sound comes from my encoding or decoding routine, most likely the decoding routine, because after decoding I ALWAYS get an AVFrame with 960 nb_samples regardless of the size of the input sound.
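For reference, that 960 figure is a property of the codec rather than a symptom by itself: the FFmpeg Opus encoder works on fixed-duration frames, 20 ms by default, which is exactly 960 samples at 48000 Hz, so the decoder handing back AVFrames with nb_samples == 960 is expected regardless of the input size. The required frame size can be read back after opening the encoder, e.g.:

// frame_size is filled in by avcodec_open2 for audio encoders with fixed frame sizes
LOGD(TAG, "[init] encoder frame_size: %d", contextEncoder->frame_size); // expected: 960 (20 ms at 48 kHz)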



My decoding routine looks like this:



std::vector<uint8_t> decode(uint8_t *data, unsigned int dataLength)
{
    decodedData.clear();

    int dataSize = dataLength;

    while (dataSize > 0)
    {
        if (!frameDecode)
        {
            frameDecode = av_frame_alloc();
            if (!frameDecode)
            {
                LOGE(TAG, "[decode] Couldn't allocate the frame");
                return EMPTY_DATA;
            }
        }

        // split the incoming byte stream into packets the decoder can consume
        ret = av_parser_parse2(parser, contextDecoder, &packetDecode->data, &packetDecode->size, &data[0], dataSize, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
        if (ret < 0)
        {
            LOGE(TAG, "[decode] av_parser_parse2 error: %s", av_err2str(ret));
            return EMPTY_DATA;
        }

        data += ret;
        dataSize -= ret;

        doDecode();
    }

    return decodedData;
}

void doDecode()
{
    if (packetDecode->size)
    {
        /* send the packet with the compressed data to the decoder */
        int ret = avcodec_send_packet(contextDecoder, packetDecode);
        if (ret < 0) LOGE(TAG, "[decode] avcodec_send_packet error: %s", av_err2str(ret));

        /* read all the output frames (in general there may be any number of them) */
        while (ret >= 0)
        {
            ret = avcodec_receive_frame(contextDecoder, frameDecode);
            if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF) LOGE(TAG, "[decode] avcodec_receive_frame error: %s", av_err2str(ret));
            if (ret < 0) break;

            // resample each decoded frame back to 8000 Hz S16 and append it to decodedData
            std::vector<uint8_t> resampledData = resamplerDecoder->convert(frameDecode->data, frameDecode->nb_samples, frameDecode->channels, AV_SAMPLE_FMT_S16);
            if (!resampledData.size()) continue;
            std::copy(&resampledData.data()[0], &resampledData.data()[resampledData.size()], std::back_inserter(decodedData));
        }
    }
}




UPD 30.05.2020



I decided to drop FFmpeg from my project and use libopus 1.3.1 directly instead, so I made a wrapper around it and it works fine.
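For completeness, a minimal sketch of the libopus 1.3.1 API the wrapper would be built around (this is not the author's wrapper; names and values here are illustrative). One convenient property is that libopus accepts 8000 Hz 16-bit PCM directly, so no resampling step is needed:

#include <opus/opus.h>

int err = 0;
OpusEncoder *enc = opus_encoder_create(8000, 1, OPUS_APPLICATION_VOIP, &err);
OpusDecoder *dec = opus_decoder_create(8000, 1, &err);

const int frameSize = 160;              // 20 ms at 8000 Hz
opus_int16 pcmIn[160] = {0};            // one frame of input PCM (filled by the caller)
opus_int16 pcmOut[160];
unsigned char packet[4000];             // 4000 bytes is the recommended packet buffer size

// encode one 20 ms frame, then decode it back
opus_int32 packetLen = opus_encode(enc, pcmIn, frameSize, packet, sizeof(packet));
int decodedSamples = opus_decode(dec, packet, packetLen, pcmOut, frameSize, 0);

opus_encoder_destroy(enc);
opus_decoder_destroy(dec);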