
Other articles (59)
-
Participating in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. Just sign up to the translators' mailing list to ask for more information.
Currently MediaSPIP is only available in French and (...)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
-
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for indexing by search engines, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
On other sites (12333)
-
How to speed up sound and video and modify the pitch?
3 August 2020, by DiREKT
I want to increase the speed of both the sound and the video, while changing the pitch of the audio alone.


I have these two ffmpeg commands, but I don't know how to make them work together.


ffmpeg -i input.mkv -filter_complex "[0:v]setpts=0.94*PTS[v];[0:a]atempo=1.06[a]" -map "[v]" -map "[a]" output.mkv



and


ffmpeg -i input.mkv -filter:a "atempo=1.06,asetrate=44100*1.25" output.mkv
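
One way to combine them, as a minimal sketch rather than a tested answer, is to run both audio filters in one chain inside a single filter_complex. This assumes the input audio is 44100 Hz, and note that asetrate changes speed as well as pitch, so the atempo factor would likely need adjusting to keep audio and video in sync:

ffmpeg -i input.mkv -filter_complex "[0:v]setpts=0.94*PTS[v];[0:a]atempo=1.06,asetrate=44100*1.25,aresample=44100[a]" -map "[v]" -map "[a]" output.mkv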



-
Java IO Streaming Large Files Video/Data/Sound [on hold]
2 March 2015, by James Relic
I have a series of questions listed below regarding java.io streaming and sockets.
-
What is a video streaming server? How does it differ from a standard web server?
-
Remember Napster/Morpheus etc.? They were P2P programs; did they allow users to stream data to each other? Is there a difference between streaming and downloading (on the client's end)?
-
How would you go about writing a generic program in Java that streams anything to the client: Word docs, MP3 files, video files? Would you use servlets for this purpose?
-
If all you are doing is sending files (video/sound/docs/text etc.) from one computer to another, would you need to use specialist APIs like FFMPEG-Java or Red5?
-
If you are sending video or sound as a file, presumably you don't need to worry about encoding or decoding?
-
Do I need to worry about RTSP if I'm streaming videos as files, rather than wanting them to play live on the client end?
I understand my questions sound very untechnical and basic, but I'm a little confused by this whole streaming topic and want to know the best way to stream large files of all types using the Java EE/Spring platform.
-
-
FFmpeg Opus choppy sound UPDATED DESCRIPTION
2 June 2020, by easy_breezy
I'm using FFmpeg to try to encode and decode raw PCM sound to and from Opus, using the built-in FFmpeg "opus" codec. My input samples are raw PCM, 8000 Hz, 16-bit mono, in AV_SAMPLE_FMT_S16 format. Since Opus requires the AV_SAMPLE_FMT_FLTP sample format and a 48000 Hz sample rate only, I resample my samples before encoding them.



I have two instances of a ResamplerAudio class that does the work of resampling audio samples and holds a SwrContext member. I use the first instance of ResamplerAudio for resampling the raw PCM input audio before encoding, and the second for resampling the decoded audio so that its format and sample rate match the source values of the input raw audio.

The ResamplerAudio class has a function that initializes its SwrContext member like this:



void ResamplerAudio::init(AVCodecContext *codecContext, int inSampleRate, int outSampleRate, AVSampleFormat inSampleFmt, AVSampleFormat outSampleFmt)
{
    swrContext = swr_alloc();
    if (!swrContext)
    {
        LOGE(TAG, "[init] Couldn't allocate swr context");
        return;
    }

    av_opt_set_int(swrContext, "in_channel_layout", (int64_t) codecContext->channel_layout, 0);
    av_opt_set_int(swrContext, "out_channel_layout", (int64_t) codecContext->channel_layout, 0);

    av_opt_set_int(swrContext, "in_channel_count", codecContext->channels, 0);
    av_opt_set_int(swrContext, "out_channel_count", codecContext->channels, 0);

    av_opt_set_int(swrContext, "in_sample_rate", inSampleRate, 0);
    av_opt_set_int(swrContext, "out_sample_rate", outSampleRate, 0);

    av_opt_set_sample_fmt(swrContext, "in_sample_fmt", inSampleFmt, 0);
    av_opt_set_sample_fmt(swrContext, "out_sample_fmt", outSampleFmt, 0);

    int ret = swr_init(swrContext);
    if (ret < 0)
    {
        LOGE(TAG, "[init] swr_init error: %s", av_err2str(ret));
        return;
    }

    LOGD(TAG, "[init] success codecContext->channel_layout: %d; inSampleRate: %d; outSampleRate: %d; inSampleFmt: %d; outSampleFmt: %d", (int) codecContext->channel_layout, inSampleRate, outSampleRate, inSampleFmt, outSampleFmt);
}
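
As a side note, the same setup can be expressed in one call with swr_alloc_set_opts; a sketch under the same parameters (an alternative, not what the code above uses):

// Equivalent setup in a single call; swr_init() is still required afterwards
swrContext = swr_alloc_set_opts(NULL,
        (int64_t) codecContext->channel_layout, outSampleFmt, outSampleRate,  // output side
        (int64_t) codecContext->channel_layout, inSampleFmt, inSampleRate,    // input side
        0, NULL);
if (!swrContext || swr_init(swrContext) < 0)
{
    LOGE(TAG, "[init] Couldn't set up swr context");
}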




And I call the ResamplerAudio::init function for the first instance of ResamplerAudio (this instance resamples the raw PCM input audio before encoding; I called it resamplerEncoder) with the following args:


resamplerEncoder->init(contextEncoder, 8000, 48000, AV_SAMPLE_FMT_S16, AV_SAMPLE_FMT_FLTP);




The second instance of ResamplerAudio (this instance resamples the audio decoded from Opus; I called it resamplerDecoder) I init with the following args:


resamplerDecoder->init(contextDecoder, 48000, 8000, AV_SAMPLE_FMT_FLTP, AV_SAMPLE_FMT_S16);




The ResamplerAudio function that does the resampling looks like this:


std::vector<uint8_t> ResamplerAudio::convert(uint8_t **inData, int inSamplesCount, int outChannels, int outFormat)
{
    std::vector<uint8_t> result;
    uint8_t *dstData = NULL;
    // Upper bound on the number of output samples for this input
    const int dstNbSamples = swr_get_out_samples(swrContext, inSamplesCount);
    av_samples_alloc(&dstData, NULL, outChannels, dstNbSamples, AVSampleFormat(outFormat), 1);
    int resampledSize = swr_convert(swrContext, &dstData, dstNbSamples, (const uint8_t **) inData, inSamplesCount);
    int dstBufSize = av_samples_get_buffer_size(NULL, outChannels, resampledSize, AVSampleFormat(outFormat), 1);

    if (dstBufSize <= 0)
    {
        av_freep(&dstData); // avoid leaking the sample buffer on the early exit
        return result;
    }

    std::copy(&dstData[0], &dstData[dstBufSize], std::back_inserter(result));
    av_freep(&dstData);

    return result;
}




And I call the ResamplerAudio::convert function before encoding with the following args:


// data - an array of raw PCM audio
// dataLength - the length of the data array
// getSamplesCount() - function that calculates the samples count
// frameEncode - AVFrame used for encoding audio
std::vector<uint8_t> resampledData = resamplerEncoder->convert(&data, getSamplesCount(dataLength, frameEncode->channels, AV_SAMPLE_FMT_S16), frameEncode->channels, frameEncode->format);




The getSamplesCount() function looks like this:


int getSamplesCount(int bytesCount, int channels, AVSampleFormat format)
{
    return bytesCount / av_get_bytes_per_sample(format) / channels;
}
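
For example, 320 bytes of mono AV_SAMPLE_FMT_S16 input (2 bytes per sample, 1 channel) gives 320 / 2 / 1 = 160 samples.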




After that I fill my frameEncode with the resampled samples:


memcpy(&frameEncode->data[0][0], &resampledData[0], sizeof(uint8_t) * resampledDataLength);
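
Note that AV_SAMPLE_FMT_FLTP is a planar format, so with more than one channel each plane in frameEncode->data would have to be filled separately; with mono audio there is only one plane, and a single copy into data[0] works.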




And pass frameEncode to encoding by calling encodeFrame(resampledDataLength):


void encodeFrame(int dataLength)
{
    /* send the frame for encoding */
    int ret = avcodec_send_frame(contextEncoder, frameEncode);
    if (ret < 0)
    {
        LOGE(TAG, "[encodeFrame] avcodec_send_frame error: %s", av_err2str(ret));
        return;
    }

    /* read all the available output packets (in general there may be any number of them) */
    while (ret >= 0)
    {
        ret = avcodec_receive_packet(contextEncoder, packetEncode);
        if (ret < 0 && ret != AVERROR(EAGAIN)) LOGE(TAG, "[encodeFrame] error in avcodec_receive_packet: %s", av_err2str(ret));
        if (ret < 0) break;

        // encodedData - std::vector that stores encoded data
        // note: this copies dataLength bytes (the resampled input length passed in above),
        // not packetEncode->size, the actual size of the encoded packet
        std::copy(&packetEncode->data[0], &packetEncode->data[dataLength], std::back_inserter(encodedData));
        av_packet_unref(packetEncode);
    }
}
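
One step not shown in the question is draining the encoder at end of stream, since FFmpeg codecs may buffer frames internally. A minimal sketch, reusing the contextEncoder, packetEncode and encodedData names from above:

void flushEncoder()
{
    /* entering draining mode: NULL instead of a frame flushes the encoder */
    int ret = avcodec_send_frame(contextEncoder, NULL);
    while (ret >= 0)
    {
        ret = avcodec_receive_packet(contextEncoder, packetEncode);
        if (ret < 0) break; // AVERROR_EOF once the encoder is fully drained

        // packetEncode->size is the actual size of this encoded packet
        std::copy(&packetEncode->data[0], &packetEncode->data[packetEncode->size], std::back_inserter(encodedData));
        av_packet_unref(packetEncode);
    }
}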




Then I decode my encoded samples and resample them to get them back in the source sample format and sample rate, so I call the ResamplerAudio::convert function for resamplerDecoder with the following args:


// frameDecode - AVFrame that holds the decoded audio
std::vector<uint8_t> resampledData = resamplerDecoder->convert(frameDecode->data, frameDecode->nb_samples, frameDecode->channels, AV_SAMPLE_FMT_S16);




The resulting sound is choppy, and I also noticed that the decoded array is bigger than the source array of raw PCM audio.



Any ideas what I'm doing wrong?



UPD 18.05.2020



I tested my resampling logic by resampling raw PCM sound without any encoding or decoding routines. First I converted the sample rate of the input sound from 8000 Hz to 48000 Hz, then took the resampled samples from the step above and converted their sample rate from 48000 Hz back to 8000 Hz, and the resulting sound was perfect and clean. I did the same steps converting not the sample rate but the sample format, from AV_SAMPLE_FMT_S16 to AV_SAMPLE_FMT_FLTP and vice versa, and again the result was perfect and clean; I got the same result when I converted both the sample rate and the sample format.
So I assume the distorted and choppy sound comes from my encoding or decoding routine, most likely the decoding routine, because after decoding I ALWAYS get an AVFrame with 960 nb_samples regardless of the size of the input sound (960 samples at 48000 Hz is 20 ms, the default Opus frame duration).



My decoding routine looks like this:



std::vector<uint8_t> decode(uint8_t *data, unsigned int dataLength)
{
    decodedData.clear();

    int dataSize = dataLength;

    while (dataSize > 0)
    {
        if (!frameDecode)
        {
            frameDecode = av_frame_alloc();
            if (!frameDecode)
            {
                LOGE(TAG, "[decode] Couldn't allocate the frame");
                return EMPTY_DATA;
            }
        }

        // split the input buffer into packets the decoder can consume
        int ret = av_parser_parse2(parser, contextDecoder, &packetDecode->data, &packetDecode->size, &data[0], dataSize, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
        if (ret < 0)
        {
            LOGE(TAG, "[decode] av_parser_parse2 error: %s", av_err2str(ret));
            return EMPTY_DATA;
        }

        data += ret;
        dataSize -= ret;

        doDecode();
    }
    return decodedData;
}

void doDecode()
{
    if (packetDecode->size)
    {
        /* send the packet with the compressed data to the decoder */
        int ret = avcodec_send_packet(contextDecoder, packetDecode);
        if (ret < 0) LOGE(TAG, "[decode] avcodec_send_packet error: %s", av_err2str(ret));

        /* read all the output frames (in general there may be any number of them) */
        while (ret >= 0)
        {
            ret = avcodec_receive_frame(contextDecoder, frameDecode);
            if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF) LOGE(TAG, "[decode] avcodec_receive_frame error: %s", av_err2str(ret));
            if (ret < 0) break;

            // resample the decoded frame back to S16 at the source rate
            std::vector<uint8_t> resampledData = resamplerDecoder->convert(frameDecode->data, frameDecode->nb_samples, frameDecode->channels, AV_SAMPLE_FMT_S16);
            if (!resampledData.size()) continue;
            std::copy(resampledData.begin(), resampledData.end(), std::back_inserter(decodedData));
        }
    }
}
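
Similarly, swr_convert() can hold back samples internally when converting between rates, so a drain pass at end of stream may also be worth trying. A minimal sketch, assuming the same swrContext, output parameters and decodedData buffer as above:

/* NULL input with a zero count asks libswresample for its buffered remainder */
uint8_t *tail = NULL;
av_samples_alloc(&tail, NULL, outChannels, 4096, AVSampleFormat(outFormat), 1); // 4096 samples is an arbitrary upper bound
int tailSamples = swr_convert(swrContext, &tail, 4096, NULL, 0);
if (tailSamples > 0)
{
    int tailBytes = av_samples_get_buffer_size(NULL, outChannels, tailSamples, AVSampleFormat(outFormat), 1);
    std::copy(&tail[0], &tail[tailBytes], std::back_inserter(decodedData));
}
av_freep(&tail);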




UPD 30.05.2020



I decided to drop FFmpeg from my project and use libopus 1.3.1 instead, so I made a wrapper around it, and it works fine.