
Other articles (27)
-
Publishing on MédiaSpip
13 June 2013 — Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP site to find out. -
Emballe Médias: uploading documents made simple
29 October 2010 — The emballe médias plugin was developed mainly for the MediaSPIP distribution, but it is also used in other related projects, such as géodiversité. Required and compatible plugins
To work, this plugin requires the following plugins to be installed: CFG, Saisies, SPIP Bonux, Diogène, swfupload, jqueryui.
Other plugins can be used alongside it to extend its capabilities: Ancres douces, Légendes, photo_infos, spipmotion (...) -
Encoding and processing into web-friendly formats
13 April 2011 — MediaSPIP automatically converts uploaded files into web-compatible formats.
Video files are encoded as Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed to extract the data needed for search-engine indexing, and is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
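The conversions described above can be reproduced with the ffmpeg command-line tool. A minimal sketch — the input filenames and codec choices here are assumptions for illustration, not MediaSPIP's exact encoding settings:

```shell
# Hypothetical input files; MediaSPIP's actual encoding parameters may differ.
# MP4 (H.264 + AAC), playable via Flash and most HTML5 browsers:
ffmpeg -i input.avi -c:v libx264 -c:a aac output.mp4
# WebM (VP8 + Vorbis), for HTML5:
ffmpeg -i input.avi -c:v libvpx -c:a libvorbis output.webm
# Ogv (Theora + Vorbis), for HTML5:
ffmpeg -i input.avi -c:v libtheora -c:a libvorbis output.ogv
# Audio-only targets:
ffmpeg -i input.wav -c:a libmp3lame output.mp3
ffmpeg -i input.wav -c:a libvorbis output.ogg
```

Each command reads one source file and re-encodes it to the named container and codecs; keeping the original upload alongside these derived files is what allows later re-encoding.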
On other sites (5319)
-
How can I place a still image before the first frame of a video?
20 April 2022, by Konstantin
When I encode videos with FFmpeg, I would like to put a JPG image before the very first video frame, because when I embed the video in a webpage with the HTML5 "video" tag, it shows the very first picture as a splash image. Alternatively, I want to encode an image into a one-frame video and concatenate it with my encoded video. I don't want to use the "poster" attribute of the HTML5 "video" element.
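The second approach the question describes — encoding the image into a one-frame video and concatenating — can be sketched with ffmpeg as follows. The filenames (splash.jpg, input.mp4, output.mp4), frame rate, and resolution are assumptions; they must match the main video for the concat filter to work:

```shell
# Step 1 (hypothetical parameters): turn the still image into a one-frame video.
# At 25 fps, a duration of 0.04 s yields exactly one frame; scale/pix_fmt must
# match the main video.
ffmpeg -loop 1 -framerate 25 -t 0.04 -i splash.jpg \
       -vf "scale=1280:720" -c:v libx264 -pix_fmt yuv420p splash.mp4
# Step 2: concatenate the splash video in front of the main video, re-encoding.
ffmpeg -i splash.mp4 -i input.mp4 \
       -filter_complex "[0:v][1:v]concat=n=2:v=1:a=0[v]" \
       -map "[v]" -c:v libx264 output.mp4
```

Note that this sketch drops audio (a=0); if the main video has an audio track, the splash segment needs a matching (e.g. silent) audio stream and the filter becomes concat=n=2:v=1:a=1.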


-
Use an AudioUnit of type kAudioUnitType_FormatConverter to resample LinearPCM data from FFmpeg
20 June 2016, by Will
I'm trying to play audio using AudioUnit on iOS. Because the default sample rate is 44100 Hz and I have several audio streams with different sample rates, such as 32000 or 48000 Hz, I tried to set the preferredSampleRate and preferredIOBufferDuration of AVAudioSession according to the audio stream.
But I found it difficult to choose a proper preferredIOBufferDuration for a given preferredSampleRate; it seems that preferredIOBufferDuration must be set differently depending on the preferredSampleRate, or there may be noise.
So now I'm trying to resample every kind of audio stream to the default hardware sample rate (44100 Hz) using an AudioUnit of kAudioUnitType_FormatConverter.
I use an AUGraph with a FormatConverter unit and a RemoteIO unit to do this. It seems I set kAudioUnitProperty_SampleRate for kAudioUnitScope_Output successfully (the kAudioUnitProperty_SampleRate property read back is indeed 44100). But there is still noise when the input audio stream is not 44100 Hz, while it sounds normal when the input stream is originally 44100 Hz. Everything behaves the same as if I hadn't used the FormatConverter and had streamed data directly to the RemoteIO unit (44100 Hz is OK, the others are not).
I wonder where my problem is. Does it not do the resampling at all, or is the output data wrong? Does anyone have experience with the FormatConverter AudioUnit? Any help would be appreciated.
My AUGraph:
AUGraphConnectNodeInput(_processingGraph, converterNode, 0, remoteIONode, 0);
Converter unit (the input format is AV_SAMPLE_FMT_FLTP from FFmpeg):
UInt32 bytesPerFrame = bitsPerChannel / 8;
UInt32 bytesPerPacket = bytesPerFrame * 1;
AudioStreamBasicDescription streamDescription = {
.mSampleRate = spec->sample_rate,
.mFormatID = kAudioFormatLinearPCM,
.mFormatFlags = formatFlags,
.mChannelsPerFrame = spec->channels,
.mFramesPerPacket = 1,
.mBitsPerChannel = bitsPerChannel,
.mBytesPerFrame = bytesPerFrame,
.mBytesPerPacket = bytesPerPacket
};
status = AudioUnitSetProperty(converterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &streamDescription, sizeof(streamDescription));
if (status != noErr) {
NSLog(@"AudioUnit: failed to set stream format (%d)", (int)status);
}
/* input callback */
AURenderCallbackStruct renderCallback;
renderCallback.inputProc = performRender;
renderCallback.inputProcRefCon = (__bridge void *)self;
AUGraphSetNodeInputCallback(_processingGraph, converterNode, 0, &renderCallback);
Converter unit output sample rate:
Float64 sampleRate = 44100.0;
AudioUnitSetProperty(converterUnit, kAudioUnitProperty_SampleRate, kAudioUnitScope_Output, 0, &sampleRate, sizeof(sampleRate));
I also tried:
AudioStreamBasicDescription outStreamDescription = {
.mSampleRate = 44100.0,
.mFormatID = kAudioFormatLinearPCM,
.mFormatFlags = formatFlags,
.mChannelsPerFrame = spec->channels,
.mFramesPerPacket = 1,
.mBitsPerChannel = bitsPerChannel,
.mBytesPerFrame = bytesPerFrame,
.mBytesPerPacket = bytesPerPacket
};
status = AudioUnitSetProperty(converterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &outStreamDescription, sizeof(outStreamDescription));
but it seemed to make no difference.
-
Exceeded GA’s 10M hits data limit, now what?
21 June 2019, by Joselyn Khor