-
FFMPEG - Apple 720p30 Surround MP4 H.264 AAC stereo; Dolby Digital
10 December 2020, by Dean Van Greunen
I would like the FFmpeg CLI settings which will match this (this is a HandBrake preset).




Here is the preset file; I don't understand how these settings map to ffmpeg options.


{
 "AlignAVStart": false,
 "AudioCopyMask": [
 "copy:aac",
 "copy:ac3",
 "copy:dtshd",
 "copy:dts",
 "copy:mp3",
 "copy:truehd",
 "copy:flac",
 "copy:eac3"
 ],
 "AudioEncoderFallback": "ac3",
 "AudioLanguageList": [],
 "AudioList": [
 {
 "AudioBitrate": 160,
 "AudioCompressionLevel": 0.0,
 "AudioDitherMethod": null,
 "AudioEncoder": "av_aac",
 "AudioMixdown": "stereo",
 "AudioNormalizeMixLevel": false,
 "AudioSamplerate": "auto",
 "AudioTrackQualityEnable": false,
 "AudioTrackQuality": -1.0,
 "AudioTrackGainSlider": 0.0,
 "AudioTrackDRCSlider": 0.0
 },
 {
 "AudioBitrate": 640,
 "AudioCompressionLevel": 0.0,
 "AudioDitherMethod": null,
 "AudioEncoder": "copy:ac3",
 "AudioMixdown": "none",
 "AudioNormalizeMixLevel": false,
 "AudioSamplerate": "auto",
 "AudioTrackQualityEnable": false,
 "AudioTrackQuality": -1.0,
 "AudioTrackGainSlider": 0.0,
 "AudioTrackDRCSlider": 0.0
 }
 ],
 "AudioSecondaryEncoderMode": true,
 "AudioTrackSelectionBehavior": "first",
 "ChapterMarkers": true,
 "ChildrenArray": [],
 "Default": false,
 "FileFormat": "av_mp4",
 "Folder": false,
 "FolderOpen": false,
 "Mp4HttpOptimize": false,
 "Mp4iPodCompatible": false,
 "PictureAutoCrop": true,
 "PictureBottomCrop": 0,
 "PictureLeftCrop": 0,
 "PictureRightCrop": 0,
 "PictureTopCrop": 0,
 "PictureDARWidth": 0,
 "PictureDeblockPreset": "off",
 "PictureDeblockTune": "medium",
 "PictureDeblockCustom": "strength=strong:thresh=20:blocksize=8",
 "PictureDeinterlaceFilter": "decomb",
 "PictureCombDetectPreset": "default",
 "PictureCombDetectCustom": "",
 "PictureDeinterlacePreset": "default",
 "PictureDeinterlaceCustom": "",
 "PictureDenoiseCustom": "",
 "PictureDenoiseFilter": "off",
 "PictureDenoisePreset": "light",
 "PictureDenoiseTune": "none",
 "PictureSharpenCustom": "",
 "PictureSharpenFilter": "off",
 "PictureSharpenPreset": "medium",
 "PictureSharpenTune": "none",
 "PictureDetelecine": "off",
 "PictureDetelecineCustom": "",
 "PictureItuPAR": false,
 "PictureKeepRatio": true,
 "PictureLooseCrop": false,
 "PictureModulus": 2,
 "PicturePAR": "auto",
 "PicturePARWidth": 0,
 "PicturePARHeight": 0,
 "PictureRotate": null,
 "PictureWidth": 1280,
 "PictureHeight": 720,
 "PictureForceHeight": 0,
 "PictureForceWidth": 0,
 "PresetDescription": "H.264 video (up to 720p30), AAC stereo audio, and Dolby Digital (AC-3) surround audio, in an MP4 container. Compatible with Apple iPhone 4, 4S, and later; iPod touch 4th, 5th Generation and later; iPad 1st Generation, iPad 2, and later; Apple TV 2nd Generation and later.",
 "PresetName": "Apple 720p30 Surround",
 "Type": 0,
 "UsesPictureFilters": true,
 "UsesPictureSettings": 1,
 "SubtitleAddCC": false,
 "SubtitleAddForeignAudioSearch": true,
 "SubtitleAddForeignAudioSubtitle": false,
 "SubtitleBurnBehavior": "foreign",
 "SubtitleBurnBDSub": false,
 "SubtitleBurnDVDSub": false,
 "SubtitleLanguageList": [],
 "SubtitleTrackSelectionBehavior": "none",
 "VideoAvgBitrate": 3000,
 "VideoColorMatrixCode": 0,
 "VideoEncoder": "x264",
 "VideoFramerate": "30",
 "VideoFramerateMode": "pfr",
 "VideoGrayScale": false,
 "VideoScaler": "swscale",
 "VideoPreset": "medium",
 "VideoTune": "",
 "VideoProfile": "high",
 "VideoLevel": "3.1",
 "VideoOptionExtra": "",
 "VideoQualityType": 2,
 "VideoQualitySlider": 21.0,
 "VideoQSVDecode": false,
 "VideoQSVAsyncDepth": 4,
 "VideoTwoPass": true,
 "VideoTurboTwoPass": false,
 "x264Option": null,
 "x264UseAdvancedOptions": false
 },
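For reference, a rough ffmpeg equivalent of this preset might look like the sketch below. This is an untested approximation, not a verified 1:1 mapping: `input.mkv` and `output.mp4` are placeholder names, HandBrake's "peak framerate" (pfr) mode has no exact ffmpeg flag (`-r 30` forces constant 30 fps instead), and the second audio track is mapped from the same source stream with `copy` on the assumption that it is already AC-3 (otherwise swap in `-c:a:1 ac3 -b:a:1 640k`, the preset's fallback encoder).

```shell
# Hedged sketch of the HandBrake "Apple 720p30 Surround" preset as an
# ffmpeg invocation; file names are placeholders.
ffmpeg -i input.mkv \
  -map 0:v:0 -map 0:a:0 -map 0:a:0 \
  -c:v libx264 -preset medium -profile:v high -level:v 3.1 -crf 21 \
  -vf "scale=w=1280:h=720:force_original_aspect_ratio=decrease" \
  -r 30 \
  -c:a:0 aac -b:a:0 160k -ac:a:0 2 \
  -c:a:1 copy \
  output.mp4
```

Note that `VideoQualityType: 2` with `VideoQualitySlider: 21.0` is HandBrake's constant-quality mode, which maps to `-crf 21` here; in that mode `VideoAvgBitrate` is not used.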



-
rename files for language_REGION according to the standard ISO_3166-1 (http://en.wikipedia.org/wiki/ISO_3166-1); for Taiwan the language is Chinese (zh) and the region is Taiwan (TW)
6 September 2012, by Coto
m localization/messages_pt_BR.js
m localization/messages_pt_PT.js
m localization/messages_zh_TW.js
rename files for language_REGION according to the standard ISO_3166-1 (http://en.wikipedia.org/wiki/ISO_3166-1); for Taiwan the language is Chinese (zh) and the region is Taiwan (...)
-
Meaning of the Timestamp Retrieved by AudioQueueGetCurrentTime() in an AudioQueue Callback
30 July 2024, by White0930
I'm working on an audio project and have a question regarding the meaning of the timestamp retrieved by AudioQueueGetCurrentTime().


According to the Apple Developer Documentation, the following calculation gives the audio time being played (since AudioQueueStart):


- (Float64) GetCurrentTime {
 AudioTimeStamp c; 
 AudioQueueGetCurrentTime(playState.queue, NULL, &c, NULL); 
 return c.mSampleTime / _av->audio.sample_rate;
}



However, in a project I'm working on, I noticed the following code inside the fillAudioBuffer callback function of AudioQueue :



static void fillAudioBuffer(AudioQueueRef queue, AudioQueueBufferRef buffer) {
    int lengthCopied = INT32_MAX;
    int dts = 0;
    int isDone = 0;

    buffer->mAudioDataByteSize = 0;
    buffer->mPacketDescriptionCount = 0;

    OSStatus err = 0;
    AudioTimeStamp bufferStartTime;

    AudioQueueGetCurrentTime(queue, NULL, &bufferStartTime, NULL);

    while (buffer->mPacketDescriptionCount < numPacketsToRead && lengthCopied > 0) {
        if (buffer->mAudioDataByteSize) {
            break;
        }

        lengthCopied = getNextAudio(_av,
                                    buffer->mAudioDataBytesCapacity - buffer->mAudioDataByteSize,
                                    (uint8_t *)buffer->mAudioData + buffer->mAudioDataByteSize,
                                    &dts, &isDone);
        if (!lengthCopied || isDone) break;

        if (aqStartDts < 0) aqStartDts = dts;
        if (dts > 0) currentDts = dts;
        if (buffer->mPacketDescriptionCount == 0) {
            bufferStartTime.mFlags = kAudioTimeStampSampleTimeValid;
            bufferStartTime.mSampleTime = (Float64)(dts - aqStartDts) * _av->audio.frame_size;

            if (bufferStartTime.mSampleTime < 0) bufferStartTime.mSampleTime = 0;
            PMSG2("AQHandler.m fillAudioBuffer: DTS for %x: %lf time base: %lf StartDTS: %d\n",
                  (unsigned int)buffer, bufferStartTime.mSampleTime, _av->audio.time_base, aqStartDts);
        }
        buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mStartOffset = buffer->mAudioDataByteSize;
        buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mDataByteSize = lengthCopied;
        buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mVariableFramesInPacket = _av->audio.frame_size;

        buffer->mPacketDescriptionCount++;
        buffer->mAudioDataByteSize += lengthCopied;
    }

#ifdef DEBUG
    int audioBufferCount, audioBufferTotal, videoBufferCount, videoBufferTotal;
    bufferCheck(_av, &videoBufferCount, &videoBufferTotal, &audioBufferCount, &audioBufferTotal);

    PMSG2("AQHandler.m fillAudioBuffer: Video Buffer: %d/%d Audio Buffer: %d/%d\n",
          videoBufferCount, videoBufferTotal, audioBufferCount, audioBufferTotal);

    PMSG2("AQHandler.m fillAudioBuffer: Bytes copied for buffer 0x%x: %d\n",
          (unsigned int)buffer, (int)buffer->mAudioDataByteSize);
#endif

    if (buffer->mAudioDataByteSize) {
        if (err = AudioQueueEnqueueBufferWithParameters(queue, buffer, 0, NULL, 0, 0, 0, NULL,
                                                        &bufferStartTime, NULL)) {
#ifdef DEBUG
            char sErr[10];
            PMSG2(@"AQHandler.m fillAudioBuffer: Could not enqueue buffer 0x%x: %d %s.",
                  buffer, err, FormatError(sErr, err));
#endif
        }
    }
}



Based on the documentation for AudioQueueEnqueueBufferWithParameters and the variable naming used by the author, bufferStartTime seems to represent the time when the newly filled audio buffer will start playing, i.e., the time when all audio currently in the queue has finished playing and the new audio starts. This interpretation suggests bufferStartTime is not the same as the time of the audio currently being played.

I have browsed through many related questions, but I still have some doubts. I'm currently fixing an audio-video synchronization issue in my project, and there isn't much detailed information in the Apple Developer Documentation (or maybe my search skills are lacking).


Can someone clarify the exact meaning of the timestamp returned by AudioQueueGetCurrentTime() in this context? Is it the time when the current audio will finish playing, or is it the time when the new audio will start playing? Any additional resources or documentation that explain this in detail would also be appreciated.