
Media (1)
-
Bee video in portrait
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (67)
-
MediaSPIP v0.2
21 June 2013, by
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources in standalone form.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...) -
MediaSPIP version 0.1 Beta
16 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources in standalone form.
To get a working installation, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...) -
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page
On other sites (8574)
-
What does the summary output of encoding with ffmpeg mean?
12 November 2015, by Jai
I am working on video comparison using ffmpeg. Using an ffmpeg command I can find the difference between two videos, but I want to find the percentage difference between the two videos.
From the ffmpeg output below, how can I find the percentage difference between the two videos? Which attribute denotes the difference?
TaskList: video:530kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.266679%
TaskList: [libx264 @ 0000000002750b00] frame I:2 Avg QP:23.92 size: 29796
TaskList: [libx264 @ 0000000002750b00] frame P:97 Avg QP:22.97 size: 4477
TaskList: [libx264 @ 0000000002750b00] frame B:9 Avg QP:28.16 size: 5338
TaskList: [libx264 @ 0000000002750b00] consecutive B-frames: 83.3% 16.7% 0.0% 0.0%
TaskList: [libx264 @ 0000000002750b00] mb I I16..4: 25.7% 37.8% 36.5%
TaskList: [libx264 @ 0000000002750b00] mb P I16..4: 1.9% 4.5% 1.0% P16..4: 26.7% 8.8% 3.8% 0.0% 0.0% skip:53.3%
TaskList: [libx264 @ 0000000002750b00] mb B I16..4: 0.7% 2.4% 2.7% B16..8: 19.9% 8.8% 2.6% direct: 4.7% skip:58.2% L0:32.3% L1:53.2% BI:14.4%
TaskList: [libx264 @ 0000000002750b00] 8x8 transform intra:55.1% inter:69.5%
TaskList: [libx264 @ 0000000002750b00] coded y,uvDC,uvAC intra: 55.6% 70.0% 24.2% inter: 19.8% 26.7% 2.5%
TaskList: [libx264 @ 0000000002750b00] i16 v,h,dc,p: 25% 44% 5% 27%
TaskList: [libx264 @ 0000000002750b00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 24% 26% 17% 5% 5% 6% 5% 6% 6%
TaskList: [libx264 @ 0000000002750b00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 26% 29% 13% 5% 5% 6% 5% 6% 5%
TaskList: [libx264 @ 0000000002750b00] i8c dc,h,v,p: 44% 30% 20% 5%
TaskList: [libx264 @ 0000000002750b00] Weighted P-Frames: Y:6.2% UV:4.1%
TaskList: [libx264 @ 0000000002750b00] ref P L0: 64.2% 28.5% 5.8% 1.3% 0.1%
TaskList: [libx264 @ 0000000002750b00] ref B L0: 88.4% 11.6%
TaskList: [libx264 @ 0000000002750b00] kb/s:1204.25 -
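For what it's worth, the summary above is per-encode statistics from libx264, not a comparison of two inputs: it breaks one encoded stream down by frame type (I/P/B counts and average sizes), macroblock decisions, intra-prediction modes, reference usage, and the resulting bitrate. A similarity percentage between two videos would instead come from a frame-difference metric such as ffmpeg's psnr or ssim filters. As a sanity check on how the summary fields relate to each other, here is a small sketch; the frame counts and average sizes are copied from the log above, while the 30 fps frame rate is an assumption, since the summary does not show it:

```python
# Recompute the libx264 summary totals from its per-frame-type lines.
# Counts and average sizes are copied from the log above; the 30 fps
# frame rate is an assumption (the summary itself does not show it).
frames = {
    "I": (2, 29796),   # (count, average encoded size in bytes)
    "P": (97, 4477),
    "B": (9, 5338),
}
fps = 30.0  # assumed

total_bytes = sum(count * avg for count, avg in frames.values())
total_frames = sum(count for count, _ in frames.values())
duration_s = total_frames / fps
bitrate_kbps = total_bytes * 8 / duration_s / 1000.0

print(f"frames: {total_frames}")               # 108 frames in total
print(f"video:  {total_bytes / 1024:.0f} kB")  # close to the reported video:530kB
print(f"kb/s:   {bitrate_kbps:.2f}")           # close to the reported kb/s:1204.25
```

With these numbers the recomputed size and bitrate land close to the reported `video:530kB` and `kb/s:1204.25` lines (average sizes are rounded, so the match is not exact), which confirms the summary describes a single encoded stream rather than a difference between two.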
Meaning of Timestamp Retrieved by AudioQueueGetCurrentTime() in AudioQueue Callback
30 July 2024, by White0930
I'm working on an audio project and have a question about the meaning of the timestamp retrieved by AudioQueueGetCurrentTime().


According to the Apple Developer Documentation, the following calculation gives the audio time being played (since AudioQueueStart):


- (Float64) GetCurrentTime {
    AudioTimeStamp c;
    AudioQueueGetCurrentTime(playState.queue, NULL, &c, NULL);
    return c.mSampleTime / _av->audio.sample_rate;
}
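In other words, mSampleTime is a sample count on the queue's timeline, and dividing by the sample rate converts it to seconds. A trivial sketch of that conversion (both the 44.1 kHz rate and the reading are made-up example values, not real API output):

```python
# Sample-count to seconds conversion, as in GetCurrentTime above.
# Both values are hypothetical examples, not real API output.
sample_rate = 44100.0    # assumed stream sample rate
m_sample_time = 88200.0  # hypothetical AudioTimeStamp.mSampleTime reading

seconds_since_start = m_sample_time / sample_rate
print(seconds_since_start)  # 2.0 -> two seconds of audio played since AudioQueueStart
```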



However, in a project I'm working on, I noticed the following code inside the fillAudioBuffer callback function of the AudioQueue:



static void fillAudioBuffer(AudioQueueRef queue, AudioQueueBufferRef buffer) {
    int lengthCopied = INT32_MAX;
    int dts = 0;
    int isDone = 0;

    buffer->mAudioDataByteSize = 0;
    buffer->mPacketDescriptionCount = 0;

    OSStatus err = 0;
    AudioTimeStamp bufferStartTime;

    AudioQueueGetCurrentTime(queue, NULL, &bufferStartTime, NULL);

    while (buffer->mPacketDescriptionCount < numPacketsToRead && lengthCopied > 0) {
        if (buffer->mAudioDataByteSize) {
            break;
        }

        lengthCopied = getNextAudio(_av,
                                    buffer->mAudioDataBytesCapacity - buffer->mAudioDataByteSize,
                                    (uint8_t *)buffer->mAudioData + buffer->mAudioDataByteSize,
                                    &dts, &isDone);
        if (!lengthCopied || isDone) break;

        if (aqStartDts < 0) aqStartDts = dts;
        if (dts > 0) currentDts = dts;
        if (buffer->mPacketDescriptionCount == 0) {
            bufferStartTime.mFlags = kAudioTimeStampSampleTimeValid;
            bufferStartTime.mSampleTime = (Float64)(dts - aqStartDts) * _av->audio.frame_size;

            if (bufferStartTime.mSampleTime < 0) bufferStartTime.mSampleTime = 0;
            PMSG2("AQHandler.m fillAudioBuffer: DTS for %x: %lf time base: %lf StartDTS: %d\n",
                  (unsigned int)buffer, bufferStartTime.mSampleTime, _av->audio.time_base, aqStartDts);
        }
        buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mStartOffset = buffer->mAudioDataByteSize;
        buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mDataByteSize = lengthCopied;
        buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mVariableFramesInPacket = _av->audio.frame_size;

        buffer->mPacketDescriptionCount++;
        buffer->mAudioDataByteSize += lengthCopied;
    }

#ifdef DEBUG
    int audioBufferCount, audioBufferTotal, videoBufferCount, videoBufferTotal;
    bufferCheck(_av, &videoBufferCount, &videoBufferTotal, &audioBufferCount, &audioBufferTotal);

    PMSG2("AQHandler.m fillAudioBuffer: Video Buffer: %d/%d Audio Buffer: %d/%d\n",
          videoBufferCount, videoBufferTotal, audioBufferCount, audioBufferTotal);

    PMSG2("AQHandler.m fillAudioBuffer: Bytes copied for buffer 0x%x: %d\n",
          (unsigned int)buffer, (int)buffer->mAudioDataByteSize);
#endif
    if (buffer->mAudioDataByteSize) {
        if ((err = AudioQueueEnqueueBufferWithParameters(queue, buffer, 0, NULL, 0, 0, 0, NULL,
                                                         &bufferStartTime, NULL))) {
#ifdef DEBUG
            char sErr[10];
            PMSG2(@"AQHandler.m fillAudioBuffer: Could not enqueue buffer 0x%x: %d %s.",
                  buffer, err, FormatError(sErr, err));
#endif
        }
    }
}



Based on the documentation for AudioQueueEnqueueBufferWithParameters and the variable naming used by the author, bufferStartTime seems to represent the time when the newly filled audio buffer will start playing, i.e., the time when all audio currently in the queue has finished playing and the new audio starts. This interpretation suggests bufferStartTime is not the same as the time of the audio currently being played.
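That reading of the code can be pictured with a toy timeline model (a sketch of my understanding, not the AudioQueue API): the value returned by AudioQueueGetCurrentTime() is the play cursor, while the start time of a newly enqueued buffer is the end of everything already enqueued, so the two generally differ:

```python
# Toy model of an audio queue timeline, in samples.
# This is an illustrative sketch, not the AudioQueue API: 'play_cursor'
# stands for the mSampleTime returned by AudioQueueGetCurrentTime(), and
# 'enqueued_samples' for the total length of the buffers already enqueued.
sample_rate = 44100.0

enqueued_samples = 4 * 11025   # four buffers of 11025 samples already enqueued
play_cursor = 30000            # hypothetical current playback position (samples)

# Time of the audio currently being played (what GetCurrentTime reports):
current_time_s = play_cursor / sample_rate

# Earliest time a newly enqueued buffer can start: after everything queued.
new_buffer_start_s = enqueued_samples / sample_rate

assert new_buffer_start_s >= current_time_s
print(current_time_s, new_buffer_start_s)
```

Under this model the two values only coincide when the queue has fully drained, which is consistent with bufferStartTime naming the future start of the new buffer rather than the current playback time.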

I have browsed through many related questions, but I still have some doubts. I'm currently fixing an audio-video synchronization issue in my project, and there isn't much detailed information in the Apple Developer Documentation (or maybe my search skills are lacking).


Can someone clarify the exact meaning of the timestamp returned by AudioQueueGetCurrentTime() in this context? Is it the time when the current audio will finish playing, or the time when the new audio will start playing? Any additional resources or documentation that explain this in detail would also be appreciated.


-
Add Language Name for each file, rename the language code according to the standard ISO 639 for Estonian, Georgian, Ukrainian and Chinese (http://en.wikipedia.org/wiki/List_of_ISO_639-1_codes)
14 August 2012, by Coto
m localization/messages_ar.js m localization/messages_bg.js m localization/messages_ca.js m localization/messages_cs.js m localization/messages_da.js m localization/messages_de.js m localization/messages_el.js m localization/messages_es.js m localization/messages_et.js m (...)