
Media (1)
-
Preserving net art in the museum: the strategies at work
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (34)
-
Encoding and processing into web-friendly formats
13 April 2011 — MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded as OGV and WebM (played natively by HTML5 browsers) and MP4 (played through Flash).
Audio files are encoded as Ogg (HTML5) and MP3 (Flash).
Where possible, text is analyzed to extract the data needed for search-engine indexing, and is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...) -
Installation in farm mode
4 February 2011 — Farm mode ("mode ferme") lets you host several MediaSPIP sites while installing the functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with SPIP's mechanics, unlike the standalone version, which requires no real specific knowledge, since SPIP's usual private area is no longer used.
To begin with, you must have installed the same files as the (...) -
From upload to the final video [standalone version]
31 January 2010 — The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, you need to create a SPIP article and attach the "source" video document to it.
When this document is attached to the article, two actions beyond the normal behaviour are executed: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)
On other sites (4519)
-
Meaning of Timestamp Retrieved by AudioQueueGetCurrentTime() in AudioQueue Callback
30 July 2024, by White0930 — I'm working on an audio project and have a question regarding the meaning of the timestamp retrieved by AudioQueueGetCurrentTime().


According to the Apple Developer Documentation, the following calculation gives the audio time being played (since AudioQueueStart):


- (Float64) GetCurrentTime {
    AudioTimeStamp c;
    // Ask the queue for its current time on its own timeline; mSampleTime
    // counts sample frames elapsed since AudioQueueStart().
    AudioQueueGetCurrentTime(playState.queue, NULL, &c, NULL);
    // Convert sample frames to seconds.
    return c.mSampleTime / _av->audio.sample_rate;
}



However, in a project I'm working on, I noticed the following code inside the fillAudioBuffer callback function of the AudioQueue:



static void fillAudioBuffer(AudioQueueRef queue, AudioQueueBufferRef buffer)
{
    int lengthCopied = INT32_MAX;
    int dts = 0;
    int isDone = 0;

    buffer->mAudioDataByteSize = 0;
    buffer->mPacketDescriptionCount = 0;

    OSStatus err = 0;
    AudioTimeStamp bufferStartTime;

    // Query the queue's current time before filling the buffer.
    AudioQueueGetCurrentTime(queue, NULL, &bufferStartTime, NULL);

    while (buffer->mPacketDescriptionCount < numPacketsToRead && lengthCopied > 0) {
        // Author's original logic: stop after the first successful copy.
        if (buffer->mAudioDataByteSize) {
            break;
        }

        lengthCopied = getNextAudio(_av,
                                    buffer->mAudioDataBytesCapacity - buffer->mAudioDataByteSize,
                                    (uint8_t *)buffer->mAudioData + buffer->mAudioDataByteSize,
                                    &dts, &isDone);
        if (!lengthCopied || isDone) break;

        if (aqStartDts < 0) aqStartDts = dts;
        if (dts > 0) currentDts = dts;

        if (buffer->mPacketDescriptionCount == 0) {
            // Overwrite the queried time with a start time derived from the
            // packet's DTS, then pass it to the enqueue call below.
            bufferStartTime.mFlags = kAudioTimeStampSampleTimeValid;
            bufferStartTime.mSampleTime = (Float64)(dts - aqStartDts) * _av->audio.frame_size;

            if (bufferStartTime.mSampleTime < 0) bufferStartTime.mSampleTime = 0;
            PMSG2("AQHandler.m fillAudioBuffer: DTS for %x: %lf time base: %lf StartDTS: %d\n",
                  (unsigned int)buffer, bufferStartTime.mSampleTime, _av->audio.time_base, aqStartDts);
        }

        buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mStartOffset = buffer->mAudioDataByteSize;
        buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mDataByteSize = lengthCopied;
        buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mVariableFramesInPacket = _av->audio.frame_size;

        buffer->mPacketDescriptionCount++;
        buffer->mAudioDataByteSize += lengthCopied;
    }

#ifdef DEBUG
    int audioBufferCount, audioBufferTotal, videoBufferCount, videoBufferTotal;
    bufferCheck(_av, &videoBufferCount, &videoBufferTotal, &audioBufferCount, &audioBufferTotal);

    PMSG2("AQHandler.m fillAudioBuffer: Video Buffer: %d/%d Audio Buffer: %d/%d\n",
          videoBufferCount, videoBufferTotal, audioBufferCount, audioBufferTotal);
    PMSG2("AQHandler.m fillAudioBuffer: Bytes copied for buffer 0x%x: %d\n",
          (unsigned int)buffer, (int)buffer->mAudioDataByteSize);
#endif

    if (buffer->mAudioDataByteSize) {
        // Enqueue with an explicit start time (the DTS-derived one above).
        if ((err = AudioQueueEnqueueBufferWithParameters(queue, buffer, 0, NULL,
                                                         0, 0, 0, NULL,
                                                         &bufferStartTime, NULL))) {
#ifdef DEBUG
            char sErr[10];
            PMSG2("AQHandler.m fillAudioBuffer: Could not enqueue buffer 0x%x: %d %s.",
                  (unsigned int)buffer, err, FormatError(sErr, err));
#endif
        }
    }
}



Based on the documentation for AudioQueueEnqueueBufferWithParameters and the variable naming used by the author, bufferStartTime seems to represent the time when the newly filled audio buffer will start playing, i.e., the time when all audio currently in the queue has finished playing and the new audio starts. This interpretation suggests bufferStartTime is not the same as the time of the audio currently being played.
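
To make the difference concrete, a minimal probe along the following lines (a sketch, not the project's actual code; `queue` and `sampleRate` are assumed to come from the surrounding player state) could log the playhead reported by AudioQueueGetCurrentTime() right before a buffer is enqueued, for comparison with the DTS-derived mSampleTime scheduled in the callback:

// Sketch only: log the queue's current (playhead) time in seconds so it
// can be compared with the mSampleTime used at enqueue time.
#include <AudioToolbox/AudioToolbox.h>
#include <cstdio>

static void logPlayhead(AudioQueueRef queue, Float64 sampleRate)
{
    AudioTimeStamp now = {};
    Boolean discontinuity = false;

    // NULL timeline means the queue's default timeline; mSampleTime counts
    // sample frames since AudioQueueStart(), not wall-clock time.
    OSStatus err = AudioQueueGetCurrentTime(queue, NULL, &now, &discontinuity);
    if (err == noErr && (now.mFlags & kAudioTimeStampSampleTimeValid)) {
        std::printf("playhead: %.3f s (discontinuity: %d)\n",
                    now.mSampleTime / sampleRate, (int)discontinuity);
    }
}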

I have browsed through many related questions, but I still have some doubts. I'm currently fixing an audio-video synchronization issue in my project, and there isn't much detailed information in the Apple Developer Documentation (or maybe my search skills are lacking).


Can someone clarify the exact meaning of the timestamp returned by AudioQueueGetCurrentTime() in this context? Is it the time when the current audio will finish playing, or is it the time when the new audio will start playing? Any additional resources or documentation that explain this in detail would also be appreciated.


-
How to make OpenCV work with the FFmpeg backend
14 January 2021, by user3313834 — I have a camera on my Linux box and it is working well:


$ ls -al /dev/video*
crw-rw----+ 1 root video 81, 0 janv.  8 16:13 /dev/video0
crw-rw----+ 1 root video 81, 1 janv.  8 16:13 /dev/video1
$ groups
adm cdrom sudo dip video plugdev lpadmin lxd sambashare docker libvirt



From Python with cv2 it works well with the default CAP_V4L2 backend:


>>> from pathlib import Path
>>> import cv2
>>> print(cv2.VideoCapture(0, apiPreference=cv2.cv2.CAP_V4L2).isOpened())
True
>>>



I would like to access it with the FFmpeg backend (no success):


>>> print(cv2.VideoCapture(0, apiPreference=cv2.CAP_FFMPEG).isOpened())
False
>>>



On the Python side, OpenCV appears to have been built with the FFmpeg backend:


>>> cv2.__version__
'4.4.0'
>>> info = cv2.getBuildInformation()
>>> video, parallel = info.index('Video'), info.index('Parallel')
>>> print(info[video:parallel])
Video I/O:
  DC1394:        NO
  FFMPEG:        YES
    avcodec:     YES (58.109.100)
    avformat:    YES (58.61.100)
    avutil:      YES (56.60.100)
    swscale:     YES (5.8.100)
    avresample:  NO
  GStreamer:     NO
  v4l/v4l2:      YES (linux/videodev2.h)
>>>



On the Linux side things look OK too:


$ dpkg -l |grep -i opencv
ii libopencv-core4.2:amd64 4.2.0+dfsg-5 amd64 computer vision core library
ii libopencv-imgcodecs4.2:amd64 4.2.0+dfsg-5 amd64 computer vision Image Codecs library
ii libopencv-imgproc4.2:amd64 4.2.0+dfsg-5 amd64 computer vision Image Processing library
ii libopencv-videoio4.2:amd64 4.2.0+dfsg-5 amd64 computer vision Video I/O library

$ dpkg -l |grep -i ffm
ii ffmpeg 7:4.2.4-1ubuntu0.1 amd64 Tools for transcoding, streaming and playing of multimedia files
ii gstreamer1.0-libav:amd64 1.16.2-2 amd64 ffmpeg plugin for GStreamer
ii libavcodec-extra:amd64 7:4.2.4-1ubuntu0.1 amd64 FFmpeg library with extra codecs (metapackage)
ii libavcodec-extra58:amd64 7:4.2.4-1ubuntu0.1 amd64 FFmpeg library with additional de/encoders for audio/video codecs
ii libavdevice58:amd64 7:4.2.4-1ubuntu0.1 amd64 FFmpeg library for handling input and output devices - runtime files
ii libavfilter7:amd64 7:4.2.4-1ubuntu0.1 amd64 FFmpeg library containing media filters - runtime files
ii libavformat58:amd64 7:4.2.4-1ubuntu0.1 amd64 FFmpeg library with (de)muxers for multimedia containers - runtime files
ii libavresample4:amd64 7:4.2.4-1ubuntu0.1 amd64 FFmpeg compatibility library for resampling - runtime files
ii libavutil56:amd64 7:4.2.4-1ubuntu0.1 amd64 FFmpeg library with functions for simplifying programming - runtime files
ii libffmpegthumbnailer4v5 2.1.1-0.2build2 amd64 shared library for ffmpegthumbnailer
ii libpostproc55:amd64 7:4.2.4-1ubuntu0.1 amd64 FFmpeg library for post processing - runtime files
ii libswresample3:amd64 7:4.2.4-1ubuntu0.1 amd64 FFmpeg library for audio resampling, rematrixing etc. - runtime files
ii libswscale5:amd64 7:4.2.4-1ubuntu0.1 amd64 FFmpeg library for image scaling and various conversions - runtime files
$
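
For reference, OpenCV's FFmpeg backend is oriented toward files and network streams rather than capture devices, so an integer device index is not expected to open there. A hypothetical experiment (sketched here in C++; cv::CAP_FFMPEG is the same constant as Python's cv2.CAP_FFMPEG) is to pass the device path instead, which still depends on device support being compiled into OpenCV's FFmpeg wrapper:

// Hypothetical probe, not a known-working configuration: try the FFmpeg
// backend by index and by device path. If OpenCV's FFmpeg wrapper lacks
// libavdevice support, both opens are still expected to fail.
#include <opencv2/videoio.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture byIndex(0, cv::CAP_FFMPEG);             // index: not supported by this backend
    cv::VideoCapture byPath("/dev/video0", cv::CAP_FFMPEG);  // path: needs device support in FFmpeg

    std::cout << "by index: " << byIndex.isOpened()
              << ", by path: " << byPath.isOpened() << std::endl;
    return 0;
}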



-
avcodec/v4l2_buffers: don't prevent enqueue capture buffer to driver
16 March 2020, by Ming Qian
Enqueue/dequeue of the capture buffers should continue while draining.

Reference: linux/Documentation/media/uapi/v4l/dev-decoder.rst

"The client must continue to handle both queues independently,
similarly to normal decode operation. This includes:
...
- queuing and dequeuing CAPTURE buffers, until a buffer marked with
the V4L2_BUF_FLAG_LAST flag is dequeued"

Signed-off-by: Ming Qian <ming.qian@nxp.com>
Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
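
The cited rule is easy to see in code. Below is a minimal sketch (not FFmpeg's actual implementation) of a capture-queue drain loop that keeps dequeuing and re-queuing CAPTURE buffers until one carrying V4L2_BUF_FLAG_LAST is dequeued; `fd` is assumed to be an open V4L2 stateful decoder using memory-mapped streaming I/O:

// Sketch of the dev-decoder.rst drain rule: keep servicing the CAPTURE
// queue until the buffer flagged V4L2_BUF_FLAG_LAST comes back.
#include <linux/videodev2.h>
#include <sys/ioctl.h>
#include <cstring>

bool drainCapture(int fd)
{
    for (;;) {
        v4l2_buffer buf;
        v4l2_plane planes[VIDEO_MAX_PLANES];
        std::memset(&buf, 0, sizeof(buf));
        std::memset(planes, 0, sizeof(planes));
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.m.planes = planes;
        buf.length = VIDEO_MAX_PLANES;

        if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
            return false;                      // EAGAIN/EPIPE handling omitted

        // ... consume the decoded frame referenced by buf ...

        if (buf.flags & V4L2_BUF_FLAG_LAST)
            return true;                       // drain complete

        // Re-queue the buffer so decoding can continue during the drain.
        if (ioctl(fd, VIDIOC_QBUF, &buf) < 0)
            return false;
    }
}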