
Media (91)
-
GetID3 - Additional buttons
9 April 2013, by
Updated: April 2013
Language: French
Type: Image
-
Core Media Video
4 April 2013, by
Updated: June 2013
Language: French
Type: Video
-
The Pirate Bay from Belgium
1 April 2013, by
Updated: April 2013
Language: French
Type: Image
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
-
Example of action buttons for a collaborative collection
27 February 2013, by
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013, by
Updated: February 2013
Language: English
Type: Image
Other articles (93)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)
-
Multilang: improving the interface for multilingual blocks
18 February 2011, by
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once it is enabled, MediaSPIP init automatically applies a preconfiguration so that the new feature is immediately operational. No separate configuration step is therefore required.
-
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (5243)
-
avcodec : move end zeroing code from av_packet_split_side_data() to avcodec_decode_sub...
21 November 2013, by Michael Niedermayer
avcodec: move end zeroing code from av_packet_split_side_data() to avcodec_decode_subtitle2()
This code changes the input packet, which is read-only, and can in rare circumstances lead to decoder errors. (I ran into one of these in the audio decoder, where the packet was corrupted during av_find_stream_info() so that actually decoding that single packet failed later.)
Until a better fix is implemented, this commit limits the problem.
A better fix might be to make the subtitle decoders not depend on data[size] = 0, or to copy their input when this is not the case.
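The copy-based workaround suggested in the last sentence could look roughly like the sketch below. This is not code from the commit: the helper name is made up, and AV_INPUT_BUFFER_PADDING_SIZE is the padding constant in current FFmpeg. The packet data is copied into a zero-padded buffer, so the decoder sees data[size] = 0 without the demuxer's packet being modified.
#include <string.h>
#include <libavcodec/avcodec.h>
#include <libavutil/mem.h>
// Decode a subtitle packet from a zero-padded copy so the decoder's
// data[size] == 0 assumption holds without touching the original packet.
static int decode_subtitle_from_padded_copy(AVCodecContext *avctx, AVSubtitle *sub,
                                            int *got_sub, const AVPacket *src)
{
    AVPacket tmp = *src;  // shallow copy; only the data pointer is swapped below
    uint8_t *buf = av_malloc(src->size + AV_INPUT_BUFFER_PADDING_SIZE);
    if (!buf)
        return AVERROR(ENOMEM);
    memcpy(buf, src->data, src->size);
    memset(buf + src->size, 0, AV_INPUT_BUFFER_PADDING_SIZE);  // explicit zero padding
    tmp.data = buf;
    int ret = avcodec_decode_subtitle2(avctx, sub, got_sub, &tmp);
    av_free(buf);
    return ret;
}
-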
How to properly close an FFmpeg stream and AVFormatContext without leaking memory?
13 December 2019, by Darkwonder
I have built an app that uses FFmpeg to connect to remote IP cameras in order to receive video and audio frames via RTSP 2.0.
The app is built using Xcode 10-11 and Objective-C with a custom FFmpeg build config.
The architecture is the following:
MyApp
  Document_0
    RTSPContainerObject_0
      RTSPObject_0
    RTSPContainerObject_1
      RTSPObject_1
    ...
  Document_1
  ...
GOAL:
- After closing Document_0, no FFmpeg objects should be leaked.
- The closing process should stop frame reading and destroy all objects which use FFmpeg.
PROBLEM:
- Somehow Xcode's memory debugger shows two instances of MyApp.
FACTS:
- macOS's Activity Monitor doesn't show two instances of MyApp.
- macOS's Activity Monitor doesn't show any instances of FFmpeg or other child processes.
- The issue is not related to some leftover memory due to a late memory snapshot, since it can be reproduced easily.
- Xcode's memory debugger shows that the second instance only has RTSPObject's AVFormatContext and no other objects.
- The second instance has an AVFormatContext, and the RTSPObject still has a pointer to that AVFormatContext.
FACTS:
- Opening and closing the second document Document_1 leads to the same problem, with two objects now leaked. This means that there is a bug that scales: more and more memory is used and becomes unavailable.
Here is my termination code:
- (void)terminate
{
// * Video and audio frame provisioning termination *
[self stopVideoStream];
[self stopAudioStream];
// *
// * Video codec termination *
avcodec_free_context(&_videoCodecContext); // NULL pointer safe.
self.videoCodecContext = NULL;
// *
// * Audio codec termination *
avcodec_free_context(&_audioCodecContext); // NULL pointer safe.
self.audioCodecContext = NULL;
// *
if (self.packet)
{
// Free the packet that was allocated by av_read_frame.
av_packet_unref(_packet); // The documentation doesn't mention NULL safety.
self.packet = NULL;
}
if (self.currentAudioPacket)
{
av_packet_unref(_currentAudioPacket);
self.currentAudioPacket = NULL;
}
// Free raw frame data.
av_freep(&_rawFrameData); // NULL pointer safe.
// Free the swscaler context swsContext.
self.isFrameConversionContextAllocated = NO;
sws_freeContext(scallingContext); // NULL pointer safe.
[self.audioPacketQueue removeAllObjects];
self.audioPacketQueue = nil;
self.audioPacketQueueLock = nil;
self.packetQueueLock = nil;
self.audioStream = nil;
BXLogInDomain(kLogDomainSources, kLogLevelVerbose, @"%s:%d: All streams have been terminated!", __FUNCTION__, __LINE__);
// * Session context termination *
AVFormatContext *pFormatCtx = self.sessionContext;
BOOL shouldProceedWithInputSessionTermination = self.isInputStreamOpen && self.shouldTerminateStreams && pFormatCtx;
NSLog(@"\nTerminating session context...");
if (shouldProceedWithInputSessionTermination)
{
NSLog(@"\nTerminating...");
//av_write_trailer(pFormatCtx);
// Discard all internally buffered data.
avformat_flush(pFormatCtx); // The documentation doesn't mention NULL safety.
// Close an opened input AVFormatContext and free it and all its contents.
// WARNING: Closing a non-opened stream will cause avformat_close_input to crash.
avformat_close_input(&pFormatCtx); // The documentation doesn't mention NULL safety.
NSLog(@"Logging leftovers - %p, %p %p", self.sessionContext, _sessionContext, pFormatCtx);
avformat_free_context(pFormatCtx);
NSLog(@"Logging content = %c", *self.sessionContext);
//avformat_free_context(pFormatCtx); - Not needed because avformat_close_input is closing it.
self.sessionContext = NULL;
}
// *
}

IMPORTANT: The termination sequence is:
New frame will be read.
-[(RTSPObject)StreamInput currentVideoFrameDurationSec]
-[(RTSPObject)StreamInput frameDuration:]
-[(RTSPObject)StreamInput currentCGImageRef]
-[(RTSPObject)StreamInput convertRawFrameToRGB]
-[(RTSPObject)StreamInput pixelBufferFromImage:]
-[(RTSPObject)StreamInput cleanup]
-[(RTSPObject)StreamInput dealloc]
-[(RTSPObject)StreamInput stopVideoStream]
-[(RTSPObject)StreamInput stopAudioStream]
Terminating session context...
Terminating...
Logging leftovers - 0x109ec6400, 0x109ec6400 0x109ec6400
Logging content = \330
-[Document dealloc]

NOT WORKING SOLUTIONS:
- Changing the order of object releases (the AVFormatContext has been freed first, but it didn't lead to any change).
- Calling RTSPObject's cleanup method much sooner to give FFmpeg more time to handle object releases.
- Reading a lot of SO answers and FFmpeg documentation to find a clean cleanup process, or newer code which might highlight why the object release doesn't happen properly.
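For reference, a minimal teardown sketch in the order that avoids both leaks and double frees with these APIs (ivar names are illustrative, not the poster's exact properties): free the codec contexts, packets and frames first, then close the input exactly once. avformat_close_input() already frees the AVFormatContext and NULLs the pointer, so a subsequent avformat_free_context() on the same pointer is a double free.
// Sketch only: assumes AVPacket *_packet, *_currentAudioPacket, AVFrame *_rawFrame,
// struct SwsContext *_swsContext and AVFormatContext *_sessionContext ivars.
- (void)teardownFFmpegSession
{
    avcodec_free_context(&_videoCodecContext);  // NULL-safe, resets the pointer
    avcodec_free_context(&_audioCodecContext);
    av_packet_free(&_packet);                   // unrefs and frees, NULL-safe
    av_packet_free(&_currentAudioPacket);
    av_frame_free(&_rawFrame);                  // NULL-safe
    sws_freeContext(_swsContext);               // accepts NULL
    _swsContext = NULL;
    // Closes the demuxer, frees the context and NULLs the pointer;
    // do NOT call avformat_free_context() on it afterwards.
    avformat_close_input(&_sessionContext);
}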
I am currently reading the documentation on AVFormatContext since I believe that I am forgetting to release something. This belief is based on the memory debugger's output showing that an AVFormatContext is still around.
Here is my creation code:
#pragma mark # Helpers - Start
- (NSError *)openInputStreamWithVideoStreamId:(int)videoStreamId
audioStreamId:(int)audioStreamId
useFirst:(BOOL)useFirstStreamAvailable
inInit:(BOOL)isInitProcess
{
// NSLog(@"%s", __PRETTY_FUNCTION__); // RTSP
self.status = StreamProvisioningStatusStarting;
AVCodec *decoderCodec;
NSString *rtspURL = self.streamURL;
NSString *errorMessage = nil;
NSError *error = nil;
self.sessionContext = NULL;
self.sessionContext = avformat_alloc_context();
AVFormatContext *pFormatCtx = self.sessionContext;
if (!pFormatCtx)
{
// Create approp error.
return error;
}
// MUST be called before avformat_open_input().
av_dict_free(&_sessionOptions);
self.sessionOptions = 0;
if (self.usesTcp)
{
// "rtsp_transport" - Set RTSP transport protocols.
// Allowed are: udp_multicast, tcp, udp, http.
av_dict_set(&_sessionOptions, "rtsp_transport", "tcp", 0);
}
av_dict_set(&_sessionOptions, "rtsp_transport", "tcp", 0);
// Open an input stream and read the header with the demuxer options.
// WARNING: The stream must be closed with avformat_close_input()
if (avformat_open_input(&pFormatCtx, rtspURL.UTF8String, NULL, &_sessionOptions) != 0)
{
// WARNING: Note that a user-supplied AVFormatContext (pFormatCtx) will be freed on failure.
self.isInputStreamOpen = NO;
// Create approp error.
return error;
}
self.isInputStreamOpen = YES;
// user-supplied AVFormatContext pFormatCtx might have been modified.
self.sessionContext = pFormatCtx;
// Retrieve stream information.
if (avformat_find_stream_info(pFormatCtx,NULL) < 0)
{
// Create approp error.
return error;
}
// Find the first video stream
int streamCount = pFormatCtx->nb_streams;
if (streamCount == 0)
{
// Create approp error.
return error;
}
int noStreamsAvailable = pFormatCtx->streams == NULL;
if (noStreamsAvailable)
{
// Create approp error.
return error;
}
// Result. An Index can change, an identifier shouldn't.
self.selectedVideoStreamId = STREAM_NOT_FOUND;
self.selectedAudioStreamId = STREAM_NOT_FOUND;
// Fallback.
int firstVideoStreamIndex = STREAM_NOT_FOUND;
int firstAudioStreamIndex = STREAM_NOT_FOUND;
self.selectedVideoStreamIndex = STREAM_NOT_FOUND;
self.selectedAudioStreamIndex = STREAM_NOT_FOUND;
for (int i = 0; i < streamCount; i++)
{
// Looking for video streams.
AVStream *stream = pFormatCtx->streams[i];
if (!stream) { continue; }
AVCodecParameters *codecPar = stream->codecpar;
if (!codecPar) { continue; }
if (codecPar->codec_type==AVMEDIA_TYPE_VIDEO)
{
if (stream->id == videoStreamId)
{
self.selectedVideoStreamId = videoStreamId;
self.selectedVideoStreamIndex = i;
}
if (firstVideoStreamIndex == STREAM_NOT_FOUND)
{
firstVideoStreamIndex = i;
}
}
// Looking for audio streams.
if (codecPar->codec_type==AVMEDIA_TYPE_AUDIO)
{
if (stream->id == audioStreamId)
{
self.selectedAudioStreamId = audioStreamId;
self.selectedAudioStreamIndex = i;
}
if (firstAudioStreamIndex == STREAM_NOT_FOUND)
{
firstAudioStreamIndex = i;
}
}
}
// Use first video and audio stream available (if possible).
if (self.selectedVideoStreamIndex == STREAM_NOT_FOUND && useFirstStreamAvailable && firstVideoStreamIndex != STREAM_NOT_FOUND)
{
self.selectedVideoStreamIndex = firstVideoStreamIndex;
self.selectedVideoStreamId = pFormatCtx->streams[firstVideoStreamIndex]->id;
}
if (self.selectedAudioStreamIndex == STREAM_NOT_FOUND && useFirstStreamAvailable && firstAudioStreamIndex != STREAM_NOT_FOUND)
{
self.selectedAudioStreamIndex = firstAudioStreamIndex;
self.selectedAudioStreamId = pFormatCtx->streams[firstAudioStreamIndex]->id;
}
if (self.selectedVideoStreamIndex == STREAM_NOT_FOUND)
{
// Create approp error.
return error;
}
// See AVCodecID for codec listing.
// * Video codec setup:
// 1. Find the decoder for the video stream with the given codec id.
AVStream *stream = pFormatCtx->streams[self.selectedVideoStreamIndex];
if (!stream)
{
// Create approp error.
return error;
}
AVCodecParameters *codecPar = stream->codecpar;
if (!codecPar)
{
// Create approp error.
return error;
}
decoderCodec = avcodec_find_decoder(codecPar->codec_id);
if (decoderCodec == NULL)
{
// Create approp error.
return error;
}
// Get a pointer to the codec context for the video stream.
// WARNING: The resulting AVCodecContext should be freed with avcodec_free_context().
// Replaced:
// self.videoCodecContext = pFormatCtx->streams[self.selectedVideoStreamIndex]->codec;
// With:
self.videoCodecContext = avcodec_alloc_context3(decoderCodec);
avcodec_parameters_to_context(self.videoCodecContext,
codecPar);
self.videoCodecContext->thread_count = 4;
NSString *description = [NSString stringWithUTF8String:decoderCodec->long_name];
// 2. Open codec.
if (avcodec_open2(self.videoCodecContext, decoderCodec, NULL) < 0)
{
// Create approp error.
return error;
}
// * Audio codec setup:
if (self.selectedAudioStreamIndex > -1)
{
[self setupAudioDecoder];
}
// Allocate a raw video frame data structure. Contains audio and video data.
self.rawFrameData = av_frame_alloc();
self.outputWidth = self.videoCodecContext->width;
self.outputHeight = self.videoCodecContext->height;
if (!isInitProcess)
{
// Triggering notifications in init process won't change UI since the object is created locally. All
// objects which need data access to this object will not be able to get it. That's why we don't notify anyone about the changes.
[NSNotificationCenter.defaultCenter postNotificationName:NSNotification.rtspVideoStreamSelectionChanged
object:nil userInfo: self.selectedVideoStream];
[NSNotificationCenter.defaultCenter postNotificationName:NSNotification.rtspAudioStreamSelectionChanged
object:nil userInfo: self.selectedAudioStream];
}
return nil;
}

UPDATE 1
The initial architecture allowed using any given thread. Most of the code below would run on the main thread. This was not appropriate, since opening the stream input can take several seconds, during which the main thread is blocked while waiting for a network response inside FFmpeg. To solve this issue I have implemented the following solution:
- Creation and the initial setup are only allowed on the background_thread (see code snippet "1" below).
- Changes are allowed on the current_thread(Any).
- Termination is allowed on the current_thread(Any).
(A sketch of this serialization approach follows the code snippet below.)
After removing main thread checks and dispatch_asyncs to background threads, the leaking has stopped and I can't reproduce the issue anymore:
// Code that produces the issue.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
// 1 - Create and do initial setup.
// This block creates the issue.
self.rtspObject = [[RTSPObject alloc] initWithURL: ... ];
[self.rtspObject openInputStreamWithVideoStreamId: ...
audioStreamId: ...
useFirst: ...
inInit: ...];
});

I still don't understand why Xcode's memory debugger says that this block is retained?
Any advice or idea is welcome.
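One way to express the serialization rule described in UPDATE 1 is a single serial queue per FFmpeg session, so that open, read and terminate can never run concurrently. A sketch with illustrative names, not the app's actual code:
#import <Foundation/Foundation.h>

// One serial queue funnels every FFmpeg call for a session: blocking work such as
// avformat_open_input() stays off the main thread, and termination is enqueued on
// the same queue so it runs only after any in-flight open or read has finished.
static dispatch_queue_t FFmpegSessionQueue(void)
{
    static dispatch_queue_t queue;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        queue = dispatch_queue_create("com.example.rtsp.ffmpeg", DISPATCH_QUEUE_SERIAL);
    });
    return queue;
}

// Usage (illustrative):
// dispatch_async(FFmpegSessionQueue(), ^{ [rtspObject openInputStreamWithVideoStreamId:0 audioStreamId:1 useFirst:YES inInit:YES]; });
// dispatch_async(FFmpegSessionQueue(), ^{ [rtspObject terminate]; });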
-
Merge multi channel audio buffers into one CMSampleBuffer
26 April 2020, by Darkwonder
I am using FFmpeg to access an RTSP stream in my macOS app.



REACHED GOALS: I have created a tone generator which creates single-channel audio and returns a CMSampleBuffer. The tone generator is used to test my audio pipeline when the video's fps and audio sample rate are changed.



GOAL: Merge multi-channel audio buffers into a single CMSampleBuffer.



Audio data lifecycle:



AVCodecContext* audioContext = self.rtspStreamProvider.audioCodecContext;
 if (!audioContext) { return; }

 // Getting audio settings from FFmpeg's audio context (AVCodecContext).
 int samplesPerChannel = audioContext->frame_size;
 int frameNumber = audioContext->frame_number;
 int sampleRate = audioContext->sample_rate;
 int fps = [self.rtspStreamProvider fps];

 int calculatedSampleRate = sampleRate / fps;

 // NSLog(@"\nSamples per channel = %i, frames = %i.\nSample rate = %i, fps = %i.\ncalculatedSampleRate = %i.", samplesPerChannel, frameNumber, sampleRate, fps, calculatedSampleRate);

 // Decoding the audio data from an encoded AVPacket into an AVFrame.
 AVFrame* audioFrame = [self.rtspStreamProvider readDecodedAudioFrame];
 if (!audioFrame) { return; }

 // Extracting my audio buffers from FFmpeg's AVFrame.
 uint8_t* leftChannelAudioBufRef = audioFrame->data[0];
 uint8_t* rightChannelAudioBufRef = audioFrame->data[1];

 // Creating the CMSampleBuffer with audio data.
 CMSampleBufferRef leftSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:leftChannelAudioBufRef channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];
// CMSampleBufferRef rightSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:packet->data[1] channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];

 if (!leftSampleBuffer) { return; }
 if (!self.audioQueue) { return; }
 if (!self.audioDelegates) { return; }

 // All audio consumers will receive audio samples via delegation. 
 dispatch_sync(self.audioQueue, ^{
 NSHashTable *audioDelegates = self.audioDelegates;
 for (id<AudioDataProviderDelegate> audioDelegate in audioDelegates)
 {
 [audioDelegate provider:self didOutputAudioSampleBuffer:leftSampleBuffer];
 // [audioDelegate provider:self didOutputAudioSampleBuffer:rightSampleBuffer];
 }
 });



Creation of the CMSampleBuffer containing the audio data:



import Foundation
import CoreMedia

@objc class CMSampleBufferFactory: NSObject
{

 @objc static func createAudioSampleBufferUsing(data: UnsafeMutablePointer<UInt8>,
 channelCount: UInt32,
 framesCount: CMItemCount,
 sampleRate: Double) -> CMSampleBuffer? {

 /* Prepare for sample Buffer creation */
 var sampleBuffer: CMSampleBuffer! = nil
 var osStatus: OSStatus = -1
 var audioFormatDescription: CMFormatDescription! = nil

 var absd: AudioStreamBasicDescription! = nil
 let sampleDuration = CMTimeMake(value: 1, timescale: Int32(sampleRate))
 let presentationTimeStamp = CMTimeMake(value: 0, timescale: Int32(sampleRate))

 // NOTE: Change bytesPerFrame if you change the block buffer value types. Currently we are using double.
 let bytesPerFrame: UInt32 = UInt32(MemoryLayout<Float32>.size) * channelCount
 let memoryBlockByteLength = framesCount * Int(bytesPerFrame)

// var acl = AudioChannelLayout()
// acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo

 /* Sample Buffer Block buffer creation */
 var blockBuffer: CMBlockBuffer?

 osStatus = CMBlockBufferCreateWithMemoryBlock(
 allocator: kCFAllocatorDefault,
 memoryBlock: nil,
 blockLength: memoryBlockByteLength,
 blockAllocator: nil,
 customBlockSource: nil,
 offsetToData: 0,
 dataLength: memoryBlockByteLength,
 flags: 0,
 blockBufferOut: &blockBuffer
 )

 assert(osStatus == kCMBlockBufferNoErr)

 guard let eBlock = blockBuffer else { return nil }

 osStatus = CMBlockBufferFillDataBytes(with: 0, blockBuffer: eBlock, offsetIntoDestination: 0, dataLength: memoryBlockByteLength)
 assert(osStatus == kCMBlockBufferNoErr)

 TVBlockBufferHelper.fillAudioBlockBuffer(blockBuffer,
 audioData: data,
 frames: Int32(framesCount))
 /* Audio description creations */

 absd = AudioStreamBasicDescription(
 mSampleRate: sampleRate,
 mFormatID: kAudioFormatLinearPCM,
 mFormatFlags: kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsFloat,
 mBytesPerPacket: bytesPerFrame,
 mFramesPerPacket: 1,
 mBytesPerFrame: bytesPerFrame,
 mChannelsPerFrame: channelCount,
 mBitsPerChannel: 32,
 mReserved: 0
 )

 guard absd != nil else {
 print("\nCreating AudioStreamBasicDescription Failed.")
 return nil
 }

 osStatus = CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault,
 asbd: &absd,
 layoutSize: 0,
 layout: nil,
// layoutSize: MemoryLayout<AudioChannelLayout>.size,
// layout: &acl,
 magicCookieSize: 0,
 magicCookie: nil,
 extensions: nil,
 formatDescriptionOut: &audioFormatDescription)

 guard osStatus == noErr else {
 print("\nCreating CMFormatDescription Failed.")
 return nil
 }

 /* Create sample Buffer */
 var timmingInfo = CMSampleTimingInfo(duration: sampleDuration, presentationTimeStamp: presentationTimeStamp, decodeTimeStamp: .invalid)

 osStatus = CMSampleBufferCreate(allocator: kCFAllocatorDefault,
 dataBuffer: eBlock,
 dataReady: true,
 makeDataReadyCallback: nil,
 refcon: nil,
 formatDescription: audioFormatDescription,
 sampleCount: framesCount,
 sampleTimingEntryCount: 1,
 sampleTimingArray: &timmingInfo,
 sampleSizeEntryCount: 0, // Must be 0, 1, or numSamples.
 sampleSizeArray: nil, // Pointer to Int. Don't know the size. Don't know if it's bytes or bits?
 sampleBufferOut: &sampleBuffer)
 return sampleBuffer
 }

}



The CMSampleBuffer gets filled with raw audio data from FFmpeg's data:



@import Foundation;
@import CoreMedia;

@interface TVBlockBufferHelper : NSObject

+(void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
 audioData:(uint8_t *)data
 frames:(int)framesCount;



@end

#import "TVBlockBufferHelper.h"

@implementation TVBlockBufferHelper

+(void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
 audioData:(uint8_t *)data
 frames:(int)framesCount
{
 // Possibly dev error.
 if (framesCount == 0) {
 NSAssert(false, @"\nfillAudioBlockBuffer/audioData/frames will not be able to fill a blockBuffer which has no frames.");
 return;
 }

 char *rawBuffer = NULL;

 size_t size = 0;

 OSStatus status = CMBlockBufferGetDataPointer(blockBuffer, 0, &size, NULL, &rawBuffer);
 if(status != noErr)
 {
 return;
 }

 memcpy(rawBuffer, data, framesCount);
}

@end




The Learning Core Audio book by Chris Adamson and Kevin Avila points me toward a multi-channel mixer.
The multi-channel mixer should have 2-n inputs and 1 output. I assume the output could be a buffer or something that could be put into a CMSampleBuffer for further consumption.

This direction should lead me to AudioUnits, AUGraph and the AudioToolbox. I don't understand all of these classes and how they work together. I have found some code snippets on SO which could help me, but most of them use AudioToolbox classes and don't use CMSampleBuffers as much as I need.


Is there another way to merge audio buffers into a new one?



Is creating a multi-channel mixer using AudioToolbox the right direction?
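One alternative that avoids AudioToolbox entirely, assuming the decoder delivers planar float samples (e.g. AV_SAMPLE_FMT_FLTP, as the two data[] planes above suggest): interleave the two channel planes manually and wrap the result in a single CMSampleBuffer whose AudioStreamBasicDescription declares two interleaved float channels (mBytesPerFrame = 8). A sketch only:
#include <stdlib.h>

// Merge two mono float planes (as found in AVFrame->data[0]/data[1] for FLTP)
// into one interleaved stereo buffer: L0 R0 L1 R1 ...
static float *interleave_stereo(const float *left, const float *right, int frames)
{
    float *out = malloc(sizeof(float) * 2 * (size_t)frames);
    if (!out)
        return NULL;
    for (int i = 0; i < frames; i++) {
        out[2 * i]     = left[i];   // channel 0
        out[2 * i + 1] = right[i];  // channel 1
    }
    return out;  // caller owns and frees the buffer
}
The interleaved buffer can then be handed to a factory like the one above with channelCount set to 2, provided the block buffer length and the ASBD are adjusted to match. FFmpeg's libswresample (swr_convert()) can perform the same planar-to-interleaved conversion, including resampling, if more sample formats need to be supported.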