Media (91)

Other articles (93)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Multilang: improve the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is activated, MediaSPIP init automatically sets up a preconfiguration so that the new feature is operational right away. A separate configuration step is therefore not required.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (5243)

  • avcodec: move end zeroing code from av_packet_split_side_data() to avcodec_decode_subtitle2()

    21 November 2013, by Michael Niedermayer
    avcodec : move end zeroing code from av_packet_split_side_data() to avcodec_decode_subtitle2()
    

    This code changes the input packet, which is read only and can in
    rare circumstances lead to decoder errors. (i run into one of these in
    the audio decoder, which corrupted the packet during av_find_stream_info()
    so that actual decoding that single packet failed later)
    Until a better fix is implemented, this commit limits the problem.
    A better fix might be to make the subtitle decoders not depend on
    data[size] = 0 or to copy their input when this is not the case.

    • [DH] libavcodec/avpacket.c
    • [DH] libavcodec/utils.c
  • How to properly close an FFmpeg stream and AVFormatContext without leaking memory?

    13 December 2019, by Darkwonder

    I have built an app that uses FFmpeg to connect to remote IP cameras in order to receive video and audio frames via RTSP 2.0.

    The app is built using Xcode 10-11 and Objective-C with a custom FFmpeg build config.

    The architecture is the following:

    MyApp


    Document_0

       RTSPContainerObject_0
           RTSPObject_0

       RTSPContainerObject_1
           RTSPObject_1

       ...
    Document_1
    ...

    GOALS:

    1. After closing Document_0, no FFmpeg objects should be leaked.
    2. The closing process should stop frame reading and destroy all objects that use FFmpeg.

    PROBLEM:

    1. Somehow Xcode’s memory debugger shows two instances of MyApp.

    FACTS:

    • macOS’s Activity Monitor doesn’t show two instances of MyApp.

    • macOS’s Activity Monitor doesn’t show any instances of FFmpeg or other child processes.

    • The issue is not leftover memory from a late memory snapshot, since it can be reproduced easily.

    • Xcode’s memory debugger shows that the second instance has only RTSPObject's AVFormatContext and no other objects.

      1. The second instance has an AVFormatContext, and the RTSPObject still has a pointer to that AVFormatContext.

    FACTS:

    • Opening and closing the second document, Document_1, leads to the same problem, with two more objects leaked. This means there is a bug that scales: more and more memory is used and becomes unavailable.

    Here is my termination code:

    - (void)terminate
    {
        // * Video and audio frame provisioning termination *
        [self stopVideoStream];
        [self stopAudioStream];
        // *

        // * Video codec termination *
        avcodec_free_context(&_videoCodecContext); // NULL pointer safe.
        self.videoCodecContext = NULL;
        // *

        // * Audio codec termination *
        avcodec_free_context(&_audioCodecContext); // NULL pointer safe.
        self.audioCodecContext = NULL;
        // *

        if (self.packet)
        {
            // Free the packet that was allocated by av_read_frame.
            av_packet_unref(&packet); // The documentation doesn't mention NULL safety.
            self.packet = NULL;
        }

        if (self.currentAudioPacket)
        {
            av_packet_unref(_currentAudioPacket);
            self.currentAudioPacket = NULL;
        }

        // Free raw frame data.
        av_freep(&_rawFrameData); // NULL pointer safe.

        // Free the swscaler context swsContext.
        self.isFrameConversionContextAllocated = NO;
        sws_freeContext(scallingContext); // NULL pointer safe.

        [self.audioPacketQueue removeAllObjects];
        self.audioPacketQueue = nil;
        self.audioPacketQueueLock = nil;
        self.packetQueueLock = nil;
        self.audioStream = nil;
        BXLogInDomain(kLogDomainSources, kLogLevelVerbose, @"%s:%d: All streams have been terminated!", __FUNCTION__, __LINE__);

        // * Session context termination *
        AVFormatContext *pFormatCtx = self.sessionContext;
        BOOL shouldProceedWithInputSessionTermination = self.isInputStreamOpen && self.shouldTerminateStreams && pFormatCtx;
        NSLog(@"\nTerminating session context...");
        if (shouldProceedWithInputSessionTermination)
        {
            NSLog(@"\nTerminating...");
            //av_write_trailer(pFormatCtx);
            // Discard all internally buffered data.
            avformat_flush(pFormatCtx); // The documentation doesn't mention NULL safety.
            // Close an opened input AVFormatContext and free it and all its contents.
            // WARNING: Closing a non-opened stream will cause avformat_close_input to crash.
            avformat_close_input(&pFormatCtx); // The documentation doesn't mention NULL safety.
            NSLog(@"Logging leftovers - %p, %p  %p", self.sessionContext, _sessionContext, pFormatCtx);
            avformat_free_context(pFormatCtx);

            NSLog(@"Logging content = %c", *self.sessionContext);
            //avformat_free_context(pFormatCtx); - Not needed because avformat_close_input is closing it.
            self.sessionContext = NULL;
        }
        // *
    }

    IMPORTANT: The termination sequence is:

       A new frame will be read.
    -[(RTSPObject)StreamInput currentVideoFrameDurationSec]
    -[(RTSPObject)StreamInput frameDuration:]
    -[(RTSPObject)StreamInput currentCGImageRef]
    -[(RTSPObject)StreamInput convertRawFrameToRGB]
    -[(RTSPObject)StreamInput pixelBufferFromImage:]
    -[(RTSPObject)StreamInput cleanup]
    -[(RTSPObject)StreamInput dealloc]
    -[(RTSPObject)StreamInput stopVideoStream]
    -[(RTSPObject)StreamInput stopAudioStream]

    Terminating session context...
    Terminating...
    Logging leftovers - 0x109ec6400, 0x109ec6400  0x109ec6400
    Logging content = \330
    -[Document dealloc]

    NOT WORKING SOLUTIONS:

    • Changing the order of object releases (freeing the AVFormatContext first did not lead to any change).
    • Calling RTSPObject's cleanup method much sooner to give FFmpeg more time to handle object releases.
    • Reading a lot of SO answers and the FFmpeg documentation to find a clean cleanup process, or newer code which might highlight why the object release doesn’t happen properly.

    I am currently reading the documentation on AVFormatContext since I believe that I am forgetting to release something. This belief is based on the memory debugger's output, which shows that the AVFormatContext is still around.

    Here is my creation code:

    #pragma mark # Helpers - Start

    - (NSError *)openInputStreamWithVideoStreamId:(int)videoStreamId
                                   audioStreamId:(int)audioStreamId
                                        useFirst:(BOOL)useFirstStreamAvailable
                                          inInit:(BOOL)isInitProcess
    {
       // NSLog(@"%s", __PRETTY_FUNCTION__); // RTSP
       self.status = StreamProvisioningStatusStarting;
       AVCodec *decoderCodec;
       NSString *rtspURL = self.streamURL;
       NSString *errorMessage = nil;
       NSError *error = nil;

       self.sessionContext = NULL;
       self.sessionContext = avformat_alloc_context();

       AVFormatContext *pFormatCtx = self.sessionContext;
       if (!pFormatCtx)
       {
           // Create approp error.
           return error;
       }


       // MUST be called before avformat_open_input().
       av_dict_free(&_sessionOptions);

       self.sessionOptions = 0;
       if (self.usesTcp)
       {
           // "rtsp_transport" - Set RTSP transport protocols.
           // Allowed are: udp_multicast, tcp, udp, http.
           av_dict_set(&_sessionOptions, "rtsp_transport", "tcp", 0);
       }
       av_dict_set(&_sessionOptions, "rtsp_transport", "tcp", 0);

       // Open an input stream and read the header with the demuxer options.
       // WARNING: The stream must be closed with avformat_close_input()
       if (avformat_open_input(&pFormatCtx, rtspURL.UTF8String, NULL, &_sessionOptions) != 0)
       {
           // WARNING: Note that a user-supplied AVFormatContext (pFormatCtx) will be freed on failure.
           self.isInputStreamOpen = NO;
           // Create approp error.
           return error;
       }

       self.isInputStreamOpen = YES;

       // user-supplied AVFormatContext pFormatCtx might have been modified.
       self.sessionContext = pFormatCtx;

       // Retrieve stream information.
       if (avformat_find_stream_info(pFormatCtx,NULL) < 0)
       {
           // Create approp error.
           return error;
       }

       // Find the first video stream
       int streamCount = pFormatCtx->nb_streams;

       if (streamCount == 0)
       {
           // Create approp error.
           return error;
       }

       int noStreamsAvailable = pFormatCtx->streams == NULL;

       if (noStreamsAvailable)
       {
           // Create approp error.
           return error;
       }

       // Result. An Index can change, an identifier shouldn't.
       self.selectedVideoStreamId = STREAM_NOT_FOUND;
       self.selectedAudioStreamId = STREAM_NOT_FOUND;

       // Fallback.
       int firstVideoStreamIndex = STREAM_NOT_FOUND;
       int firstAudioStreamIndex = STREAM_NOT_FOUND;

       self.selectedVideoStreamIndex = STREAM_NOT_FOUND;
       self.selectedAudioStreamIndex = STREAM_NOT_FOUND;

       for (int i = 0; i < streamCount; i++)
       {
           // Looking for video streams.
           AVStream *stream = pFormatCtx->streams[i];
           if (!stream) { continue; }
           AVCodecParameters *codecPar = stream->codecpar;
           if (!codecPar) { continue; }

           if (codecPar->codec_type==AVMEDIA_TYPE_VIDEO)
           {
               if (stream->id == videoStreamId)
               {
                   self.selectedVideoStreamId = videoStreamId;
                   self.selectedVideoStreamIndex = i;
               }

               if (firstVideoStreamIndex == STREAM_NOT_FOUND)
               {
                   firstVideoStreamIndex = i;
               }
           }
           // Looking for audio streams.
           if (codecPar->codec_type==AVMEDIA_TYPE_AUDIO)
           {
               if (stream->id == audioStreamId)
               {
                   self.selectedAudioStreamId = audioStreamId;
                   self.selectedAudioStreamIndex = i;
               }

               if (firstAudioStreamIndex == STREAM_NOT_FOUND)
               {
                   firstAudioStreamIndex = i;
               }
           }
       }

       // Use first video and audio stream available (if possible).

       if (self.selectedVideoStreamIndex == STREAM_NOT_FOUND && useFirstStreamAvailable && firstVideoStreamIndex != STREAM_NOT_FOUND)
       {
           self.selectedVideoStreamIndex = firstVideoStreamIndex;
           self.selectedVideoStreamId = pFormatCtx->streams[firstVideoStreamIndex]->id;
       }

       if (self.selectedAudioStreamIndex == STREAM_NOT_FOUND && useFirstStreamAvailable && firstAudioStreamIndex != STREAM_NOT_FOUND)
       {
           self.selectedAudioStreamIndex = firstAudioStreamIndex;
           self.selectedAudioStreamId = pFormatCtx->streams[firstAudioStreamIndex]->id;
       }

       if (self.selectedVideoStreamIndex == STREAM_NOT_FOUND)
       {
           // Create approp error.
           return error;
       }

       // See AVCodecID for codec listing.

       // * Video codec setup:
       // 1. Find the decoder for the video stream with the given codec id.
       AVStream *stream = pFormatCtx->streams[self.selectedVideoStreamIndex];
       if (!stream)
       {
           // Create approp error.
           return error;
       }
       AVCodecParameters *codecPar = stream->codecpar;
       if (!codecPar)
       {
           // Create approp error.
           return error;
       }

       decoderCodec = avcodec_find_decoder(codecPar->codec_id);
       if (decoderCodec == NULL)
       {
           // Create approp error.
           return error;
       }

       // Get a pointer to the codec context for the video stream.
       // WARNING: The resulting AVCodecContext should be freed with avcodec_free_context().
       // Replaced:
       // self.videoCodecContext = pFormatCtx->streams[self.selectedVideoStreamIndex]->codec;
       // With:
       self.videoCodecContext = avcodec_alloc_context3(decoderCodec);
       avcodec_parameters_to_context(self.videoCodecContext,
                                     codecPar);

       self.videoCodecContext->thread_count = 4;
       NSString *description = [NSString stringWithUTF8String:decoderCodec->long_name];

       // 2. Open codec.
       if (avcodec_open2(self.videoCodecContext, decoderCodec, NULL) < 0)
       {
           // Create approp error.
           return error;
       }

       // * Audio codec setup:
       if (self.selectedAudioStreamIndex > -1)
       {
           [self setupAudioDecoder];
       }

       // Allocate a raw video frame data structure. Contains audio and video data.
       self.rawFrameData = av_frame_alloc();

       self.outputWidth = self.videoCodecContext->width;
       self.outputHeight = self.videoCodecContext->height;

       if (!isInitProcess)
       {
           // Triggering notifications in the init process won't change the UI since the object is created locally. All
           // objects which need data access to this object will not be able to get it. That's why we don't notify anyone about the changes.
           [NSNotificationCenter.defaultCenter postNotificationName:NSNotification.rtspVideoStreamSelectionChanged
                                                             object:nil userInfo: self.selectedVideoStream];

           [NSNotificationCenter.defaultCenter postNotificationName:NSNotification.rtspAudioStreamSelectionChanged
                                                             object:nil userInfo: self.selectedAudioStream];
       }

       return nil;
    }

    UPDATE 1

    The initial architecture allowed calls from any given thread, and most of the code below would run on the main thread. This was not appropriate, since opening the stream input can take several seconds, during which the main thread is blocked waiting for a network response inside FFmpeg. To solve this issue I implemented the following:

    • Creation and initial setup are only allowed on the background_thread (see code snippet "1" below).
    • Changes are allowed on the current_thread (any).
    • Termination is allowed on the current_thread (any).

    After removing the main-thread checks and the dispatch_asyncs to background threads, the leaking has stopped and I can’t reproduce the issue anymore:

    // Code that produces the issue.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // 1 - Create and do initial setup.
        // This block creates the issue.
        self.rtspObject = [[RTSPObject alloc] initWithURL: ... ];
        [self.rtspObject openInputStreamWithVideoStreamId: ...
                                            audioStreamId: ...
                                                 useFirst: ...
                                                   inInit: ...];
    });

    I still don’t understand why Xcode’s memory debugger says that this block is retained.

    Any advice or idea is welcome.

  • Merge multi-channel audio buffers into one CMSampleBuffer

    26 April 2020, by Darkwonder

    I am using FFmpeg to access an RTSP stream in my macOS app.

    



    REACHED GOALS: I have created a tone generator that creates single-channel audio and returns a CMSampleBuffer. The tone generator is used to test my audio pipeline when the video's fps and audio sample rate change.

    



    GOAL: Merge multi-channel audio buffers into a single CMSampleBuffer.

    



    Audio data lifecycle:

    



    AVCodecContext *audioContext = self.rtspStreamProvider.audioCodecContext;
    if (!audioContext) { return; }

    // Getting audio settings from FFmpeg's audio context (AVCodecContext).
    int samplesPerChannel = audioContext->frame_size;
    int frameNumber = audioContext->frame_number;
    int sampleRate = audioContext->sample_rate;
    int fps = [self.rtspStreamProvider fps];

    int calculatedSampleRate = sampleRate / fps;

    // NSLog(@"\nSamples per channel = %i, frames = %i.\nSample rate = %i, fps = %i.\ncalculatedSampleRate = %i.", samplesPerChannel, frameNumber, sampleRate, fps, calculatedSampleRate);

    // Decoding the audio data from an encoded AVPacket into an AVFrame.
    AVFrame *audioFrame = [self.rtspStreamProvider readDecodedAudioFrame];
    if (!audioFrame) { return; }

    // Extracting my audio buffers from FFmpeg's AVFrame.
    uint8_t *leftChannelAudioBufRef = audioFrame->data[0];
    uint8_t *rightChannelAudioBufRef = audioFrame->data[1];

    // Creating the CMSampleBuffer with audio data.
    CMSampleBufferRef leftSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:leftChannelAudioBufRef channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];
    // CMSampleBufferRef rightSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:packet->data[1] channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];

    if (!leftSampleBuffer) { return; }
    if (!self.audioQueue) { return; }
    if (!self.audioDelegates) { return; }

    // All audio consumers will receive audio samples via delegation.
    dispatch_sync(self.audioQueue, ^{
        NSHashTable *audioDelegates = self.audioDelegates;
        for (id<AudioDataProviderDelegate> audioDelegate in audioDelegates)
        {
            [audioDelegate provider:self didOutputAudioSampleBuffer:leftSampleBuffer];
            // [audioDelegate provider:self didOutputAudioSampleBuffer:rightSampleBuffer];
        }
    });

    CMSampleBuffer (containing the audio data) creation:

    import Foundation
    import CoreMedia

    @objc class CMSampleBufferFactory: NSObject
    {
        @objc static func createAudioSampleBufferUsing(data: UnsafeMutablePointer<UInt8>,
                                                       channelCount: UInt32,
                                                       framesCount: CMItemCount,
                                                       sampleRate: Double) -> CMSampleBuffer? {

            /* Prepare for sample buffer creation */
            var sampleBuffer: CMSampleBuffer! = nil
            var osStatus: OSStatus = -1
            var audioFormatDescription: CMFormatDescription! = nil

            var absd: AudioStreamBasicDescription! = nil
            let sampleDuration = CMTimeMake(value: 1, timescale: Int32(sampleRate))
            let presentationTimeStamp = CMTimeMake(value: 0, timescale: Int32(sampleRate))

            // NOTE: Change bytesPerFrame if you change the block buffer value types. Currently we are using Float32.
            let bytesPerFrame: UInt32 = UInt32(MemoryLayout<Float32>.size) * channelCount
            let memoryBlockByteLength = framesCount * Int(bytesPerFrame)

    //      var acl = AudioChannelLayout()
    //      acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo

            /* Sample buffer block buffer creation */
            var blockBuffer: CMBlockBuffer?

            osStatus = CMBlockBufferCreateWithMemoryBlock(
                allocator: kCFAllocatorDefault,
                memoryBlock: nil,
                blockLength: memoryBlockByteLength,
                blockAllocator: nil,
                customBlockSource: nil,
                offsetToData: 0,
                dataLength: memoryBlockByteLength,
                flags: 0,
                blockBufferOut: &blockBuffer
            )

            assert(osStatus == kCMBlockBufferNoErr)

            guard let eBlock = blockBuffer else { return nil }

            osStatus = CMBlockBufferFillDataBytes(with: 0, blockBuffer: eBlock, offsetIntoDestination: 0, dataLength: memoryBlockByteLength)
            assert(osStatus == kCMBlockBufferNoErr)

            TVBlockBufferHelper.fillAudioBlockBuffer(blockBuffer,
                                                     audioData: data,
                                                     frames: Int32(framesCount))

            /* Audio description creation */
            absd = AudioStreamBasicDescription(
                mSampleRate: sampleRate,
                mFormatID: kAudioFormatLinearPCM,
                mFormatFlags: kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsFloat,
                mBytesPerPacket: bytesPerFrame,
                mFramesPerPacket: 1,
                mBytesPerFrame: bytesPerFrame,
                mChannelsPerFrame: channelCount,
                mBitsPerChannel: 32,
                mReserved: 0
            )

            guard absd != nil else {
                print("\nCreating AudioStreamBasicDescription failed.")
                return nil
            }

            osStatus = CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault,
                                                      asbd: &absd,
                                                      layoutSize: 0,
                                                      layout: nil,
    //                                                layoutSize: MemoryLayout<AudioChannelLayout>.size,
    //                                                layout: &acl,
                                                      magicCookieSize: 0,
                                                      magicCookie: nil,
                                                      extensions: nil,
                                                      formatDescriptionOut: &audioFormatDescription)

            guard osStatus == noErr else {
                print("\nCreating CMFormatDescription failed.")
                return nil
            }

            /* Create sample buffer */
            var timingInfo = CMSampleTimingInfo(duration: sampleDuration, presentationTimeStamp: presentationTimeStamp, decodeTimeStamp: .invalid)

            osStatus = CMSampleBufferCreate(allocator: kCFAllocatorDefault,
                                            dataBuffer: eBlock,
                                            dataReady: true,
                                            makeDataReadyCallback: nil,
                                            refcon: nil,
                                            formatDescription: audioFormatDescription,
                                            sampleCount: framesCount,
                                            sampleTimingEntryCount: 1,
                                            sampleTimingArray: &timingInfo,
                                            sampleSizeEntryCount: 0, // Must be 0, 1, or numSamples.
                                            sampleSizeArray: nil, // Pointer to Int. Don't know the size. Don't know if it's bytes or bits?
                                            sampleBufferOut: &sampleBuffer)
            return sampleBuffer
        }
    }

    The CMSampleBuffer gets filled with FFmpeg's raw audio data:

    @import Foundation;
    @import CoreMedia;

    @interface BlockBufferHelper : NSObject

    + (void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
                       audioData:(uint8_t *)data
                          frames:(int)framesCount;

    @end

    #import "TVBlockBufferHelper.h"

    @implementation BlockBufferHelper

    + (void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
                       audioData:(uint8_t *)data
                          frames:(int)framesCount
    {
        // Possibly dev error.
        if (framesCount == 0) {
            NSAssert(false, @"\nfillAudioBlockBuffer/audioData/frames will not be able to fill a blockBuffer which has no frames.");
            return;
        }

        char *rawBuffer = NULL;
        size_t size = 0;

        OSStatus status = CMBlockBufferGetDataPointer(blockBuffer, 0, &size, NULL, &rawBuffer);
        if (status != noErr)
        {
            return;
        }

        memcpy(rawBuffer, data, framesCount);
    }

    @end


    The Learning Core Audio book by Chris Adamson and Kevin Avila points me toward a multi-channel mixer. The multi-channel mixer should have 2-n inputs and 1 output. I assume the output could be a buffer or something that could be put into a CMSampleBuffer for further consumption.


    This direction should lead me to AudioUnits, AUGraph, and the AudioToolbox. I don't understand all of these classes and how they work together. I have found some code snippets on SO which could help me, but most of them use AudioToolbox classes and don't use CMSampleBuffers as much as I need.


    Is there another way to merge audio buffers into a new one?


    Is creating a multi-channel mixer using AudioToolbox the right direction?
