
Other articles (55)
-
Automatic backup of SPIP channels
1 April 2010
As part of setting up an open platform, it is important for hosts to have reasonably regular backups available in order to guard against any potential problem.
This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which creates a zip archive of the site's important data (the documents, the elements (...) -
Automatic installation script for MediaSPIP
25 April 2011
To work around installation difficulties caused mainly by server-side software dependencies, an "all-in-one" bash installation script was created to make this step easier on a server running a compatible Linux distribution.
You need SSH access to your server and a "root" account in order to use it, which will allow the dependencies to be installed. Contact your host if you do not have these.
The documentation on how to use the installation script (...) -
Automated installation script of MediaSPIP
25 April 2011
To overcome difficulties caused mainly by server-side software dependencies, an "all-in-one" installation script written in bash was created to facilitate this step on a server running a compatible Linux distribution.
You must have SSH access to your server and a root account in order to use the script, which will then install the dependencies. Contact your provider if you do not have these.
The documentation of the use of this installation script is available here.
The code of this (...)
On other sites (6127)
-
Merge multi channel audio buffers into one CMSampleBuffer
26 April 2020, by Darkwonder
I am using FFmpeg to access an RTSP stream in my macOS app.



REACHED GOALS: I have created a tone generator which produces single-channel audio and returns a CMSampleBuffer. The tone generator is used to test my audio pipeline when the video's fps and the audio sample rate are changed.
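For illustration, a minimal sketch of what the sample-producing part of such a tone generator might look like, assuming 32-bit float samples (the makeSineSamples helper below is hypothetical, not the actual project code):

import Foundation

// Hypothetical sketch: compute one buffer's worth of a 440 Hz sine tone as Float32 samples.
// The resulting array could then be wrapped into a single-channel CMSampleBuffer.
func makeSineSamples(frequency: Double = 440.0,
                     sampleRate: Double = 44_100.0,
                     frameCount: Int) -> [Float32] {
    return (0..<frameCount).map { n in
        Float32(sin(2.0 * Double.pi * frequency * Double(n) / sampleRate))
    }
}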



GOAL: Merge multi-channel audio buffers into a single CMSampleBuffer.



Audio data lifecycle:



AVCodecContext* audioContext = self.rtspStreamProvider.audioCodecContext;
 if (!audioContext) { return; }

 // Getting audio settings from FFmpeg's audio context (AVCodecContext).
 int samplesPerChannel = audioContext->frame_size;
 int frameNumber = audioContext->frame_number;
 int sampleRate = audioContext->sample_rate;
 int fps = [self.rtspStreamProvider fps];

 int calculatedSampleRate = sampleRate / fps;

 // NSLog(@"\nSamples per channel = %i, frames = %i.\nSample rate = %i, fps = %i.\ncalculatedSampleRate = %i.", samplesPerChannel, frameNumber, sampleRate, fps, calculatedSampleRate);

 // Decoding the audio data from an encoded AVPacket into an AVFrame.
 AVFrame* audioFrame = [self.rtspStreamProvider readDecodedAudioFrame];
 if (!audioFrame) { return; }

 // Extracting my audio buffers from FFmpeg's AVFrame.
 uint8_t* leftChannelAudioBufRef = audioFrame->data[0];
 uint8_t* rightChannelAudioBufRef = audioFrame->data[1];

 // Creating the CMSampleBuffer with audio data.
 CMSampleBufferRef leftSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:leftChannelAudioBufRef channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];
// CMSampleBufferRef rightSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:rightChannelAudioBufRef channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];

 if (!leftSampleBuffer) { return; }
 if (!self.audioQueue) { return; }
 if (!self.audioDelegates) { return; }

 // All audio consumers will receive audio samples via delegation. 
 dispatch_sync(self.audioQueue, ^{
 NSHashTable *audioDelegates = self.audioDelegates;
 for (id<AudioDataProviderDelegate> audioDelegate in audioDelegates)
 {
 [audioDelegate provider:self didOutputAudioSampleBuffer:leftSampleBuffer];
 // [audioDelegate provider:self didOutputAudioSampleBuffer:rightSampleBuffer];
 }
 });
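One way to merge the two planar channels above into a single buffer, assuming the decoder outputs 32-bit float planar audio (AV_SAMPLE_FMT_FLTP), is to interleave them frame by frame before building a stereo CMSampleBuffer; the raw uint8_t pointers from audioFrame->data would first be rebound to Float32. A minimal sketch (interleavePlanarChannels is a hypothetical helper, not part of the project):

import Foundation

// Hypothetical sketch: interleave two planar Float32 channel buffers into one
// L/R-interleaved buffer of frameCount * 2 samples.
func interleavePlanarChannels(left: UnsafePointer<Float32>,
                              right: UnsafePointer<Float32>,
                              frameCount: Int) -> [Float32] {
    var interleaved = [Float32](repeating: 0, count: frameCount * 2)
    for frame in 0..<frameCount {
        interleaved[frame * 2]     = left[frame]   // channel 0 (left)
        interleaved[frame * 2 + 1] = right[frame]  // channel 1 (right)
    }
    return interleaved
}

The interleaved buffer could then be handed to the sample buffer factory below with channelCount set to 2, so that bytesPerFrame and the block buffer length account for both channels.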



Creation of the CMSampleBuffer containing the audio data:



import Foundation
import CoreMedia

@objc class CMSampleBufferFactory: NSObject
{

 @objc static func createAudioSampleBufferUsing(data: UnsafeMutablePointer<UInt8>,
 channelCount: UInt32,
 framesCount: CMItemCount,
 sampleRate: Double) -> CMSampleBuffer? {

 /* Prepare for sample Buffer creation */
 var sampleBuffer: CMSampleBuffer! = nil
 var osStatus: OSStatus = -1
 var audioFormatDescription: CMFormatDescription! = nil

 var absd: AudioStreamBasicDescription! = nil
 let sampleDuration = CMTimeMake(value: 1, timescale: Int32(sampleRate))
 let presentationTimeStamp = CMTimeMake(value: 0, timescale: Int32(sampleRate))

 // NOTE: Change bytesPerFrame if you change the block buffer value types. Currently we are using Float32.
 let bytesPerFrame: UInt32 = UInt32(MemoryLayout<Float32>.size) * channelCount
 let memoryBlockByteLength = framesCount * Int(bytesPerFrame)

// var acl = AudioChannelLayout()
// acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo

 /* Sample Buffer Block buffer creation */
 var blockBuffer: CMBlockBuffer?

 osStatus = CMBlockBufferCreateWithMemoryBlock(
 allocator: kCFAllocatorDefault,
 memoryBlock: nil,
 blockLength: memoryBlockByteLength,
 blockAllocator: nil,
 customBlockSource: nil,
 offsetToData: 0,
 dataLength: memoryBlockByteLength,
 flags: 0,
 blockBufferOut: &blockBuffer
 )

 assert(osStatus == kCMBlockBufferNoErr)

 guard let eBlock = blockBuffer else { return nil }

 osStatus = CMBlockBufferFillDataBytes(with: 0, blockBuffer: eBlock, offsetIntoDestination: 0, dataLength: memoryBlockByteLength)
 assert(osStatus == kCMBlockBufferNoErr)

 TVBlockBufferHelper.fillAudioBlockBuffer(eBlock,
 audioData: data,
 frames: Int32(framesCount))
 /* Audio description creations */

 absd = AudioStreamBasicDescription(
 mSampleRate: sampleRate,
 mFormatID: kAudioFormatLinearPCM,
 mFormatFlags: kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsFloat,
 mBytesPerPacket: bytesPerFrame,
 mFramesPerPacket: 1,
 mBytesPerFrame: bytesPerFrame,
 mChannelsPerFrame: channelCount,
 mBitsPerChannel: 32,
 mReserved: 0
 )

 guard absd != nil else {
 print("\nCreating AudioStreamBasicDescription Failed.")
 return nil
 }

 osStatus = CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault,
 asbd: &absd,
 layoutSize: 0,
 layout: nil,
// layoutSize: MemoryLayout<AudioChannelLayout>.size,
// layout: &acl,
 magicCookieSize: 0,
 magicCookie: nil,
 extensions: nil,
 formatDescriptionOut: &audioFormatDescription)

 guard osStatus == noErr else {
 print("\nCreating CMFormatDescription Failed.")
 return nil
 }

 /* Create sample Buffer */
 var timingInfo = CMSampleTimingInfo(duration: sampleDuration, presentationTimeStamp: presentationTimeStamp, decodeTimeStamp: .invalid)

 osStatus = CMSampleBufferCreate(allocator: kCFAllocatorDefault,
 dataBuffer: eBlock,
 dataReady: true,
 makeDataReadyCallback: nil,
 refcon: nil,
 formatDescription: audioFormatDescription,
 sampleCount: framesCount,
 sampleTimingEntryCount: 1,
 sampleTimingArray: &timingInfo,
 sampleSizeEntryCount: 0, // Must be 0, 1, or numSamples.
 sampleSizeArray: nil, // Pointer to per-sample sizes, in bytes; nil because no sizes are provided here.
 sampleBufferOut: &sampleBuffer)
 return sampleBuffer
 }

}
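A possible usage sketch of the factory above for a two-channel buffer, using stand-in sample data (interleavePlanarChannels is the hypothetical helper sketched earlier; the values and buffer length are placeholders):

import CoreMedia

// Hypothetical usage: build a 2-channel CMSampleBuffer from two planar Float32 channels.
let frames = 1024
let left = [Float32](repeating: 0.25, count: frames)   // stand-in left-channel data
let right = [Float32](repeating: -0.25, count: frames) // stand-in right-channel data

var interleaved = left.withUnsafeBufferPointer { l in
    right.withUnsafeBufferPointer { r in
        interleavePlanarChannels(left: l.baseAddress!, right: r.baseAddress!, frameCount: frames)
    }
}

let stereoBuffer: CMSampleBuffer? = interleaved.withUnsafeMutableBytes { raw -> CMSampleBuffer? in
    guard let base = raw.baseAddress else { return nil }
    return CMSampleBufferFactory.createAudioSampleBufferUsing(
        data: base.assumingMemoryBound(to: UInt8.self),
        channelCount: 2,
        framesCount: frames,
        sampleRate: 44_100.0)
}
// stereoBuffer could then be delivered to the audio delegates.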



The CMSampleBuffer is filled with FFmpeg's raw audio data:



@import Foundation;
@import CoreMedia;

@interface TVBlockBufferHelper : NSObject

+(void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
 audioData:(uint8_t *)data
 frames:(int)framesCount;



@end

#import "TVBlockBufferHelper.h"

@implementation TVBlockBufferHelper

+(void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
 audioData:(uint8_t *)data
 frames:(int)framesCount
{
 // Possibly dev error.
 if (framesCount == 0) {
 NSAssert(false, @"\nfillAudioBlockBuffer/audioData/frames cannot fill a block buffer that has no frames.");
 return;
 }

 char *rawBuffer = NULL;

 size_t size = 0;

 OSStatus status = CMBlockBufferGetDataPointer(blockBuffer, 0, &size, NULL, &rawBuffer);
 if(status != noErr)
 {
 return;
 }

 // Copy the full block buffer length (framesCount * bytes per frame), not just framesCount bytes.
 memcpy(rawBuffer, data, size);
}

@end




The Learning Core Audio book by Chris Adamson and Kevin Avila points me toward a multi-channel mixer.
The multi-channel mixer should have 2-n inputs and 1 output. I assume the output could be a buffer, or something that could be put into a CMSampleBuffer for further consumption.


This direction should lead me to AudioUnits, AUGraph and the AudioToolbox. I don't understand all of these classes or how they work together. I have found some code snippets on SO which could help me, but most of them use AudioToolbox classes and don't use CMSampleBuffers as much as I need.


Is there another way to merge audio buffers into a new one?



Is creating a multi-channel mixer using AudioToolbox the right direction?


-
Data Privacy Day 2020
27 January 2020, by Matthieu Aubry — Privacy -
FFmpeg mix audio clips at given times into a main audio file
5 May 2020, by Aramil
I have been recording small audio clips for an audiobook. I have the start time of each one in seconds. The music is, let's say, 60 minutes long. I am thinking of creating a silent audio file of the same duration as the music, but how can I add each clip at its given start time? It does not matter if the clips overlap. I tried using concat and inpoint without the blank file and the output was empty (I am using WAV files), which is why I had the idea of using a master blank file as a base.



If possible, I would really appreciate an example.



Thanks
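One possible approach, sketched as an ffmpeg command line (file names and start times are placeholders, and the clips are assumed to be stereo WAV files): each clip is shifted to its start time with the adelay filter, then everything is mixed over the silent base track with amix.

# Optional: generate the 60-minute silent base track first.
ffmpeg -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -t 3600 silence.wav

# Shift each clip to its start time (in milliseconds), then mix over the base track.
# Placeholder timings: clip1 starts at 12 s, clip2 at 95 s.
ffmpeg -i silence.wav -i clip1.wav -i clip2.wav \
  -filter_complex "[1:a]adelay=12000|12000[a1];[2:a]adelay=95000|95000[a2];[0:a][a1][a2]amix=inputs=3:duration=first[mixed]" \
  -map "[mixed]" output.wav

Note that amix scales each input down to avoid clipping, so the mixed clips may come out quieter and may need a volume adjustment afterwards.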