Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (55)

  • Automatic backup of SPIP channels

    1 April 2010, by

    As part of setting up an open platform, it is important for hosts to have fairly regular backups so as to guard against any potential problem.
    To carry out this task we rely on two SPIP plugins: Saveauto, which provides a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which creates a zip archive of the site's important data (the documents, the elements (...)
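    As a rough illustration of what those two plugins automate, here is a minimal cron-able sketch shown as a dry run; all names and paths are placeholders, not part of the plugins themselves:

```shell
#!/bin/sh
# Dry-run sketch of the two backup steps; names and paths are placeholders.
# Drop the 'echo's to run it for real on a server with mysqldump and zip.
DB_NAME="spip"
SITE_DIR="/var/www/mediaspip"
BACKUP_DIR="/tmp/backups/mediaspip"
STAMP=$(date +%Y%m%d)

mkdir -p "$BACKUP_DIR"

# Step 1: regular database backup as a MySQL dump (reloadable via phpMyAdmin)
echo mysqldump --single-transaction "$DB_NAME" ">" "$BACKUP_DIR/$DB_NAME-$STAMP.sql"

# Step 2: zip archive of the site's important data (uploaded documents in IMG/)
echo zip -qr "$BACKUP_DIR/files-$STAMP.zip" "$SITE_DIR/IMG" "$SITE_DIR/config"
```

    Run from cron (e.g. nightly) this gives a dated SQL dump plus a dated zip of the document tree, which is the same pair of artifacts the plugins produce.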

  • Automatic installation script for MediaSPIP

    25 April 2011, by

    To overcome installation difficulties caused mainly by server-side software dependencies, an "all-in-one" installation script written in bash was created to ease this step on a server running a compatible Linux distribution.
    You must have SSH access to your server and a "root" account in order to use it; this is what allows the dependencies to be installed. Contact your host if you do not have these.
    The documentation on using the installation script (...)

  • Automated installation script of MediaSPIP

    25 April 2011, by

    To overcome the difficulties caused mainly by server-side software dependencies, an "all-in-one" installation script written in bash was created to facilitate this step on a server with a compatible Linux distribution.
    You must have SSH access to your server and a root account in order to use it; this is what allows the dependencies to be installed. Contact your provider if you do not have these.
    The documentation on the use of this installation script is available here.
    The code of this (...)

On other sites (6127)

  • Merge multi channel audio buffers into one CMSampleBuffer

    26 April 2020, by Darkwonder

    I am using FFmpeg to access an RTSP stream in my macOS app.

    REACHED GOALS: I have created a tone generator which creates single-channel audio and returns a CMSampleBuffer. The tone generator is used to test my audio pipeline when the video's fps and audio sample rate are changed.

    GOAL: The goal is to merge multi-channel audio buffers into a single CMSampleBuffer.

    Audio data lifecycle:

    AVCodecContext *audioContext = self.rtspStreamProvider.audioCodecContext;
    if (!audioContext) { return; }

    // Getting audio settings from FFmpeg's audio context (AVCodecContext).
    int samplesPerChannel = audioContext->frame_size;
    int frameNumber = audioContext->frame_number;
    int sampleRate = audioContext->sample_rate;
    int fps = [self.rtspStreamProvider fps];

    int calculatedSampleRate = sampleRate / fps;

    // NSLog(@"\nSamples per channel = %i, frames = %i.\nSample rate = %i, fps = %i.\ncalculatedSampleRate = %i.", samplesPerChannel, frameNumber, sampleRate, fps, calculatedSampleRate);

    // Decoding the audio data from an encoded AVPacket into an AVFrame.
    AVFrame *audioFrame = [self.rtspStreamProvider readDecodedAudioFrame];
    if (!audioFrame) { return; }

    // Extracting my audio buffers from FFmpeg's AVFrame (planar audio: one plane per channel).
    uint8_t *leftChannelAudioBufRef = audioFrame->data[0];
    uint8_t *rightChannelAudioBufRef = audioFrame->data[1];

    // Creating the CMSampleBuffer with audio data.
    CMSampleBufferRef leftSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:leftChannelAudioBufRef channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];
    // CMSampleBufferRef rightSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:rightChannelAudioBufRef channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];

    if (!leftSampleBuffer) { return; }
    if (!self.audioQueue) { return; }
    if (!self.audioDelegates) { return; }

    // All audio consumers will receive audio samples via delegation.
    dispatch_sync(self.audioQueue, ^{
        NSHashTable *audioDelegates = self.audioDelegates;
        for (id<AudioDataProviderDelegate> audioDelegate in audioDelegates)
        {
            [audioDelegate provider:self didOutputAudioSampleBuffer:leftSampleBuffer];
            // [audioDelegate provider:self didOutputAudioSampleBuffer:rightSampleBuffer];
        }
    });


    Creation of the CMSampleBuffer containing the audio data:


    import Foundation
    import CoreMedia

    @objc class CMSampleBufferFactory: NSObject
    {

        @objc static func createAudioSampleBufferUsing(data: UnsafeMutablePointer<UInt8>,
                                                 channelCount: UInt32,
                                                 framesCount: CMItemCount,
                                                 sampleRate: Double) -> CMSampleBuffer? {

            /* Prepare for sample buffer creation */
            var sampleBuffer: CMSampleBuffer! = nil
            var osStatus: OSStatus = -1
            var audioFormatDescription: CMFormatDescription! = nil

            var absd: AudioStreamBasicDescription! = nil
            let sampleDuration = CMTimeMake(value: 1, timescale: Int32(sampleRate))
            let presentationTimeStamp = CMTimeMake(value: 0, timescale: Int32(sampleRate))

            // NOTE: Change bytesPerFrame if you change the block buffer value types. Currently we are using Float32.
            let bytesPerFrame: UInt32 = UInt32(MemoryLayout<Float32>.size) * channelCount
            let memoryBlockByteLength = framesCount * Int(bytesPerFrame)

    //      var acl = AudioChannelLayout()
    //      acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo

            /* Sample buffer block buffer creation */
            var blockBuffer: CMBlockBuffer?

            osStatus = CMBlockBufferCreateWithMemoryBlock(
                allocator: kCFAllocatorDefault,
                memoryBlock: nil,
                blockLength: memoryBlockByteLength,
                blockAllocator: nil,
                customBlockSource: nil,
                offsetToData: 0,
                dataLength: memoryBlockByteLength,
                flags: 0,
                blockBufferOut: &blockBuffer
            )

            assert(osStatus == kCMBlockBufferNoErr)

            guard let eBlock = blockBuffer else { return nil }

            osStatus = CMBlockBufferFillDataBytes(with: 0, blockBuffer: eBlock, offsetIntoDestination: 0, dataLength: memoryBlockByteLength)
            assert(osStatus == kCMBlockBufferNoErr)

            TVBlockBufferHelper.fillAudioBlockBuffer(blockBuffer,
                                                     audioData: data,
                                                     frames: Int32(framesCount))
            /* Audio description creation */

            absd = AudioStreamBasicDescription(
                mSampleRate: sampleRate,
                mFormatID: kAudioFormatLinearPCM,
                mFormatFlags: kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsFloat,
                mBytesPerPacket: bytesPerFrame,
                mFramesPerPacket: 1,
                mBytesPerFrame: bytesPerFrame,
                mChannelsPerFrame: channelCount,
                mBitsPerChannel: 32,
                mReserved: 0
            )

            guard absd != nil else {
                print("\nCreating AudioStreamBasicDescription failed.")
                return nil
            }

            osStatus = CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault,
                                                      asbd: &absd,
                                                      layoutSize: 0,
                                                      layout: nil,
    //                                                layoutSize: MemoryLayout<AudioChannelLayout>.size,
    //                                                layout: &acl,
                                                      magicCookieSize: 0,
                                                      magicCookie: nil,
                                                      extensions: nil,
                                                      formatDescriptionOut: &audioFormatDescription)

            guard osStatus == noErr else {
                print("\nCreating CMFormatDescription failed.")
                return nil
            }

            /* Create sample buffer */
            var timingInfo = CMSampleTimingInfo(duration: sampleDuration, presentationTimeStamp: presentationTimeStamp, decodeTimeStamp: .invalid)

            osStatus = CMSampleBufferCreate(allocator: kCFAllocatorDefault,
                                            dataBuffer: eBlock,
                                            dataReady: true,
                                            makeDataReadyCallback: nil,
                                            refcon: nil,
                                            formatDescription: audioFormatDescription,
                                            sampleCount: framesCount,
                                            sampleTimingEntryCount: 1,
                                            sampleTimingArray: &timingInfo,
                                            sampleSizeEntryCount: 0, // Must be 0, 1, or numSamples.
                                            sampleSizeArray: nil, // nil is allowed when sampleSizeEntryCount is 0.
                                            sampleBufferOut: &sampleBuffer)
            return sampleBuffer
        }

    }


    The CMSampleBuffer gets filled with raw audio data from FFmpeg:


    @import Foundation;
    @import CoreMedia;

    @interface TVBlockBufferHelper : NSObject

    + (void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
                       audioData:(uint8_t *)data
                          frames:(int)framesCount;

    @end

    #import "TVBlockBufferHelper.h"

    @implementation TVBlockBufferHelper

    + (void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
                       audioData:(uint8_t *)data
                          frames:(int)framesCount
    {
        // Possibly a dev error.
        if (framesCount == 0) {
            NSAssert(false, @"\nfillAudioBlockBuffer/audioData/frames cannot fill a blockBuffer which has no frames.");
            return;
        }

        char *rawBuffer = NULL;
        size_t size = 0;

        OSStatus status = CMBlockBufferGetDataPointer(blockBuffer, 0, &size, NULL, &rawBuffer);
        if (status != noErr)
        {
            return;
        }

        // Copy framesCount Float32 samples; memcpy expects a byte count,
        // so multiply by the sample size (and never exceed the buffer size).
        memcpy(rawBuffer, data, MIN(size, framesCount * sizeof(Float32)));
    }

    @end


    The Learning Core Audio book by Chris Adamson and Kevin Avila points me toward a multi-channel mixer. The multi-channel mixer should have 2-n inputs and 1 output. I assume the output could be a buffer, or something else that could be put into a CMSampleBuffer for further consumption.


    This direction should lead me to AudioUnits, AUGraph and the AudioToolbox. I don't fully understand these classes and how they work together. I have found some code snippets on SO which could help me, but most of them use AudioToolbox classes and don't use CMSampleBuffers as much as I need.


    Is there another way to merge audio buffers into a new one?


    Is creating a multi-channel mixer using AudioToolbox the right direction?
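    Since the stream already passes through FFmpeg, the merge semantics can be illustrated with FFmpeg's command-line `join` filter; this sketch assumes the `ffmpeg` CLI is installed and is only an analogue of the merge, not the Core Audio route asked about:

```shell
# Generate two 1-second mono test tones (left: 440 Hz, right: 880 Hz)
ffmpeg -y -f lavfi -i "sine=frequency=440:duration=1" -ac 1 left.wav
ffmpeg -y -f lavfi -i "sine=frequency=880:duration=1" -ac 1 right.wav

# Merge the two mono streams into a single stereo stream
ffmpeg -y -i left.wav -i right.wav \
  -filter_complex "[0:a][1:a]join=inputs=2:channel_layout=stereo[a]" \
  -map "[a]" stereo.wav
```

    Here `join` interleaves the two mono inputs into one stereo stream, which is conceptually what interleaving the two AVFrame planes into one CMSampleBuffer would do.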


  • Data Privacy Day 2020

    27 January 2020, by Matthieu Aubry — Privacy

    It’s January 28th which means it’s Data Privacy Day!

    Today is an important day for the Matomo team as we reflect on our mission and our goals for 2020. This year I wanted to send a video message to all Matomo users, community members and customers. 

    Check it out (full transcript below)

    A video message from Matomo founder, Matthieu Aubry

    Privacy-friendly alternatives

    Video transcript

    Hey everyone,

    Matthieu here, Founder of Matomo.

    Today is one of the most significant days of the year for the Matomo team – it’s Data Privacy Day. And so I wanted to quickly reflect on our mission and the significance of this day. 

    In today’s busy online world where data is king, this day is an important reminder of being vigilant in protecting our personal information online.

    Matomo began 12 years ago as an open-source alternative to Google Analytics – the goal was, and still is, to give full control of data back to users.

    In 2020, we are determined to see through this commitment. We will keep building a powerful and ethical web analytics platform that focuses on privacy protection, data ownership, and provides value to all Matomo users and customers.

    And what’s fantastic is to see the rise of other quality software companies offering privacy-friendly alternatives for web browsers, search engines, file sharing, email providers, all with a similar mission. And with these products now widely available, we encourage you to take back control of all your online activities and begin this new decade with a resolution to stay safe online.

    I’ll provide you with some links below the video to check out these privacy-friendly alternatives. If you have a website and want to gain valuable insights on your visitors while owning your data, join us!

    Matomo Analytics On-Premise is and always will be free to download and install on your own servers and on your own terms.

    Also feel free to join our active community or spread the word to your friends and network about the importance of data privacy.

    Thank you all, and wishing you a great 2020!

    For more information on how Matomo protects the privacy of your users, visit: https://matomo.org/privacy/

    Do you have privacy concerns?

    What better day than today to speak up! What privacy concerns have you experienced?

  • FFmpeg mix audio clips at given time into a main audio file

    5 May 2020, by Aramil

    I have been recording small audio clips for an audio book, and I have the start time of each one in seconds. The music is, let's say, 60 minutes long. I am thinking of creating a silent audio file of the same duration as the music, but how can I add each clip at its given start time? It does not matter if the clips overlap. I tried using concat and inpoint without the blank file, and the output was empty (I am using wav files); that is why I had the idea of using a master blank file as a base.


    If possible I would really appreciate any example.


    Thanks
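    One way to approach this with filters instead of concat is to delay each clip to its start time with `adelay` (which takes milliseconds) and mix everything over a silent base with `amix`; a self-contained sketch, with generated placeholder files standing in for the real recordings and offsets:

```shell
# Create a silent mono base track (the 'master blank file' from the question)
ffmpeg -y -f lavfi -i anullsrc=r=44100:cl=mono -t 60 base.wav

# Two example clips (placeholders for the recorded clips)
ffmpeg -y -f lavfi -i "sine=frequency=440:duration=2" -ac 1 clip1.wav
ffmpeg -y -f lavfi -i "sine=frequency=660:duration=2" -ac 1 clip2.wav

# Delay each clip to its start time (5 s and 12 s here, in milliseconds),
# then mix all inputs over the base; duration=first keeps the base's length.
ffmpeg -y -i base.wav -i clip1.wav -i clip2.wav -filter_complex \
  "[1:a]adelay=5000[c1];[2:a]adelay=12000[c2];[0:a][c1][c2]amix=inputs=3:duration=first[out]" \
  -map "[out]" mixed.wav
```

    Overlapping clips are simply summed by `amix`. Note that `amix` scales each input down by the number of inputs, so a `volume` filter afterwards may be needed to compensate.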
