
Other articles (37)

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (Open Office, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • List of compatible distributions

    26 April 2011

    The table below is the list of Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution name   Version name           Version number
    Debian              Squeeze                6.x.x
    Debian              Wheezy                 7.x.x
    Debian              Jessie                 8.x.x
    Ubuntu              The Precise Pangolin   12.04 LTS
    Ubuntu              The Trusty Tahr        14.04

    If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • Selection of projects using MediaSPIP

    2 May 2011

    The examples below are representative of how MediaSPIP is used in specific projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen such associations. Its members (...)

On other sites (6327)

  • How to simultaneously capture mic, stream it to RTSP server and play it on iPhone's speaker?

    24 August 2021, by Norbert Towiański

    I want to capture sound from the mic, stream it to an RTSP server, and play it back simultaneously on the iPhone's speaker after getting the samples from the RTSP server. I mean that kind of loop. I use FFmpegKit and I want to use MobileVLCKit, but unfortunately the microphone is turned off when I start playing the stream.
    I think I've done the first step (capturing from the microphone and sending it via OutputStream to the RTSP server):

    


@IBAction func transmitBtnPressed(_ sender: Any) {
    ffmpeg_transmit()
}

@IBAction func recordBtnPressed(_ sender: Any) {
    switch recordingState {
    case .idle:
        recordingState = .start
        startRecording()
        recordBtn.setTitle("Started", for: .normal)
        let urlToFile = URL(fileURLWithPath: outPipePath!)
        outputStream = OutputStream(url: urlToFile, append: false)
        outputStream!.open()
    case .capturing:
        recordingState = .end
        stopRecording()
        recordBtn.setTitle("End", for: .normal)
    default:
        break
    }
}

override func viewDidLoad() {
    super.viewDidLoad()
    outPipePath = FFmpegKitConfig.registerNewFFmpegPipe()
    self.setup()
}

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    setUpAuthStatus()
}

func setUpAuthStatus() {
    if AVCaptureDevice.authorizationStatus(for: AVMediaType.audio) != .authorized {
        AVCaptureDevice.requestAccess(for: AVMediaType.audio, completionHandler: { (authorized) in
            DispatchQueue.main.async {
                if authorized {
                    self.setup()
                }
            }
        })
    }
}

func setup() {
    self.session.sessionPreset = AVCaptureSession.Preset.high

    self.recordingURL = URL(fileURLWithPath: "\(NSTemporaryDirectory() as String)/file.m4a")
    if self.fileManager.isDeletableFile(atPath: self.recordingURL!.path) {
        _ = try? self.fileManager.removeItem(atPath: self.recordingURL!.path)
    }

    self.assetWriter = try? AVAssetWriter(outputURL: self.recordingURL!,
                                          fileType: AVFileType.m4a)
    self.assetWriter!.movieFragmentInterval = CMTime.invalid
    self.assetWriter!.shouldOptimizeForNetworkUse = true

    let audioSettings = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 48000.0,
        AVNumberOfChannelsKey: 1,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsNonInterleaved: false,
    ] as [String : Any]

    self.audioInput = AVAssetWriterInput(mediaType: AVMediaType.audio,
                                         outputSettings: audioSettings)

    self.audioInput?.expectsMediaDataInRealTime = true

    if self.assetWriter!.canAdd(self.audioInput!) {
        self.assetWriter?.add(self.audioInput!)
    }

    self.session.startRunning()

    DispatchQueue.main.async {
        self.session.beginConfiguration()

        self.session.commitConfiguration()

        let audioDevice = AVCaptureDevice.default(for: AVMediaType.audio)
        let audioIn = try? AVCaptureDeviceInput(device: audioDevice!)

        if self.session.canAddInput(audioIn!) {
            self.session.addInput(audioIn!)
        }

        if self.session.canAddOutput(self.audioOutput) {
            self.session.addOutput(self.audioOutput)
        }

        self.audioConnection = self.audioOutput.connection(with: AVMediaType.audio)
    }
}

func startRecording() {
    if self.assetWriter?.startWriting() != true {
        print("error: \(self.assetWriter?.error.debugDescription ?? "")")
    }

    self.audioOutput.setSampleBufferDelegate(self, queue: self.recordingQueue)
}

func stopRecording() {
    self.audioOutput.setSampleBufferDelegate(nil, queue: nil)

    self.assetWriter?.finishWriting {
        print("Saved in folder \(self.recordingURL!)")
    }
}

func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

    if !self.isRecordingSessionStarted {
        let presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        self.assetWriter?.startSession(atSourceTime: presentationTime)
        self.isRecordingSessionStarted = true
        recordingState = .capturing
    }

    var blockBuffer: CMBlockBuffer?
    var audioBufferList: AudioBufferList = AudioBufferList.init()

    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, bufferListSizeNeededOut: nil, bufferListOut: &audioBufferList, bufferListSize: MemoryLayout<AudioBufferList>.size, blockBufferAllocator: nil, blockBufferMemoryAllocator: nil, flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, blockBufferOut: &blockBuffer)
    let buffers = UnsafeMutableAudioBufferListPointer(&audioBufferList)

    for buffer in buffers {
        let u8ptr = buffer.mData!.assumingMemoryBound(to: UInt8.self)
        let output = outputStream!.write(u8ptr, maxLength: Int(buffer.mDataByteSize))

        if (output == -1) {
            let error = outputStream?.streamError
            print("\(#file) > \(#function) > Error on outputStream: \(error!.localizedDescription)")
        }
        else {
            print("\(#file) > \(#function) > Data sent")
        }
    }
}

func ffmpeg_transmit() {

    let cmd1: String = "-f s16le -ar 48000 -ac 1 -i "
    let cmd2: String = " -probesize 32 -analyzeduration 0 -c:a libopus -application lowdelay -ac 1 -ar 48000 -f rtsp -rtsp_transport udp rtsp://localhost:18556/mystream"
    let cmd = cmd1 + outPipePath! + cmd2

    print(cmd)

    ffmpegSession = FFmpegKit.executeAsync(cmd, withExecuteCallback: { ffmpegSession in

        let state = ffmpegSession?.getState()
        let returnCode = ffmpegSession?.getReturnCode()
        if let returnCode = returnCode, let get = ffmpegSession?.getFailStackTrace() {
            print("FFmpeg process exited with state \(String(describing: FFmpegKitConfig.sessionState(toString: state!))) and rc \(returnCode).\(get)")
        }
    }, withLogCallback: { log in

    }, withStatisticsCallback: { statistics in

    })
}


    I want to use MobileVLCKit in this way:


func startStream() {
    guard let url = URL(string: "rtsp://localhost:18556/mystream") else { return }
    audioPlayer!.media = VLCMedia(url: url)

    audioPlayer!.media.addOption("-vv")
    audioPlayer!.media.addOption("--network-caching=10000")

    audioPlayer!.delegate = self
    audioPlayer!.audio.volume = 100

    audioPlayer!.play()
}


    Could you give me some hints on how to implement that?
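    A possible direction, not part of the original question: on iOS the capture side usually goes quiet as soon as playback starts because the shared AVAudioSession's default category only allows one direction at a time. A minimal sketch, assuming the capture session and VLC player from the snippets above, is to put the session into .playAndRecord before starting either side:

import AVFoundation

// Sketch only: configure the shared audio session so that recording
// (the AVCaptureSession feeding FFmpegKit) and playback (MobileVLCKit)
// can run at the same time. The option set is an assumption, not taken
// from the original post.
func configureAudioSessionForLoopback() throws {
    let session = AVAudioSession.sharedInstance()
    // .playAndRecord keeps the microphone alive while audio is playing;
    // .defaultToSpeaker routes output to the loudspeaker instead of the receiver.
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true)
}

    Calling something like this from viewDidLoad(), before setup() and startStream(), keeps the microphone running while VLC plays; the latency of the whole FFmpegKit → RTSP → VLC loop is a separate problem.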


  • ffmpeg - How to change the filter parameters depending on time or frame number?

    2 March 2020, by LookAndSee

    Hello to all users and helpers here in this forum! Thank you, I am new and I have already found a lot of solutions.

    Now I want to ask if someone can help me:

    I have a movie of about 30 seconds made from 1 image.
    Now I want to pixelate it depending on time or frame number - every time a little bit less.
    My code so far:

    ffmpeg -i in.mp4 -vf scale=iw/n:ih/n,scale=n*iw:n*ih:flags=neighbor out.mp4

    where n should be the frame number, 1 to 900.
    This could also be t+1 for a slower change.

    (The asterisks were stripped by the formatting - I mean n*iw : n*ih.)

    Error message:

    Undefined constant or missing '(' in 'n'
    Error when evaluating the expression 'ih/n'
    Maybe the expression for out_w:'iw/n' or for out_h:'ih/n' is self-referencing.
    Failed to configure output pad on Parsed_scale_0
    Error reinitializing filters!
    Failed to inject frame into filter network: Invalid argument
    Error while processing the decoded data for stream #0:0

    Do you have some suggestions please? Thank you in advance.
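    One direction that matches the errors above, offered as a sketch rather than taken from the question: in FFmpeg's scale filter the per-frame variables n and t are only defined when the size expressions are re-evaluated for every frame, i.e. with eval=frame. Under that assumption, and with the final output size hard-coded (1920x1080 here, since after the first scale the original width is no longer available as iw), the command could look like:

    ffmpeg -i in.mp4 -vf "scale=w='max(iw/(901-n),4)':h='max(ih/(901-n),4)':eval=frame,scale=1920:1080:flags=neighbor" out.mp4

    Frame 0 is scaled down to a few pixels (strong pixelation) and frame 900 by a factor of 1 (no pixelation); flags=neighbor on the second scale keeps the blocky look when enlarging. Without eval=frame the expressions are evaluated once at initialisation, where n does not exist, which is what the "undefined constant" message complains about.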

  • Merge multi-channel audio buffers into one CMSampleBuffer

    26 April 2020, by Darkwonder

    I am using FFmpeg to access an RTSP stream in my macOS app.


    REACHED GOALS: I have created a tone generator which creates single-channel audio and returns a CMSampleBuffer. The tone generator is used to test my audio pipeline when the video's fps and audio sample rate are changed.


    GOAL: The goal is to merge multi-channel audio buffers into a single CMSampleBuffer.


    Audio data lifecycle:


AVCodecContext* audioContext = self.rtspStreamProvider.audioCodecContext;
if (!audioContext) { return; }

// Getting audio settings from FFmpegs audio context (AVCodecContext).
int samplesPerChannel = audioContext->frame_size;
int frameNumber = audioContext->frame_number;
int sampleRate = audioContext->sample_rate;
int fps = [self.rtspStreamProvider fps];

int calculatedSampleRate = sampleRate / fps;

// NSLog(@"\nSamples per channel = %i, frames = %i.\nSample rate = %i, fps = %i.\ncalculatedSampleRate = %i.", samplesPerChannel, frameNumber, sampleRate, fps, calculatedSampleRate);

// Decoding the audio data from an encoded AVPacket into an AVFrame.
AVFrame* audioFrame = [self.rtspStreamProvider readDecodedAudioFrame];
if (!audioFrame) { return; }

// Extracting my audio buffers from FFmpegs AVFrame.
uint8_t* leftChannelAudioBufRef = audioFrame->data[0];
uint8_t* rightChannelAudioBufRef = audioFrame->data[1];

// Creating the CMSampleBuffer with audio data.
CMSampleBufferRef leftSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:leftChannelAudioBufRef channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];
// CMSampleBufferRef rightSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:packet->data[1] channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];

if (!leftSampleBuffer) { return; }
if (!self.audioQueue) { return; }
if (!self.audioDelegates) { return; }

// All audio consumers will receive audio samples via delegation.
dispatch_sync(self.audioQueue, ^{
    NSHashTable *audioDelegates = self.audioDelegates;
    for (id<AudioDataProviderDelegate> audioDelegate in audioDelegates)
    {
        [audioDelegate provider:self didOutputAudioSampleBuffer:leftSampleBuffer];
        // [audioDelegate provider:self didOutputAudioSampleBuffer:rightSampleBuffer];
    }
});


    Creation of the CMSampleBuffer containing the audio data:


import Foundation
import CoreMedia

@objc class CMSampleBufferFactory: NSObject
{

    @objc static func createAudioSampleBufferUsing(data: UnsafeMutablePointer<UInt8>,
                                                   channelCount: UInt32,
                                                   framesCount: CMItemCount,
                                                   sampleRate: Double) -> CMSampleBuffer? {

        /* Prepare for sample buffer creation */
        var sampleBuffer: CMSampleBuffer! = nil
        var osStatus: OSStatus = -1
        var audioFormatDescription: CMFormatDescription! = nil

        var absd: AudioStreamBasicDescription! = nil
        let sampleDuration = CMTimeMake(value: 1, timescale: Int32(sampleRate))
        let presentationTimeStamp = CMTimeMake(value: 0, timescale: Int32(sampleRate))

        // NOTE: Change bytesPerFrame if you change the block buffer value types. Currently we are using double.
        let bytesPerFrame: UInt32 = UInt32(MemoryLayout<Float32>.size) * channelCount
        let memoryBlockByteLength = framesCount * Int(bytesPerFrame)

//      var acl = AudioChannelLayout()
//      acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo

        /* Sample buffer block buffer creation */
        var blockBuffer: CMBlockBuffer?

        osStatus = CMBlockBufferCreateWithMemoryBlock(
            allocator: kCFAllocatorDefault,
            memoryBlock: nil,
            blockLength: memoryBlockByteLength,
            blockAllocator: nil,
            customBlockSource: nil,
            offsetToData: 0,
            dataLength: memoryBlockByteLength,
            flags: 0,
            blockBufferOut: &blockBuffer
        )

        assert(osStatus == kCMBlockBufferNoErr)

        guard let eBlock = blockBuffer else { return nil }

        osStatus = CMBlockBufferFillDataBytes(with: 0, blockBuffer: eBlock, offsetIntoDestination: 0, dataLength: memoryBlockByteLength)
        assert(osStatus == kCMBlockBufferNoErr)

        TVBlockBufferHelper.fillAudioBlockBuffer(blockBuffer,
                                                 audioData: data,
                                                 frames: Int32(framesCount))
        /* Audio description creation */

        absd = AudioStreamBasicDescription(
            mSampleRate: sampleRate,
            mFormatID: kAudioFormatLinearPCM,
            mFormatFlags: kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsFloat,
            mBytesPerPacket: bytesPerFrame,
            mFramesPerPacket: 1,
            mBytesPerFrame: bytesPerFrame,
            mChannelsPerFrame: channelCount,
            mBitsPerChannel: 32,
            mReserved: 0
        )

        guard absd != nil else {
            print("\nCreating AudioStreamBasicDescription Failed.")
            return nil
        }

        osStatus = CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault,
                                                  asbd: &absd,
                                                  layoutSize: 0,
                                                  layout: nil,
//                                                layoutSize: MemoryLayout<AudioChannelLayout>.size,
//                                                layout: &acl,
                                                  magicCookieSize: 0,
                                                  magicCookie: nil,
                                                  extensions: nil,
                                                  formatDescriptionOut: &audioFormatDescription)

        guard osStatus == noErr else {
            print("\nCreating CMFormatDescription Failed.")
            return nil
        }

        /* Create sample buffer */
        var timmingInfo = CMSampleTimingInfo(duration: sampleDuration, presentationTimeStamp: presentationTimeStamp, decodeTimeStamp: .invalid)

        osStatus = CMSampleBufferCreate(allocator: kCFAllocatorDefault,
                                        dataBuffer: eBlock,
                                        dataReady: true,
                                        makeDataReadyCallback: nil,
                                        refcon: nil,
                                        formatDescription: audioFormatDescription,
                                        sampleCount: framesCount,
                                        sampleTimingEntryCount: 1,
                                        sampleTimingArray: &timmingInfo,
                                        sampleSizeEntryCount: 0, // Must be 0, 1, or numSamples.
                                        sampleSizeArray: nil, // Pointer to Int. Don't know the size. Don't know if it's bytes or bits?
                                        sampleBufferOut: &sampleBuffer)
        return sampleBuffer
    }

}


    The CMSampleBuffer gets filled with raw audio data from FFmpeg's data:


@import Foundation;
@import CoreMedia;

@interface BlockBufferHelper : NSObject

+(void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
                  audioData:(uint8_t *)data
                     frames:(int)framesCount;

@end

#import "TVBlockBufferHelper.h"

@implementation BlockBufferHelper

+(void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
                  audioData:(uint8_t *)data
                     frames:(int)framesCount
{
    // Possibly dev error.
    if (framesCount == 0) {
        NSAssert(false, @"\nfillAudioBlockBuffer/audioData/frames will not be able to fill a blockBuffer which has no frames.");
        return;
    }

    char *rawBuffer = NULL;

    size_t size = 0;

    OSStatus status = CMBlockBufferGetDataPointer(blockBuffer, 0, &size, NULL, &rawBuffer);
    if (status != noErr)
    {
        return;
    }

    memcpy(rawBuffer, data, framesCount);
}

@end


    The Learning Core Audio book by Chris Adamson and Kevin Avila points me toward a multi-channel mixer. The multi-channel mixer should have 2-n inputs and 1 output. I assume the output could be a buffer or something that could be put into a CMSampleBuffer for further consumption.


    This direction should lead me to AudioUnits, AUGraph and the AudioToolbox. I don't understand all of these classes and how they work together. I have found some code snippets on SO which could help me, but most of them use AudioToolbox classes and don't use CMSampleBuffers as much as I need.


    Is there another way to merge audio buffers into a new one?


    Is creating a multi-channel mixer using AudioToolbox the right direction?
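
    One answer direction, added here as a sketch rather than taken from the question: for merely packing a left and a right plane into one stereo CMSampleBuffer, no AudioUnit mixer is needed. FFmpeg's planar float output (data[0] and data[1]) can be interleaved into a single L R L R ... buffer and handed to the factory above with channelCount = 2; the helper name below is hypothetical:

import Foundation

// Sketch: interleave two planar Float32 channel buffers (FFmpeg
// AV_SAMPLE_FMT_FLTP, frame->data[0] / frame->data[1]) into one
// interleaved stereo buffer. The result can be passed to
// createAudioSampleBufferUsing(data:channelCount:framesCount:sampleRate:)
// with channelCount = 2, since the ASBD above already describes
// packed float samples.
func interleaveStereo(left: UnsafePointer<Float32>,
                      right: UnsafePointer<Float32>,
                      frameCount: Int) -> [Float32] {
    var interleaved = [Float32](repeating: 0, count: frameCount * 2)
    for frame in 0..<frameCount {
        interleaved[frame * 2]     = left[frame]   // left sample
        interleaved[frame * 2 + 1] = right[frame]  // right sample
    }
    return interleaved
}

    A real AudioToolbox mixer (AUGraph with kAudioUnitSubType_MultiChannelMixer) only becomes necessary if the channels have to be summed or processed, not just placed side by side in the same buffer.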
