
Media (1)
-
The pirate bay from Belgium
1 April 2013, by
Updated: April 2013
Language: French
Type: Image
Other articles (37)
-
Support for all media types
10 April 2011. Unlike many programs and other modern document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
-
List of compatible distributions
26 April 2011, by
The table below is the list of Linux distributions compatible with the automated installation script of MediaSPIP.
Distribution name / Version name / Version number
Debian / Squeeze / 6.x.x
Debian / Wheezy / 7.x.x
Debian / Jessie / 8.x.x
Ubuntu / The Precise Pangolin / 12.04 LTS
Ubuntu / The Trusty Tahr / 14.04
If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)
-
Selection of projects using MediaSPIP
2 May 2011, by
The examples below are representative of specific uses of MediaSPIP for particular projects.
MediaSPIP farm @ Infini
The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen associations of this kind. Its members (...)
On other websites (6327)
-
How to simultaneously capture the mic, stream it to an RTSP server and play it on the iPhone's speaker?
24 August 2021, by Norbert Towiański
I want to capture sound from the microphone, stream it to an RTSP server, and play it back on the iPhone's speaker after receiving the samples from the RTSP server, in a kind of loop. I use FFmpegKit and I want to use MobileVLCKit, but unfortunately the microphone is cut off when I start playing the stream.
I think I've done the first step (capturing from the microphone and sending an OutputStream to the RTSP server):


@IBAction func transmitBtnPressed(_ sender: Any) {
 ffmpeg_transmit()
}

@IBAction func recordBtnPressed(_ sender: Any) {
 switch recordingState {
 case .idle:
 recordingState = .start
 startRecording()
 recordBtn.setTitle("Started", for: .normal)
 let urlToFile = URL(fileURLWithPath: outPipePath!)
 outputStream = OutputStream(url: urlToFile, append: false)
 outputStream!.open()
 case .capturing:
 recordingState = .end
 stopRecording()
 recordBtn.setTitle("End", for: .normal)
 default:
 break
 }
}

override func viewDidLoad() {
 super.viewDidLoad()
 outPipePath = FFmpegKitConfig.registerNewFFmpegPipe()
 self.setup()
}

override func viewDidAppear(_ animated: Bool) {
 super.viewDidAppear(animated)
 setUpAuthStatus()
}

func setUpAuthStatus() {
 if AVCaptureDevice.authorizationStatus(for: AVMediaType.audio) != .authorized {
 AVCaptureDevice.requestAccess(for: AVMediaType.audio, completionHandler: { (authorized) in
 DispatchQueue.main.async {
 if authorized {
 self.setup()
 }
 }
 })
 }
}

func setup() {
 self.session.sessionPreset = AVCaptureSession.Preset.high
 
 self.recordingURL = URL(fileURLWithPath: "\(NSTemporaryDirectory() as String)/file.m4a")
 if self.fileManager.isDeletableFile(atPath: self.recordingURL!.path) {
 _ = try? self.fileManager.removeItem(atPath: self.recordingURL!.path)
 }
 
 self.assetWriter = try? AVAssetWriter(outputURL: self.recordingURL!,
 fileType: AVFileType.m4a)
 self.assetWriter!.movieFragmentInterval = CMTime.invalid
 self.assetWriter!.shouldOptimizeForNetworkUse = true
 
 let audioSettings = [
 AVFormatIDKey: kAudioFormatLinearPCM,
 AVSampleRateKey: 48000.0,
 AVNumberOfChannelsKey: 1,
 AVLinearPCMIsFloatKey: false,
 AVLinearPCMBitDepthKey: 16,
 AVLinearPCMIsBigEndianKey: false,
 AVLinearPCMIsNonInterleaved: false,
 
 ] as [String : Any]
 
 
 self.audioInput = AVAssetWriterInput(mediaType: AVMediaType.audio,
 outputSettings: audioSettings)
 
 self.audioInput?.expectsMediaDataInRealTime = true
 
 if self.assetWriter!.canAdd(self.audioInput!) {
 self.assetWriter?.add(self.audioInput!)
 }
 
 self.session.startRunning()
 
 DispatchQueue.main.async {
 self.session.beginConfiguration()
 
 self.session.commitConfiguration()
 
 let audioDevice = AVCaptureDevice.default(for: AVMediaType.audio)
 let audioIn = try? AVCaptureDeviceInput(device: audioDevice!)
 
 if self.session.canAddInput(audioIn!) {
 self.session.addInput(audioIn!)
 }
 
 if self.session.canAddOutput(self.audioOutput) {
 self.session.addOutput(self.audioOutput)
 }
 
 self.audioConnection = self.audioOutput.connection(with: AVMediaType.audio)
 }
}

func startRecording() {
 if self.assetWriter?.startWriting() != true {
 print("error: \(self.assetWriter?.error.debugDescription ?? "")")
 }
 
 self.audioOutput.setSampleBufferDelegate(self, queue: self.recordingQueue)
}

func stopRecording() {
 self.audioOutput.setSampleBufferDelegate(nil, queue: nil)
 
 self.assetWriter?.finishWriting {
 print("Saved in folder \(self.recordingURL!)")
 }
}
func captureOutput(_ captureOutput: AVCaptureOutput, didOutput
 sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
 
 if !self.isRecordingSessionStarted {
 let presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
 self.assetWriter?.startSession(atSourceTime: presentationTime)
 self.isRecordingSessionStarted = true
 recordingState = .capturing
 }
 
 var blockBuffer: CMBlockBuffer?
 var audioBufferList: AudioBufferList = AudioBufferList.init()
 
 CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, bufferListSizeNeededOut: nil, bufferListOut: &audioBufferList, bufferListSize: MemoryLayout<AudioBufferList>.size, blockBufferAllocator: nil, blockBufferMemoryAllocator: nil, flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, blockBufferOut: &blockBuffer)
 let buffers = UnsafeMutableAudioBufferListPointer(&audioBufferList)
 
 for buffer in buffers {
 let u8ptr = buffer.mData!.assumingMemoryBound(to: UInt8.self)
 let output = outputStream!.write(u8ptr, maxLength: Int(buffer.mDataByteSize))
 
 if (output == -1) {
 let error = outputStream?.streamError
 print("\(#file) > \(#function) > Error on outputStream: \(error!.localizedDescription)")
 }
 else {
 print("\(#file) > \(#function) > Data sent")
 }
 }
}

func ffmpeg_transmit() {
 
 let cmd1: String = "-f s16le -ar 48000 -ac 1 -i "
 let cmd2: String = " -probesize 32 -analyzeduration 0 -c:a libopus -application lowdelay -ac 1 -ar 48000 -f rtsp -rtsp_transport udp rtsp://localhost:18556/mystream"
 let cmd = cmd1 + outPipePath! + cmd2
 
 print(cmd)
 
 ffmpegSession = FFmpegKit.executeAsync(cmd, withExecuteCallback: { ffmpegSession in
 
 let state = ffmpegSession?.getState()
 let returnCode = ffmpegSession?.getReturnCode()
 if let returnCode = returnCode, let get = ffmpegSession?.getFailStackTrace() {
 print("FFmpeg process exited with state \(String(describing: FFmpegKitConfig.sessionState(toString: state!))) and rc \(returnCode).\(get)")
 }
 }, withLogCallback: { log in
 
 }, withStatisticsCallback: { statistics in
 
 })
}


I want to use MobileVLCKit this way:


func startStream(){
 guard let url = URL(string: "rtsp://localhost:18556/mystream") else {return}
 audioPlayer!.media = VLCMedia(url: url)

 audioPlayer!.media.addOption( "-vv")
 audioPlayer!.media.addOption( "--network-caching=10000")

 audioPlayer!.delegate = self
 audioPlayer!.audio.volume = 100

 audioPlayer!.play()

}



Could you give me some hints on how to implement that?
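
One possible direction, offered only as an untested sketch rather than a confirmed fix: on iOS, capture and playback share a single AVAudioSession, and the microphone input can be dropped when another component (here the VLC player) activates playback with a playback-only category. Configuring the shared session for simultaneous recording and playback before starting either the AVCaptureSession or the VLCMediaPlayer may keep the mic alive while routing output to the speaker. The helper name configureAudioSession below is made up for illustration.

import AVFoundation

// Untested sketch: request a category that allows recording and playback at
// the same time, routed to the built-in speaker, before starting capture or
// VLC playback.
func configureAudioSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .voiceChat,
                            options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true)
}

Calling something like this once in viewDidLoad(), before self.setup(), would be one place to try it.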


-
ffmpeg - How to change the filter parameters depending on time or frame number?
2 March 2020, by LookAndSee
Hello to all users and helpers here in this forum! Thank you, I am new and I have already found a lot of solutions.
Now I want to ask if someone can help me:
I have a movie of about 30 seconds made from 1 image.
Now I want to pixelate it depending on time or frame number, a little bit less each time.
My code so far: ffmpeg -i in.mp4 -vf scale=iw/n:ih/n,scale=n*iw:n*ih:flags=neighbor out.mp4
where n should be the frame number, 1 to 900.
This could also be t+1 for a slower change. (The asterisks were stripped by the forum, so I mean n times iw : n times ih.)
Error message:
undefined constant or missing '(' in 'n'
error when evaluating the expression 'ih/n'
maybe the expression for out_w: 'w/n' or for out_h: 'ih/n' is self-referencing
failed to configure output pad on Parsed_scale_0
error reinitializing filters!
failed to inject frame into filter network: invalid argument
error while processing the decoded data for stream #0:0
Do you have some suggestions please? Thank you in advance.
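
A hedged sketch of one way this might be done, assuming a sufficiently recent FFmpeg (5.0 or later, where the scale filter exposes the per-frame variables n and t when eval=frame is set) and assuming a 1280x720 input for the upscale step; max(900-n,1) keeps the divisor at least 1 and makes the pixelation weaker as the frame number approaches 900:

ffmpeg -i in.mp4 -vf "scale=w='ceil(iw/max(900-n,1))':h='ceil(ih/max(900-n,1))':eval=frame,scale=1280:720:flags=neighbor" out.mp4

Without eval=frame (and on older FFmpeg builds in any case) n is not defined in the scale expressions, which matches the "undefined constant" error quoted above.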
-
Merge multi-channel audio buffers into one CMSampleBuffer
26 April 2020, by Darkwonder
I am using FFmpeg to access an RTSP stream in my macOS app.



REACHED GOALS: I have created a tone generator which creates single-channel audio and returns a CMSampleBuffer. The tone generator is used to test my audio pipeline when the video's fps and audio sample rate are changed.



GOAL: The goal is to merge multi-channel audio buffers into a single CMSampleBuffer.



Audio data lifecycle:



AVCodecContext* audioContext = self.rtspStreamProvider.audioCodecContext;
 if (!audioContext) { return; }

 // Getting audio settings from FFmpeg's audio context (AVCodecContext).
 int samplesPerChannel = audioContext->frame_size;
 int frameNumber = audioContext->frame_number;
 int sampleRate = audioContext->sample_rate;
 int fps = [self.rtspStreamProvider fps];

 int calculatedSampleRate = sampleRate / fps;

 // NSLog(@"\nSamples per channel = %i, frames = %i.\nSample rate = %i, fps = %i.\ncalculatedSampleRate = %i.", samplesPerChannel, frameNumber, sampleRate, fps, calculatedSampleRate);

 // Decoding the audio data from an encoded AVPacket into an AVFrame.
 AVFrame* audioFrame = [self.rtspStreamProvider readDecodedAudioFrame];
 if (!audioFrame) { return; }

 // Extracting my audio buffers from FFmpeg's AVFrame.
 uint8_t* leftChannelAudioBufRef = audioFrame->data[0];
 uint8_t* rightChannelAudioBufRef = audioFrame->data[1];

 // Creating the CMSampleBuffer with audio data.
 CMSampleBufferRef leftSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:leftChannelAudioBufRef channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];
// CMSampleBufferRef rightSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:packet->data[1] channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];

 if (!leftSampleBuffer) { return; }
 if (!self.audioQueue) { return; }
 if (!self.audioDelegates) { return; }

 // All audio consumers will receive audio samples via delegation. 
 dispatch_sync(self.audioQueue, ^{
 NSHashTable *audioDelegates = self.audioDelegates;
 for (id<AudioDataProviderDelegate> audioDelegate in audioDelegates)
 {
 [audioDelegate provider:self didOutputAudioSampleBuffer:leftSampleBuffer];
 // [audioDelegate provider:self didOutputAudioSampleBuffer:rightSampleBuffer];
 }
 });



Creation of the CMSampleBuffer containing audio data:



import Foundation
import CoreMedia

@objc class CMSampleBufferFactory: NSObject
{

 @objc static func createAudioSampleBufferUsing(data: UnsafeMutablePointer<UInt8>,
 channelCount: UInt32,
 framesCount: CMItemCount,
 sampleRate: Double) -> CMSampleBuffer? {

 /* Prepare for sample Buffer creation */
 var sampleBuffer: CMSampleBuffer! = nil
 var osStatus: OSStatus = -1
 var audioFormatDescription: CMFormatDescription! = nil

 var absd: AudioStreamBasicDescription! = nil
 let sampleDuration = CMTimeMake(value: 1, timescale: Int32(sampleRate))
 let presentationTimeStamp = CMTimeMake(value: 0, timescale: Int32(sampleRate))

 // NOTE: Change bytesPerFrame if you change the block buffer value types. Currently we are using double.
 let bytesPerFrame: UInt32 = UInt32(MemoryLayout<Float32>.size) * channelCount
 let memoryBlockByteLength = framesCount * Int(bytesPerFrame)

// var acl = AudioChannelLayout()
// acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo

 /* Sample Buffer Block buffer creation */
 var blockBuffer: CMBlockBuffer?

 osStatus = CMBlockBufferCreateWithMemoryBlock(
 allocator: kCFAllocatorDefault,
 memoryBlock: nil,
 blockLength: memoryBlockByteLength,
 blockAllocator: nil,
 customBlockSource: nil,
 offsetToData: 0,
 dataLength: memoryBlockByteLength,
 flags: 0,
 blockBufferOut: &blockBuffer
 )

 assert(osStatus == kCMBlockBufferNoErr)

 guard let eBlock = blockBuffer else { return nil }

 osStatus = CMBlockBufferFillDataBytes(with: 0, blockBuffer: eBlock, offsetIntoDestination: 0, dataLength: memoryBlockByteLength)
 assert(osStatus == kCMBlockBufferNoErr)

 TVBlockBufferHelper.fillAudioBlockBuffer(blockBuffer,
 audioData: data,
 frames: Int32(framesCount))
 /* Audio description creations */

 absd = AudioStreamBasicDescription(
 mSampleRate: sampleRate,
 mFormatID: kAudioFormatLinearPCM,
 mFormatFlags: kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsFloat,
 mBytesPerPacket: bytesPerFrame,
 mFramesPerPacket: 1,
 mBytesPerFrame: bytesPerFrame,
 mChannelsPerFrame: channelCount,
 mBitsPerChannel: 32,
 mReserved: 0
 )

 guard absd != nil else {
 print("\nCreating AudioStreamBasicDescription Failed.")
 return nil
 }

 osStatus = CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault,
 asbd: &absd,
 layoutSize: 0,
 layout: nil,
// layoutSize: MemoryLayout<AudioChannelLayout>.size,
// layout: &acl,
 magicCookieSize: 0,
 magicCookie: nil,
 extensions: nil,
 formatDescriptionOut: &audioFormatDescription)

 guard osStatus == noErr else {
 print("\nCreating CMFormatDescription Failed.")
 return nil
 }

 /* Create sample Buffer */
 var timmingInfo = CMSampleTimingInfo(duration: sampleDuration, presentationTimeStamp: presentationTimeStamp, decodeTimeStamp: .invalid)

 osStatus = CMSampleBufferCreate(allocator: kCFAllocatorDefault,
 dataBuffer: eBlock,
 dataReady: true,
 makeDataReadyCallback: nil,
 refcon: nil,
 formatDescription: audioFormatDescription,
 sampleCount: framesCount,
 sampleTimingEntryCount: 1,
 sampleTimingArray: &timmingInfo,
 sampleSizeEntryCount: 0, // Must be 0, 1, or numSamples.
 sampleSizeArray: nil, // Pointer to Int. Don't know the size. Don't know if it's bytes or bits?
 sampleBufferOut: &sampleBuffer)
 return sampleBuffer
 }

}



The CMSampleBuffer gets filled with raw audio data from FFmpeg's data:



@import Foundation;
@import CoreMedia;

@interface BlockBufferHelper : NSObject

+(void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
 audioData:(uint8_t *)data
 frames:(int)framesCount;



@end

#import "TVBlockBufferHelper.h"

@implementation BlockBufferHelper

+(void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
 audioData:(uint8_t *)data
 frames:(int)framesCount
{
 // Possibly dev error.
 if (framesCount == 0) {
 NSAssert(false, @"\nfillAudioBlockBuffer/audioData/frames will not be able to fill an blockBuffer which has no frames.");
 return;
 }

 char *rawBuffer = NULL;

 size_t size = 0;

 OSStatus status = CMBlockBufferGetDataPointer(blockBuffer, 0, &size, NULL, &rawBuffer);
 if(status != noErr)
 {
 return;
 }

 memcpy(rawBuffer, data, framesCount);
}

@end




The Learning Core Audio book by Chris Adamson and Kevin Avila points me toward a multi-channel mixer. The multi-channel mixer should have 2-n inputs and 1 output. I assume the output could be a buffer, or something that could be put into a CMSampleBuffer for further consumption.

This direction should lead me to AudioUnits, AUGraph and the AudioToolbox. I don't understand all of these classes and how they work together. I have found some code snippets on SO which could help me, but most of them use AudioToolbox classes and don't use CMSampleBuffers as much as I need.

Is there another way to merge audio buffers into a new one?



Is creating a multi-channel mixer using AudioToolbox the right direction?
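
For the narrow case shown above, and only under the assumption that the two FFmpeg channel buffers are planar Float32 of equal length, one alternative worth sketching is to skip the mixer entirely: interleave the two channel buffers on the CPU and wrap the packed result in a CMBlockBuffer described by a 2-channel interleaved AudioStreamBasicDescription (mChannelsPerFrame = 2, mBytesPerFrame doubled). The function below is a minimal, hypothetical helper, not part of the code above.

import Foundation

// Minimal sketch: interleave two planar Float32 channels of equal length
// into one packed buffer laid out as L0 R0 L1 R1 ...
func interleave(left: UnsafePointer<Float32>,
                right: UnsafePointer<Float32>,
                frames: Int) -> [Float32] {
    var packed = [Float32](repeating: 0, count: frames * 2)
    for i in 0..<frames {
        packed[2 * i]     = left[i]   // channel 0 sample
        packed[2 * i + 1] = right[i]  // channel 1 sample
    }
    return packed
}

A mixer (for example via AudioToolbox or AVAudioEngine's mixer node) remains the more general route when the channels need actual mixing rather than just packing into one interleaved stream.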