
Other articles (46)
-
Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
-
MediaSPIP Player: potential problems
22 February 2011
The player does not work on Internet Explorer
On Internet Explorer (at least versions 7 and 8), the plugin uses the Flowplayer Flash player to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache’s mod_deflate module.
If the configuration of this Apache module contains a line that looks like the following, try removing it or commenting it out to see whether the player then works correctly: (...)
-
MediaSPIP Player: controls
26 May 2010
Mouse controls for the player
In addition to the click actions on the visible buttons of the player interface, other actions can also be performed with the mouse: Click: clicking on the video, or on the audio logo, starts or pauses playback depending on its current state; Wheel (scrolling): when the mouse is placed over the area used by the media (hover), the mouse wheel no longer has its usual page-scrolling effect, but instead decreases or (...)
On other sites (5311)
-
Merge multi-channel audio buffers into one CMSampleBuffer
26 April 2020, by Darkwonder
I am using FFmpeg to access an RTSP stream in my macOS app.



REACHED GOALS: I have created a tone generator which produces single-channel audio and returns a CMSampleBuffer. The tone generator is used to test my audio pipeline when the video's fps and audio sample rates are changed.



GOAL: Merge multi-channel audio buffers into a single CMSampleBuffer.



Audio data lifecycle:



AVCodecContext* audioContext = self.rtspStreamProvider.audioCodecContext;
 if (!audioContext) { return; }

 // Getting audio settings from FFmpeg's audio context (AVCodecContext).
 int samplesPerChannel = audioContext->frame_size;
 int frameNumber = audioContext->frame_number;
 int sampleRate = audioContext->sample_rate;
 int fps = [self.rtspStreamProvider fps];

 int calculatedSampleRate = sampleRate / fps;

 // NSLog(@"\nSamples per channel = %i, frames = %i.\nSample rate = %i, fps = %i.\ncalculatedSampleRate = %i.", samplesPerChannel, frameNumber, sampleRate, fps, calculatedSampleRate);

 // Decoding the audio data from an encoded AVPacket into an AVFrame.
 AVFrame* audioFrame = [self.rtspStreamProvider readDecodedAudioFrame];
 if (!audioFrame) { return; }

 // Extracting my audio buffers from FFmpeg's AVFrame.
 uint8_t* leftChannelAudioBufRef = audioFrame->data[0];
 uint8_t* rightChannelAudioBufRef = audioFrame->data[1];

 // Creating the CMSampleBuffer with audio data.
 CMSampleBufferRef leftSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:leftChannelAudioBufRef channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];
// CMSampleBufferRef rightSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:packet->data[1] channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];

 if (!leftSampleBuffer) { return; }
 if (!self.audioQueue) { return; }
 if (!self.audioDelegates) { return; }

 // All audio consumers will receive audio samples via delegation. 
 dispatch_sync(self.audioQueue, ^{
 NSHashTable *audioDelegates = self.audioDelegates;
 for (id<AudioDataProviderDelegate> audioDelegate in audioDelegates)
 {
 [audioDelegate provider:self didOutputAudioSampleBuffer:leftSampleBuffer];
 // [audioDelegate provider:self didOutputAudioSampleBuffer:rightSampleBuffer];
 }
 });



Creation of the CMSampleBuffer containing the audio data:



import Foundation
import CoreMedia

@objc class CMSampleBufferFactory: NSObject
{

 @objc static func createAudioSampleBufferUsing(data: UnsafeMutablePointer<UInt8>,
 channelCount: UInt32,
 framesCount: CMItemCount,
 sampleRate: Double) -> CMSampleBuffer? {

 /* Prepare for sample Buffer creation */
 var sampleBuffer: CMSampleBuffer! = nil
 var osStatus: OSStatus = -1
 var audioFormatDescription: CMFormatDescription! = nil

 var absd: AudioStreamBasicDescription! = nil
 let sampleDuration = CMTimeMake(value: 1, timescale: Int32(sampleRate))
 let presentationTimeStamp = CMTimeMake(value: 0, timescale: Int32(sampleRate))

 // NOTE: Change bytesPerFrame if you change the block buffer value types. Currently we are using double.
 let bytesPerFrame: UInt32 = UInt32(MemoryLayout<Float32>.size) * channelCount
 let memoryBlockByteLength = framesCount * Int(bytesPerFrame)

// var acl = AudioChannelLayout()
// acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo

 /* Sample Buffer Block buffer creation */
 var blockBuffer: CMBlockBuffer?

 osStatus = CMBlockBufferCreateWithMemoryBlock(
 allocator: kCFAllocatorDefault,
 memoryBlock: nil,
 blockLength: memoryBlockByteLength,
 blockAllocator: nil,
 customBlockSource: nil,
 offsetToData: 0,
 dataLength: memoryBlockByteLength,
 flags: 0,
 blockBufferOut: &blockBuffer
 )

 assert(osStatus == kCMBlockBufferNoErr)

 guard let eBlock = blockBuffer else { return nil }

 osStatus = CMBlockBufferFillDataBytes(with: 0, blockBuffer: eBlock, offsetIntoDestination: 0, dataLength: memoryBlockByteLength)
 assert(osStatus == kCMBlockBufferNoErr)

 TVBlockBufferHelper.fillAudioBlockBuffer(blockBuffer,
 audioData: data,
 frames: Int32(framesCount))
 /* Audio description creations */

 absd = AudioStreamBasicDescription(
 mSampleRate: sampleRate,
 mFormatID: kAudioFormatLinearPCM,
 mFormatFlags: kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsFloat,
 mBytesPerPacket: bytesPerFrame,
 mFramesPerPacket: 1,
 mBytesPerFrame: bytesPerFrame,
 mChannelsPerFrame: channelCount,
 mBitsPerChannel: 32,
 mReserved: 0
 )

 guard absd != nil else {
 print("\nCreating AudioStreamBasicDescription Failed.")
 return nil
 }

 osStatus = CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault,
 asbd: &absd,
 layoutSize: 0,
 layout: nil,
// layoutSize: MemoryLayout<AudioChannelLayout>.size,
// layout: &acl,
 magicCookieSize: 0,
 magicCookie: nil,
 extensions: nil,
 formatDescriptionOut: &audioFormatDescription)

 guard osStatus == noErr else {
 print("\nCreating CMFormatDescription Failed.")
 return nil
 }

 /* Create sample Buffer */
 var timingInfo = CMSampleTimingInfo(duration: sampleDuration, presentationTimeStamp: presentationTimeStamp, decodeTimeStamp: .invalid)

 osStatus = CMSampleBufferCreate(allocator: kCFAllocatorDefault,
 dataBuffer: eBlock,
 dataReady: true,
 makeDataReadyCallback: nil,
 refcon: nil,
 formatDescription: audioFormatDescription,
 sampleCount: framesCount,
 sampleTimingEntryCount: 1,
 sampleTimingArray: &timingInfo,
 sampleSizeEntryCount: 0, // Must be 0, 1, or numSamples.
 sampleSizeArray: nil, // Pointer to Int. Don't know the size. Don't know if it's bytes or bits?
 sampleBufferOut: &sampleBuffer)
 return sampleBuffer
 }

}



The CMSampleBuffer gets filled with FFmpeg's raw audio data:



@import Foundation;
@import CoreMedia;

@interface TVBlockBufferHelper : NSObject

+(void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
 audioData:(uint8_t *)data
 frames:(int)framesCount;



@end

#import "TVBlockBufferHelper.h"

@implementation TVBlockBufferHelper

+(void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
 audioData:(uint8_t *)data
 frames:(int)framesCount
{
 // Possibly dev error.
 if (framesCount == 0) {
 NSAssert(false, @"\nfillAudioBlockBuffer/audioData/frames will not be able to fill a blockBuffer which has no frames.");
 return;
 }

 char *rawBuffer = NULL;

 size_t size = 0;

 OSStatus status = CMBlockBufferGetDataPointer(blockBuffer, 0, &size, NULL, &rawBuffer);
 if(status != noErr)
 {
 return;
 }

 // Copies framesCount bytes of source data into the block buffer
 // (note: this is a byte count, not frames * bytesPerFrame).
 memcpy(rawBuffer, data, framesCount);
}

@end




The LEARNING Core Audio book by Chris Adamson and Kevin Avila points me toward a multi-channel mixer. The multi-channel mixer should have 2-n inputs and 1 output. I assume the output could be a buffer, or something that could be put into a CMSampleBuffer for further consumption.


This direction should lead me to AudioUnits, AUGraph and the AudioToolbox. I don't understand all of these classes and how they work together. I have found some code snippets on SO which could help me, but most of them use AudioToolbox classes and don't use CMSampleBuffers as much as I need.


Is there another way to merge audio buffers into a new one?



Is creating a multi-channel mixer using AudioToolbox the right direction?
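
A minimal sketch of one possible direction, interleaving the two planar channels into a single packed buffer. The helper name, the assumption of planar Float32 input (AV_SAMPLE_FMT_FLTP) with equal-length channels, and the stereo layout are assumptions on my part, not taken from the code above:

// Hypothetical helper: interleaves two planar Float32 channel buffers
// (e.g. the AVFrame data[0]/data[1] planes when the decoder outputs
// AV_SAMPLE_FMT_FLTP) into a single packed [L R L R ...] buffer.
func interleave(left: UnsafePointer<Float32>,
                right: UnsafePointer<Float32>,
                frameCount: Int) -> [Float32] {
    var interleaved = [Float32](repeating: 0, count: frameCount * 2)
    for frame in 0..<frameCount {
        interleaved[frame * 2]     = left[frame]   // channel 0 (left)
        interleaved[frame * 2 + 1] = right[frame]  // channel 1 (right)
    }
    return interleaved
}

The interleaved buffer could then back a single CMBlockBuffer created with channelCount = 2, so that bytesPerFrame and the block length cover both channels, and the commented-out kAudioChannelLayoutTag_Stereo layout would describe the packed stereo data.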


-
How to set output video length from ffmpeg
25 April 2022, by naval Hurpade
I'm creating a YouTube video downloader using ytdl-core and ffmpeg. I'm able to combine video and audio files using ffmpeg, and the files work fine, but

When I play that video, the video length (duration) is set to some random number like 212309854.
I already tried adding the -t flag to set the duration; it works, but I still see the video duration as this random number.

See the screenshot below.
In the video properties I see that no length is set.


const { spawn } = require('child_process');
// 'ffmpegInstallation.path' is assumed to resolve to an ffmpeg binary provided
// elsewhere (e.g. by an installer package); it is not defined in this snippet.

module.exports = function (audio, video, selectedAudioFormat, selectedVideoFormat, res) {
 
 const ffmpegProcess = spawn(
 ffmpegInstallation.path,
 [
 '-i',
 `pipe:3`,
 '-i',
 `pipe:4`,
 '-map',
 '0:v',
 '-map',
 '1:a',
 '-c:v',
 'copy',
 '-c:a',
 'copy',
 '-crf',
 '27',
 '-preset',
 '6',
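 // '-movflags frag_keyframe+empty_moov' produces fragmented MP4 so the MP4/MOV muxer can write to a non-seekable pipe.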
 '-movflags',
 'frag_keyframe+empty_moov',
 '-f',
 selectedVideoFormat.container,
 '-t',
 '30',
 '-loglevel',
 'info',
 '-',
 ],
 {
 stdio: ['pipe', 'pipe', 'pipe', 'pipe', 'pipe']
 }
 );
 
 
 video.pipe(ffmpegProcess.stdio[3]);
 audio.pipe(ffmpegProcess.stdio[4]);
 ffmpegProcess.stdio[1]
 .pipe(res);
 
 let ffmpegLogs = '';
 
 ffmpegProcess.stdio[2].on('data', (chunk) => {
 ffmpegLogs += chunk.toString();
 });
 
 ffmpegProcess.on('exit', (exitCode) => {
 if (exitCode === 1) {
 console.error('ERROR IN CHILD ::', ffmpegLogs);
 }
 });
};





-
FFmpeg starting manually but not with Systemd on boot
23 June 2021, by eKrajnak
On a Raspberry Pi 4 B 4GB with the official Debian 10 image, I have a /home/pi/run.sh script with the following:


#!/bin/bash
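# Capture V4L2 video and ALSA audio, encode to H.264/AAC, and mux to an MPEG-TS UDP multicast stream.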
ffmpeg -nostdin -framerate 15 -video_size 1280x720 -input_format yuyv422 -i /dev/video0 -f alsa -i hw:Device \
 -af acompressor=threshold=-14dB:ratio=9:attack=10:release=1000 -c:a aac -ac 2 -ar 48000 -ab 160k \
 -c:v libx264 -pix_fmt yuv420p -b:v 3M -bf 1 -g 20 -flags +ilme+ildct -preset ultrafast \
 -streamid 0:0x101 -streamid 1:0x100 -mpegts_pmt_start_pid 4096 -mpegts_start_pid 0x259 -metadata:s:a:0 language="" -mpegts_service_id 131 -mpegts_transport_stream_id 9217 -metadata provider_name="Doesnt matter" -metadata service_name="Doesnt matter" \
 -minrate 3500 -maxrate 3500k -bufsize 4500k -muxrate 4000k -f mpegts "udp://@239.1.67.13:1234?pkt_size=1316&bitrate=4000000&dscp=34" -loglevel debug < /dev/null > /tmp/ff3.log 2>&1



The script starts from the console without problems. It takes audio from a USB sound card and video from a USB camera and creates a UDP stream for IPTV. Then I created a systemd service:


[Unit]
Description=Streamer
After=multi-user.target sound.target network.target

[Service]
ExecStart=/home/pi/run.sh
KillMode=control-group
Restart=on-failure
TimeoutSec=1

[Install]
WantedBy=multi-user.target
Alias=streaming.service



After restarting the Raspberry Pi, the script starts, but FFmpeg hangs with the following failures in the log:


cur_dts is invalid st:0 (257) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:1 (256) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (257) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:1 (256) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (257) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:1 (256) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (257) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:1 (256) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)



and will not start streaming to the UDP target. But if I manually log in over SSH and issue systemctl stop streaming and then systemctl start streaming, FFmpeg starts successfully. What's different about service auto-start on boot?


Setting the "sleep timeout" at script begginging will not help. However, removing audio stream from FFmpeg config looks to solve auto-start on boot.