
Other articles (92)
-
MediaSPIP 0.1 Beta version
25 April 2011 — MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...) -
HTML5 audio and video support
13 April 2011 — MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
APPENDIX: Plugins used specifically for the farm
5 March 2010 — The central/master site of the farm needs several plugins in addition to those of the channels in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a pooled instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
On other sites (7101)
-
Merge multi channel audio buffers into one CMSampleBuffer
26 April 2020, by Darkwonder — I am using FFmpeg to access an RTSP stream in my macOS app.



REACHED GOALS: I have created a tone generator that produces single-channel audio and returns a CMSampleBuffer. The tone generator is used to test my audio pipeline when the video's fps and the audio sample rate are changed.

GOAL: The goal is to merge multi-channel audio buffers into a single CMSampleBuffer.



Audio data lifecycle:



AVCodecContext *audioContext = self.rtspStreamProvider.audioCodecContext;
if (!audioContext) { return; }

// Get the audio settings from FFmpeg's audio context (AVCodecContext).
int samplesPerChannel = audioContext->frame_size;
int frameNumber = audioContext->frame_number;
int sampleRate = audioContext->sample_rate;
int fps = [self.rtspStreamProvider fps];

// Number of audio samples that span one video frame.
int calculatedSampleRate = sampleRate / fps;

// NSLog(@"\nSamples per channel = %i, frames = %i.\nSample rate = %i, fps = %i.\ncalculatedSampleRate = %i.", samplesPerChannel, frameNumber, sampleRate, fps, calculatedSampleRate);

// Decode the audio data from an encoded AVPacket into an AVFrame.
AVFrame *audioFrame = [self.rtspStreamProvider readDecodedAudioFrame];
if (!audioFrame) { return; }

// Extract the audio buffers from FFmpeg's AVFrame.
// The decoder outputs planar audio, so each channel lives in its own plane.
uint8_t *leftChannelAudioBufRef = audioFrame->data[0];
uint8_t *rightChannelAudioBufRef = audioFrame->data[1];

// Create the CMSampleBuffer with the audio data.
CMSampleBufferRef leftSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:leftChannelAudioBufRef channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];
// CMSampleBufferRef rightSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:rightChannelAudioBufRef channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];

if (!leftSampleBuffer) { return; }
if (!self.audioQueue) { return; }
if (!self.audioDelegates) { return; }

// All audio consumers receive audio samples via delegation.
dispatch_sync(self.audioQueue, ^{
    NSHashTable *audioDelegates = self.audioDelegates;
    for (id<AudioDataProviderDelegate> audioDelegate in audioDelegates)
    {
        [audioDelegate provider:self didOutputAudioSampleBuffer:leftSampleBuffer];
        // [audioDelegate provider:self didOutputAudioSampleBuffer:rightSampleBuffer];
    }
});
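
Since the decoder returns planar audio (one plane per channel), one way I could imagine merging the channels is to interleave the two planes sample by sample into the packed layout the factory below describes (mBytesPerFrame = 4 * channelCount). A minimal sketch of that idea, assuming the planes hold Float32 samples; the helper name is mine, not part of the project:

import Foundation

// Hypothetical helper: interleaves two mono Float32 planes into one
// packed stereo buffer (L R L R ...) that a 2-channel CMSampleBuffer
// could wrap. Assumes both planes contain `frameCount` samples.
func interleaveStereo(left: UnsafePointer<Float32>,
                      right: UnsafePointer<Float32>,
                      frameCount: Int) -> [Float32] {
    var interleaved = [Float32](repeating: 0, count: frameCount * 2)
    for frame in 0..<frameCount {
        interleaved[frame * 2]     = left[frame]   // left sample first
        interleaved[frame * 2 + 1] = right[frame]  // then the right sample
    }
    return interleaved
}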



Creation of the CMSampleBuffer containing the audio data:



import Foundation
import CoreMedia

@objc class CMSampleBufferFactory: NSObject
{

    @objc static func createAudioSampleBufferUsing(data: UnsafeMutablePointer<UInt8>,
                                                   channelCount: UInt32,
                                                   framesCount: CMItemCount,
                                                   sampleRate: Double) -> CMSampleBuffer? {

        /* Prepare for sample buffer creation */
        var sampleBuffer: CMSampleBuffer! = nil
        var osStatus: OSStatus = -1
        var audioFormatDescription: CMFormatDescription! = nil

        var absd: AudioStreamBasicDescription! = nil
        let sampleDuration = CMTimeMake(value: 1, timescale: Int32(sampleRate))
        let presentationTimeStamp = CMTimeMake(value: 0, timescale: Int32(sampleRate))

        // NOTE: Change bytesPerFrame if you change the block buffer value types. Currently Float32 is used.
        let bytesPerFrame: UInt32 = UInt32(MemoryLayout<Float32>.size) * channelCount
        let memoryBlockByteLength = framesCount * Int(bytesPerFrame)

//        var acl = AudioChannelLayout()
//        acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo

        /* Create the sample buffer's block buffer */
        var blockBuffer: CMBlockBuffer?

        osStatus = CMBlockBufferCreateWithMemoryBlock(
            allocator: kCFAllocatorDefault,
            memoryBlock: nil,
            blockLength: memoryBlockByteLength,
            blockAllocator: nil,
            customBlockSource: nil,
            offsetToData: 0,
            dataLength: memoryBlockByteLength,
            flags: 0,
            blockBufferOut: &blockBuffer
        )

        assert(osStatus == kCMBlockBufferNoErr)

        guard let eBlock = blockBuffer else { return nil }

        osStatus = CMBlockBufferFillDataBytes(with: 0, blockBuffer: eBlock, offsetIntoDestination: 0, dataLength: memoryBlockByteLength)
        assert(osStatus == kCMBlockBufferNoErr)

        BlockBufferHelper.fillAudioBlockBuffer(eBlock,
                                               audioData: data,
                                               frames: Int32(framesCount))

        /* Create the audio format description */

        absd = AudioStreamBasicDescription(
            mSampleRate: sampleRate,
            mFormatID: kAudioFormatLinearPCM,
            mFormatFlags: kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsFloat,
            mBytesPerPacket: bytesPerFrame,
            mFramesPerPacket: 1,
            mBytesPerFrame: bytesPerFrame,
            mChannelsPerFrame: channelCount,
            mBitsPerChannel: 32,
            mReserved: 0
        )

        guard absd != nil else {
            print("\nCreating AudioStreamBasicDescription failed.")
            return nil
        }

        osStatus = CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault,
                                                  asbd: &absd,
                                                  layoutSize: 0,
                                                  layout: nil,
//                                                layoutSize: MemoryLayout<AudioChannelLayout>.size,
//                                                layout: &acl,
                                                  magicCookieSize: 0,
                                                  magicCookie: nil,
                                                  extensions: nil,
                                                  formatDescriptionOut: &audioFormatDescription)

        guard osStatus == noErr else {
            print("\nCreating CMFormatDescription failed.")
            return nil
        }

        /* Create the sample buffer */
        var timingInfo = CMSampleTimingInfo(duration: sampleDuration, presentationTimeStamp: presentationTimeStamp, decodeTimeStamp: .invalid)

        osStatus = CMSampleBufferCreate(allocator: kCFAllocatorDefault,
                                        dataBuffer: eBlock,
                                        dataReady: true,
                                        makeDataReadyCallback: nil,
                                        refcon: nil,
                                        formatDescription: audioFormatDescription,
                                        sampleCount: framesCount,
                                        sampleTimingEntryCount: 1,
                                        sampleTimingArray: &timingInfo,
                                        sampleSizeEntryCount: 0, // Must be 0, 1, or numSamples.
                                        sampleSizeArray: nil, // Per-sample sizes in bytes; nil when every sample has the same size.
                                        sampleBufferOut: &sampleBuffer)
        return sampleBuffer
    }

}
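
For reference, this is how the factory could be called from Swift with packed stereo data (a sketch with dummy silence; in practice the samples would come from the interleaved FFmpeg planes above):

let frames = 1024
var samples = [Float32](repeating: 0, count: frames * 2) // packed L R L R ...
let stereoBuffer: CMSampleBuffer? = samples.withUnsafeMutableBytes { raw in
    CMSampleBufferFactory.createAudioSampleBufferUsing(
        data: raw.baseAddress!.assumingMemoryBound(to: UInt8.self),
        channelCount: 2,
        framesCount: frames,
        sampleRate: 44100)
}

The factory copies the bytes into its own block buffer, so the pointer does not need to outlive the call.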



The CMSampleBuffer is then filled with FFmpeg's raw audio data:



@import Foundation;
@import CoreMedia;

@interface BlockBufferHelper : NSObject

+ (void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
                   audioData:(uint8_t *)data
                      frames:(int)framesCount;

@end

#import "BlockBufferHelper.h"

@implementation BlockBufferHelper

+ (void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
                   audioData:(uint8_t *)data
                      frames:(int)framesCount
{
    // Most likely a developer error.
    if (framesCount == 0) {
        NSAssert(false, @"\nfillAudioBlockBuffer:audioData:frames: cannot fill a block buffer that has no frames.");
        return;
    }

    char *rawBuffer = NULL;
    size_t size = 0;

    OSStatus status = CMBlockBufferGetDataPointer(blockBuffer, 0, &size, NULL, &rawBuffer);
    if (status != noErr)
    {
        return;
    }

    // Copy one channel of Float32 samples. Copying only `framesCount` bytes
    // would transfer a quarter of the data, since each Float32 sample is 4 bytes.
    memcpy(rawBuffer, data, framesCount * sizeof(float));
}

@end




The Learning Core Audio book by Chris Adamson and Kevin Avila points me toward a multichannel mixer. The multichannel mixer should have 2 to n inputs and 1 output. I assume the output could be a buffer, or something that could be put into a CMSampleBuffer for further consumption.

This direction should lead me to AudioUnits, AUGraph and the AudioToolbox. I don't understand all of these classes and how they work together. I have found some code snippets on SO that could help me, but most of them use AudioToolbox classes and don't use CMSampleBuffers as much as I need.
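
As far as I understand, the starting point for that direction would be locating Apple's multichannel mixer unit via AudioToolbox, roughly like this (a sketch of the lookup only; wiring up the inputs and render callbacks is the part I haven't figured out):

import AudioToolbox

// Describe the multichannel mixer (2 to n inputs, 1 output) and find it.
var mixerDescription = AudioComponentDescription(
    componentType: kAudioUnitType_Mixer,
    componentSubType: kAudioUnitSubType_MultiChannelMixer,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0,
    componentFlagsMask: 0)

var mixerUnit: AudioUnit?
if let component = AudioComponentFindNext(nil, &mixerDescription) {
    // Instantiate the unit; inputs and outputs still have to be configured.
    AudioComponentInstanceNew(component, &mixerUnit)
}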


Is there another way to merge audio buffers into a new one?



Is creating a multichannel mixer using AudioToolbox the right direction?


-
How to speed up creating video mosaic with ffmpeg
7 January 2020, by DALER RAHIMOV — I'm looking to speed up my ffmpeg pipeline in some way (camera configuration, different filters, or any other ideas would be appreciated).
I have a device that captures video streams and later creates a mosaic video view from 4 cameras. The main issue I'm having is that it takes too long to create the mosaic video. There is no GPU on the device that could be used to accelerate the process, so I'm left with the camera configuration (Hikvision).
Here is what I have so far.
About 160 sec on an Intel J-1900:
- 5 min video files,
- 640*480 resolution,
- h264 encoding,
- 10 fps,
- 1024 max bitrate,
- 10 I-frame interval.

The command that I'm using:
ffmpeg -y -i 1578324600-1-stitched.mp4 -i 1578324600-1-stitched.mp4 -i 1578324600-1-stitched.mp4 -i 1578324600-1-stitched.mp4 \
-filter_complex " \
color=c=black:size=1280x720 [base]; \
[0:v] setpts=PTS-STARTPTS, scale=640x360 [cam0]; \
[1:v] setpts=PTS-STARTPTS, scale=640x360 [cam1]; \
[2:v] setpts=PTS-STARTPTS, scale=640x360 [cam2]; \
[3:v] setpts=PTS-STARTPTS, scale=640x360 [cam3]; \
[base][cam0] overlay=shortest=1:x=0:y=0 [z1]; \
[z1][cam1] overlay=shortest=1:x=640:y=0 [z2]; \
[z2][cam2] overlay=shortest=1:x=0:y=360 [z3]; \
[z3][cam3] overlay=shortest=1:x=640:y=360 \
" \
-an -c:v libx264 -x264-params keyint=10 \
-movflags faststart -preset fast -nostats -loglevel quiet -r 10.000000 mosaic.mp4
Thanks.
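One variant I have been considering (a sketch, untested, and it assumes the build's hstack/vstack filters are available) replaces the black base canvas and the four chained overlay filters with stack filters, and trades compression efficiency for encoding speed with a faster preset:
ffmpeg -y -i 1578324600-1-stitched.mp4 -i 1578324600-1-stitched.mp4 -i 1578324600-1-stitched.mp4 -i 1578324600-1-stitched.mp4 \
-filter_complex " \
[0:v] setpts=PTS-STARTPTS, scale=640x360 [cam0]; \
[1:v] setpts=PTS-STARTPTS, scale=640x360 [cam1]; \
[2:v] setpts=PTS-STARTPTS, scale=640x360 [cam2]; \
[3:v] setpts=PTS-STARTPTS, scale=640x360 [cam3]; \
[cam0][cam1] hstack [top]; \
[cam2][cam3] hstack [bottom]; \
[top][bottom] vstack \
" \
-an -c:v libx264 -x264-params keyint=10 \
-movflags faststart -preset ultrafast -nostats -loglevel quiet -r 10.000000 mosaic.mp4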
Here is the full output, as requested:
ffmpeg -y -i 1578324600-1-stitched.mp4 -i 1578324600-1-stitched.mp4 -i 1578324600-1-stitched.mp4 -i 1578324600-1-stitched.mp4 \
> -filter_complex " \
> color=c=black:size=1280x720 [base]; \
> [0:v] setpts=PTS-STARTPTS, scale=640x360 [cam0]; \
> [1:v] setpts=PTS-STARTPTS, scale=640x360 [cam1]; \
> [2:v] setpts=PTS-STARTPTS, scale=640x360 [cam2]; \
> [3:v] setpts=PTS-STARTPTS, scale=640x360 [cam3]; \
> [base][cam0] overlay=shortest=1:x=0:y=0 [z1]; \
> [z1][cam1] overlay=shortest=1:x=640:y=0 [z2]; \
> [z2][cam2] overlay=shortest=1:x=0:y=360 [z3]; \
> [z3][cam3] overlay=shortest=1:x=640:y=360 \
> " \
> -an -c:v libx264 -x264-params keyint=10 \
> -movflags faststart -preset fast -nostats -r 10.000000 mosaic.mp4
ffmpeg version 2.8.15-0ubuntu0.16.04.1 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.10) 20160609
configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv
libavutil 54. 31.100 / 54. 31.100
libavcodec 56. 60.100 / 56. 60.100
libavformat 56. 40.101 / 56. 40.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 40.101 / 5. 40.101
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.101 / 1. 2.101
libpostproc 53. 3.100 / 53. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '1578324600-1-stitched.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf56.40.101
Duration: 00:05:00.07, start: 0.000000, bitrate: 96 kb/s
Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 640x480, 95 kb/s, 10 fps, 25 tbr, 10240 tbn, 20 tbc (default)
Metadata:
handler_name : VideoHandler
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from '1578324600-1-stitched.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf56.40.101
Duration: 00:05:00.07, start: 0.000000, bitrate: 96 kb/s
Stream #1:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 640x480, 95 kb/s, 10 fps, 25 tbr, 10240 tbn, 20 tbc (default)
Metadata:
handler_name : VideoHandler
Input #2, mov,mp4,m4a,3gp,3g2,mj2, from '1578324600-1-stitched.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf56.40.101
Duration: 00:05:00.07, start: 0.000000, bitrate: 96 kb/s
Stream #2:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 640x480, 95 kb/s, 10 fps, 25 tbr, 10240 tbn, 20 tbc (default)
Metadata:
handler_name : VideoHandler
Input #3, mov,mp4,m4a,3gp,3g2,mj2, from '1578324600-1-stitched.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf56.40.101
Duration: 00:05:00.07, start: 0.000000, bitrate: 96 kb/s
Stream #3:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 640x480, 95 kb/s, 10 fps, 25 tbr, 10240 tbn, 20 tbc (default)
Metadata:
handler_name : VideoHandler
[libx264 @ 0x171c9e0] using SAR=1/1
[libx264 @ 0x171c9e0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2
[libx264 @ 0x171c9e0] profile High, level 3.1
[libx264 @ 0x171c9e0] 264 - core 148 r2643 5c65704 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=1 ref=2 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=6 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=1 keyint=10 keyint_min=1 scenecut=40 intra_refresh=0 rc_lookahead=10 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'mosaic.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf56.40.101
Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], q=-1--1, 10 fps, 10240 tbn, 10 tbc (default)
Metadata:
encoder : Lavc56.60.100 libx264
Stream mapping:
Stream #0:0 (h264) -> setpts
Stream #1:0 (h264) -> setpts
Stream #2:0 (h264) -> setpts
Stream #3:0 (h264) -> setpts
overlay -> Stream #0:0 (libx264)
Press [q] to stop, [?] for help
[mp4 @ 0x1730600] Starting second pass: moving the moov atom to the beginning of the file
frame= 3002 fps= 17 q=-1.0 Lsize= 51052kB time=00:05:00.00 bitrate=1394.1kbits/s dup=0 drop=4498
video:51017kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.068308%
[libx264 @ 0x171c9e0] frame I:301 Avg QP:15.80 size:154257
[libx264 @ 0x171c9e0] frame P:959 Avg QP:21.69 size: 5486
[libx264 @ 0x171c9e0] frame B:1742 Avg QP:22.73 size: 315
[libx264 @ 0x171c9e0] consecutive B-frames: 22.0% 0.0% 5.8% 72.2%
[libx264 @ 0x171c9e0] mb I I16..4: 9.7% 31.0% 59.4%
[libx264 @ 0x171c9e0] mb P I16..4: 0.6% 0.7% 0.2% P16..4: 14.5% 2.8% 2.4% 0.0% 0.0% skip:78.7%
[libx264 @ 0x171c9e0] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 2.7% 0.3% 0.0% direct: 1.4% skip:95.5% L0:25.9% L1:73.3% BI: 0.9%
[libx264 @ 0x171c9e0] 8x8 transform intra:31.8% inter:43.5%
[libx264 @ 0x171c9e0] coded y,uvDC,uvAC intra: 83.2% 85.2% 69.5% inter: 3.0% 6.3% 0.7%
[libx264 @ 0x171c9e0] i16 v,h,dc,p: 35% 21% 8% 36%
[libx264 @ 0x171c9e0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 27% 42% 10% 3% 2% 3% 6% 4% 5%
[libx264 @ 0x171c9e0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 25% 33% 9% 6% 5% 5% 7% 5% 6%
[libx264 @ 0x171c9e0] i8c dc,h,v,p: 44% 29% 20% 7%
[libx264 @ 0x171c9e0] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x171c9e0] ref P L0: 93.8% 6.2%
[libx264 @ 0x171c9e0] ref B L0: 91.7% 8.3%
[libx264 @ 0x171c9e0] ref B L1: 89.3% 10.7%
[libx264 @ 0x171c9e0] kb/s:1392.16 -
Revision 120059: Backport of r119426: [Salvatore] ...
25 January 2020, by cedric@… — Log: Backport of r119426: [Salvatore] paquet-suivant_precedent Export from http://trad.spip.net
Author: salvatore@…