
Other articles (92)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010, by

    The central/master site of the farm needs several additional plugins, beyond those of the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualisation instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (7101)

  • Merge multi channel audio buffers into one CMSampleBuffer

    26 April 2020, by Darkwonder

    I am using FFmpeg to access an RTSP stream in my macOS app.

    REACHED GOALS: I have created a tone generator that creates single-channel audio and returns a CMSampleBuffer. The tone generator is used to test my audio pipeline when the video's fps and audio sample rate are changed.

    GOAL: The goal is to merge multi-channel audio buffers into a single CMSampleBuffer.

    Audio data lifecycle:
    AVCodecContext* audioContext = self.rtspStreamProvider.audioCodecContext;
    if (!audioContext) { return; }

    // Getting audio settings from FFmpeg's audio context (AVCodecContext).
    int samplesPerChannel = audioContext->frame_size;
    int frameNumber = audioContext->frame_number;
    int sampleRate = audioContext->sample_rate;
    int fps = [self.rtspStreamProvider fps];

    int calculatedSampleRate = sampleRate / fps;

    // NSLog(@"\nSamples per channel = %i, frames = %i.\nSample rate = %i, fps = %i.\ncalculatedSampleRate = %i.", samplesPerChannel, frameNumber, sampleRate, fps, calculatedSampleRate);

    // Decoding the audio data from an encoded AVPacket into an AVFrame.
    AVFrame* audioFrame = [self.rtspStreamProvider readDecodedAudioFrame];
    if (!audioFrame) { return; }

    // Extracting my audio buffers from FFmpeg's AVFrame.
    uint8_t* leftChannelAudioBufRef = audioFrame->data[0];
    uint8_t* rightChannelAudioBufRef = audioFrame->data[1];

    // Creating the CMSampleBuffer with audio data.
    CMSampleBufferRef leftSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:leftChannelAudioBufRef channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];
    // CMSampleBufferRef rightSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:packet->data[1] channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];

    if (!leftSampleBuffer) { return; }
    if (!self.audioQueue) { return; }
    if (!self.audioDelegates) { return; }

    // All audio consumers will receive audio samples via delegation.
    dispatch_sync(self.audioQueue, ^{
        NSHashTable *audioDelegates = self.audioDelegates;
        for (id<AudioDataProviderDelegate> audioDelegate in audioDelegates)
        {
            [audioDelegate provider:self didOutputAudioSampleBuffer:leftSampleBuffer];
            // [audioDelegate provider:self didOutputAudioSampleBuffer:rightSampleBuffer];
        }
    });


    Creation of the CMSampleBuffer containing audio data:


    import Foundation
    import CoreMedia

    @objc class CMSampleBufferFactory: NSObject
    {

        @objc static func createAudioSampleBufferUsing(data: UnsafeMutablePointer<UInt8>,
                                                 channelCount: UInt32,
                                                 framesCount: CMItemCount,
                                                 sampleRate: Double) -> CMSampleBuffer? {

            /* Prepare for sample buffer creation */
            var sampleBuffer: CMSampleBuffer! = nil
            var osStatus: OSStatus = -1
            var audioFormatDescription: CMFormatDescription! = nil

            var absd: AudioStreamBasicDescription! = nil
            let sampleDuration = CMTimeMake(value: 1, timescale: Int32(sampleRate))
            let presentationTimeStamp = CMTimeMake(value: 0, timescale: Int32(sampleRate))

            // NOTE: Change bytesPerFrame if you change the block buffer value types. Currently we are using Float32.
            let bytesPerFrame: UInt32 = UInt32(MemoryLayout<Float32>.size) * channelCount
            let memoryBlockByteLength = framesCount * Int(bytesPerFrame)

    //      var acl = AudioChannelLayout()
    //      acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo

            /* Sample buffer block buffer creation */
            var blockBuffer: CMBlockBuffer?

            osStatus = CMBlockBufferCreateWithMemoryBlock(
                allocator: kCFAllocatorDefault,
                memoryBlock: nil,
                blockLength: memoryBlockByteLength,
                blockAllocator: nil,
                customBlockSource: nil,
                offsetToData: 0,
                dataLength: memoryBlockByteLength,
                flags: 0,
                blockBufferOut: &blockBuffer
            )

            assert(osStatus == kCMBlockBufferNoErr)

            guard let eBlock = blockBuffer else { return nil }

            osStatus = CMBlockBufferFillDataBytes(with: 0, blockBuffer: eBlock, offsetIntoDestination: 0, dataLength: memoryBlockByteLength)
            assert(osStatus == kCMBlockBufferNoErr)

            TVBlockBufferHelper.fillAudioBlockBuffer(blockBuffer,
                                                     audioData: data,
                                                     frames: Int32(framesCount))
            /* Audio description creation */

            absd = AudioStreamBasicDescription(
                mSampleRate: sampleRate,
                mFormatID: kAudioFormatLinearPCM,
                mFormatFlags: kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsFloat,
                mBytesPerPacket: bytesPerFrame,
                mFramesPerPacket: 1,
                mBytesPerFrame: bytesPerFrame,
                mChannelsPerFrame: channelCount,
                mBitsPerChannel: 32,
                mReserved: 0
            )

            guard absd != nil else {
                print("\nCreating AudioStreamBasicDescription failed.")
                return nil
            }

            osStatus = CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault,
                                                      asbd: &absd,
                                                      layoutSize: 0,
                                                      layout: nil,
    //                                                layoutSize: MemoryLayout<AudioChannelLayout>.size,
    //                                                layout: &acl,
                                                      magicCookieSize: 0,
                                                      magicCookie: nil,
                                                      extensions: nil,
                                                      formatDescriptionOut: &audioFormatDescription)

            guard osStatus == noErr else {
                print("\nCreating CMFormatDescription failed.")
                return nil
            }

            /* Create sample buffer */
            var timingInfo = CMSampleTimingInfo(duration: sampleDuration, presentationTimeStamp: presentationTimeStamp, decodeTimeStamp: .invalid)

            osStatus = CMSampleBufferCreate(allocator: kCFAllocatorDefault,
                                            dataBuffer: eBlock,
                                            dataReady: true,
                                            makeDataReadyCallback: nil,
                                            refcon: nil,
                                            formatDescription: audioFormatDescription,
                                            sampleCount: framesCount,
                                            sampleTimingEntryCount: 1,
                                            sampleTimingArray: &timingInfo,
                                            sampleSizeEntryCount: 0, // Must be 0, 1, or numSamples.
                                            sampleSizeArray: nil, // Pointer to Int. Don't know the size. Don't know if it's bytes or bits?
                                            sampleBufferOut: &sampleBuffer)
            return sampleBuffer
        }

    }


    The CMSampleBuffer gets filled with raw audio data from FFmpeg:


    @import Foundation;
    @import CoreMedia;

    @interface TVBlockBufferHelper : NSObject

    +(void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
                      audioData:(uint8_t *)data
                         frames:(int)framesCount;

    @end

    #import "TVBlockBufferHelper.h"

    @implementation TVBlockBufferHelper

    +(void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
                      audioData:(uint8_t *)data
                         frames:(int)framesCount
    {
        // Possibly dev error.
        if (framesCount == 0) {
            NSAssert(false, @"\nfillAudioBlockBuffer/audioData/frames will not be able to fill a blockBuffer which has no frames.");
            return;
        }

        char *rawBuffer = NULL;
        size_t size = 0;

        OSStatus status = CMBlockBufferGetDataPointer(blockBuffer, 0, &size, NULL, &rawBuffer);
        if (status != noErr)
        {
            return;
        }

        // Copy whole Float32 frames (4 bytes each), not just framesCount bytes.
        memcpy(rawBuffer, data, framesCount * sizeof(Float32));
    }

    @end


    The Learning Core Audio book by Chris Adamson and Kevin Avila points me toward a multi-channel mixer. The multi-channel mixer should have 2-n inputs and 1 output. I assume the output could be a buffer, or something that could be put into a CMSampleBuffer for further consumption.


    This direction should lead me to AudioUnits, AUGraph, and the AudioToolbox. I don't understand all of these classes and how they work together. I have found some code snippets on SO that could help me, but most of them use AudioToolbox classes and don't use CMSampleBuffers as much as I need.


    Is there another way to merge audio buffers into a new one?


    Is creating a multi-channel mixer using AudioToolbox the right direction?
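    If the decoder delivers planar float samples (FFmpeg's AV_SAMPLE_FMT_FLTP, where data[0] and data[1] are separate mono planes), one way to merge the channels without a mixer is to interleave the two planes into a single buffer, then create one CMSampleBuffer with mChannelsPerFrame = 2 and bytesPerFrame doubled. A minimal C sketch of just the interleaving step (the function name is illustrative, not from the post):

```c
#include <stddef.h>

/* Interleave two planar mono channels (e.g. FFmpeg's AVFrame->data[0] and
 * data[1] in AV_SAMPLE_FMT_FLTP) into one packed L R L R ... buffer, which
 * can back a single CMSampleBuffer with mChannelsPerFrame = 2. */
static void interleave_stereo_f32(const float *left, const float *right,
                                  float *out, size_t frames)
{
    for (size_t i = 0; i < frames; i++) {
        out[2 * i]     = left[i];   /* even slots: left channel  */
        out[2 * i + 1] = right[i];  /* odd slots:  right channel */
    }
}
```

    The interleaved buffer holds frames * 2 Float32 values, so the factory's bytesPerFrame would be 8 for the stereo case; FFmpeg's data[i] pointers are uint8_t*, so cast them to float* before calling this.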


  • How to speed up creating video mosaic with ffmpeg

    7 January 2020, by DALER RAHIMOV

    I’m looking to speed up the ffmpeg pipeline in some way (camera configuration, different filters, or any other ideas would be appreciated).

    I have a device that captures video streams and later creates a mosaic video view from 4 cameras. The main issue I’m having is that it takes too long to create the mosaic video. There is no GPU on the device that could be used to accelerate the process, so I’m left with camera configuration (Hikvision).

    Here is what I have so far.

    About 160 sec on an Intel J-1900:

    - 5 min video files
    - 640*480 resolution
    - h264 encoding
    - 10 fps
    - 1024 max bitrate
    - 10 I-frame interval

    Command that I’m using:

    ffmpeg -y -i 1578324600-1-stitched.mp4 -i 1578324600-1-stitched.mp4 -i 1578324600-1-stitched.mp4 -i 1578324600-1-stitched.mp4 \
      -filter_complex " \
         color=c=black:size=1280x720 [base]; \
         [0:v] setpts=PTS-STARTPTS, scale=640x360 [cam0]; \
         [1:v] setpts=PTS-STARTPTS, scale=640x360 [cam1]; \
         [2:v] setpts=PTS-STARTPTS, scale=640x360 [cam2]; \
         [3:v] setpts=PTS-STARTPTS, scale=640x360 [cam3]; \
         [base][cam0] overlay=shortest=1:x=0:y=0  [z1]; \
         [z1][cam1] overlay=shortest=1:x=640:y=0  [z2]; \
         [z2][cam2] overlay=shortest=1:x=0:y=360  [z3]; \
         [z3][cam3] overlay=shortest=1:x=640:y=360 \
       " \
       -an -c:v libx264  -x264-params keyint=10 \
       -movflags faststart -preset fast -nostats -loglevel quiet -r 10.000000 mosaic.mp4
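    Newer FFmpeg releases (4.1 and later, which the 2.8 build shown in the log predates) offer the xstack filter, which composites all four inputs in one pass instead of three chained overlays; combined with a faster x264 preset this may cut encoding time. A sketch, with placeholder input file names:

```shell
# xstack (FFmpeg >= 4.1) builds the 2x2 grid in a single filter pass;
# layout is "x_y" per input, with w0/h0 (width/height of input 0) as offsets.
# -preset ultrafast trades output size for encoder speed on weak CPUs.
ffmpeg -y -i cam0.mp4 -i cam1.mp4 -i cam2.mp4 -i cam3.mp4 \
  -filter_complex " \
     [0:v] setpts=PTS-STARTPTS, scale=640x360 [cam0]; \
     [1:v] setpts=PTS-STARTPTS, scale=640x360 [cam1]; \
     [2:v] setpts=PTS-STARTPTS, scale=640x360 [cam2]; \
     [3:v] setpts=PTS-STARTPTS, scale=640x360 [cam3]; \
     [cam0][cam1][cam2][cam3] xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0 \
   " \
  -an -c:v libx264 -x264-params keyint=10 \
  -movflags faststart -preset ultrafast -r 10 mosaic.mp4
```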

    Thanks

    Here is the full output, as requested:

    ffmpeg -y -i 1578324600-1-stitched.mp4 -i 1578324600-1-stitched.mp4 -i 1578324600-1-stitched.mp4 -i 1578324600-1-stitched.mp4 \
    >    -filter_complex " \
    >       color=c=black:size=1280x720 [base]; \
    >       [0:v] setpts=PTS-STARTPTS, scale=640x360 [cam0]; \
    >       [1:v] setpts=PTS-STARTPTS, scale=640x360 [cam1]; \
    >       [2:v] setpts=PTS-STARTPTS, scale=640x360 [cam2]; \
    >       [3:v] setpts=PTS-STARTPTS, scale=640x360 [cam3]; \
    >       [base][cam0] overlay=shortest=1:x=0:y=0  [z1]; \
    >       [z1][cam1] overlay=shortest=1:x=640:y=0  [z2]; \
    >       [z2][cam2] overlay=shortest=1:x=0:y=360  [z3]; \
    >       [z3][cam3] overlay=shortest=1:x=640:y=360 \
    >     " \
    >     -an -c:v libx264  -x264-params keyint=10 \
    >     -movflags faststart -preset fast -nostats -r 10.000000 mosaic.mp4
    ffmpeg version 2.8.15-0ubuntu0.16.04.1 Copyright (c) 2000-2018 the FFmpeg developers
     built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.10) 20160609
     configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv
     libavutil      54. 31.100 / 54. 31.100
     libavcodec     56. 60.100 / 56. 60.100
     libavformat    56. 40.101 / 56. 40.101
     libavdevice    56.  4.100 / 56.  4.100
     libavfilter     5. 40.101 /  5. 40.101
     libavresample   2.  1.  0 /  2.  1.  0
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  2.101 /  1.  2.101
     libpostproc    53.  3.100 / 53.  3.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '1578324600-1-stitched.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf56.40.101
     Duration: 00:05:00.07, start: 0.000000, bitrate: 96 kb/s
       Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 640x480, 95 kb/s, 10 fps, 25 tbr, 10240 tbn, 20 tbc (default)
       Metadata:
         handler_name    : VideoHandler
    Input #1, mov,mp4,m4a,3gp,3g2,mj2, from '1578324600-1-stitched.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf56.40.101
     Duration: 00:05:00.07, start: 0.000000, bitrate: 96 kb/s
       Stream #1:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 640x480, 95 kb/s, 10 fps, 25 tbr, 10240 tbn, 20 tbc (default)
       Metadata:
         handler_name    : VideoHandler
    Input #2, mov,mp4,m4a,3gp,3g2,mj2, from '1578324600-1-stitched.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf56.40.101
     Duration: 00:05:00.07, start: 0.000000, bitrate: 96 kb/s
       Stream #2:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 640x480, 95 kb/s, 10 fps, 25 tbr, 10240 tbn, 20 tbc (default)
       Metadata:
         handler_name    : VideoHandler
    Input #3, mov,mp4,m4a,3gp,3g2,mj2, from '1578324600-1-stitched.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf56.40.101
     Duration: 00:05:00.07, start: 0.000000, bitrate: 96 kb/s
       Stream #3:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 640x480, 95 kb/s, 10 fps, 25 tbr, 10240 tbn, 20 tbc (default)
       Metadata:
         handler_name    : VideoHandler
    [libx264 @ 0x171c9e0] using SAR=1/1
    [libx264 @ 0x171c9e0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2
    [libx264 @ 0x171c9e0] profile High, level 3.1
    [libx264 @ 0x171c9e0] 264 - core 148 r2643 5c65704 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=1 ref=2 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=6 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=1 keyint=10 keyint_min=1 scenecut=40 intra_refresh=0 rc_lookahead=10 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to 'mosaic.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf56.40.101
       Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], q=-1--1, 10 fps, 10240 tbn, 10 tbc (default)
       Metadata:
         encoder         : Lavc56.60.100 libx264
    Stream mapping:
     Stream #0:0 (h264) -> setpts
     Stream #1:0 (h264) -> setpts
     Stream #2:0 (h264) -> setpts
     Stream #3:0 (h264) -> setpts
     overlay -> Stream #0:0 (libx264)
    Press [q] to stop, [?] for help
    [mp4 @ 0x1730600] Starting second pass: moving the moov atom to the beginning of the file
    frame= 3002 fps= 17 q=-1.0 Lsize=   51052kB time=00:05:00.00 bitrate=1394.1kbits/s dup=0 drop=4498
    video:51017kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.068308%
    [libx264 @ 0x171c9e0] frame I:301   Avg QP:15.80  size:154257
    [libx264 @ 0x171c9e0] frame P:959   Avg QP:21.69  size:  5486
    [libx264 @ 0x171c9e0] frame B:1742  Avg QP:22.73  size:   315
    [libx264 @ 0x171c9e0] consecutive B-frames: 22.0%  0.0%  5.8% 72.2%
    [libx264 @ 0x171c9e0] mb I  I16..4:  9.7% 31.0% 59.4%
    [libx264 @ 0x171c9e0] mb P  I16..4:  0.6%  0.7%  0.2%  P16..4: 14.5%  2.8%  2.4%  0.0%  0.0%    skip:78.7%
    [libx264 @ 0x171c9e0] mb B  I16..4:  0.0%  0.0%  0.0%  B16..8:  2.7%  0.3%  0.0%  direct: 1.4%  skip:95.5%  L0:25.9% L1:73.3% BI: 0.9%
    [libx264 @ 0x171c9e0] 8x8 transform intra:31.8% inter:43.5%
    [libx264 @ 0x171c9e0] coded y,uvDC,uvAC intra: 83.2% 85.2% 69.5% inter: 3.0% 6.3% 0.7%
    [libx264 @ 0x171c9e0] i16 v,h,dc,p: 35% 21%  8% 36%
    [libx264 @ 0x171c9e0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 27% 42% 10%  3%  2%  3%  6%  4%  5%
    [libx264 @ 0x171c9e0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 25% 33%  9%  6%  5%  5%  7%  5%  6%
    [libx264 @ 0x171c9e0] i8c dc,h,v,p: 44% 29% 20%  7%
    [libx264 @ 0x171c9e0] Weighted P-Frames: Y:0.0% UV:0.0%
    [libx264 @ 0x171c9e0] ref P L0: 93.8%  6.2%
    [libx264 @ 0x171c9e0] ref B L0: 91.7%  8.3%
    [libx264 @ 0x171c9e0] ref B L1: 89.3% 10.7%
    [libx264 @ 0x171c9e0] kb/s:1392.16
  • Revision 120059: Backport of r119426: [Salvatore] ...

    25 January 2020, by cedric@… — Log

    Backport of r119426: [Salvatore] paquet-suivant_precedent export from http://trad.spip.net
    Author: salvatore@…