
Media (91)

Other articles (51)

  • Customize it by adding your own logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Write a news item

    21 June 2013

    Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the form used to create a news item.
    News item creation form. For a document of type news item, the fields proposed by default are: publication date (customize the publication date) (...)

  • Enhance its visual appearance

    10 April 2011

    MediaSPIP is based on a system of themes and skeletons (squelettes). Skeletons define where information is placed on the page, defining a specific use of the platform, while themes define the overall graphic look.
    Anyone can propose a new graphic theme or skeleton and make it available to the community.

On other sites (9792)

  • Python cv2 script that scans a giant image to a video. Why do I need to pad two extra lines?

    27 April 2022, by Mahrarena

    I wrote a script that scans a giant image to make a video. Normally I just post my scripts straight to my Code Review account, but this script is ugly, needs to be refactored, implements only horizontal scrolling and, most importantly, I just fixed a bug but I don't completely understand why the fix works.

    


    Example:

    


    Original image (Google Drive)

    


    Video Output (Google Drive)

    


    As you can see from the video, everything works properly, except for the fact that I don't know why it works.

    


    Full working code

    



    

    import cv2
import numpy as np
import random
import rpack
from fractions import Fraction
from math import prod

def resize_guide(image_size, target_area):
    aspect_ratio = Fraction(*image_size).limit_denominator()
    horizontal = aspect_ratio.numerator
    vertical = aspect_ratio.denominator
    unit_length = (target_area/(horizontal*vertical))**.5
    return (int(horizontal*unit_length), int(vertical*unit_length))

fourcc = cv2.VideoWriter_fourcc(*'mp4v')
FRAME = np.zeros((1080, 1920, 3), dtype=np.uint8)

def new_frame():
    return np.ndarray.copy(FRAME)

def center(image):
    frame = new_frame()
    h, w = image.shape[:2]
    yoff = round((1080-h)/2)
    xoff = round((1920-w)/2)
    frame[yoff:yoff+h, xoff:xoff+w] = image
    return frame

def image_scanning(file, fps=60, pan_increment=64, horizontal_increment=8):
    image = cv2.imread(file)
    height, width = image.shape[:2]
    assert width*height >= 1920*1080
    video_writer = cv2.VideoWriter(file+'.mp4', fourcc, fps, (1920, 1080))
    fit_height = True
    if height < 1080:
        # cv2.resize expects integer pixel sizes, so round the scaled width
        width = round(width*1080/height)
        image = cv2.resize(image, (width, 1080), interpolation = cv2.INTER_AREA)
    aspect_ratio = width / height
    zooming_needed = False
    if 4/9 <= aspect_ratio <= 16/9:
        new_width = round(width*1080/height)
        fit = cv2.resize(image, (new_width, 1080), interpolation = cv2.INTER_AREA)
        zooming_needed = True
    
    elif 16/9 < aspect_ratio <= 32/9:
        new_height = round(height*1920/width)
        fit = cv2.resize(image, (1920, new_height), interpolation = cv2.INTER_AREA)
        fit_height = False
        zooming_needed = True
    
    centered = center(fit)
    for i in range(fps):
        video_writer.write(centered)
    if fit_height:
        xoff = round((1920 - new_width)/2)
        while xoff:
            if xoff - pan_increment >= 0:
                xoff -= pan_increment
            else:
                xoff = 0
            frame = new_frame()
            frame[0:1080, xoff:xoff+new_width] = fit
            video_writer.write(frame)
    else:
        yoff = round((1080 - new_height)/2)
        while yoff:
            if yoff - pan_increment >= 0:
                yoff -= pan_increment
            else:
                yoff = 0
            frame = new_frame()
            frame[yoff:yoff+new_height, 0:1920] = fit
            video_writer.write(frame)
    
    if zooming_needed:
        if fit_height:
            width_1, height_1 = new_width, 1080
        else:
            width_1, height_1 = 1920, new_height
        new_area = width_1 * height_1
        original_area = width * height
        area_diff = original_area - new_area
        unit_diff = area_diff / fps
        for i in range(1, fps+1):
            zoomed = cv2.resize(image, resize_guide((width_1, height_1), new_area+unit_diff*i), interpolation=cv2.INTER_AREA)
            zheight, zwidth = zoomed.shape[:2]
            zheight = min(zheight, 1080)
            zwidth = min(zwidth, 1920)
            frame = new_frame()
            frame[0:zheight, 0:zwidth] = zoomed[0:zheight, 0:zwidth]
            video_writer.write(frame)
    
    if (width - 1920) % horizontal_increment:
        new_width = ((width - 1920) // horizontal_increment + 1) * horizontal_increment + 1920
        frame = np.zeros([height, new_width, 3], dtype=np.uint8)
        frame[0:height, 0:width] = image
        width = new_width
        image = frame
    
    if height % 1080:
        new_height = (height // 1080 + 2) * 1080
        frame = np.zeros([new_height, width, 3], dtype=np.uint8)
        frame[0:height, 0:width] = image
        height = new_height - 1080
        image = frame
    
    y, x = 0, 0
    for y in range(0, height, 1080):
        for x in range(0, width-1920, horizontal_increment):
            frame = image[y:y+1080, x:x+1920]
            video_writer.write(frame)
        x = width - 1920
        frame = image[y:y+1080, x:x+1920]
        for i in range(round(fps/3)):
            video_writer.write(frame)
    cv2.destroyAllWindows()
    video_writer.release()
    del video_writer


    


    I don't know why I need to pad two extra lines instead of one, meaning if I change this:

    


    if height % 1080:
        new_height = (height // 1080 + 2) * 1080
        frame = np.zeros([new_height, width, 3], dtype=np.uint8)
        frame[0:height, 0:width] = image
        height = new_height - 1080
        image = frame


    


    To this:

    


    if height % 1080:
        new_height = (height // 1080 + 1) * 1080
        frame = np.zeros([new_height, width, 3], dtype=np.uint8)
        frame[0:height, 0:width] = image
        height = new_height
        image = frame


    


    The program raises exceptions:

    


    OpenCV: FFMPEG: tag 0x34363268/'h264' is not supported with codec id 27 and format 'mp4 / MP4 (MPEG-4 Part 14)'
    OpenCV: FFMPEG: fallback to use tag 0x31637661/'avc1'
    ---------------------------------------------------------------------------
    error                                     Traceback (most recent call last)
     in <module>
    ----> 1 image_scanning("D:/collages/91f53ebcea2a.png")

     in image_scanning(file, fps, pan_increment, horizontal_increment, fast_decrement)
        122                     x += horizontal_increment
        123                     frame = image[y:y+1080, x:x+1920]
    --> 124                     video_writer.write(frame)
        125     cv2.destroyAllWindows()
        126     video_writer.release()

    error: Unknown C++ exception from OpenCV code


    I guess it was caused by an indexing error: the last row of tiles would not have enough pixels, so padding the height of the image to a multiple of 1080 should work.
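
    For what it's worth, NumPy never raises on the slice itself: a slice that runs past the end of the array is silently clamped and simply returns a shorter tile. A minimal sketch of that behaviour, with made-up dimensions rather than the original image's:

    import numpy as np

    # Toy image: five full tile rows of 1080 pixels plus a 500-pixel remainder.
    height, width = 5 * 1080 + 500, 1920
    image = np.zeros((height, width, 3), dtype=np.uint8)

    # Slicing past the bottom edge does not raise; it returns a shorter array.
    y = (height // 1080) * 1080                    # 5400
    tile = image[y:y + 1080, 0:1920]
    print(tile.shape)                              # (500, 1920, 3), not (1080, 1920, 3)

    A tile like that no longer matches the (1920, 1080) frame size the VideoWriter was opened with, which is the kind of mismatch OpenCV's FFmpeg backend tends to reject, so the puzzle is really why one row of padding is not enough to avoid it.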


    But that's not the case: I need to pad two lines. Why is that? I really don't understand why it works.


    No, I really wrote all of it; I understand all the principles and the ideas are all mine, but there is one small problem in the implementation. I don't know why I need extra pixels at the bottom to make it work, because if I don't pad the height to a multiple of 1080, I can't get the bottom row and the lowest portion of height % 1080 pixels is lost.


    If I try to read the lowest part, the program raises exceptions even if I pad the height to a multiple of 1080. I think it is related to indexing, but I don't fully understand it; it turns out I need to pad the height and then add extra pixels on top of that, and even 1 extra pixel works.
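
    One way to make that size assumption explicit, whatever amount of padding is used, is to check the tile shape before handing it to the writer. A small sketch (the frame_ok helper below is mine, not part of the original script):

    EXPECTED = (1080, 1920)

    def frame_ok(frame):
        """True only if a tile has exactly the 1080x1920 shape the writer expects."""
        return frame.shape[:2] == EXPECTED

    # Inside the scanning loops, just before video_writer.write(frame):
    #     if not frame_ok(frame):
    #         raise ValueError(f"bad tile shape {frame.shape}")

    That turns the opaque "Unknown C++ exception" into a Python error that reports the offending tile shape.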


    I don't know why it raises exceptions or how adding extra pixels gets rid of them, but I understand everything else perfectly clearly; after all, I wrote it.


    There's a bug in my program, I don't know what caused it, and I want help debugging it; that's the entire point of the question!


  • Decoding a MediaRecorder-produced webm stream

    15 August 2019, by sgmg

    I am trying to decode a video stream from the browser using the ffmpeg API. The stream is produced by the webcam and recorded with MediaRecorder in webm format. What I ultimately need is a vector of OpenCV cv::Mat objects for further processing.

    I have written a C++ webserver using the uWebSockets library. The video stream is sent via websocket from the browser to the server once per second. On the server, I append the received data to my custom buffer and decode it with the ffmpeg API.

    If I just save the data to disk and later play it with a media player, it works fine. So whatever the browser sends is a valid video.

    I do not think I correctly understand how the custom IO should behave with network streaming, as nothing seems to be working.
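
    As a side note, before digging into the custom AVIO callbacks it can help to confirm that the accumulated bytes really are decodable as one blob. A quick sketch of that sanity check in Python with PyAV (a stand-in for the C API used here, not part of this code), assuming blob holds everything received so far:

    import io
    import av  # PyAV: Python bindings over the same FFmpeg libraries

    def decode_all(blob: bytes):
        """Decode every video frame in the buffered webm data into BGR arrays."""
        container = av.open(io.BytesIO(blob), format="webm")
        return [f.to_ndarray(format="bgr24") for f in container.decode(video=0)]

    If that works on the same buffer that the custom IO path chokes on, the problem is in the streaming IO, not in the data the browser sends.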

    The custom buffer:

    struct Buffer
    {
        std::vector<uint8_t> data;
        int currentPos = 0;
    };

    The readAVBuffer method for custom IO

    int MediaDecoder::readAVBuffer(void* opaque, uint8_t* buf, int buf_size)
    {
       MediaDecoder::Buffer* mbuf = (MediaDecoder::Buffer*)opaque;
       int count = 0;
       for(int i=0;i<buf_size;i++)
       {
           int index = i + mbuf->currentPos;
           if(index >= (int)mbuf->data.size())
           {
               break;
           }
           count++;
           buf[i] = mbuf->data.at(index);
       }
       if(count > 0) mbuf->currentPos += count;

       std::cout << "read : " << mbuf->currentPos << ", buff size:" << mbuf->data.size() << std::endl;
       if(count <= 0) return AVERROR(EAGAIN); //is this the error that should be returned? It cannot be EOF since we're not done yet, most likely
       return count;
    }

    The big decode method that's supposed to return whatever frames it could read:

    std::vector<cv::Mat> MediaDecoder::decode(const char* data, size_t length)
    {
       std::vector<cv::Mat> frames;
       //add data to the buffer
       for(size_t i=0;i<length;i++) buf.data.push_back(data[i]);
       //do not invoke the decoders until we have 1MB of data
       if(((buf.data.size() - buf.currentPos) < 1*1024*1024) && !initializedCodecs) return frames;

       std::cout << "decoding data length " << buf.data.size() << std::endl;
       if(!initializedCodecs) //initialize ffmpeg objects. Custom I/O, format, decoder, etc.
       {
           //these are just members of the class
           avioCtxPtr = std::unique_ptr<AVIOContext, avio_context_deleter>(
                       avio_alloc_context((uint8_t*)av_malloc(4096),4096,0,&buf,&readAVBuffer,nullptr,nullptr),
                       avio_context_deleter());
           if(!avioCtxPtr)
           {
               std::cerr << "Could not create IO buffer" << std::endl;
               return frames;
           }

           fmt_ctx = std::unique_ptr<AVFormatContext, avformat_context_deleter>(avformat_alloc_context(),
                                                                                avformat_context_deleter());
           fmt_ctx->pb = avioCtxPtr.get();
           fmt_ctx->flags |= AVFMT_FLAG_CUSTOM_IO;
           //fmt_ctx->max_analyze_duration = 2 * AV_TIME_BASE; // read 2 seconds of data
           {
               AVFormatContext *fmtCtxRaw = fmt_ctx.get();
               if (avformat_open_input(&fmtCtxRaw, "", nullptr, nullptr) < 0) {
                   std::cerr << "Could not open movie" << std::endl;
                   return frames;
               }
           }
           if (avformat_find_stream_info(fmt_ctx.get(), nullptr) < 0) {
               std::cerr << "Could not find stream information" << std::endl;
               return frames;
           }
           if((video_stream_idx = av_find_best_stream(fmt_ctx.get(), AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0)) < 0)
           {
               std::cerr << "Could not find video stream" << std::endl;
               return frames;
           }
           AVStream *video_stream = fmt_ctx->streams[video_stream_idx];
           AVCodec *dec = avcodec_find_decoder(video_stream->codecpar->codec_id);

           video_dec_ctx = std::unique_ptr<AVCodecContext, avcodec_context_deleter>(avcodec_alloc_context3(dec),
                                                                                    avcodec_context_deleter());
           if (!video_dec_ctx)
           {
               std::cerr << "Failed to allocate the video codec context" << std::endl;
               return frames;
           }
           avcodec_parameters_to_context(video_dec_ctx.get(),video_stream->codecpar);
           video_dec_ctx->thread_count = 1;
          /* video_dec_ctx->max_b_frames = 0;
           video_dec_ctx->frame_skip_threshold = 10;*/

           AVDictionary *opts = nullptr;
           av_dict_set(&opts, "refcounted_frames", "1", 0);
           av_dict_set(&opts, "deadline", "1", 0);
           av_dict_set(&opts, "auto-alt-ref", "0", 0);
           av_dict_set(&opts, "lag-in-frames", "1", 0);
           av_dict_set(&opts, "rc_lookahead", "1", 0);
           av_dict_set(&opts, "drop_frame", "1", 0);
           av_dict_set(&opts, "error-resilient", "1", 0);

           int width = video_dec_ctx->width;
           videoHeight = video_dec_ctx->height;

           if(avcodec_open2(video_dec_ctx.get(), dec, &opts) < 0)
           {
               std::cerr << "Failed to open the video codec context" << std::endl;
               return frames;
           }

           AVPixelFormat  pFormat = AV_PIX_FMT_BGR24;
           img_convert_ctx = std::unique_ptr<SwsContext, swscontext_deleter>(sws_getContext(width, videoHeight,
                                            video_dec_ctx->pix_fmt,   width, videoHeight, pFormat,
                                            SWS_BICUBIC, nullptr, nullptr,nullptr),swscontext_deleter());

           frame = std::unique_ptr<AVFrame, avframe_deleter>(av_frame_alloc(),avframe_deleter());
           frameRGB = std::unique_ptr<AVFrame, avframe_deleter>(av_frame_alloc(),avframe_deleter());


           int numBytes = av_image_get_buffer_size(pFormat, width, videoHeight,32 /*https://stackoverflow.com/questions/35678041/what-is-linesize-alignment-meaning*/);
           std::unique_ptr<uint8_t, avbuffer_deleter> imageBuffer((uint8_t *) av_malloc(numBytes*sizeof(uint8_t)),avbuffer_deleter());
           av_image_fill_arrays(frameRGB->data,frameRGB->linesize,imageBuffer.get(),pFormat,width,videoHeight,32);
           frameRGB->width = width;
           frameRGB->height = videoHeight;

           initializedCodecs = true;
       }
       AVPacket pkt;
       av_init_packet(&pkt);
       pkt.data = nullptr;
       pkt.size = 0;

       int read_frame_return = 0;
       while ( (read_frame_return = av_read_frame(fmt_ctx.get(), &pkt)) >= 0)
       {
           readFrame(&frames,&pkt,video_dec_ctx.get(),frame.get(),img_convert_ctx.get(),
                     videoHeight,frameRGB.get());
           //if(cancelled) break;
       }
       avioCtxPtr->eof_reached = 0;
       avioCtxPtr->error = 0;


       //flush
      // readFrame(&frames,nullptr,video_dec_ctx.get(),frame.get(),
        //         img_convert_ctx.get(),videoHeight,frameRGB.get());

       avioCtxPtr->eof_reached = 0;
       avioCtxPtr->error = 0;

       if(frames.size() <= 0)
       {
           std::cout << "buffer pos: " << buf.currentPos << ", buff size:" << buf.data.size() << std::endl;
       }
       return frames;
    }

    What I would expect to happen is a continuous extraction of cv::Mat frames as I feed it more and more data. What actually happens is that after the buffer is fully read I see:

    [matroska,webm @ 0x507b450] Read error at pos. 1278266 (0x13813a)
    [matroska,webm @ 0x507b450] Seek to desired resync point failed. Seeking to earliest point available instead.

    And then no more bytes are read from the buffer, even if I later increase its size.

    I am doing something terribly wrong here, and I don't understand what.

  • Piping images from a simulation into ffmpeg

    24 March 2020, by Leonard Becker

    I'm writing a script that creates a video from images as they are being created. A simulation program writes out images at unknown time intervals (seconds to minutes). I'm doing this because mp4 takes up much less space, and the images will end up in a video anyhow.

    Here is my code. I commented the most important parts (as far as I understand them). I pieced it together from other posts I can no longer find the links to.

    #!/usr/bin/bash
    # Create pipe and start piping it to ffmpeg process
    mkfifo pipe
    cat pipe | ffmpeg -y -f image2pipe -i pipe:0 -c:v libx264 -pix_fmt yuv420p ../output_video.mp4 &

    # Open pipe for writing.
    exec 3>pipe

    while true; do   # loop endlessly
      sleep .05;     # is there a new image? get the oldest of them
      if ls -A *.png 1> /dev/null 2>&1; then
        NEWFILE=$( ls -At *.png | tail -n 1 );
        echo "$NEWFILE"
        cat "$NEWFILE" >&3   # add the png to the pipe
    #   rm out.png;          # remove or backup the image
        mv "$NEWFILE" "./backup/$NEWFILE";
      fi
    done

    My script is actually quite far along already. It works fine for images already sitting in a folder and when the pictures are delivered quickly. When the images come in one by one at longer intervals, ffmpeg just seems to stop listening to the pipe input.

    Here is the log from an example run; I'm not sure whether it is helpful. Files 465 to 540 are already present in the folder, and the others are added at 15-second intervals. As you can see, ffmpeg first waits for 8-10 images to arrive before it starts encoding, then keeps waiting for images and encoding them. At the end I abort the script, and ffmpeg then finalizes the mp4.

    But when I extract the frames from the mp4, it only contains frames up to image 610. Why did ffmpeg stop listening to the pipe? The 'chunk too big' error could be a hint, but how do I fix it? (A Python sketch of the same pipeline is shown after the log.)

    scene_image_00465.png
    ffmpeg version 2.6.8 Copyright (c) 2000-2016 the FFmpeg developers
     built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-4)
     configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --optflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --enable-bzlib --disable-crystalhd --enable-gnutls --enable-ladspa --enable-libass --enable-libcdio --enable-libdc1394 --enable-libfaac --enable-nonfree --enable-libfdk-aac --enable-nonfree --disable-indev=jack --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-openal --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libv4l2 --enable-libx264 --enable-libx265 --enable-libxvid --enable-x11grab --enable-avfilter --enable-avresample --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect
     libavutil      54. 20.100 / 54. 20.100
     libavcodec     56. 26.100 / 56. 26.100
     libavformat    56. 25.101 / 56. 25.101
     libavdevice    56.  4.100 / 56.  4.100
     libavfilter     5. 11.102 /  5. 11.102
     libavresample   2.  1.  0 /  2.  1.  0
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  3.100 / 53.  3.100
    scene_image_00470.png
    scene_image_00475.png
    scene_image_00480.png
    scene_image_00485.png
    scene_image_00490.png
    scene_image_00495.png
    scene_image_00500.png
    scene_image_00505.png
    scene_image_00510.png
    Input #0, image2pipe, from 'pipe:0':
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: png, rgb24, 1920x1200, 25 fps, 25 tbr, 25 tbn, 25 tbc
    [libx264 @ 0x19c0980] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX AVX2 FMA3 LZCNT BMI2
    [libx264 @ 0x19c0980] profile High, level 5.0
    [libx264 @ 0x19c0980] 264 - core 142 r2495 6a301b6 - H.264/MPEG-4 AVC codec - Copyleft 2003-2014 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=18 lookahead_threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to '../output_video.mp4':
     Metadata:
       encoder         : Lavf56.25.101
       Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 1920x1200, q=-1--1, 25 fps, 12800 tbn, 25 tbc
       Metadata:
         encoder         : Lavc56.26.100 libx264
    Stream mapping:
     Stream #0:0 -> #0:0 (png (native) -> h264 (libx264))
    scene_image_00515.png
    scene_image_00520.png
    scene_image_00525.png
    scene_image_00530.png
    scene_image_00535.png
    scene_image_00540.png
    scene_image_00545.png
    scene_image_00550.png
    scene_image_00555.png=0.0 size=       0kB time=00:00:00.00 bitrate=N/A    
    scene_image_00560.png
    scene_image_00565.png
    scene_image_00570.png
    scene_image_00575.png
    scene_image_00580.png
    scene_image_00585.png
    scene_image_00590.png=0.0 size=       0kB time=00:00:00.00 bitrate=N/A    
    scene_image_00595.png
    scene_image_00600.png
    scene_image_00605.png
    scene_image_00610.png
    scene_image_00615.png
    scene_image_00620.png
    scene_image_00625.png
    scene_image_00630.png
    scene_image_00635.png
    scene_image_00640.png
    scene_image_00645.png
    scene_image_00650.png
    scene_image_00655.png
    scene_image_00660.png
    scene_image_00665.png
    scene_image_00670.png
    scene_image_00675.png
    scene_image_00680.png
    scene_image_00685.png
    scene_image_00690.png
    scene_image_00695.png
    scene_image_00700.png
    scene_image_00705.png
    scene_image_00710.png
    scene_image_00715.png
    [png @ 0x35d0660] chunk too big=0.0 size=       0kB time=00:00:00.00 bitrate=N/A    
    frame=   30 fps=0.1 q=-1.0 Lsize=     754kB time=00:00:01.12 bitrate=5517.3kbits/s    
    video:753kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.155077%
    [libx264 @ 0x19c0980] frame I:1     Avg QP:18.39  size:107795
    [libx264 @ 0x19c0980] frame P:8     Avg QP:24.90  size: 36026
    [libx264 @ 0x19c0980] frame B:21    Avg QP:28.37  size: 17835
    [libx264 @ 0x19c0980] consecutive B-frames:  3.3%  6.7% 10.0% 80.0%
    [libx264 @ 0x19c0980] mb I  I16..4: 32.2% 41.7% 26.0%
    [libx264 @ 0x19c0980] mb P  I16..4:  0.4%  1.2%  1.3%  P16..4:  4.9%  6.8%  7.6%  0.0%  0.0%    skip:77.8%
    [libx264 @ 0x19c0980] mb B  I16..4:  0.1%  0.4%  0.6%  B16..8: 10.1%  5.6%  3.3%  direct: 2.0%  skip:77.9%  L0:42.0% L1:44.5% BI:13.5%
    [libx264 @ 0x19c0980] 8x8 transform intra:40.7% inter:24.0%
    [libx264 @ 0x19c0980] coded y,uvDC,uvAC intra: 29.6% 31.1% 29.7% inter: 6.0% 11.0% 3.9%
    [libx264 @ 0x19c0980] i16 v,h,dc,p: 56% 38%  5%  0%
    [libx264 @ 0x19c0980] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 44% 19% 31%  1%  1%  1%  1%  0%  1%
    [libx264 @ 0x19c0980] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 20% 17%  5%  9%  7%  8%  5%  6%
    [libx264 @ 0x19c0980] i8c dc,h,v,p: 76% 12%  8%  4%
    [libx264 @ 0x19c0980] Weighted P-Frames: Y:0.0% UV:0.0%
    [libx264 @ 0x19c0980] ref P L0: 57.4% 13.9% 18.1% 10.7%
    [libx264 @ 0x19c0980] ref B L0: 78.3% 18.6%  3.1%
    [libx264 @ 0x19c0980] ref B L1: 93.4%  6.6%
    [libx264 @ 0x19c0980] kb/s:5136.93
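
    For comparison, the same image2pipe idea can be driven from Python, which makes it easier to guarantee that each PNG reaches ffmpeg in one piece and that the pipe stays open between images. This is only a rough sketch, not a drop-in replacement for the bash script (the file watching is reduced to a sorted glob and the stop condition is omitted, as in the bash loop):

    import glob
    import os
    import subprocess
    import time

    # Launch ffmpeg reading PNGs from stdin; the flags mirror the bash script.
    ffmpeg = subprocess.Popen(
        ["ffmpeg", "-y", "-f", "image2pipe", "-i", "pipe:0",
         "-c:v", "libx264", "-pix_fmt", "yuv420p", "../output_video.mp4"],
        stdin=subprocess.PIPE,
    )

    try:
        while True:                                  # loop endlessly, like the bash version
            for name in sorted(glob.glob("*.png")):  # oldest first thanks to zero-padded names
                with open(name, "rb") as f:
                    ffmpeg.stdin.write(f.read())     # one whole PNG per write
                ffmpeg.stdin.flush()
                os.rename(name, os.path.join("backup", name))
            time.sleep(0.05)
    finally:
        ffmpeg.stdin.close()                         # EOF lets ffmpeg finalize the mp4
        ffmpeg.wait()

    This still assumes the simulation writes each PNG atomically; a file picked up while it is still being written is a plausible source of the png decoder's "chunk too big" complaint.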