
Media (1)
-
Bee video in portrait orientation
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (46)
-
Participating in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
At the moment, MediaSPIP is only available in French and (...) -
(De)Activating features (plugins)
18 February 2011, by
To manage adding and removing extra features (plugins), MediaSPIP uses SVP as of version 0.2.
SVP makes it easy to activate plugins from MediaSPIP's configuration area.
To access it, go to the configuration area and then to the "Plugin management" page.
By default, MediaSPIP ships with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work perfectly with each (...) -
Statuses of pooled instances
13 March 2010, by
For general compatibility between the pooling-management plugin and SPIP's original functions, instance statuses are the same as for any other object (articles, ...); only their names in the interface differ slightly.
The possible statuses are: prepa (requested), an instance requested by a user; if the site was already created in the past, it is switched to disabled mode; publie (validated), an instance validated by a (...)
On other sites (9053)
-
How you can use the Piwik AOM plugin to improve your data and make better online marketing decisions
13 September 2017, by André Kolell — Plugins, AdWords, Bing, Case Study, Criteo, ecommerce tracking, Facebook Ads, how to install piwik analytics, how-to piwik, integration, piwik, Taboola
Hi, this is André, one of the authors of the Piwik Advanced Online Marketing plugin, which has just hit 5,000 downloads on the Piwik marketplace. In this blog post I'll show you how Piwik AOM improves your data and enables you to make better online marketing decisions.
Piwik itself is excellent at tracking all kinds of visitor data, like where a visitor comes from and what he does on your page or app (pageviews, events, conversions). But what Piwik did not yet take a closer look at is how much you have invested in your marketing activities and how profitable they are.
With the Piwik AOM plugin you can integrate data like advertising costs, advertising campaign names, ad impressions etc. from advertising platforms (such as Google AdWords, Microsoft Bing, Criteo, Facebook Ads and Taboola) and individual campaigns (such as cost-per-view/click/acquisition and fixed-price-per-month deals) into Piwik, and combine that data with individual Piwik visits.
Piwik AOM adds a new marketing performance report to Piwik, giving you a great overview of all your marketing activities with drill-down functionality:
When taking a look at a specific visitor, Piwik AOM shows you the exact cost of acquiring a specific visit:
Leveraging Piwik AOM’s full potential
But although you can access Piwik AOM's valuable data directly in the Piwik UI for ad-hoc analyses, its true strength comes into play when you work with the raw data in an external business intelligence application of your choice, where you can further combine it with your most accurate backend data (like conversions' contribution margins after returns, new vs. existing customers, etc.).
Piwik AOM offers some API endpoints that allow you to fetch the data you need, but you can also retrieve it directly from Piwik AOM's aom_visits table, which includes all visits, all allocated advertising costs and advertising campaign details. As data is never deleted from aom_visits, the table can easily be connected to your ETL tool via its last-update timestamp column. A third way to get data out of Piwik AOM is to develop your own Piwik plugin and listen to the AOM.aomVisitAddedOrUpdated event, which is posted whenever an aom_visits record is added or updated.
Integrating Piwik AOM's data with your backend data in the business intelligence application of your choice allows you to evaluate the real performance of your online marketing campaigns under different conversion attribution models, conduct customer journey analyses, create sophisticated forecasts and whatever else you can think of.
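The incremental pull via the last-update timestamp column can be sketched as follows. This is a minimal illustration with an in-memory stand-in for the aom_visits table; the field names used here (id, last_updated, cost) are assumptions for the sketch, not the real schema, so check the actual table before wiring this into an ETL tool.

```cpp
#include <cstdint>
#include <vector>

// Stand-in for one aom_visits row (hypothetical columns).
struct AomVisit {
    int64_t id;
    int64_t last_updated; // unix timestamp of the row's last update
    double  cost;         // allocated advertising cost
};

// Return only rows touched since the previous ETL run. Because rows are
// never deleted from aom_visits, pulling everything newer than the last
// run's timestamp is enough to keep a downstream copy in sync.
std::vector<AomVisit> fetch_since(const std::vector<AomVisit>& table,
                                  int64_t last_run) {
    std::vector<AomVisit> out;
    for (const auto& row : table)
        if (row.last_updated > last_run)
            out.push_back(row);
    return out;
}
```

In a real setup the same predicate would simply be a `WHERE last_updated > ?` clause issued by the ETL tool against the MySQL table.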
AOM Use case
A company that followed this approach is FINANZCHECK.de, one of Germany's leading loan comparison websites. At the eMetrics Summit 2016 in Berlin, Germany, I gave a talk about FINANZCHECK's architectural online marketing setup. Until recently, FINANZCHECK used Pentaho Data Integration to integrate data from Piwik, Piwik AOM and additional internal tools such as its proprietary CRM software into Jaspersoft, its data warehouse and BI solution. The enriched data in Jaspersoft was not only used for reporting to various stakeholders but also for optimising all kinds of marketing activities (e.g. bids for individual keywords in Google AdWords) and proactive alerting. Not long ago, FINANZCHECK started an initiative to improve its setup even further – I'll hopefully be able to cover this in a more detailed case study soon.
Roadmap
In the past, we had the chance to make great progress in developing this plugin by solving specific requirements of different companies who use Piwik AOM. During the next months, we plan to integrate more advertising platforms, reimplement Facebook Ads, improve the support of individual campaigns and work on the general plugin stability and performance.
Before you install Piwik AOM
Before installing Piwik AOM, you should know that its initial setup and even its maintenance can be quite complex. Piwik AOM will heavily modify your Piwik installation and you will only benefit from Piwik AOM if you are willing to invest quite some time into it.
If you are not familiar with Piwik’s internals, PHP, MySQL, database backups, cronjobs, creating API accounts at the advertising platforms or adding parameters to your advertising campaign’s URLs, you should probably not install it on your own (at least not in your production environment).
Piwik AOM has successfully been tested with up to 25k visitors a day over a period of more than two years, running on an AWS server with 4 GB RAM, one CPU and a separate AWS RDS MySQL database.
Ideas and Support
If you have ideas for new features or need support with your Piwik AOM installation or leveraging your marketing data’s potential in general, feel free to get in touch with the plugin’s co-author Daniel or me. You can find our contact details on the plugin’s website http://www.advanced-online-marketing.com.
How to get the Piwik AOM plugin?
The Piwik AOM plugin is freely available through the Piwik marketplace at https://plugins.piwik.org/AOM
Did you like this article? If so, do not hesitate to share it or give us feedback about the topics you would like us to write about.
-
What is wrong with the I420 render from ffmpeg?
9 May 2022, by DLKUN
I use GLFW to render YUV frames decoded by ffmpeg. The Y plane on its own is fine (using only the Y data, with the fragment shader sampling just Y, gives a correct grayscale image), but when I add U and V the display turns pink and green. I have tried changing the fragment shader and the image textures, with no success.


#include <glad/glad.h>
#include <GLFW/glfw3.h>

#include <string>
#include <fstream>
#include <sstream>
#include <iostream>
#include <cstdio>

typedef unsigned char BYTE; // stands in for the Windows BYTE type

// settings
const unsigned int SCR_WIDTH = 544;
const unsigned int SCR_HEIGHT = 960;
const int len = 544 * 960 * 3/2;
BYTE YUVdata[len];      // whole I420 frame
BYTE Ydata[544 * 960];  // Y plane
BYTE Udata[272 * 480];  // U plane (quarter size)
BYTE Vdata[272 * 480];  // V plane (quarter size)
unsigned int VBO = 0;
unsigned int VAO = 0;
unsigned int EBO = 0;
unsigned int texturePIC = 0;
int shaderProgram = 0;

GLuint texIndexarray[3];
GLuint texUniformY = 99;
GLuint texUniformU = 99;
GLuint texUniformV = 99;

void LoadPicture()
{


 glGenTextures(3, texIndexarray);

 glBindTexture(GL_TEXTURE_2D, texIndexarray[0]);
 
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

 glBindTexture(GL_TEXTURE_2D, texIndexarray[1]);
 
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
 glBindTexture(GL_TEXTURE_2D, texIndexarray[2]);
 
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);


 glValidateProgram(shaderProgram);

 texUniformY = glGetUniformLocation(shaderProgram, "dataY");//2
 texUniformU = glGetUniformLocation(shaderProgram, "dataU");//0
 texUniformV = glGetUniformLocation(shaderProgram, "dataV");//1

 
 FILE* fp = fopen("./output544_960.yuv","rb+");//I420
 int returns =fread(YUVdata,1,len,fp);
 int w = 544;
 int h = 960;
 int ysize = w*h;
 int uvsize = w * h / 4;

 void* uptr = &YUVdata[ysize];
 void* vptr = &YUVdata[ysize * 5 / 4];

 memcpy(Ydata,YUVdata,ysize);
 memcpy(Udata, uptr,uvsize);
 memcpy(Vdata, vptr,uvsize);
 glActiveTexture(GL_TEXTURE0);
 glBindTexture(GL_TEXTURE_2D, texIndexarray[0]);
 
 glTexImage2D(GL_TEXTURE_2D, 0 , GL_RED, w, h ,0, GL_RED,GL_UNSIGNED_BYTE ,Ydata);
 glUniform1i(texUniformY, texIndexarray[0]); 


 glActiveTexture(GL_TEXTURE1);
 glBindTexture(GL_TEXTURE_2D, texIndexarray[1]);
 glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, w/2, h/2, 0, GL_RED, GL_UNSIGNED_BYTE,Udata );

 glUniform1i(texUniformU, texIndexarray[1]);


 glActiveTexture(GL_TEXTURE2);
 glBindTexture(GL_TEXTURE_2D, texIndexarray[2]);
 glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, w/2, h/2, 0, GL_RED, GL_UNSIGNED_BYTE,Vdata);
 glUniform1i(texUniformV, texIndexarray[2]);

}


void render()
{
 glBindVertexArray(VAO);
 glUseProgram(shaderProgram);
 glDrawElements(GL_TRIANGLES,6,GL_UNSIGNED_INT,0);
 //glDrawArrays(GL_TRIANGLE_FAN,0,4);
 glUseProgram(0);
 glBindVertexArray(0);
}

void initmodule()
{
 
 float vertexs[] = {
 
 1.0f, 1.0f, 0.0f, 1.0f, 0.0f, 
 1.0f, -1.0f, 0.0f, 1.0f, 1.0f, 
 -1.0f, -1.0f, 0.0f, 0.0f, 1.0f, 
 -1.0f, 1.0f, 0.0f, 0.0f, 0.0f 
 
 
 };
 
 unsigned int indexs[] = {
 0,1,3,
 1,2,3,
 };

 
 glGenVertexArrays(1,&VAO);
 glBindVertexArray(VAO);

 

 glGenBuffers(1, &VBO);
 glBindBuffer(GL_ARRAY_BUFFER, VBO);
 
 glBufferData(GL_ARRAY_BUFFER,sizeof(vertexs), vertexs, GL_STATIC_DRAW);

 
 glGenBuffers(1,&EBO);
 glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,EBO);
 glBufferData(GL_ELEMENT_ARRAY_BUFFER,sizeof(indexs),indexs,GL_STATIC_DRAW);
 
 LoadPicture();

 glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,5*sizeof(float),(void*)0);
 
 glEnableVertexAttribArray(0);
 
 glVertexAttribPointer(1,2, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)(3 * sizeof(float)));
 
 glEnableVertexAttribArray(1);

 
 glBindBuffer(GL_ARRAY_BUFFER,0);

 
 glBindVertexArray(0);



}

void initshader(const char* verpath,const char* fragpath)
{
 
 std::string VerCode("");
 std::string fregCode("");
 
 std::ifstream vShaderFile;
 std::ifstream fShaderFile;

 vShaderFile.exceptions(std::ifstream::failbit | std::ifstream::badbit);
 fShaderFile.exceptions(std::ifstream::failbit | std::ifstream::badbit);

 try
 {
 vShaderFile.open(verpath);
 fShaderFile.open(fragpath);

 std::stringstream vsstream, fsstream;
 vsstream << vShaderFile.rdbuf();
 fsstream << fShaderFile.rdbuf();
 VerCode = vsstream.str();
 fregCode = fsstream.str();
 
 }
 catch (const std::exception&)
 {
 std::cout << "read file error" << std::endl;
 }

 const char* vshader = VerCode.c_str();
 const char* fshader = fregCode.c_str();

 
 unsigned int vertexID = 0, fragID = 0;
 char infoLog[512];
 int successflag = 0;
 vertexID = glCreateShader(GL_VERTEX_SHADER);
 glShaderSource(vertexID,1,&vshader,NULL );
 glCompileShader(vertexID);
 
 glGetShaderiv(vertexID,GL_COMPILE_STATUS,&successflag);
 if (!successflag)
 {
 glGetShaderInfoLog(vertexID,512,NULL,infoLog);
 std::string errstr(infoLog);
 std::cout << "v shader err" << errstr << std::endl;
 }
 fragID = glCreateShader(GL_FRAGMENT_SHADER);
 glShaderSource(fragID, 1, &fshader, NULL);
 glCompileShader(fragID);
 
 glGetShaderiv(fragID, GL_COMPILE_STATUS, &successflag);
 if (!successflag)
 {
 glGetShaderInfoLog(fragID, 512, NULL, infoLog);
 std::string errstr(infoLog);
 std::cout << "f shader err" << errstr << std::endl;
 }

 shaderProgram = glCreateProgram();
 glAttachShader(shaderProgram, vertexID);
 glAttachShader(shaderProgram, fragID);
 glLinkProgram(shaderProgram);
}

// (The GLFW/glad initialisation and the start of main() were lost when the
// post was formatted; the main loop below picks up from there.)
 initmodule();


 
 while (!glfwWindowShouldClose(window))
 {
 
 processInput(window);

 glClearColor(0.0f,0.0f,0.0f,1.0f);
 glClear(GL_COLOR_BUFFER_BIT);
 render();
 
 
 glfwSwapBuffers(window);
 
 glfwPollEvents();
 }

 
 glfwTerminate();
 return 0;
}


I read the Y data, and running the code is OK; the image is gray. But when I add the U plane the color turns light green, and when I add V as well it is pink and green.


#version 330 core
layout(location = 0) out vec4 FragColor;
in vec2 TexCoord;
uniform sampler2D dataY;
uniform sampler2D dataU;
uniform sampler2D dataV;
vec3 yuv;
vec3 rgb;
void main()
{


 yuv.x = texture2D(dataY, TexCoord).r-0.0625;
 yuv.y = texture2D(dataU, TexCoord).r-0.5;
 yuv.z = texture2D(dataV, TexCoord).r-0.5;

 rgb = mat3(1, 1, 1, 
 0, -0.18732, 1.8556, 
 1.57481, -0.46813, 0) * yuv; 
 FragColor = vec4(rgb.x, rgb.y,rgb.z,1); 
}
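The symptoms (gray-only OK, colors wrong once U and V are bound) are consistent with the samplers reading the wrong texture units rather than with the shader math. glUniform1i expects the texture unit index (0, 1, 2), not the texture object name returned by glGenTextures, so passing texIndexarray[i] (typically 1, 2, 3) shifts every sampler onto the wrong plane; in addition, glUniform1i only affects the currently bound program, and LoadPicture never calls glUseProgram. A hedged sketch of the corrected tail of LoadPicture, assuming the rest of the setup stays as posted:

```cpp
// Uniforms apply to the program currently in use; glValidateProgram does
// not bind it, so bind explicitly before setting the sampler uniforms.
glUseProgram(shaderProgram);

// glUniform1i takes the texture UNIT index, not the texture object name.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texIndexarray[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, w, h, 0, GL_RED, GL_UNSIGNED_BYTE, Ydata);
glUniform1i(texUniformY, 0);   // unit 0, was texIndexarray[0]

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texIndexarray[1]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, w / 2, h / 2, 0, GL_RED, GL_UNSIGNED_BYTE, Udata);
glUniform1i(texUniformU, 1);   // unit 1, was texIndexarray[1]

glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, texIndexarray[2]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, w / 2, h / 2, 0, GL_RED, GL_UNSIGNED_BYTE, Vdata);
glUniform1i(texUniformV, 2);   // unit 2, was texIndexarray[2]
```

With both fixes the three samplers read units 0-2 as intended, so dataY, dataU and dataV each see their own plane.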



-
FFmpeg compose, multi layers and filters
10 October 2019, by jadeshohy
Pretty new to FFmpeg. We would like to use FFmpeg as an important part of an AR project.
Currently, we find it is not easy for us.
We want to compose the footage with FFmpeg.
We have 5 layers and want to blend them with specific modes, like in After Effects.
-
layer-1/ [A.webm] video,vp9 codec, which has a transparent BG,has to be added as [normal mode]
-
layer-2/ [B.mp4] video, optical-flare things with black BG,has to be added as [screen mode]
-
layer-3/ [C.mp4] video, some motion graphic things with light BG,has to be added as [overlay mode]
-
layer-4/ [BG.MP4] video, background things, has to be added as [normal mode]
After we blend those 4 (like a pre-compose, using the blend filter), we want to add another layer-5/ [icon.png], which is the special icon.
Layer-5 needs to overlay the pre-compose. We have to overlay it at a specific position (using the overlay filter?).
Because [icon.png] may change frequently, we want to deal with it after the 4-layer blend.
But at the first step, when we set normal mode for layer-1 in the blend filter, layer-1 [A.webm] lost its transparent BG; it gave us a black BG which blocked everything else.
Can the blend filter not handle the alpha channel of VP9 WebM?
When we set the mode of layer-1 to screen mode, the translucent result was not what we needed. Could you please give us some commands to achieve the blend above?
Commands that really work would be extremely useful for our FFmpeg initiation.
ffmpeg -c:v libvpx-vp9 -i transparent.webm -i bg.mp4 -filter_complex "[0:v]format=yuva420p [a]; [1:v]format=yuv420p [b]; [a][b]blend=all_mode='normal':shortest=1:all_opacity=1,format=yuv420p" output.mp4 >log
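One thing worth trying, offered as an untested sketch based on the command above: for the plain "normal" layer, the overlay filter (which honours the alpha channel of its second input) is usually a better fit than blend=all_mode='normal', while blend stays reserved for the screen/overlay modes between the opaque layers.

```shell
# Composite the transparent VP9 layer over the background with overlay,
# which respects alpha, instead of blend with all_mode=normal.
# The -c:v libvpx-vp9 before the input keeps alpha decoding, as above.
ffmpeg -c:v libvpx-vp9 -i transparent.webm -i bg.mp4 -filter_complex \
  "[0:v]format=yuva420p[a]; \
   [1:v]format=yuv420p[b]; \
   [b][a]overlay=shortest=1:format=auto,format=yuv420p" \
  output.mp4
```

The other layers could then be blended in with additional blend=all_mode='screen' / 'overlay' stages before this final overlay step.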
ffmpeg version 4.1.4 Copyright (c) 2000-2019 the FFmpeg developers
built with Apple LLVM version 10.0.1 (clang-1001.0.46.4)
configuration: --prefix=/usr/local/Cellar/ffmpeg/4.1.4_1 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/adoptopenjdk-12.0.1.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/adoptopenjdk-12.0.1.jdk/Contents/Home/include/darwin' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-videotoolbox --disable-libjack --disable-indev=jack --enable-libaom --enable-libsoxr
libavutil 56. 22.100 / 56. 22.100
libavcodec 58. 35.100 / 58. 35.100
libavformat 58. 20.100 / 58. 20.100
libavdevice 58. 5.100 / 58. 5.100
libavfilter 7. 40.101 / 7. 40.101
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 3.100 / 5. 3.100
libswresample 3. 3.100 / 3. 3.100
libpostproc 55. 3.100 / 55. 3.100
[libvpx-vp9 @ 0x7f8876008600] v1.8.0
Last message repeated 1 times
Input #0, matroska,webm, from 'transparent.webm':
Metadata:
encoder : Chrome
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0:0(eng): Video: vp9 (Profile 0), yuva420p(tv), 640x360, SAR 1:1 DAR 16:9, 60 fps, 60 tbr, 1k tbn, 1k tbc (default)
Metadata:
alpha_mode : 1
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 'bg.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.83.100
Duration: 00:00:04.00, start: 0.000000, bitrate: 728 kb/s
Stream #1:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 640x360, 725 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
[libvpx-vp9 @ 0x7f8877806600] v1.8.0
Stream mapping:
Stream #0:0 (libvpx-vp9) -> format
Stream #1:0 (h264) -> format
format -> Stream #0:0 (libx264)
Press [q] to stop, [?] for help
[libvpx-vp9 @ 0x7f8877806600] v1.8.0
[libx264 @ 0x7f8877817200] using SAR=1/1
[libx264 @ 0x7f8877817200] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x7f8877817200] profile High, level 3.1
[libx264 @ 0x7f8877817200] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'output.mp4':
Metadata:
encoder : Lavf58.20.100
Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 640x360 [SAR 1:1 DAR 16:9], q=-1--1, 60 fps, 15360 tbn, 60 tbc (default)
Metadata:
encoder : Lavc58.35.100 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
frame= 239 fps=113 q=-1.0 Lsize= 232kB time=00:00:03.93 bitrate= 482.5kbits/s dup=1 drop=2 speed=1.86x
video:228kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.586669%
[libx264 @ 0x7f8877817200] frame I:1 Avg QP:20.55 size: 5385
[libx264 @ 0x7f8877817200] frame P:62 Avg QP:24.42 size: 2373
[libx264 @ 0x7f8877817200] frame B:176 Avg QP:31.31 size: 456
[libx264 @ 0x7f8877817200] consecutive B-frames: 1.3% 0.8% 2.5% 95.4%
[libx264 @ 0x7f8877817200] mb I I16..4: 18.6% 68.4% 13.0%
[libx264 @ 0x7f8877817200] mb P I16..4: 1.6% 4.0% 0.7% P16..4: 14.8% 7.0% 4.5% 0.0% 0.0% skip:67.5%
[libx264 @ 0x7f8877817200] mb B I16..4: 0.2% 0.0% 0.0% B16..8: 17.4% 2.5% 0.4% direct: 0.5% skip:78.9% L0:53.1% L1:40.4% BI: 6.6%
[libx264 @ 0x7f8877817200] 8x8 transform intra:60.1% inter:60.4%
[libx264 @ 0x7f8877817200] coded y,uvDC,uvAC intra: 16.6% 27.4% 10.7% inter: 3.0% 2.2% 0.1%
[libx264 @ 0x7f8877817200] i16 v,h,dc,p: 56% 37% 6% 2%
[libx264 @ 0x7f8877817200] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 40% 6% 48% 1% 1% 1% 1% 1% 1%
[libx264 @ 0x7f8877817200] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 35% 22% 23% 3% 3% 4% 3% 4% 3%
[libx264 @ 0x7f8877817200] i8c dc,h,v,p: 57% 20% 21% 2%
[libx264 @ 0x7f8877817200] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x7f8877817200] ref P L0: 69.3% 12.8% 13.6% 4.3%
[libx264 @ 0x7f8877817200] ref B L0: 92.9% 5.9% 1.1%
[libx264 @ 0x7f8877817200] ref B L1: 96.1% 3.9%
[libx264 @ 0x7f8877817200] kb/s:467.59 -