
Media (91)
-
MediaSPIP Simple: future default graphic theme?
26 September 2013, by
Updated: October 2013
Language: French
Type: Video
-
with chosen
13 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
without chosen
13 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
chosen config
13 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
SPIP - plugins - embed code - Example
2 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
GetID3 - File information block
9 April 2013, by
Updated: May 2013
Language: French
Type: Image
Other articles (68)
-
Adding notes and captions to images
7 February 2011, by
To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
-
User profiles
12 April 2011, by
Each user has a profile page allowing them to edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
The user can access profile editing from their author page; a "Modifier votre profil" link in the navigation is (...)
-
Configuring language support
15 November 2010, by
Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administrer" section of the site.
From there, in the navigation menu, you can access a "Gestion des langues" section that lets you enable support for new languages.
Each newly added language can still be deactivated as long as no object has been created in that language. Once an object exists in that language, it becomes greyed out in the configuration and (...)
On other sites (6885)
-
How Media Analytics for Piwik gives you the insights you need to measure how effective your video and audio marketing is – Part 2
2 February 2017, by InnoCraft — Community
In Part 1 we covered some of the Media Analytics features and explained why you cannot afford not to measure the media usage on your website. Chances are, you are wasting or losing money and time by not making the most of your marketing strategy this very second. In this part, we continue showing you some more of the insights you can expect to get from Media Analytics and how nicely it is integrated into Piwik.
Video, Audio and Media Player reports
Media Analytics adds several new reports around videos, audio and media players. They are all quite similar and give you similar insights, so we will mainly focus on the “Video Titles” report.
Metrics
When you open such a report for the first time, you will see the following metrics:
- “Impressions”, the number of times a visitor has viewed a page where this media was included.
- “Plays”, the number of times a visitor watched or listened to this media.
- “Play rate”, the percentage of visitors that watched or listened to a media after they have visited a page where this media was included.
- “Finishes”, the percentage of visitors who played a media and finished it.
- “Avg. time spent”, the average amount of time a visitor spent watching or listening to this media.
- “Avg. media length”, the average length of a video or audio media file. This number may vary, for example, if the media is a stream.
- “Avg. completion”, the percentage of the video that visitors watched on average.
If you are not sure what a certain metric means, simply hover over the metric title in the UI and you will get a detailed explanation. By changing the visualization to the “All Columns Table” at the bottom of the report, you get to see even more metrics, like “Plays by unique visitors”, “Impressions by unique visitors”, “Finish rate”, “Avg. time to play” (aka hesitation time) and “Fullscreen rate”, and we are always adding more metrics.
These metrics are available for the following reports:
- “Video / Audio Titles” shows you all metrics aggregated by video or audio title
- “Video / Audio Resource URLs” shows you all metrics aggregated by the video or audio resource URL, for example “https://piwik.org/media.mp4”.
- “Video / Audio Resource URLs grouped” removes some information from the URLs, such as the subdomain and file extension, to give you aggregated metrics when you provide the same media in different formats.
- “Videos per hour in website’s timezone” lets you find out how your media content is consumed depending on the hour of the day. You might realize that your media is consumed very differently in the morning vs at night.
- “Video Resolutions” lets you discover how your video is consumed depending on the resolution.
- The “Media players” report is useful if you use different media players on your websites or apps and want to see how engagement with your media compares by media player.
Row evolution
At InnoCraft, we understand that static numbers on their own are not that useful. If you see, for example, that yesterday 20 visitors played a certain media, would you know whether this is good or bad? This is why we always give you the possibility to see the data in relation to data recorded in the past. To see how a specific media performs over time, simply hover over a media title or media resource URL and click on the “Row Evolution” icon.
Now you can see whether more or fewer visitors actually played your chosen video over the selected period. Simply click on any metric name and the chosen metrics will be plotted in the big evolution graph.
This feature is similar to the Media Overall evolution graph introduced in Part 1, but shows you a detailed evolution for an individual media title or resource.
Media details
Now that you know some of the most important media metrics, you might want to look a bit deeper into user behaviour. For example, we mentioned the “Avg. time spent” metric above. Such an average doesn’t tell you whether most visitors spent about the same time watching the video, or whether many visitors watched it only for a few seconds and a few watched it for a very long time.
One way to get this insight is to again hover over any media title or resource URL and click on the “Media details” icon. It opens a popup showing a new set of reports:
The “Time spent watching” and “How far visitors reached in the media” bar charts show you on the X-axis how much time each visitor spent watching a video and how far into the video they reached. On the Y-axis you see the number of visitors. This lets you discover, for example, whether your users often jump to the middle or end of the video, and which parts of your video were seen most often.
The “How often the media was watched in a certain hour” and “Which resolutions the media was watched” reports are similar to the ones introduced in Part 1 of the blog post. However, this time, instead of showing aggregated video or audio content data, they display data for a specific media title or media resource URL.
Segmented audience log
In Part 1 we already introduced the Audience Log and explained that it is useful to better understand user behaviour. Just a quick recap: the Audience Log shows you chronologically every action a specific visitor has performed on your website: which pages they viewed, how they interacted with your media, when they clicked somewhere, and much more.
By hovering over a media title or a media resource and then selecting “Segmented audience log”, you get to see the same log, but this time it shows only visitors that have interacted with the selected media. This is useful, for example, when you notice an unusual value for a metric and want to better understand why it is like that.
Applying segments
Media Analytics lets you apply any Piwik segment to the media reports, allowing you to slice and dice your visitors or personas and multiplying the value that you get out of Media Analytics. For example, you may want to apply a segment and analyze the media usage of visitors that have visited your website or mobile app for the first time vs. recurring visitors. Sometimes it may be interesting to see how visitors that converted a specific goal or purchased something consume your media; the possibilities are endless. We really recommend taking advantage of segments to understand your different target groups even better.
The plugin also adds a lot of new segments to your Piwik, letting you segment any Piwik report by visitors that have viewed or interacted with your media. For example, you could go to the “Visitors => Devices” report and apply a media segment to see which devices were used the most to view your media. You can also combine segments to see, for example, how often your goals were converted when a visitor viewed media for longer than 10 seconds, waited at least 20 seconds before playing your media, and played at least 3 videos during their visit.
Widgets, Scheduled Reports, and more.
This is not where the fun ends. Media Analytics defines more than 15 new widgets that you can add to your dashboard or export into a third-party website. You can set up Scheduled Reports to receive the Media reports automatically via email or SMS, or download the reports to share them with your colleagues. It also works very well with Custom Alerts, and you can view the Media reports in the Piwik Mobile app for Android and iOS. Via the HTTP Reporting API you can fetch any report in various formats. The plugin is so deeply integrated into Piwik that we would need several more blog posts to fully cover all the ways Media Analytics advances your Piwik experience and how you can use and dig into all the data to increase your conversions and sales.
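As a rough illustration of pulling one of these reports into your own tools, a minimal sketch of an HTTP Reporting API call is shown below. The method name MediaAnalytics.getVideoTitles is assumed from the report title; check the plugin's HTTP API reference for the exact method names, and replace the instance URL, site ID, segment and token with your own values:

# Minimal sketch: fetch a Media Analytics report via the Piwik HTTP Reporting API.
# "MediaAnalytics.getVideoTitles" is an assumed method name derived from the report
# title; the Piwik URL, idSite, segment and token_auth below are placeholders.
import requests

params = {
    "module": "API",
    "method": "MediaAnalytics.getVideoTitles",  # assumed method name
    "idSite": 1,
    "period": "day",
    "date": "yesterday",
    "format": "JSON",
    "segment": "visitorType==new",              # optional: any Piwik segment, e.g. new visitors only
    "token_auth": "YOUR_TOKEN_AUTH",
}
response = requests.get("https://piwik.example.com/index.php", params=params)
response.raise_for_status()
for row in response.json():
    print(row)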
How to get Media Analytics and related features
You can get Media Analytics on the Piwik Marketplace. If you want to learn more about this feature, you might also be interested in the Media Analytics User Guide and the Media Analytics FAQ.
-
My Python script using ffmpeg captures video content, but the captured content freezes in the middle and jumps frames
11 November 2022, by Supriyo Mitra
I am new to ffmpeg and I am trying to use it through a Python script. The Python function that captures the video content is given below. The problem I am facing is that the captured content freezes at (uneven) intervals and skips a few frames every time it happens.


    def capturelivestream(self, argslist):
        # Requires module-level imports: os, sys, time, numpy as np and ffmpeg (ffmpeg-python).
        streamurl, outnum, feedid, outfilename = argslist[0], argslist[1], argslist[2], argslist[3]
        try:
            info = ffmpeg.probe(streamurl, select_streams='a')
            streams = info.get('streams', [])
        except:
            streams = []
        if len(streams) == 0:
            print('There are no streams available')
            stream = {}
        else:
            stream = streams[0]
        for stream in streams:
            if stream.get('codec_type') != 'audio':
                continue
            else:
                break
        if 'channels' in stream.keys():
            channels = stream['channels']
            samplerate = float(stream['sample_rate'])
        else:
            channels = None
            samplerate = 44100
        process = ffmpeg.input(streamurl).output('pipe:', pix_fmt='yuv420p', format='avi', vcodec='libx264', acodec='pcm_s16le', ac=channels, ar=samplerate, vsync=0, loglevel='quiet').run_async(pipe_stdout=True)
        fpath = os.path.dirname(outfilename)
        fnamefext = os.path.basename(outfilename)
        fname = fnamefext.split(".")[0]
        read_size = 320 * 180 * 3  # This is width * height * 3
        lastcaptured = time.time()
        maxtries = 12
        ntries = 0
        while True:
            if process:
                inbytes = process.stdout.read(read_size)
                if inbytes is not None and inbytes.__len__() > 0:
                    try:
                        frame = np.frombuffer(inbytes, np.uint8).reshape([180, 320, 3])
                    except:
                        print("Failed to reshape frame: %s" % sys.exc_info()[1].__str__())
                        continue  # This could be an issue if there is a continuous supply of frames that cannot be reshaped
                    self.processq.put([outnum, frame])
                    lastcaptured = time.time()
                    ntries = 0
                else:
                    if self.DEBUG:
                        print("Could not read frame for feed ID %s" % feedid)
                    t = time.time()
                    if t - lastcaptured > 30:  # If the frames can't be read for more than 30 seconds...
                        print("Reopening feed identified by feed ID %s" % feedid)
                        process = ffmpeg.input(streamurl).output('pipe:', pix_fmt='yuv420p', format='avi', vcodec='libx264', acodec='pcm_s16le', ac=channels, ar=samplerate, vsync=0, loglevel='quiet').run_async(pipe_stdout=True)
                        ntries += 1
                        if ntries > maxtries:
                            if self.DEBUG:
                                print("Stream %s is no longer available." % streamurl)
                            # DB statements removed here
                            break  # Break out of infinite loop.
                    continue
        return None




The function that writes the captured frames is as follows:



    def framewriter(self, outlist):
        isempty = False
        endofrun = False
        while True:
            frame = None
            try:
                args = self.processq.get()
            except:  # Sometimes, the program crashes at this point due to lack of memory...
                print("Error in framewriter while reading from queue: %s" % sys.exc_info()[1].__str__())
                continue
            outnum = args[0]
            frame = args[1]
            if outlist.__len__() > outnum:
                out = outlist[outnum]
            else:
                if self.DEBUG == 2:
                    print("Could not get writer %s" % outnum)
                continue
            if frame is not None and out is not None:
                out.write(frame)
                isempty = False
                endofrun = False
            else:
                if self.processq.empty() and not isempty:
                    isempty = True
                elif self.processq.empty() and isempty:  # processq queue is empty now and was empty last time
                    print("processq is empty")
                    endofrun = True
                elif endofrun and isempty:
                    print("Could not find any frames to process. Quitting")
                    break
        print("Done writing feeds. Quitting.")
        return None



The scenario is as follows: there are multiple video streams from a certain website at any time during the day, and the program containing these functions has to capture them as they get streamed. The memory available to this program is 6 GB and there could be up to 3 streams running at any instant. Given below is the relevant main section of the script that uses the functions given above.






itftennis = VideoBot(siteurl)
outlist = []
t = Thread(target=itftennis.framewriter, args=(outlist,))
t.daemon = True
t.start()
tp = Thread(target=handleprocesstermination, args=())
tp.daemon = True
tp.start()
# Create a database connection and an associated cursor object. We will handle database operations from the main thread only.
# DB statements removed from here...
feedidlist = []
vidsdict = {}
streampattern = re.compile(r"\?vid=(\d+)$")
while True:
    streampageurls = itftennis.checkforlivestream()
    if itftennis.DEBUG:
        print("Checking for new urls...")
        print(streampageurls.__len__())
    if streampageurls.__len__() > 0:
        argslist = []
        newurlscount = 0
        for streampageurl in streampageurls:
            newstream = False
            sps = re.search(streampattern, streampageurl)
            if sps:
                streamnum = sps.groups()[0]
                if streamnum not in vidsdict.keys():  # Check if this stream has already been processed.
                    vidsdict[streamnum] = 1
                    newstream = True
                else:
                    continue
            else:
                continue
            print("Detected new live stream... Getting it.")
            streamurl = itftennis.getstreamurlfrompage(streampageurl)
            print("Adding %s to list..." % streamurl)
            if streamurl is not None:
                # Now, get feed metadata...
                metadata = itftennis.getfeedmetadata(streampageurl)
                if metadata is None:
                    continue
                # lines to get matchescounter omitted here...
                if matchescounter >= itftennis.__class__.MAX_CONCURRENT_MATCHES:
                    break
                if newstream is True:
                    newurlscount += 1
                    outfilename = time.strftime("./videodump/" + "%Y%m%d%H%M%S", time.localtime()) + ".avi"
                    out = open(outfilename, "wb")
                    outlist.append(out)  # Save it in the list and take down the number for usage in framewriter
                    outnum = outlist.__len__() - 1
                    # Save metadata in DB
                    # lines omitted here....
                    argslist.append([streamurl, outnum, feedid, outfilename])
            else:
                print("Couldn't get the stream url from page")
        if newurlscount > 0:
            for args in argslist:
                try:
                    p = Process(target=itftennis.capturelivestream, args=(args,))
                    p.start()
                    processeslist.append(p)
                    if itftennis.DEBUG:
                        print("Started process with args %s" % args)
                except:
                    print("Could not start process due to error: %s" % sys.exc_info()[1].__str__())
            print("Created processes, continuing now...")
            continue
    time.sleep(itftennis.livestreamcheckinterval)
t.join()
tp.join()
for out in outlist:
    out.close()







Please accept my apologies for swamping you with this amount of code. I wanted to provide maximum context for my problem. I have removed the absolutely irrelevant DB statements, but apart from that this is what the code looks like.


If you need to know anything else about the code, please let me know. What I would really like to know is whether I am using the ffmpeg stream-capturing statements correctly. The stream contains both video and audio components and I need to capture both. Hence I am making the following call:


process = ffmpeg.input(streamurl).output('pipe:', pix_fmt='yuv420p', format='avi', vcodec='libx264', acodec='pcm_s16le', ac=channels, ar=samplerate, vsync=0, loglevel='quiet').run_async(pipe_stdout=True)
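
For reference, as far as I understand, ffmpeg-python simply maps these keyword arguments onto command-line flags, so the call above should be roughly equivalent to the following command line (the stream URL, channel count and sample rate are placeholders here):

ffmpeg -i <streamurl> -pix_fmt yuv420p -f avi -vcodec libx264 -acodec pcm_s16le -ac <channels> -ar <samplerate> -vsync 0 -loglevel quiet pipe: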



Is this how it is supposed to be done? More importantly, why do I keep getting the freezes in the output video? I have monitored the streams manually, and they are quite consistent. Frame losses do not happen when I view them on the website (at least they are not obviously noticeable). Also, I have run the 'top' command on the host running the program. The CPU usage sometimes goes over 100% (which, I came to understand from some answers on SO, is to be expected when running ffmpeg), but the memory usage usually remains below 30%. So what is the issue here? What do I need to do in order to fix this problem (other than learn more about how ffmpeg works)?


Thanks


I have tried using various ffmpeg options (while trying to find similar issues that others encountered). I also tried running ffmpeg from the command line for a limited period of time (11 mins), using the same options as in the Python code, and the captured content came out quite well. No freezes. No jumps in frames. But I need to use it in an automated way and there could be multiple streams at any time. Also, when I try playing the captured content using ffplay, I sometimes get the message "co located POCs unavailable" when these freezes happen. What does it mean?


-
FFMPEG Audio/video out of sync after cutting and concatenating even after transcoding
4 May 2020, by Ham789
I am attempting to take cuts from a set of videos and concatenate them together with the concat demuxer.



However, the audio is out of sync with the video in the output. The audio seems to drift further out of sync as the video progresses. Interestingly, if I click to seek to another time in the video with the progress bar on the player, the audio becomes synced up with the video but then gradually drifts out of sync again. Seeking to a new time in the player seems to reset the audio/video. It is like they are being played back at different rates or something. I get this behaviour in both the QuickTime and VLC players.



For each video, I decode it, trim a clip from it and then encode it to 4K resolution at 25 fps with its audio:



ffmpeg -ss 0.5 -t 0.5 -i input_video1.mp4 -r 25 -vf scale=3840:2160 output_video1.mp4



I then take each of these videos and concatenate them together with the concat demuxer:



ffmpeg -f concat -safe 0 -i cut_videos.txt -c copy -y output.mp4
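
For completeness, cut_videos.txt is a plain concat demuxer list with one entry per cut, along these lines (the file names here are just examples):

file 'output_video1.mp4'
file 'output_video2.mp4'
file 'output_video3.mp4'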



I am taking short cuts of each video (approximately 0.5s)



I am using Python's subprocess to automate the cutting and concatenating of the videos.



I am not sure if this happens because of the trimming or the concatenation step, but when I play back the intermediate cut video files (output_video1.mp4 in the above command), there seems to be some silence before the audio comes in at the start of the video.

When I concatenate the videos, I sometimes get a lot of these warnings; however, the audio still goes out of sync even when I do not get them:



[mp4 @ 0000021a252ce080] Non-monotonous DTS in output stream 0:1; previous: 51792, current: 50009; changing to 51793. This may result in incorrect timestamps in the output file.



From this post, it seems to be a problem with cutting the videos and their timestamps. The solution proposed in the post is to decode, cut and then re-encode the video; however, I am already doing that.



How can I ensure the audio and video are in sync? Am I transcoding incorrectly? This seems to be the only solution I can find online; however, it does not seem to work.



UPDATE:



I took inspiration from this post and separated the audio and video from output_video1.mp4 using:


ffmpeg -i output_video1.mp4 -acodec copy -vn video.mp4



and



ffmpeg -i output_video1.mp4 -vcodec copy -an audio.mp4



I then compared the durations of video.mp4 and audio.mp4 and got 0.57s and 0.52s respectively. Since the video is longer, this explains why there is a period of silence in the videos. The post then suggests transcoding is the solution; however, as you can see from the code above, that does not work for me.


Sample Output Log for the Trim Command



built with Apple LLVM version 10.0.0 (clang-1000.11.45.5)
 configuration: --prefix=/usr/local/Cellar/ffmpeg/4.2.2 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.0.1.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.0.1.jdk/Contents/Home/include/darwin' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --disable-libjack --disable-indev=jack
 libavutil 56. 31.100 / 56. 31.100
 libavcodec 58. 54.100 / 58. 54.100
 libavformat 58. 29.100 / 58. 29.100
 libavdevice 58. 8.100 / 58. 8.100
 libavfilter 7. 57.100 / 7. 57.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 5.100 / 5. 5.100
 libswresample 3. 5.100 / 3. 5.100
 libpostproc 55. 5.100 / 55. 5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input_video1.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.29.100
 Duration: 00:00:04.06, start: 0.000000, bitrate: 14266 kb/s
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 3840x2160, 14268 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
 Metadata:
 handler_name : Core Media Video
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 94 kb/s (default)
 Metadata:
 handler_name : Core Media Audio
File 'output_video1.mp4' already exists. Overwrite ? [y/N] y
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
 Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
[libx264 @ 0x7fcae4001e00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x7fcae4001e00] profile High, level 5.1
[libx264 @ 0x7fcae4001e00] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'output_video1.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.29.100
 Stream #0:0(und): Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 3840x2160, q=-1--1, 25 fps, 12800 tbn, 25 tbc (default)
 Metadata:
 handler_name : Core Media Video
 encoder : Lavc58.54.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 69 kb/s (default)
 Metadata:
 handler_name : Core Media Audio
 encoder : Lavc58.54.100 aac
frame= 14 fps=7.0 q=-1.0 Lsize= 928kB time=00:00:00.51 bitrate=14884.2kbits/s dup=0 drop=1 speed=0.255x 
video:922kB audio:5kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.194501%
[libx264 @ 0x7fcae4001e00] frame I:1 Avg QP:21.06 size:228519
[libx264 @ 0x7fcae4001e00] frame P:4 Avg QP:22.03 size: 85228
[libx264 @ 0x7fcae4001e00] frame B:9 Avg QP:22.88 size: 41537
[libx264 @ 0x7fcae4001e00] consecutive B-frames: 14.3% 0.0% 0.0% 85.7%
[libx264 @ 0x7fcae4001e00] mb I I16..4: 27.6% 64.3% 8.1%
[libx264 @ 0x7fcae4001e00] mb P I16..4: 9.1% 10.7% 0.2% P16..4: 48.5% 7.3% 3.9% 0.0% 0.0% skip:20.2%
[libx264 @ 0x7fcae4001e00] mb B I16..4: 1.1% 1.0% 0.0% B16..8: 44.5% 2.9% 0.2% direct: 8.3% skip:42.0% L0:45.6% L1:53.2% BI: 1.2%
[libx264 @ 0x7fcae4001e00] 8x8 transform intra:58.2% inter:93.4%
[libx264 @ 0x7fcae4001e00] coded y,uvDC,uvAC intra: 31.4% 62.2% 5.2% inter: 11.4% 30.9% 0.0%
[libx264 @ 0x7fcae4001e00] i16 v,h,dc,p: 15% 52% 12% 21%
[libx264 @ 0x7fcae4001e00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 19% 33% 32% 2% 2% 2% 4% 2% 4%
[libx264 @ 0x7fcae4001e00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20% 39% 9% 3% 4% 4% 12% 3% 4%
[libx264 @ 0x7fcae4001e00] i8c dc,h,v,p: 43% 36% 18% 3%
[libx264 @ 0x7fcae4001e00] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x7fcae4001e00] ref P L0: 69.3% 8.0% 14.8% 7.9%
[libx264 @ 0x7fcae4001e00] ref B L0: 88.1% 9.2% 2.6%
[libx264 @ 0x7fcae4001e00] ref B L1: 90.2% 9.8%
[libx264 @ 0x7fcae4001e00] kb/s:13475.29
[aac @ 0x7fcae4012400] Qavg: 125.000