Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (4)

  • Customising categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a category-type document, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a media-type document, the fields not displayed by default are: Short description
    It is also in this configuration section that you can specify the (...)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all types.
    It creates "media" items, namely: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether it is audio, video, image or text; only a single document can be linked to a "media" article;

  • Specific configuration for PHP5

    4 February 2011, by

    PHP5 is required; you can install it by following this dedicated tutorial.
    It is recommended to disable safe_mode at first; however, if it is configured correctly and the necessary binaries are accessible, MediaSPIP should work properly with safe_mode enabled.
    Specific modules
    Certain specific PHP modules must be installed, via your distribution's package manager or manually: php5-mysql for connectivity with the (...)

On other sites (2138)

  • Developing A Shader-Based Video Codec

    22 June 2013, by Multimedia Mike — Outlandish Brainstorms

    Early last month, this thing called ORBX.js was in the news. It ostensibly has something to do with streaming video and codec technology, which naturally catches my interest. The hype was kicked off by Mozilla honcho Brendan Eich when he posted an article asserting that HD video decoding could be performed entirely in JavaScript. We’ve seen this kind of thing before with Broadway, an H.264 decoder implemented entirely in JS. But that exposes some very obvious limitations (notably CPU usage).

    But this new video codec promises 1080p HD playback directly in JavaScript, which is a lofty claim. How could it possibly do this? I got the impression that performance was achieved using WebGL, an extension that gives JavaScript access to accelerated 3D graphics hardware. Browsing through the conversations surrounding the ORBX.js announcement, I found this confirmation from Eich himself:

    You’re right that WebGL does heavy lifting.

    As of this writing, ORBX.js remains some kind of private tech demo. If there were a public demo available, it would necessarily be easy to reverse engineer the downloadable JavaScript decoder.

    But the announcement was enough to make me wonder how it could be possible to create a video codec which effectively leverages 3D hardware.

    Prior Art
    In theorizing about this, it continually occurs to me that I can’t possibly be the first person to attempt to do this (or the ORBX.js people, for that matter). In googling on the matter, I found various forums and Q&A posts where people asked if it were possible to, e.g., accelerate JPEG decoding and presentation using 3D hardware, with no answers. I also found a blog post which describes a plan to use 3D hardware to accelerate VP8 video decoding. It was a project done under the banner of Google’s Summer of Code in 2011, though I’m not sure which open source group mentored the effort. The project did not end up producing the shader-based VP8 codec originally chartered but mentions that “The ‘client side’ of the VP8 VDPAU implementation is working and is currently being reviewed by the libvdpau maintainers.” I’m not sure what that means. Perhaps it includes modifications to the public API that supports VP8, but is waiting for the underlying hardware to actually implement VP8 decoding blocks in hardware.

    What’s So Hard About This?
    Video decoding is a computationally intensive task. GPUs are known to be really awesome at chewing through computationally intensive tasks. So why aren’t GPUs a natural fit for decoding video codecs?

    Generally, it boils down to parallelism, or the lack of opportunities for it. GPUs are really good at doing the exact same operations over lots of data at once. The problem is that decoding compressed video usually requires multiple phases that cannot be parallelized, and the work within the individual phases often cannot be parallelized either. In strictly mathematical terms, a compressed data stream is decoded by applying a function f(x) over each data element, x_0 .. x_n. However, the function relies on having been applied to the previous data element, i.e.:

    f(x_n) = f(f(x_{n-1}))

    What happens when you try to parallelize such an algorithm? Temporal rifts in the space/time continuum, if you’re in a Star Trek episode. If you’re in the real world, you’ll get incorrect, unusable data, as the parallel computation is seeded with a bunch of invalid data at multiple points (which is illustrated in some of the pictures in the aforementioned blog post about accelerated VP8).

    Example: JPEG
    Let’s take a very general look at the various stages involved in decoding the ubiquitous JPEG format:


    High level JPEG decoding flow

    What are the opportunities to parallelize these various phases?

    • Huffman decoding (run-length decoding and zig-zag reordering are assumed to be rolled into this phase): not many opportunities for parallelizing the various Huffman formats out there, including this one. Decoding most Huffman streams is necessarily a sequential operation. I once hypothesized that it would be possible to engineer a codec to achieve some parallelism during the entropy decoding phase, and later found that On2’s VP8 codec employs such a scheme. However, such a scheme is unlikely to break the work down to the fine granularity that WebGL would require.
    • Reverse DC prediction: JPEG — and many other codecs — doesn’t store full DC coefficients. It stores differences between successive DC coefficients. Reversing this process can’t be parallelized. See the discussion in the previous section.
    • Dequantize coefficients: This could be heavily parallelized. It should be noted that software decoders often don’t dequantize all coefficients: many coefficients are 0, and it’s a waste of a multiplication operation to dequantize them. Thus, this phase is sometimes rolled into the Huffman decoding phase.
    • Invert discrete cosine transform: This also seems highly parallelizable. I will be exploring it further in this post.
    • Convert YUV -> RGB for final display: This is a well-established use case for 3D acceleration (a sketch follows this list).
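
    To make that last point concrete, here is a minimal sketch of the well-established YUV -> RGB fragment shader, assuming the 3 planes have been uploaded as separate single-channel textures; the texture names and the (BT.601-style) conversion coefficients are my own illustrative choices, not anything from ORBX.js:

    // Hypothetical fragment shader: sample one texel per plane, convert to RGB
    const yuvToRgbFragmentSrc = `
      precision mediump float;
      varying vec2 vTexCoord;    // interpolated texture coordinate
      uniform sampler2D uTexY;   // one texture per plane (names assumed)
      uniform sampler2D uTexU;
      uniform sampler2D uTexV;

      void main() {
        float y = texture2D(uTexY, vTexCoord).r;
        float u = texture2D(uTexU, vTexCoord).r - 0.5;
        float v = texture2D(uTexV, vTexCoord).r - 0.5;
        gl_FragColor = vec4(y + 1.402 * v,
                            y - 0.344 * u - 0.714 * v,
                            y + 1.772 * u,
                            1.0);
      }`;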

    Crash Course in 3D Shaders and Humility
    So I wanted to see if I could accelerate some parts of JPEG decoding using something called shaders. I made an effort to understand 3D programming and its associated math throughout the 1990s but 3D technology left me behind a very long time ago while I got mixed up in this multimedia stuff. So I plowed through a few books concerning WebGL (thanks to my new Safari Books Online subscription). After I learned enough about WebGL/JS to be dangerous and just enough about shader programming to be absolutely lethal, I set out to try my hand at optimizing IDCT using shaders.

    Here’s my extremely high level (and probably hopelessly naive) view of the modern GPU shader programming model:


    Basic WebGL rendering pipeline

    The WebGL program written in JavaScript drives the show. It sends a set of vertices into the WebGL system, and each vertex is processed through a vertex shader. Then, each pixel that falls within a set of vertices is sent through a fragment shader to compute the final pixel attributes (R, G, B, and alpha value). Another consideration is textures: data that the program uploads to GPU memory and that the shaders can access programmatically.
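
    In code, the flow described above looks roughly like this; a minimal sketch, assuming canvas, vertexSrc, fragmentSrc, and a flat vertices array already exist, and that the vertex shader names its position attribute aPos:

    const gl = canvas.getContext('webgl');

    // Compile one shader of the given type from source
    function compile(type, source) {
      const shader = gl.createShader(type);
      gl.shaderSource(shader, source);
      gl.compileShader(shader);
      return shader;
    }

    // Link the vertex + fragment shader pair into a program
    const program = gl.createProgram();
    gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSrc));
    gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSrc));
    gl.linkProgram(program);
    gl.useProgram(program);

    // Vertices feed the vertex shader; every pixel they cover then
    // goes through the fragment shader.
    const buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
    const aPos = gl.getAttribLocation(program, 'aPos');
    gl.enableVertexAttribArray(aPos);
    gl.vertexAttribPointer(aPos, 3, gl.FLOAT, false, 0, 0);
    gl.drawArrays(gl.TRIANGLES, 0, vertices.length / 3);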

    These shaders (vertex and fragment) are key to the GPU’s programmability. How are they programmed? Using a special C-like shading language. Thought I: “C-like language? I know C! I should be able to master this in short order!” So I charged forward with my assumptions and proceeded to get smacked down repeatedly by the overall programming paradigm. I came to recognize this as a variation of the scientific method: develop a hypothesis (in my case, a mental model of how the system works); devise an experiment (a short program) to prove or disprove the model; realize something fundamental that I was overlooking; formulate a new hypothesis and repeat.

    First Approach: Vertex Workhorse
    My first pitch goes like this:

    • Upload DCT coefficients to GPU memory in the form of textures
    • Program a vertex mesh that encapsulates 16×16 macroblocks
    • Distribute the IDCT effort among multiple vertex shaders
    • Pass the transformed Y, U, and V blocks to the fragment shader, which will convert the samples to RGB

    So the idea is that decoding of 16×16 macroblocks is parallelized. A macroblock embodies 6 blocks:


    JPEG macroblocks

    It would be nice to process one of these 6 blocks in each vertex. But that means drawing a square with 6 vertices. How do you do that? I eventually realized that 6 vertices is in fact the recommended way to draw a square on 3D hardware: use 2 triangles, each with 3 vertices (0, 1, 2; 3, 4, 5):


    2 triangles make a square

    A vertex shader knows which (x, y) coordinates it has been assigned, so it could figure out which sections of coefficients it needs to access within the textures. But how would a vertex shader know which of the 6 blocks it should process? Solution: misappropriate the vertex’s z coordinate. It’s not used for anything else in this case.
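
    As a sketch (the vertex layout is my own invention, just to illustrate the trick), the 6 vertices of one macroblock’s quad would each carry a different block index in z:

    // Two triangles = 6 vertices; z smuggles in the block index
    // (4 Y blocks + 1 U + 1 V = indices 0..5).
    const macroblockQuad = new Float32Array([
      //  x,  y,  z = block index
         -1, -1,  0,
          1, -1,  1,
          1,  1,  2,
         -1, -1,  3,
          1,  1,  4,
         -1,  1,  5,
    ]);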

    So I set all of that up. Then I hit a new roadblock: how to get the reconstructed Y, U, and V samples transported to the fragment shader? I have found that communicating between shaders is quite difficult. Texture memory? WebGL doesn’t allow shaders to write back to texture memory; shaders can only read it. The standard way to communicate data from a vertex shader to a fragment shader is to declare variables as “varying”. Up until this point, I knew about varying variables, but there was something I didn’t quite understand about them and it nagged at me: if 3 different executions of a vertex shader set 3 different values to a varying variable, what value is passed to the fragment shader?

    It turns out that the varying variable varies, which means that the GPU passes interpolated values to each fragment shader invocation. This completely destroys this idea.
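
    A tiny sketch of what I mean; if the 3 vertices of a triangle write 0.0, 1.0, and 2.0, a fragment in the middle sees a blend, not any one of those values:

    const vertexSrc = `
      attribute vec3 aPos;
      varying float vValue;                    // "varying" is the keyword
      void main() {
        vValue = aPos.z;                       // different value per vertex
        gl_Position = vec4(aPos.xy, 0.0, 1.0);
      }`;

    const fragmentSrc = `
      precision mediump float;
      varying float vValue;                    // arrives interpolated
      void main() {
        gl_FragColor = vec4(vec3(vValue), 1.0);
      }`;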

    Second Idea: Vertex Workhorse, Take 2
    The revised pitch is to work around the interpolation issue by just having each vertex shader invocation perform all 6 block transforms. That seems like a lot of redundant work. However, I figured out that I can draw a square with only 4 vertices by arranging them in an ‘N’ pattern and asking WebGL to draw a TRIANGLE_STRIP instead of TRIANGLES. Now it’s only doing 4x the extra work, not 6x. GPUs are supposed to be great at this type of work, so it shouldn’t matter, right?
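
    For reference, the 4-vertex ‘N’ arrangement drawn as a strip looks something like this (positions are plain (x, y) pairs here):

    // TRIANGLE_STRIP reuses the last 2 vertices of each triangle,
    // so 4 vertices in zig-zag ('N') order yield the 2 triangles.
    const stripQuad = new Float32Array([
      -1, -1,   // bottom-left
      -1,  1,   // top-left
       1, -1,   // bottom-right
       1,  1,   // top-right
    ]);
    gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);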

    I wired up an experiment and then ran into a new problem: while I was able to transform a block (or at least pretend to) and load up a varying array (which wouldn’t vary, since all vertex shaders wrote the same values) to transmit to the fragment shader, the fragment shader can’t access specific values within the varying block. To clarify, a WebGL shader can use a constant value — or a value that can be evaluated as a constant at compile time — to index into arrays; a WebGL shader cannot compute an index into an array. Per my reading, this is a WebGL security consideration, and the limitation may not be present in other OpenGL(-ES) implementations.
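
    In shader terms, the distinction looks like this (array and variable names are hypothetical; this is my reading of the restriction, not spec-level precision):

    const fragmentSrc = `
      precision mediump float;
      varying float vSamples[6];
      varying float vSelector;
      void main() {
        float a = vSamples[2];       // fine: index is a compile-time constant
        int i = int(vSelector);
        float b = vSamples[i];       // rejected: index is computed at runtime
        gl_FragColor = vec4(vec3(a + b), 1.0);
      }`;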

    Not Giving Up Yet: Choking The Fragment Shader
    You might want to be sitting down for this pitch:

    • Vertex shader only interpolates texture coordinates to transmit to fragment shader
    • Fragment shader performs IDCT for a single Y sample, U sample, and V sample
    • Fragment shader converts YUV -> RGB

    Seems straightforward enough. However, the step concerning IDCT for Y, U, and V entails a gargantuan number of operations. When computing the IDCT for an entire block of samples, it’s possible to leverage a lot of redundancy in the math, which equates to far fewer overall operations. If you absolutely have to compute each sample individually, an 8×8 block requires 64 multiplication/accumulation (MAC) operations per sample. For 3 color planes, and including a few extra multiplications involved in the RGB conversion, that tallies up to about 200 MACs per pixel. Then there’s the fact that this approach means 4x redundant operations on the color planes.

    It’s crazy, but I just want to see if it can be done. My approach is to pre-compute a pile of IDCT constants in the JavaScript and transmit them to the fragment shader via uniform variables. As a first-order optimization, the IDCT constants are formatted as 4-element vectors, which allows computing 16 dot products rather than 64 individual multiplication/addition operations. Ideally, GPU hardware executes the dot products faster (and there is also the possibility of lining these calculations up as matrices).
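
    Sketched as a shader snippet (the uniform names are invented for illustration):

    const idctSnippet = `
      uniform vec4 uIdctConsts[16];  // 64 constants, packed 4 at a time in JS
      uniform vec4 uCoeffs[16];      // the block's 64 coefficients, same packing

      float idctSample() {
        float s = 0.0;
        for (int i = 0; i < 16; i++) {
          s += dot(uIdctConsts[i], uCoeffs[i]);  // 16 dot products vs. 64 MACs
        }
        return s;
      }`;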

    I can report that I actually got a sample correctly transformed using this approach. Just one sample, though. Then I ran into some new problems:

    Problem #1: Computing sample #1 vs. sample #0 requires a different table of 64 IDCT constants. Okay, so create a long table of 64 * 64 IDCT constants. However, this suffers from the same problem as the previous approach: I can’t dynamically compute the index into this array. What’s the alternative? Maintain 64 separate named arrays and implement 64 branches, when branching of any kind is ill-advised in shader programming to begin with? I started to go down this path until I ran into…

    Problem #2: Shaders can only be so large. 64 * 64 floats (4 bytes each) requires 16 kbytes of data, and this well exceeds the amount of shader storage that I can assume is allowed. That brings this path of exploration to a screeching halt.

    Further Brainstorming
    I suppose I could forgo pre-computing the constants and directly compute the IDCT for each sample which would entail lots more multiplications as well as 128 cosine calculations per sample (384 considering all 3 color planes). I’m a little stuck with the transform idea right now. Maybe there are some other transforms I could try.

    Another idea would be vector quantization. What little ORBX.js literature is available indicates that there is a method to allow real-time streaming but that it requires GPU assistance to yield enough horsepower to make it feasible. When I think of such severe asymmetry between compression and decompression, my mind drifts towards VQ algorithms. As I come to understand the benefits and limitations of GPU acceleration, I think I can envision a way that something similar to SVQ1, with its copious, hierarchical vector tables stored as textures, could be implemented using shaders.

    So far, this all pertains to intra-coded video frames. What about opportunities for inter-coded frames? The only approach that I can envision here is to use WebGL’s readPixels() function to fetch the rasterized frame out of the GPU, and then upload it again as a new texture which a new frame processing pipeline could reference. Whether this idea is plausible would require some profiling.

    Using interframes in such a manner seems to imply that the entire codec would need to operate in RGB space and not YUV.
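
    The round trip I have in mind would look roughly like this (width, height, and refTexture assumed):

    // Pull the rasterized RGBA frame back out of the GPU...
    const pixels = new Uint8Array(width * height * 4);
    gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);

    // ...then re-upload it as a texture for the next frame's pipeline.
    gl.bindTexture(gl.TEXTURE_2D, refTexture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, pixels);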

    Conclusions
    The people behind ORBX.js have apparently figured out a way to create a shader-based video codec. I have yet to even begin to reason out a plausible approach. However, I’m glad I did this exercise since I have finally broken through my ignorance regarding modern GPU shader programming. It’s nice to have a topic like multimedia that allows me a jumping-off point to explore other areas.

  • Audio clipping after amerge in FFmpeg

    23 April 2021, by gooey_duck

    Hoping someone out there can help me with this audio issue I'm having with FFmpeg. I've written a bash script around FFmpeg that takes as its source a broadcast-quality ProRes (HQ) file with four mono tracks of audio. The source audio is LPCM 24-bit, 48 kHz, signed little-endian, and I am exporting the same. Track 1 is full mix left, track 2 is full mix right, track 3 is music/FX left and track 4 is music/FX right. The script trims the source video using in and out points from a sidecar XML. It also adds a custom slate at the head, taken from a separate slate-only MOV file and joined on via the concat filter. Custom text is added to the slate via the drawtext filter and, finally, audio tracks 3 and 4 are dropped while tracks 1 and 2 are merged into a single stereo interleaved track using amerge.

    


    All of this, seemingly, works like a charm. The problem I'm noticing occurs when I run the resulting export through our in-house QC software. This software detects audio signal clipping at multiple points throughout the file created by FFmpeg. When I create this exact same file via Adobe Premiere or another transcode system, our QC does not detect any clipping. Our QC tools also do not detect any clipping in the source file. This leads me to believe that the clipping is being introduced either by FFmpeg or by my implementation of it. I've tried multiple additional filters within FFmpeg, including pan, amix, volume, etc., but nothing seems to help.
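
    In case it helps with diagnosis: peak levels on the merged track can be checked with FFmpeg's own volumedetect filter (the stream index below is assumed; adjust for your file). A reported max_volume of 0.0 dB would be consistent with clipped samples:

    ffmpeg -i test_output.mov -map 0:a:0 -af volumedetect -f null -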

    


    Anyone have any ideas? Pasting the relevant section of my script for reference:

    


    ffmpeg \
-ss "$hour":"$min":"$sec""$mil_fin" \
-t "$hour_minus":"$min_b":"$sec_b""$mil_fin_b" \
-i "$vid" \
-i "$elements_path"Slate_HD.mov \
-filter_complex \
"[0:a:0] [0:a:1] amerge=inputs=2 [stereo]; \
[1:0] [1:1] [0:0] [stereo] concat=n=2:v=1:a=1 [v] [a]; \
[v]drawtext=enable='between(t,0,28)':fontfile="$fonts"Arial.ttf:fontsize=50:fontcolor=white\
:x=170:y=170:text='$title', \
drawtext=enable='between(t,0,28)':fontfile="$fonts"Arial.ttf:fontsize=50:fontcolor=white\
:x=170:y=170+50:text='Series $series_number Episode $episode_number', \
drawtext=enable='between(t,0,28)':fontfile="$fonts"Arial.ttf:fontsize=50:fontcolor=white\
:x=170:y=170+150:text='$transcode_date', \
yadif=0:-1:0 [o]" \
-map '[o]' -map '[a]' \
-timecode 09:59:30:00 \
-c:v prores_ks -profile:v 3 \
-c:a pcm_s24le \
-threads 3 \
"$output_name".mov


    


    Per a suggestion, a simplified audio-only command line was run as follows:

    


    ffmpeg -i test.mov -i Slate_HD.mov -filter_complex "[0:a:0] [0:a:1] amerge=inputs=2 [stereo];[1:a][stereo]concat=n=2:a=1:v=0" -c:a pcm_s24le test_output.mov


    


    ...and the log from that command line:

    


    ffmpeg version 4.3.2 Copyright (c) 2000-2021 the FFmpeg developers
  built with Apple clang version 12.0.0 (clang-1200.0.32.29)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/4.3.2_3 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox
  libavutil      56. 51.100 / 56. 51.100
  libavcodec     58. 91.100 / 58. 91.100
  libavformat    58. 45.100 / 58. 45.100
  libavdevice    58. 10.100 / 58. 10.100
  libavfilter     7. 85.100 /  7. 85.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  7.100 /  5.  7.100
  libswresample   3.  7.100 /  3.  7.100
  libpostproc    55.  7.100 / 55.  7.100
Guessed Channel Layout for Input Stream #0.1 : mono
Guessed Channel Layout for Input Stream #0.2 : mono
Guessed Channel Layout for Input Stream #0.3 : mono
Guessed Channel Layout for Input Stream #0.4 : mono
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/input/test.mov':
  Metadata:
    major_brand     : qt  
    minor_version   : 537199360
    compatible_brands: qt  
    creation_time   : 2021-04-22T18:58:58.000000Z
  Duration: 00:02:00.04, start: 0.000000, bitrate: 170739 kb/s
    Stream #0:0(eng): Video: prores (HQ) (apch / 0x68637061), yuv422p10le(tv, bt709/unknown/unknown, top coded first (swapped)), 1920x1080, 166061 kb/s, SAR 1:1 DAR 16:9, 25 fps, 25 tbr, 25 tbn, 25 tbc (default)
    Metadata:
      creation_time   : 2021-04-22T18:58:58.000000Z
      handler_name    : Apple Video Media Handler
      encoder         : Apple ProRes 422 HQ
      timecode        : 00:00:00:00
    Stream #0:1(eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
    Metadata:
      creation_time   : 2021-04-22T18:58:58.000000Z
      handler_name    : Apple Sound Media Handler
      timecode        : 00:00:00:00
    Stream #0:2(eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
    Metadata:
      creation_time   : 2021-04-22T18:58:58.000000Z
      handler_name    : Apple Sound Media Handler
      timecode        : 00:00:00:00
    Stream #0:3(eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
    Metadata:
      creation_time   : 2021-04-22T18:58:58.000000Z
      handler_name    : Apple Sound Media Handler
      timecode        : 00:00:00:00
    Stream #0:4(eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
    Metadata:
      creation_time   : 2021-04-22T18:58:58.000000Z
      handler_name    : Apple Sound Media Handler
      timecode        : 00:00:00:00
    Stream #0:5(eng): Data: none (tmcd / 0x64636D74) (default)
    Metadata:
      creation_time   : 2021-04-22T18:58:58.000000Z
      handler_name    : Time Code Media Handler
      timecode        : 00:00:00:00
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from '/input/Slate_HD.mov':
  Metadata:
    major_brand     : qt  
    minor_version   : 537199360
    compatible_brands: qt  
    creation_time   : 2021-03-22T17:23:16.000000Z
  Duration: 00:00:30.00, start: 0.000000, bitrate: 77599 kb/s
    Stream #1:0(eng): Video: prores (HQ) (apch / 0x68637061), yuv422p10le(tv, bt709, progressive), 1920x1080, 75783 kb/s, SAR 1:1 DAR 16:9, 25 fps, 25 tbr, 25 tbn, 25 tbc (default)
    Metadata:
      creation_time   : 2021-03-22T17:23:16.000000Z
      handler_name    : Apple Video Media Handler
      encoder         : Apple ProRes 422 HQ
      timecode        : 00:00:00:00
    Stream #1:1(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, stereo, s16, 1536 kb/s (default)
    Metadata:
      creation_time   : 2021-03-22T17:23:16.000000Z
      handler_name    : Apple Sound Media Handler
      timecode        : 00:00:00:00
    Stream #1:2(eng): Data: none (tmcd / 0x64636D74), 0 kb/s (default)
    Metadata:
      creation_time   : 2021-03-22T17:23:16.000000Z
      handler_name    : Time Code Media Handler
      timecode        : 00:00:00:00
Stream mapping:
  Stream #0:1 (pcm_s24le) -> amerge:in0 (graph 0)
  Stream #0:2 (pcm_s24le) -> amerge:in1 (graph 0)
  Stream #1:1 (pcm_s16le) -> concat:in0:a0 (graph 0)
  concat (graph 0) -> Stream #0:0 (pcm_s24le)
  Stream #0:0 -> #0:1 (prores (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0x7feadf81a600] using SAR=1/1
[libx264 @ 0x7feadf81a600] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x7feadf81a600] profile High 4:2:2, level 4.0, 4:2:2, 10-bit
[libx264 @ 0x7feadf81a600] 264 - core 161 r3048 b86ae3c - H.264/MPEG-4 AVC codec - Copyleft 2003-2021 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=81 qpstep=4 ip_ratio=1.40 aq=1:1.00
[Parsed_amerge_0 @ 0x7feade41dfc0] No channel layout for input 1
[Parsed_amerge_0 @ 0x7feade41dfc0] Input channel layouts overlap: output layout will be determined by the number of distinct input channels
Output #0, mov, to '/output/test_output.mov':
  Metadata:
    major_brand     : qt  
    minor_version   : 537199360
    compatible_brands: qt  
    encoder         : Lavf58.45.100
    Stream #0:0: Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, stereo, s32, 2304 kb/s (default)
    Metadata:
      encoder         : Lavc58.91.100 pcm_s24le
    Stream #0:1(eng): Video: h264 (libx264) (avc1 / 0x31637661), yuv422p10le(top coded first (swapped)), 1920x1080 [SAR 1:1 DAR 16:9], q=-1--1, 0.04 fps, 12800 tbn, 25 tbc (default)
    Metadata:
      creation_time   : 2021-04-22T18:58:58.000000Z
      handler_name    : Apple Video Media Handler
      timecode        : 00:00:00:00
      encoder         : Lavc58.91.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
frame= 3001 fps= 21 q=-1.0 Lsize=  106530kB time=00:02:30.04 bitrate=5816.4kbits/s speed=1.07x
video:64287kB audio:42199kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.041280%
[libx264 @ 0x7feadf81a600] frame I:33    Avg QP:33.56  size:113497
[libx264 @ 0x7feadf81a600] frame P:984   Avg QP:36.20  size: 35478
[libx264 @ 0x7feadf81a600] frame B:1984  Avg QP:37.44  size: 13696
[libx264 @ 0x7feadf81a600] consecutive B-frames: 11.5%  0.9%  0.9% 86.8%
[libx264 @ 0x7feadf81a600] mb I  I16..4:  3.4% 86.8%  9.7%
[libx264 @ 0x7feadf81a600] mb P  I16..4:  1.6% 13.7%  0.7%  P16..4: 44.3%  8.5%  7.0%  0.0%  0.0%    skip:24.2%
[libx264 @ 0x7feadf81a600] mb B  I16..4:  0.1%  1.1%  0.0%  B16..8: 47.6%  2.4%  0.3%  direct: 1.3%  skip:47.3%  L0:47.0% L1:51.3% BI: 1.7%
[libx264 @ 0x7feadf81a600] 8x8 transform intra:85.9% inter:88.3%
[libx264 @ 0x7feadf81a600] coded y,uvDC,uvAC intra: 67.9% 73.8% 13.9% inter: 20.8% 21.6% 0.6%
[libx264 @ 0x7feadf81a600] i16 v,h,dc,p: 48% 15%  4% 33%
[libx264 @ 0x7feadf81a600] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 19% 11% 27%  6%  7%  7%  7%  8%  7%
[libx264 @ 0x7feadf81a600] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 15% 14%  7%  9%  9%  8%  6%  5%
[libx264 @ 0x7feadf81a600] i8c dc,h,v,p: 57% 12% 24%  7%
[libx264 @ 0x7feadf81a600] Weighted P-Frames: Y:0.1% UV:0.1%
[libx264 @ 0x7feadf81a600] ref P L0: 69.1% 21.7%  9.2%  0.0%
[libx264 @ 0x7feadf81a600] ref B L0: 87.2% 10.4%  2.5%
[libx264 @ 0x7feadf81a600] ref B L1: 94.9%  5.1%
[libx264 @ 0x7feadf81a600] kb/s:4387.16


    


  • Your introduction to personally identifiable information: What is PII?

    15 January 2020, by Joselyn Khor — Analytics Tips, Privacy, Security

    When it comes to personally identifiable information (PII), people are becoming more concerned with data privacy. Identifiable information can be used for illegal purposes like identity theft and fraud. 

    So how can you protect yourself as an innocent web user?

    If you’re a website owner – how do you protect users and your company from falling prey to privacy breaches?

    As one of the most trusted analytics companies, we feel our readers would benefit from being as informed as possible about data privacy issues and PII. Learn how you can keep your own or others’ information safe.


    What does PII stand for?

    PII acronym

    PII is an acronym for personally identifiable information.

    PII definition

    Personally identifiable information (PII) is a term mainly used in the United States.

    The appendix of OMB M-10-23 (Guidance for Agency Use of Third-Party Websites and Applications) gives this definition of PII:

    “The term ‘personally identifiable information’ refers to information which can be used to distinguish or trace an individual’s identity, such as their name, social security number, biometric records, etc. alone, or when combined with other personal or identifying information which is linked or linkable to a specific individual, such as date and place of birth, mother’s maiden name, etc.”

    What can be considered personally identifiable information (PII)? Some PII examples:

    • Full name/usernames
    • Home address/mailing address
    • Email address
    • Credit card numbers
    • Date of birth
    • Phone numbers
    • Login details
    • Precise locations
    • Account numbers
    • Passwords
    • Security codes (including biometric records)
    • Personal identification numbers
    • Driver’s license number
    • Get a more comprehensive list here

    What’s non-PII?

    Who is affected by the exploitation of PII?

    Anyone can be affected by the misuse of personal data. Websites can compromise your privacy by mishandling or illegally selling/sharing your data, which may lead to identity theft, account fraud and account takeovers. The fear is falling victim to such fraudulent activity.

    PII can also be an issue when employees have access to a database and the data is not encrypted. For example, anyone working in a bank can access your accounts; anyone working at Facebook can read your messages. This shows how easily privacy breaches can happen when employees have access to PII.

    Website owner’s responsibility for data privacy (PII and analytics)

    If you’re using a web analytics tool like Google Analytics or Matomo, best practice is to not collect PII if possible. This is to better respect your website visitors’ privacy.

    If you work in an industry which needs people to share personal information (e.g. healthcare, security industries, public sector), then you must collect and handle this data securely. 


    The US National Institute of Standards and Technology states: “The likelihood of harm caused by a breach involving PII is greatly reduced if an organisation minimises the amount of PII it uses, collects, and stores. For example, an organisation should only request PII in a new form if the PII is absolutely necessary.”

    How you’re held accountable remains up to the privacy laws of the country you’re doing business in. Make sure you are fully aware of the privacy and data protection laws that relate specifically to you. 

    To reduce the risk of privacy breaches, try collecting as little PII as you can; purging it as soon as you can; and making sure your IT security is kept up to date and protected against security threats.

    With data collection tools like web analytics, data may be tracked through features like User ID, custom variables, and custom dimensions. PII can also be harder to spot when it is present in, for example, page URLs, page titles, or referrer URLs. So make sure you’re configuring your web analytics tool’s settings to ask your users for consent and to respect their privacy.

    If you’re using a GDPR-compliant tool like Matomo, learn how you can stop processing such personal data.

    PII, GDPR and businesses in the US/EU

    You may get confused when considering PII and GDPR (which applies in the EU). The General Data Protection Regulation (GDPR) gives people in the EU more rights over “personal data” – which covers more identifiers than PII (more on PII vs personal data below). GDPR restricts the collection and processing of personal data so businesses need to handle this personal data carefully. 

    According to the GDPR, businesses can be fined up to 4% of their yearly revenue for data/privacy breaches or non-compliance.


    In the US, there isn’t one overarching data protection law; instead, there are hundreds of laws at both the federal and state levels to protect the PII of US residents. US Congress has enacted industry-specific statutes related to data privacy, like HIPAA. Recently, the state of California also passed the California Consumer Privacy Act (CCPA).

    To be on the safe side, if you’re using analytics, follow the GDPR’s rules on “personal data”; they cover more when it comes to protecting user privacy. GDPR rules still apply whenever an EU citizen visits any non-EU site (that processes personal data).

    Personally identifiable information (PII) vs personal data

    PII and “personal data” aren’t used interchangeably. All PII can be considered personal data, but not all personal data counts as PII.

    The definition of “personal data” according to the GDPR (Article 4(1)) is:

    “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person”

    This means “personal data” covers more identifiers, including online identifiers. Examples include IP addresses and URLs, as well as seemingly “innocent” data like height, job position, company etc.

    What’s seen as personal data depends on the context. If a piece of information can be combined with others to establish someone’s identity, then it can be considered personal data.

    Under the GDPR, when processing personal data, you need explicit consent. So it’s best to be compliant according to the GDPR’s definition of “personal data”, not just what’s considered “PII”.

    How do you keep PII safe?

    • Try not to give your data away so easily. Read through terms and conditions.
    • Don’t just click ‘agree’ when faced with consent screens, as consent screens are majorly flawed. 
    • Disable third party cookies by default. 
    • Use strong passwords.
    • Be wary of public wifi – hackers can easily access your PII or sensitive data. Use a VPN (virtual private network).
    • Read more on how to keep PII safe. For businesses, here’s a checklist on PII compliance.

    How Matomo deals with PII and personal data

    Although Matomo Analytics is a web analytics tool that tracks user activity on your website, we take privacy and PII very seriously – on both our Cloud and On-Premise offerings. 

    If you’re using Matomo and would like to know how you can be fully GDPR compliant and protect user privacy, read more:

    Disclaimer

    We are not lawyers and don’t claim to be. The information provided here is to help give an introduction to issues you may encounter when dealing with PII. We encourage every business and website to take data privacy seriously and discuss these issues with your lawyer if you have any concerns.