
Media (1)
-
Video of a bee in portrait orientation
14 May 2011
Updated: February 2012
Language: French
Type: Video
Other articles (49)
-
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that led to the problem; and a link to the site/page in question.
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...) -
Authorizations overridden by plugins
27 April 2010, by MediaSPIP core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page -
Other interesting software
13 April 2011
We don’t claim to be the only ones doing what we do... and we especially don’t claim to be the best... We just try to do it well and keep getting better...
The following list includes software that is more or less similar to MediaSPIP, or whose goals MediaSPIP more or less shares.
We don’t know them, we haven’t tried them, but you can take a peek.
Videopress
Website: http://videopress.com/
License: GNU/GPL v2
Source code: (...)
On other sites (8577)
-
Subtitling Sierra RBT Files
2 June 2016, by Multimedia Mike — Game Hacking
This is part 2 of the adventure started in my Subtitling Sierra VMD Files post. After I completed the VMD subtitling, The Translator discovered a wealth of animation files in a format called RBT (this apparently stands for “Robot” but I think “Ribbit” format could be more fun). What are we going to do? We had come so far by solving the VMD subtitling problem for Phantasmagoria. It would be a shame if the effort ground to a halt due to this.
Fortunately, the folks behind the ScummVM project already figured out enough of the format to be able to decode the RBT files in Phantasmagoria.
In the end, I was successful in creating a completely standalone tool that can take a Robot file and a subtitle file and create a new Robot file with subtitles. The source code is here (subtitle-rbt.c). Here’s what the final result looks like:
“What’s in the refrigerator?” I should note at this juncture that I am not sure whether this particular Robot file even has sound or dialogue, since I was conducting these experiments on a computer with non-working audio.
The RBT Format
I have created a new MultimediaWiki page describing the Robot Animation format based on the ScummVM source code. I have not worked with a format quite like this before. These are paletted animations consisting of a sequence of independent frames that are designed to be overlaid on top of a static background. Because of these characteristics, each frame encodes its own unique dimensions and origin coordinate within the game field. While the Phantasmagoria VMD files are usually 288×144 (and usually double-sized for the benefit of a 640×400 Super VGA canvas), these frames are meant to be plotted on a game field that is roughly 576×288 (288×144 double-sized).
For example, two minimalist animation frames from a desk investigation Robot file:
100×147
101×149
As for compression, my first impression was that the algorithm was the same as VMD’s. This is wrong. It evidently uses an unmodified version of a standard algorithm called Lempel-Ziv-Stac (LZS). It shows up in several RFCs and was apparently used in MS-DOS’s transparent disk compression scheme.
Approach
Thankfully, many of the lessons I learned from the previous project are applicable to this project, including: subtitle library interfacing, subtitling in the paletted colorspace, and replacing encoded frames from the original file instead of trying to create a new file.
Here is the pitch for this project:
- Create a C program that can traverse an input file, piece by piece, and generate an output file. The result should be a bitwise identical file (a minimal pass-through sketch follows this list).
- Adapt the LZS decoding algorithm from ScummVM into the new tool. Make the tool dump raw portable anymap (PNM) files of varying dimensions and ensure that they look correct.
- Compress using LZS.
- Stretch the frames and draw subtitles.
- More compression. Find the minimum window for each frame.
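As referenced in the first step, here is a minimal pass-through sketch (not the actual subtitle-rbt.c code): it copies the input Robot file to the output unchanged, so the result can be byte-compared against the original before any real processing is layered on. File names and buffer size are arbitrary.

```c
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <in.rbt> <out.rbt>\n", argv[0]);
        return 1;
    }
    FILE *in  = fopen(argv[1], "rb");
    FILE *out = fopen(argv[2], "wb");
    if (!in || !out) {
        perror("fopen");
        return 1;
    }
    unsigned char buf[4096];
    size_t n;
    /* straight copy for now; later steps rewrite individual pieces */
    while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
        fwrite(buf, 1, n, out);
    fclose(out);
    fclose(in);
    return 0;
}
```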
Compression
Normally, my first goal is to decompress the video and store the data in raw form. However, this turned out to be mathematically intractable. While the format does support both compressed and uncompressed frames (even though ScummVM indicates that the uncompressed path is as yet unexercised), the goal of this project requires making the frames so large that they overflow certain parameters of the file.
A Robot file has a sequence of frames and 2 tables describing the size of each frame. One table describes the entire frame size (audio + video) while the second table describes just the video frame size. Since these tables only use 16 bits to specify a size, the maximum frame size is 65536 bytes. After reserving space for the audio portion of the frame, that leaves a per-frame byte budget of only about 63000 bytes for the video. Expanding the frame to 576×288 (165,888 pixels) would overflow this limit.
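To make the arithmetic concrete, here is a small sketch of that budget check; the 63000-byte video budget is the rough figure quoted above, and the on-disk header layout is not modeled.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint32_t size_field_max = 65536;  /* 16-bit per-frame size tables */
    const uint32_t video_budget   = 63000;  /* approximate, after the audio share */
    const uint32_t width = 576, height = 288;         /* stretched game field */
    const uint32_t raw_video_bytes = width * height;  /* 1 byte per paletted pixel */

    printf("16-bit frame size limit : %u bytes\n", size_field_max);
    printf("approx. video budget    : %u bytes\n", video_budget);
    printf("raw 576x288 frame       : %u bytes -> %s\n",
           raw_video_bytes,
           raw_video_bytes <= video_budget ? "fits" : "overflows");
    return 0;
}
```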
Anyway, the upshot is that I needed to compress the data up front.
Fortunately, the LZS compressor is pretty straightforward, at least if you have experience writing VLC-oriented codecs. While the algorithm revolves around back references, my approach was to essentially write an RLE encoder. My compressor would search for runs of data (plentiful when I started to stretch the frame for subtitling purposes). When a run length of n=3 or more of the same pixel is found, encode the pixel by itself, and then store a back reference of offset -1 and length (n-1). It took a little while to iron out a few problems, but I eventually got it to work perfectly.
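A sketch of this run-length strategy, assuming hypothetical emit_literal()/emit_backref() output helpers (an illustration of the approach described above, not the actual subtitle-rbt.c code):

```c
#include <stdint.h>

/* Hypothetical output helpers; the real tool writes LZS bit codes here. */
void emit_literal(uint8_t pixel);
void emit_backref(int offset, int length);

/* Scan one row of paletted pixels; runs of 3 or more identical pixels
 * become a literal followed by a back reference of offset -1 and
 * length (run - 1), as described above. */
static void compress_row(const uint8_t *pixels, int width)
{
    int i = 0;
    while (i < width) {
        int run = 1;
        while (i + run < width && pixels[i + run] == pixels[i])
            run++;
        if (run >= 3) {
            emit_literal(pixels[i]);     /* the pixel itself */
            emit_backref(1, run - 1);    /* "go back 1, copy run-1 pixels" */
        } else {
            for (int j = 0; j < run; j++)
                emit_literal(pixels[i + j]);  /* short runs stay literal */
        }
        i += run;
    }
}
```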
I have to say, however, that the format is a little bit weird in how it codes very large numbers. The length encoding is somewhat Golomb-like, i.e., smaller values are encoded with fewer bits. However, when it gets to large numbers, it starts encoding counts of 15 as blocks of 1111. For example, 24 is bigger than 7. Thus, emit 1111 into the bitstream and subtract 8 from 24 -> 16. Still bigger than 15, so stuff another 1111 into the bitstream and subtract 15. Now we’re at 1, so stuff 0001. So 24 is 11111111 0001. 12 bits is not too horrible. But the encoding grows linearly with the value, at roughly (value / 30) bytes. So a value of 300 takes around 10 bytes (80 bits) to encode.
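Here is my reading of that length coding as a sketch; the short codes for lengths 2 through 7 follow the standard LZS tables, and put_bits() is a hypothetical bit-writer helper rather than anything from subtitle-rbt.c.

```c
/* Hypothetical bit writer; the low nbits of value are appended to the stream. */
struct bit_writer;
void put_bits(struct bit_writer *bw, unsigned value, int nbits);

/* Length coding as I read it (len >= 2): short Golomb-like codes for 2-7,
 * then runs of 1111 for 8 and up. 24 comes out as 1111 1111 0001. */
static void encode_lzs_length(struct bit_writer *bw, unsigned len)
{
    if (len <= 4) {
        put_bits(bw, len - 2, 2);            /* 2 -> 00, 3 -> 01, 4 -> 10 */
    } else if (len <= 7) {
        put_bits(bw, 0xC + (len - 5), 4);    /* 5 -> 1100 ... 7 -> 1110 */
    } else {
        unsigned remaining = len - 8;
        put_bits(bw, 0xF, 4);                /* first 1111 accounts for 8 */
        while (remaining >= 15) {            /* each further 1111 adds 15 */
            put_bits(bw, 0xF, 4);
            remaining -= 15;
        }
        put_bits(bw, remaining, 4);          /* final 4-bit remainder */
    }
}
```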
Palette Slices
As in the VMD subtitling project, I took the subtitle color offered in the subtitle spec file as a suggestion and used Euclidean distance to match it to the closest available color in the palette. One problem, however, is that the palette is a lot smaller in these animations. According to my notes, for the set of animations I scanned, only about 80 colors were specified, starting at palette index 55. I hypothesize that different slices of the palette are reserved for different uses, e.g., animation, background, and user interface. Thus, there is a smaller number of colors to draw upon for subtitling purposes.
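A sketch of that matching step, assuming an RGB palette array and the slice described above (starting index and entry count passed in as parameters; not the actual subtitle-rbt.c code):

```c
#include <stdint.h>
#include <limits.h>

/* Find the closest palette entry to (r, g, b) by squared Euclidean distance,
 * searching only the slice of the palette the animation actually populates
 * (e.g. about 80 entries starting at index 55, per the notes above). */
static int nearest_palette_index(const uint8_t palette[][3], int first, int count,
                                 uint8_t r, uint8_t g, uint8_t b)
{
    int best = first;
    long best_dist = LONG_MAX;
    for (int i = first; i < first + count; i++) {
        long dr = (long)palette[i][0] - r;
        long dg = (long)palette[i][1] - g;
        long db = (long)palette[i][2] - b;
        long dist = dr * dr + dg * dg + db * db;
        if (dist < best_dist) {
            best_dist = dist;
            best = i;
        }
    }
    return best;
}
```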
Scaling
One bit of residual weirdness in this format is the presence of a per-frame scale factor. While most frames set this to 100 (100% scale), I have observed 70%, 80%, and 90%. ScummVM is a bit unsure about how to handle these, so I am as well. However, I eventually realized I didn’t really need to care, at least not when decoding and re-encoding the frame: just preserve the scale factor. I intend to modify the tool further to take the scale factor into account when creating the subtitle.
The Final Resolution
Right around the time that I was composing this post, The Translator emailed me and notified me that he had found a better way to subtitle the Robot files by modifying the scripts, rendering my entire approach moot. The result is much cleaner:
Turns out that the engine supported subtitles all along
It’s a good thing that I enjoyed the challenge or I might be annoyed at this point.
See Also
- Subtitling Sierra VMD Files: My effort to subtitle the main FMV files found in Sierra games.
-
Revision a49d80bfc8 : Squash commits from master to playground Moving RD-opt related code from vp9_en
26 June 2014, by Yue Chen
Changed Paths:
Modify /build/make/gen_msvs_proj.sh
Modify /build/make/gen_msvs_vcxproj.sh
Modify /build/make/iosbuild.sh
Modify /examples/vp9_spatial_svc_encoder.c
Modify /test/decode_test_driver.cc
Modify /test/decode_test_driver.h
Add /test/invalid_file_test.cc
Modify /test/svc_test.cc
Modify /test/test-data.sha1
Modify /test/test.mk
Modify /test/test_vectors.cc
Add /test/user_priv_test.cc
Add /third_party/libmkv/EbmlIDs.h
Add /third_party/libmkv/EbmlWriter.c
Add /third_party/libmkv/EbmlWriter.h
Modify /vp8/common/rtcd_defs.pl
Modify /vp8/encoder/x86/quantize_sse2.c
Delete /vp8/encoder/x86/quantize_sse4.asm
Add /vp8/encoder/x86/quantize_sse4.c
Modify /vp8/vp8cx.mk
Modify /vp9/common/arm/neon/vp9_convolve_neon.c
Modify /vp9/common/arm/neon/vp9_loopfilter_16_neon.c
Modify /vp9/common/vp9_alloccommon.c
Modify /vp9/common/vp9_alloccommon.h
Modify /vp9/common/vp9_convolve.c
Modify /vp9/common/vp9_mvref_common.c
Modify /vp9/common/vp9_mvref_common.h
Modify /vp9/common/vp9_quant_common.c
Modify /vp9/common/vp9_quant_common.h
Modify /vp9/common/vp9_scale.h
Modify /vp9/decoder/vp9_decodeframe.c
Modify /vp9/decoder/vp9_decoder.c
Modify /vp9/decoder/vp9_dthread.h
Modify /vp9/decoder/vp9_read_bit_buffer.c
Modify /vp9/encoder/vp9_bitstream.c
Modify /vp9/encoder/vp9_block.h
Modify /vp9/encoder/vp9_denoiser.c
Modify /vp9/encoder/vp9_denoiser.h
Modify /vp9/encoder/vp9_encodeframe.c
Modify /vp9/encoder/vp9_encoder.c
Modify /vp9/encoder/vp9_encoder.h
Modify /vp9/encoder/vp9_firstpass.c
Modify /vp9/encoder/vp9_firstpass.h
Modify /vp9/encoder/vp9_lookahead.c
Modify /vp9/encoder/vp9_lookahead.h
Modify /vp9/encoder/vp9_pickmode.c
Modify /vp9/encoder/vp9_pickmode.h
Modify /vp9/encoder/vp9_ratectrl.c
Modify /vp9/encoder/vp9_ratectrl.h
Modify /vp9/encoder/vp9_rdopt.c
Modify /vp9/encoder/vp9_rdopt.h
Modify /vp9/encoder/vp9_speed_features.c
Modify /vp9/encoder/vp9_speed_features.h
Modify /vp9/encoder/vp9_svc_layercontext.c
Modify /vp9/encoder/vp9_svc_layercontext.h
Modify /vp9/vp9_cx_iface.c
Modify /vp9/vp9_dx_iface.c
Modify /vp9/vp9cx.mk
Modify /vpx/src/svc_encodeframe.c
Modify /vpx/svc_context.h
Squash commits from master to playground
Moving RD-opt related code from vp9_encoder.h to vp9_rdopt.h.
Squashed-Change-Id : I8fab776c8801e19d3f5027ed55a6aa69eee951de
gen_msvs_proj : fix in tree configure under cygwin
strip trailing ’/’ from paths, this is later converted to ’\’ which
causes execution errors for obj_int_extract/yasm. vs10+ wasn’t affected
by this issue, but make the same change for consistency.
gen_msvs_proj :
+ add missing ’"’ to obj_int_extract call
unlike gen_msvs_vcproj, the block is duplicated
missed in : 1e3d9b9 build/msvs : fix builds in source dirs with spaces
Squashed-Change-Id : I76208e6cdc66dc5a0a7ffa8aa1edbefe31e4b130
Improve vp9_rb_bytes_read
Squashed-Change-Id : I69eba120eb3d8ec43b5552451c8a9bd009390795
Removing decode_one_iter() function.
When superframe index is available we completely rely on it and use frame
size values from the index.
Squashed-Change-Id : I0011d08b223303a8b912c2bcc8a02b74d0426ee0
iosbuild.sh : Add vpx_config.h and vpx_version.h to VPX.framework.
Rename build_targets to build_framework
Add functions for creating the vpx_config shim and obtaining
preproc symbols.
Squashed-Change-Id : Ieca6938b9779077eefa26bf4cfee64286d1840b0
Implemented vp9_denoiser_alloc,free()
Squashed-Change-Id : I79eba79f7c52eec19ef2356278597e06620d5e27
Update running avg for VP9 denoiser
Squashed-Change-Id : I9577d648542064052795bf5770428fbd5c276b7b
Changed buf_2ds in vp9 denoiser to YV12 buffers
Changed alloc, free, and running average code as necessary.
Squashed-Change-Id : Ifc4d9ccca462164214019963b3768a457791b9c1
sse4 regular quantize
Squashed-Change-Id : Ibd95df0adf9cc9143006ee9032b4cb2ebfd5dd1b
Modify non-rd intra mode checking
Speed 6 uses small tx size, namely 8x8. max_intra_bsize needs to
be modified accordingly to ensure valid intra mode checking.
Borg test on RTC set showed an overall PSNR gain of 0.335% in speed6.
This also changes speed -5 encoding by allowing DC_PRED checking
for block32x32. Borg test on RTC set showed a slight PSNR gain of
0.145%, and no noticeable speed change.
Squashed-Change-Id : I1502978d8fbe265b3bb235db0f9c35ba0703cd45
Implemented COPY_BLOCK case for vp9 denoiser
Squashed-Change-Id : Ie89ad1e3aebbd474e1a0db69c1961b4d1ddcd33e
Improved vp9 denoiser running avg update.
Squashed-Change-Id : Ie0aa41fb7957755544321897b3bb2dd92f392027
Separate rate-distortion modeling for DC and AC coefficients
This is the first step to rework the rate-distortion modeling used
in rtc coding mode. The overall goal is to make the modeling
customized for the statistics encountered in rtc coding.
This commit makes the encoder perform rate-distortion modeling for
DC and AC coefficients separately. No speed changes observed.
The coding performance for pedestrian_area_1080p is largely
improved :
speed -5, from 79558 b/f, 37.871 dB -> 79598 b/f, 38.600 dB
speed -6, from 79515 b/f, 37.822 dB -> 79544 b/f, 38.130 dB
Overall performance for rtc set at speed -6 is improved by 0.67%.
Squashed-Change-Id : I9153444567e5f75ccdcaac043c2365992c005c0c
Add superframe support for frame parallel decoding.
A superframe is a bunch of frames bundled as one frame. It is mostly
used to combine one or more non-displayable frames and one displayable frame.
For frame parallel decoding, the libvpx decoder will only support decoding one
normal frame or a superframe with a superframe index.
If an application passes a superframe without a superframe index, or a chunk
of displayable frames without a superframe index, to the libvpx decoder, libvpx
will not decode it in frame parallel mode. But the libvpx decoder can still
decode it in serial mode.
Squashed-Change-Id : I04c9f2c828373d64e880a8c7bcade5307015ce35
Fixes in VP9 alloc, free, and COPY_FRAME case
Squashed-Change-Id : I1216f17e2206ef521fe219b6d72d8e41d1ba1147
Remove labels from quantize
Use break instead of goto for early exit. Unbreaks Visual Studio
builds.
Squashed-Change-Id : I96dee43a3c82145d4abe0d6a99af6e6e1a3991b5
Added CFLAG for outputting vp9 denoised signal
Squashed-Change-Id : Iab9b4e11cad927f3282e486c203564e1a658f377
Allow key frame more flexibility in mode search
This commit allows the key frame to search through more prediction
modes and more flexible block sizes. No speed change observed. The
coding performance for rtc set is improved by 1.7% for speed -5 and
3.0% for speed -6.
Squashed-Change-Id : Ifd1bc28558017851b210b4004f2d80838938bcc5
VP9 denoiser bugfixes
s/stdint.h/vpx\/vpx_int.h
Added missing ’break ;’s
Also included other minor changes, mostly cosmetic.
Squashed-Change-Id : I852bba3e85e794f1d4af854c45c16a23a787e6a3
Don’t return value for void functions
Clears "warning : ’return’ with a value, in function returning void"
Squashed-Change-Id : I93972610d67e243ec772a1021d2fdfcfc689c8c2
Include type defines
Clears error : unknown type name ’uint8_t’
Squashed-Change-Id : I9b6eff66a5c69bc24aeaeb5ade29255a164ef0e2
Validate error checking code in decoder.
This patch adds a mechanism for ensuring error checking on invalid files
by creating a unit test that runs the decoder and tests that the error
code matches what’s expected on each frame in the decoder.
Disabled for now as this unit test will segfault with existing code.
Squashed-Change-Id : I896f9686d9ebcbf027426933adfbea7b8c5d956e
Introduce FrameWorker for decoding.
When decoding in serial mode, there will be only
one FrameWorker doing decoding. When decoding in
parallel mode, there will be several FrameWorkers
doing decoding in parallel.
Squashed-Change-Id : If53fc5c49c7a0bf5e773f1ce7008b8a62fdae257
Add back libmkv ebml writer files.
Another project in ChromeOS is using these files. To make libvpx
rolls simpler, add these files back until the other project removes
the dependency.
crbug.com/387246 tracking bug to remove dependency.
Squashed-Change-Id : If9c197081c845c4a4e5c5488d4e0190380bcb1e4
Added Test vector that tests more show existing frames.
Squashed-Change-Id : I0ddd7dd55313ee62d231ed4b9040e08c3761b3fe
fix peek_si to enable 1 byte show existing frames.
The test for this is in test vector code ( show existing frames will
fail ). I can’t check it in disabled as I’m changing the generic
test code to do this :
https://gerrit.chromium.org/gerrit/#/c/70569/
Squashed-Change-Id : I5ab324f0cb7df06316a949af0f7fc089f4a3d466
Fix bug in error handling that causes segfault
See : https://code.google.com/p/chromium/issues/detail?id=362697
The code properly catches an invalid stream but seg faults instead of
returning an error due to a buffer not having been initialized. This
code fixes that.
Squashed-Change-Id : I695595e742cb08807e1dfb2f00bc097b3eae3a9b
Revert 3 patches from Hangyu to get Chrome to build :
Avoids failures :
MSE_ClearKey/EncryptedMediaTest.Playback_VP9Video_WebM/0
MSE_ClearKey_Prefixed/EncryptedMediaTest.Playback_VP9Video_WebM/0
MSE_ExternalClearKey_Prefixed/EncryptedMediaTest.Playback_VP9Video_WebM/0
MSE_ExternalClearKey/EncryptedMediaTest.Playback_VP9Video_WebM/0
MSE_ExternalClearKeyDecryptOnly/EncryptedMediaTest.Playback_VP9Video_WebM/0
MSE_ExternalClearKeyDecryptOnly_Prefixed/EncryptedMediaTest.Playback_VP9Video_WebM/0
SRC_ExternalClearKey/EncryptedMediaTest.Playback_VP9Video_WebM/0
SRC_ExternalClearKey_Prefixed/EncryptedMediaTest.Playback_VP9Video_WebM/0
SRC_ClearKey_Prefixed/EncryptedMediaTest.Playback_VP9Video_WebM/0
Patches are
This reverts commit 9bc040859b0ca6869d31bc0efa223e8684eef37a
This reverts commit 6f5aba069a2c7ffb293ddce70219a9ab4a037441
This reverts commit 9bc040859b0ca6869d31bc0efa223e8684eef37a
I1f250441 Revert "Refactor the vp9_get_frame code for frame parallel."
Ibfdddce5 Revert "Delay decreasing reference count in frame-parallel
decoding."
I00ce6771 Revert "Introduce FrameWorker for decoding."
Need better testing in libvpx for these commits
Squashed-Change-Id : Ifa1f279b0cabf4b47c051ec26018f9301c1e130e
error check vp9 superframe parsing
This patch ensures that the last byte of a chunk that contains a
valid superframe marker byte actually has a proper superframe index.
If not, it returns an error.
As part of doing that, the file vp90-2-15-fuzz-flicker.webm now fails
to decode properly and moves to the invalid file test from the test
vector suite.
Squashed-Change-Id : I5f1da7eb37282ec0c6394df5c73251a2df9c1744
Remove unused vp9_init_quant_tables function
This function is not effectively used, hence removed.
Squashed-Change-Id : I2e8e48fa07c7518931690f3b04bae920cb360e49
Actually skip blocks in skip segments in non-rd encoder.
Copy split from macroblock to pick mode context so it doesn’t get lost.
Squashed-Change-Id : Ie37aa12558dbe65c4f8076cf808250fffb7f27a8
Add Check for Peek Stream validity to decoder test.
Squashed-Change-Id : I9b745670a9f842582c47e6001dc77480b31fb6a1
Allocate buffers based on correct chroma format
The encoder currently allocates frame buffers before
it establishes what the chroma sub-sampling factor is,
always allocating based on the 4:4:4 format.
This patch detects the chroma format as early as
possible allowing the encoder to allocate buffers of
the correct size.
Future patches will change the encoder to allocate
frame buffers on demand to further reduce the memory
profile of the encoder and rationalize the buffer
management in the encoder and decoder.
Squashed-Change-Id : Ifd41dd96e67d0011719ba40fada0bae74f3a0d57
Fork vp9_rd_pick_inter_mode_sb_seg_skip
Squashed-Change-Id : I549868725b789f0f4f89828005a65972c20df888
Switch active map implementation to segment based.
Squashed-Change-Id : Ibb841a1fa4d08d164cf5461246ec290f582b1f80
Experiment for mid group second arf.
This patch implements a mechanism for inserting a second
arf at the mid position of arf groups.
It is currently disabled by default using the flag multi_arf_enabled.
Results are currently down somewhat in initial testing if
multi-arf is enabled. Most of the loss is attributable to the
fact that code to preserve the previous golden frame
(in the arf buffer) in cases where we are coding an overlay
frame, is currently disabled in the multi-arf case.
Squashed-Change-Id : I1d777318ca09f147db2e8c86d7315fe86168c865
Clean out old CONFIG_MULTIPLE_ARF code.
Remove the old experimental multi arf code that was under
the flag CONFIG_MULTIPLE_ARF.
Squashed-Change-Id : Ib24865abc11691d6ac8cb0434ada1da674368a61
Fix some bugs in multi-arf
Fix some bugs relating to the use of buffers
in the overlay frames.
Fix bug where a mid sequence overlay was
propagating large partition and transform sizes into
the subsequent frame because of :
sf->last_partitioning_redo_frequency > 1 and
sf->tx_size_search_method == USE_LARGESTALL
Squashed-Change-Id : Ibf9ef39a5a5150f8cbdd2c9275abb0316c67873a
Further dual arf changes : multi_arf_allowed.
Add multi_arf_allowed flag.
Re-initialize buffer indices every kf.
Add some const indicators.
Squashed-Change-Id : If86c39153517c427182691d2d4d4b7e90594be71
Fixed VP9 denoiser COPY_BLOCK case
Now copies the src to the correct location in the running average buffer.
Squashed-Change-Id : I9c83c96dc7a97f42c8df16ab4a9f18b733181f34
Fix test on maximum downscaling limits
There is a normative scaling range of (x1/2, x16)
for VP9. This patch fixes the maximum downscaling
tests that are applied in the convolve function.
The code used a maximum downscaling limit of x1/5
for historic reasons related to the scalable
coding work. Since the downsampling in this
application is non-normative it will revert to
using a separate non-normative scaler.
Squashed-Change-Id : Ide80ed712cee82fe5cb3c55076ac428295a6019f
Add unit test to test user_priv parameter.
Squashed-Change-Id : I6ba6171e43e0a43331ee0a7b698590b143979c44
vp9 : check tile column count
the max is 6. there are assumptions throughout the decode regarding
this ; fixes a crash with a fuzzed bitstream
$ zzuf -s 5861 -r 0.01:0.05 -b 6- \
< vp90-2-00-quantizer-00.webm.ivf \
| dd of=invalid-vp90-2-00-quantizer-00.webm.ivf.s5861_r01-05_b6-.ivf \
bs=1 count=81883
Squashed-Change-Id : I6af41bb34252e88bc156a4c27c80d505d45f5642
Adjust arf Q limits with multi-arf.
Adjust enforced minimum arf Q deltas for non primary arfs
in the middle of an arf/gf group.
Squashed-Change-Id : Ie8034ffb3ac00f887d74ae1586d4cac91d6cace2
Dual ARF changes : Buffer index selection.
Add indirection to the selection of buffer indices.
This is to help simplify things in the future if we
have other codec features that switch indices.
Limit the max GF interval for static sections to fit
the gf_group structures.
Squashed-Change-Id : I38310daaf23fd906004c0e8ee3e99e15570f84cb
Reuse inter prediction result in real-time speed 6
In real-time speed 6, no partition search is done. The inter
prediction results got from picking mode can be reused in the
following encoding process. A speed feature reuse_inter_pred_sby
is added to only enable the reuse in speed 6.
This patch doesn’t change the encoding result. RTC set tests showed
that the encoding speed gain is 2% - 5%.
Squashed-Change-Id : I3884780f64ef95dd8be10562926542528713b92c
Add vp9_ prefix to mv_pred and setup_pred_block functions
Make these two functions accessible by both RD and non-RD coding
modes.
Squashed-Change-Id : Iecb39dbf3d65436286ea3c7ffaa9920d0b3aff85
Replace cpi->common with preset variable cm
This commit replaces a few use cases of cpi->common with preset
variable cm, to avoid unnecessary pointer fetch in the non-RD
coding mode.
Squashed-Change-Id : I4038f1c1a47373b8fd7bc5d69af61346103702f6
[spatial svc]Implement lag in frames for spatial svc
Squashed-Change-Id : I930dced169c9d53f8044d2754a04332138347409
[spatial svc]Don’t skip motion search in first pass encoding
Squashed-Change-Id : Ia6bcdaf5a5b80e68176f60d8d00e9b5cf3f9bfe3
decode_test_driver : fix type size warning
like vpx_codec_decode(), vpx_codec_peek_stream_info() takes an unsigned
int, not size_t, parameter for buffer size
Squashed-Change-Id : I4ce0e1fbbde461c2e1b8fcbaac3cd203ed707460
decode_test_driver : check HasFailure() in RunLoop
avoids unnecessary errors due to e.g., read (Next()) failures
Squashed-Change-Id : I70b1d09766456f1c55367d98299b5abd7afff842
Allow lossless breakout in non-rd mode decision.
This is very helpful for large moving windows in screencasts.
Squashed-Change-Id : I91b5f9acb133281ee85ccd8f843e6bae5cadefca
Revert "Revert 3 patches from Hangyu to get Chrome to build :"
This patch reverts the previous revert from Jim and also add a
variable user_priv in the FrameWorker to save the user_priv
passed from the application. In the decoder_get_frame function,
the user_priv will be bound to the img. This change is needed
or it will fail the unit test added here :
https://gerrit.chromium.org/gerrit/#/c/70610/
This reverts commit 9be46e4565f553460a1bbbf58d9f99067d3242ce.
Squashed-Change-Id : I376d9a12ee196faffdf3c792b59e6137c56132c1
test.mk : remove renamed file
vp90-2-15-fuzz-flicker.webm was renamed in :
c3db2d8 error check vp9 superframe parsing
Squashed-Change-Id : I229dd6ca4c662802c457beea0f7b4128153a65dc
vp9cx.mk : move avx c files outside of x86inc block
same reasoning as :
9f3a0db vp9_rtcd : correct avx2 references
these are all intrinsics, so don’t depend on x86inc.asm
Squashed-Change-Id : I915beaef318a28f64bfa5469e5efe90e4af5b827
Dual arf : Name changes.
Cosmetic patch only in response to comments on
previous patches suggesting a couple of name changes
for consistency and clarity.
Squashed-Change-Id : Ida3a359b0d5755345660d304a7697a3a3686b2a3
Make non-RD intra mode search txfm size dependent
This commit fixes the potential issue in the non-RD mode decision
flow that only checks part of the block to estimate the cost. It
was due to the use of fixed transform size, in replacing the
largest transform block size. This commit enables per transform
block cost estimation of the intra prediction mode in the non-RD
mode decision.
Squashed-Change-Id : I14ff92065e193e3e731c2bbf7ec89db676f1e132
Fix quality regression for multi arf off case.
Bug introduced during multiple iterations on : I3831*
gf_group->arf_update_idx[] cannot currently be used
to select the arf buffer index if buffer flipping on overlays
is enabled (still currently the case when multi arf OFF).
Squashed-Change-Id : I4ce9ea08f1dd03ac3ad8b3e27375a91ee1d964dc
Enable real-time version reference motion vector search
This commit enables a fast reference motion vector search scheme.
It checks the nearest top and left neighboring blocks to decide the
most probable predicted motion vector. If it finds the two have
the same motion vectors, it then skips finding the exterior range for
the second most probable motion vector, and correspondingly skips
the check for NEARMV.
The runtime of speed -5 goes down
pedestrian at 1080p 29377 ms -> 27783 ms
vidyo at 720p 11830 ms -> 10990 ms
i.e., a 6%-8% speed-up.
For the rtc set, the compression performance
goes down by about 1.3% for both speed -5 and -6.
Squashed-Change-Id : I2a7794fa99734f739f8b30519ad4dfd511ab91a5
Add const mark to const values in non-RD coding mode
Squashed-Change-Id : I65209fd1e06fc06833f6647cb028b414391a7017
Change-Id : Ic0be67ac9ef48f64a8878a0b8f1b336f136bceac
-
Your Essential SOC 2 Compliance Checklist
With cloud-hosted applications becoming the norm, organisations face increasing data security and compliance challenges. SOC 2 (System and Organisation Controls 2) provides a structured framework for addressing these challenges. Established by the American Institute of Certified Public Accountants (AICPA), SOC 2 has become a critical standard for demonstrating trustworthiness to clients and partners.
A well-structured SOC 2 compliance checklist serves as your roadmap to successful audits and effective security practices. In this post, we’ll walk through the essential steps to achieve SOC 2 compliance and explain how proper analytics practices play a crucial role in maintaining this important certification.
What is SOC 2 compliance?
SOC 2 compliance applies to service organisations that handle sensitive customer data. While not mandatory, this certification builds significant trust with customers and partners.
According to the AICPA, “SOC 2 reports are intended to meet the needs of a broad range of users that need detailed information and assurance about the controls at a service organisation relevant to security, availability, and processing integrity of the systems the service organisation uses to process users’ data and the confidentiality and privacy of the information processed by these systems.”
At its core, SOC 2 helps organisations protect customer data through five fundamental principles: security, availability, processing integrity, confidentiality, and privacy.
Think of it as a seal of approval that tells customers, “We take data protection seriously, and here’s the evidence.”
Companies undergo SOC 2 audits to evaluate their compliance with these standards. During these audits, independent auditors assess internal controls over data security, availability, processing integrity, confidentiality, and privacy.
What is a SOC 2 compliance checklist?
A SOC 2 compliance checklist is a comprehensive guide that outlines all the necessary steps and controls an organisation needs to implement to achieve SOC 2 certification. It covers essential areas including:
- Security policies and procedures
- Access control measures
- Risk assessment protocols
- Incident response plans
- Disaster recovery procedures
- Vendor management practices
- Data encryption standards
- Network security controls
SOC 2 compliance checklist benefits
A structured SOC 2 compliance checklist offers several significant advantages:
Preparedness
Preparing for a SOC 2 examination involves many complex elements. A checklist provides a clear, structured path, breaking the process into manageable tasks that ensure nothing is overlooked.
Resource optimisation
A comprehensive checklist reduces time spent identifying requirements, minimises costly mistakes and oversights, and enables more precise budget planning for the compliance process.
Better team alignment
A SOC 2 checklist establishes clear responsibilities for team members and maintains consistent understanding across all departments, helping align internal processes with industry standards.
Risk reduction
Following a SOC 2 compliance checklist significantly reduces the risk of compliance violations. Systematically reviewing internal controls provides opportunities to catch security gaps early, mitigating the risk of data breaches and unauthorised access.
Audit readiness
A well-maintained checklist simplifies audit preparation, reduces stress during the audit process, and accelerates the certification timeline.
Business growth
A successful SOC 2 audit demonstrates your organisation’s commitment to data security, which can be decisive in winning new business, especially with enterprise clients who require this certification from their vendors.
Challenges in implementing SOC 2
Implementing SOC 2 presents several significant challenges:
Time-intensive documentation
Maintaining accurate records throughout the SOC 2 compliance process requires diligence and attention to detail. Many organisations struggle to compile comprehensive documentation of all controls, policies and procedures, leading to delays and increased costs.
Incorrect scoping of the audit
Misjudging the scope can result in unnecessary expenses and extended timelines. Including too many systems complicates the process and diverts resources from critical areas.
Maintaining ongoing compliance
After achieving initial compliance, continuous monitoring becomes essential but is often neglected. Regular internal control audits can be overwhelming, especially for smaller organisations without dedicated compliance teams.
Resource constraints
Many organisations lack sufficient resources to dedicate to compliance efforts. This limitation can lead to staff burnout or reliance on expensive external consultants.
Employee resistance
Staff members may view new security protocols as unnecessary hurdles. Employees who aren’t adequately trained on SOC 2 requirements might inadvertently compromise compliance efforts through improper data handling.
Analytics and SOC 2 compliance : A critical relationship
One often overlooked aspect of SOC 2 compliance is the handling of analytics data. User behaviour data collection directly impacts multiple Trust Service Criteria, particularly privacy and confidentiality.
Why analytics matters for SOC 2
Standard analytics platforms often collect significant amounts of personal data, creating potential compliance risks:
- Privacy concerns: Many analytics tools collect personal information without proper consent mechanisms
- Data ownership issues: When analytics data is processed on third-party servers, maintaining control becomes challenging
- Confidentiality risks: Analytics data might be shared with advertising networks or other third parties
- Processing integrity questions: When data is transformed or aggregated by third parties, verification becomes difficult
How Matomo supports SOC 2 compliance
Matomo’s privacy-first analytics approach directly addresses these concerns:
- Complete data ownership: With Matomo, all analytics data remains under your control, either on your own servers or in a dedicated cloud instance
- Consent management: Built-in tools for managing user consent align with privacy requirements
- Data minimisation: Configurable anonymisation features help reduce collection of sensitive personal data
- Transparency: Clear documentation of data flows supports audit requirements
- Configurable data retention: Set automated data deletion schedules to comply with your policies
By implementing Matomo as part of your SOC 2 compliance strategy, you address key requirements while maintaining the valuable insights your organisation needs for growth.
Conclusion
A SOC 2 compliance checklist helps organisations meet critical security and privacy standards. By taking a methodical approach to compliance and implementing privacy-respecting analytics, you can build trust with customers while protecting sensitive data.