
Other articles (44)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or later. If needed, contact your MediaSPIP administrator to find out.

  • From upload to the final video [standalone version]

    31 January 2010

    The journey of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article has to be created and the “source” video document attached to it.
    When this document is attached to the article, two actions are carried out in addition to the normal behaviour: retrieval of the technical information of the file’s audio and video streams; generation of a thumbnail: extraction of a (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (5874)

  • How to cheat on video encoder comparisons

    21 June 2010, by Dark Shikari — benchmark, H.264, stupidity, test sequences

    Over the past few years, practically everyone and their dog has published some sort of encoder comparison. Sometimes they’re actually intended to be something for the world to rely on, like the old Doom9 comparisons and the MSU comparisons. Other times, they’re just to scratch an itch — someone wants to decide for themselves what is better. And sometimes they’re just there to outright lie in favor of whatever encoder the author likes best. The latter is practically an expected feature on the websites of commercial encoder vendors.

    One thing almost all these comparisons have in common — particularly (but not limited to!) the ones done without consulting experts — is that they are horribly done. They’re usually easy to spot: for example, two videos at totally different bitrates are being compared, or the author complains about one of the videos being “washed out” (i.e. he screwed up his colorspace conversion). Or the results are simply nonsensical. Many of these problems result from the person running the test not “sanity checking” the results to catch mistakes that he made in his test. Others are just outright intentional.

    The result of all these mistakes, both intentional and accidental, is that the results of encoder comparisons tend to be all over the map, to the point of absurdity. For any pair of encoders, it’s practically a given that a comparison exists somewhere that will “prove” any result you want to claim, even if the result would be beyond impossible in any sane situation. This often results in the appearance of a “controversy” even if there isn’t any.

    Keep in mind that every single mistake I mention in this article has actually been done, usually in more than one comparison. And before I offend anyone, keep in mind that when I say “cheating”, I don’t mean to imply that everyone that makes the mistake is doing it intentionally. Especially among amateur comparisons, most of the mistakes are probably honest.

    So, without further ado, we will investigate a wide variety of ways, from the blatant to the subtle, with which you too can cheat on your encoder comparisons.

    Blatant cheating

    1. Screw up your colorspace conversions. A common misconception is that converting from YUV to RGB and back is a simple process where nothing can go wrong. This is quite untrue. There are two primary attributes of YUV: PC range (0-255) vs TV range (16-235) and BT.709 vs BT.601 conversion coefficients. That sums up to a total of 4 possible different types of YUV. When people compare encoders, they often use different frontends, some of which make incorrect assumptions about these attributes.

    Incorrect assumptions are so common that it’s often a matter of luck whether the tool gets it right or not. It doesn’t help that most videos don’t even properly signal which they are to begin with! Often even the tool that the person running the comparison is using to view the source material gets the conversion wrong.
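
    To see how much damage a wrong assumption does, here is a minimal sketch in plain Python (illustrative only: one pixel, the published BT.601/BT.709 luma coefficients, no chroma subsampling or rounding). It encodes an RGB value to Y'CbCr with one set of assumptions and decodes it back with the wrong matrix or the wrong range, which is exactly the mismatch described above.

    def rgb_to_ycbcr(r, g, b, kr, kb, tv_range=True):
        kg = 1.0 - kr - kb
        y = kr * r + kg * g + kb * b              # luma, 0..1
        cb = (b - y) / (2.0 * (1.0 - kb))         # chroma, -0.5..0.5
        cr = (r - y) / (2.0 * (1.0 - kr))
        if tv_range:                              # 8-bit TV-range code values
            return 16 + 219 * y, 128 + 224 * cb, 128 + 224 * cr
        return 255 * y, 128 + 255 * cb, 128 + 255 * cr

    def ycbcr_to_rgb(y, cb, cr, kr, kb, tv_range=True):
        kg = 1.0 - kr - kb
        if tv_range:
            y, cb, cr = (y - 16) / 219, (cb - 128) / 224, (cr - 128) / 224
        else:
            y, cb, cr = y / 255, (cb - 128) / 255, (cr - 128) / 255
        r = y + 2.0 * (1.0 - kr) * cr
        b = y + 2.0 * (1.0 - kb) * cb
        g = (y - kr * r - kb * b) / kg
        return tuple(round(255 * v) for v in (r, g, b))

    BT709 = (0.2126, 0.0722)   # (Kr, Kb)
    BT601 = (0.2990, 0.1140)

    pixel = (0.8, 0.3, 0.2)    # an arbitrary reddish pixel, RGB in 0..1
    ycc = rgb_to_ycbcr(*pixel, *BT709, tv_range=True)

    print(ycbcr_to_rgb(*ycc, *BT709, tv_range=True))    # right assumptions: original pixel back
    print(ycbcr_to_rgb(*ycc, *BT601, tv_range=True))    # wrong matrix: hue and saturation drift
    print(ycbcr_to_rgb(*ycc, *BT709, tv_range=False))   # wrong range: levels shift, looks washed out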

    Subsampled YUV (aka what everyone uses) adds yet another dimension to the problem: the location that the chroma data represents (“chroma siting”) isn’t constant. For example, JPEG and MPEG-2 define different positions. This is even worse because almost nobody actually handles this correctly — the best approach is to simply make sure none of your software is doing any conversion. A mistake in chroma siting is what created that infamous PSNR graph showing Theora beating x264, which has been cited ever since, despite the developers themselves retracting it after realizing their mistake.

    Keep in mind that the video encoder is not responsible for colorspace conversion — almost all video encoders operate in the YUV domain (usually subsampled 4:2:0 YUV, aka YV12). Thus any problem in colorspace conversion is usually the fault of the tools used, not the actual encoder.

    How to spot it: “The color is a bit off” or “the contrast of the video is a bit duller”. There were a staggering number of “H.264 vs Theora” encoder comparisons which came out in favor of one or the other solely based on “how well the encoder kept the color” — making the results entirely bogus.

    2. Don’t compare at the same (or nearly the same) bitrate. I saw a VP8 vs x264 comparison the other day that gave VP8 30% more bitrate and then proceeded to demonstrate that it got better PSNR. You would think this is blindingly obvious, but people still make this mistake! The most common cause of this is assuming that encoders will successfully reach the target bitrate you ask of them — particularly with very broken encoders that don’t. Always check the output filesizes of your encodes.

    How to spot it: The comparison lists perfectly round bitrates for every single test, as opposed to the actual bitrates achieved by the encoders, which will never be exactly matching in any real test.
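
    That file-size check is cheap to automate. The sketch below uses hypothetical filenames and a hard-coded duration; in a real test you would read the duration from the container (for example with ffprobe) and compare the bitrates the encoders actually achieved, not the ones you requested.

    import os

    def achieved_bitrate_kbps(path, duration_seconds):
        """Bitrate actually achieved, from container size (includes mux overhead)."""
        return os.path.getsize(path) * 8 / duration_seconds / 1000.0

    # Hypothetical output files from the two encoders under test:
    for name in ("encoder_a_output.mkv", "encoder_b_output.webm"):
        print(name, round(achieved_bitrate_kbps(name, 600.0)), "kbps")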

    3. Use unfair encoding settings. This is a bit of a wide topic: there are many ways to do this. We’ll cover the more blatant ones in this part. Here are some common ones:

    a. Simply cheat. Intentionally pick awful settings for the encoder you don’t like.

    b. Don’t consider performance. Pick encoding settings without any regard for some particular performance goal. For example, it’s perfectly reasonable to say “use the best settings possible, regardless of speed”. It’s also reasonable to look for a particular encoding speed target. But what isn’t reasonable is to pick extremely fast settings for one encoder and extremely slow settings for another encoder.

    c. Don’t attempt to match compatibility options when it’s reasonable to do so. Keyframe interval is a classic one of these: shorter values reduce compression but improve seeking. An easy way to cheat is to simply not set them to the same value, biasing towards whatever encoder has the longer interval. This is most common as an accidental mistake with comparisons involving ffmpeg, where the default keyframe interval is an insanely low 12 frames (a quick way to check the actual keyframe spacing of both encodes is sketched below).

    How to spot it: The comparison doesn’t document its approach regarding choice of encoding settings.
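
    When the encodes themselves are available, keyframe placement can be verified rather than assumed. A sketch of such a check, assuming ffprobe is installed (the flags below follow current ffprobe conventions; double-check them against your version):

    import subprocess

    def keyframe_intervals(path):
        """Distances, in frames, between successive keyframes of the first
        video stream, read from ffprobe's per-frame key_frame flag."""
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-show_frames", "-show_entries", "frame=key_frame",
             "-of", "csv=p=0", path],
            capture_output=True, text=True, check=True).stdout
        flags = [line.strip() for line in out.splitlines() if line.strip()]
        keyframes = [i for i, flag in enumerate(flags) if flag == "1"]
        return [b - a for a, b in zip(keyframes, keyframes[1:])]

    # Hypothetical encodes under test: the keyframe intervals should match,
    # or at least be in the same ballpark, for the comparison to be fair.
    for name in ("encoder_a_output.mkv", "encoder_b_output.webm"):
        ivals = keyframe_intervals(name)
        print(name, "max keyframe interval:", max(ivals) if ivals else "n/a")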

    4. Use ratecontrol methods unfairly. Constant bitrate is not the same as average bitrate — using one instead of the other is a great way to completely ruin a comparison. Another method is to use 1-pass bitrate mode for one encoder and 2-pass or constant quality for another. A good general approach is that, for any given encoder, one should use 2-pass if available and constant quality if not (it may take a few runs to get the bitrate you want, of course).

    Of course, it’s also fine to run a comparison with a particular mode in mind — for example, a comparison targeted at streaming applications might want to test using 1-pass CBR. Of course, in such a case, if CBR is not available in an encoder, you can’t compare to that encoder.

    How to spot it: It’s usually pretty obvious if the encoding settings are given.
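
    For the constant-quality case, the “few runs” can be scripted. The sketch below bisects a CRF-style quality value until the measured bitrate is close enough to the target; `encode_at_quality` is a placeholder for whatever encoder invocation and bitrate measurement your comparison actually uses.

    def match_bitrate(encode_at_quality, target_kbps, lo=10.0, hi=40.0,
                      tolerance=0.03, max_runs=8):
        """Bisect a CRF-style quality value (lower value = higher quality)
        until the measured bitrate lands within `tolerance` of the target.
        encode_at_quality(q) is a placeholder: it must run the encode at
        quality q and return the achieved bitrate in kbps (e.g. via the
        file-size check shown earlier)."""
        for _ in range(max_runs):
            q = (lo + hi) / 2.0
            kbps = encode_at_quality(q)
            if abs(kbps - target_kbps) <= tolerance * target_kbps:
                break
            if kbps > target_kbps:
                lo = q     # too many bits: move toward lower quality (higher value)
            else:
                hi = q     # too few bits: move toward higher quality (lower value)
        return q, kbps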

    5. Use incredibly old versions of encoders. As it happens, Debian stable is not the best source for the most recent encoding software. The same goes for using recent versions that are known to be buggy.

    6. Don’t distinguish between video formats and the software that encodes them. This is incredibly common: I’ve seen tests that claim to compare “H.264” against something else while in fact actually comparing “Quicktime” against something else. It’s impossible to compare all H.264 encoders at once, so don’t even try — just call the comparison “Quicktime versus X” instead of “H.264 versus X”. Or better yet, use a good H.264 encoder, like x264, and don’t bother testing awful encoders to begin with.

    Less-obvious cheating

    1. Pick a bitrate that’s way too low. Low bitrate testing is very effective at making differences between encoders obvious, particularly if doing a visual comparison. But past a certain point, it becomes impossible for some encoders to keep up. This is usually an artifact of the video format itself — a scalability limitation. Practically all DCT-based formats have this kind of limitation (wavelets are mostly immune).

    In reality, this is rarely a problem, because one could simply downscale the video to compensate — lower resolutions need fewer bits. But people rarely do this in comparisons (it’s hard to do it fairly), so the best approach is to simply not use absurdly low bitrates. What is “absurdly low”? That’s a hard question — it ends up being a matter of using one’s best judgement.

    This tends to be less of a problem in larger-scale tests that use many different bitrates.

    How to spot it: At least one of the encoders being compared falls apart completely and utterly in the screenshots.

    Biases towards, a lot: Video formats with completely scalable coding methods (Dirac, Snow, JPEG-2000, SVC).

    Biases towards, a little: Video formats with coding methods that improve scalability, such as arithmetic coding, B-frames, and run-length coding. For example, H.264 and Theora tend to be more scalable than MPEG-4.

    2. Pick a bitrate that’s way too high. This is a staggeringly common mistake: pick a bitrate so high that all of the resulting encodes look absolutely perfect. The claim is then made that “there’s no significant difference” between any of the encoders tested. This is surprisingly easy to do inadvertently on sources like Big Buck Bunny, which looks transparent at relatively low bitrates. An equally common but similar mistake is to test at a bitrate that isn’t so high that the videos look perfect, but high enough that they all look very good. The claim is then made that “the difference between these encoders is small”. Well, of course, if you give everything tons of bitrate, the difference between encoders is small.

    How to spot it: You can’t tell which image is the source and which is the encode.

    3. Making invalid comparisons using objective metrics. I explained this earlier in the linked blog post, but in short, if you’re going to measure PSNR, make sure all the encoders are optimized for PSNR. Equally, if you’re going to leave the encoder optimized for visual quality, don’t measure PSNR — post screenshots instead. Same with SSIM or any other objective metric. Furthermore, don’t blindly do metric comparisons — always at least look at the output as a sanity test. Finally, do not claim that PSNR is particularly representative of visual quality, because it isn’t.

    How to spot it: Encoders with psy optimizations, such as x264 or Theora 1.2, do considerably worse than expected in PSNR tests, but look much better in visual comparisons.
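
    As a reminder of how simple the underlying number is, here is a minimal global-PSNR sketch with NumPy; the random arrays merely stand in for real decoded frames. Being able to recompute the metric yourself makes it much easier to sanity-check whatever a comparison reports.

    import numpy as np

    def psnr(reference, test, peak=255.0):
        """Global PSNR in dB between two 8-bit frames of equal size."""
        diff = reference.astype(np.float64) - test.astype(np.float64)
        mse = np.mean(diff ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)

    # Random arrays standing in for a source frame and its decode:
    rng = np.random.default_rng(0)
    source = rng.integers(0, 256, size=(720, 1280)).astype(np.uint8)
    decoded = np.clip(source.astype(np.int16) + rng.integers(-5, 6, source.shape),
                      0, 255).astype(np.uint8)
    print(round(psnr(source, decoded), 2))   # around 38 dB for this noise level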

    4. Lying with graphs. Using misleading scales on graphs is a great way to make the differences between encoders seem larger or smaller than they actually are. A common mistake is to scale SSIM linearly: in fact, 0.99 is about twice as good as 0.98, not 1% better. One solution for this is to use dB to compare SSIM values.
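
    A quick worked example of that scaling point, using one common mapping of SSIM onto a dB-like scale, -10 * log10(1 - SSIM) (treat the exact convention as an assumption; the point is the shape of the curve, not the particular formula):

    import math

    def ssim_db(ssim):
        """Map SSIM onto a dB-like scale so differences near 1.0 stay visible."""
        return -10.0 * math.log10(1.0 - ssim)

    for s in (0.98, 0.99, 0.995):
        print(f"SSIM {s:.3f} -> {ssim_db(s):.1f} dB")
    # 0.98 -> ~17 dB, 0.99 -> 20 dB: the "1%" step on a linear axis actually
    # halves the residual distortion (1 - SSIM), roughly a 3 dB improvement.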

    5. Using lossy screenshots. Posting screenshots as JPEG is a silly, pointless way to worsen an encoder comparison.

    Subtle cheating

    1. Unfairly pick screenshots for comparison. Comparing based on stills is not ideal, but it’s often vastly easier than comparing videos in motion. But it also opens up the door to unfairness. One of the most common mistakes is to pick a frame immediately after (or on) a keyframe for one encoder, but which isn’t for the other encoder. Particularly in the case of encoders that massively boost keyframe quality, this will unfairly bias in favor of the one with the recent keyframe.

    How to spot it: It’s very difficult to tell, if not impossible, unless they provide the video files to inspect.
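
    If the keyframe positions of both encodes are known (for example, extracted with ffprobe as sketched earlier), screenshot frames can be chosen so that neither encode benefits from a fresh keyframe. A toy sketch of that selection logic, with made-up keyframe lists:

    import bisect

    def frames_since_keyframe(frame, keyframes):
        """Frames elapsed since the most recent keyframe at or before `frame`."""
        return frame - keyframes[bisect.bisect_right(keyframes, frame) - 1]

    def fair_screenshot_frames(keyframes_a, keyframes_b, total_frames):
        """Frame indices at which both encodes are equally far from their last
        keyframe, so neither gets a fresh-keyframe quality boost."""
        return [f for f in range(total_frames)
                if frames_since_keyframe(f, keyframes_a)
                == frames_since_keyframe(f, keyframes_b)]

    # Toy keyframe placements for two hypothetical encodes (frame 0 is a keyframe in both):
    print(fair_screenshot_frames([0, 120, 240], [0, 100, 250], 300)[:10])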

    2. Cherry-pick source videos. Good source videos are incredibly hard to come by — almost everything is already compressed and what’s left is usually a very poor example of real content. Here are some common ways to bias unfairly using cherry-picking:

    a. Pick source videos that are already heavily compressed. Pre-compressed source isn’t much of an issue if your target quality level for testing is much lower than that of the source, since any compression artifacts in the source will be a lot smaller than those created by the encoders. But if the source is already very compressed, or you’re testing at a relatively high quality level, this becomes a significant issue.

    Biases towards: Anything that uses a similar transform to the source content. For MPEG-2 source material, this biases towards formats that use the 8×8 DCT or a very close approximation: MPEG-1/2/4, H.263, and Theora. For H.264 source material, this biases towards formats that use a 4×4 transform: H.264 and VP8.

    b. Pick standard test clips that were not intended for this purpose. There are a wide variety of uncompressed “standard test clips”. Some of these are not intended for general-purpose use, but rather exist to test specific encoder capabilities. For example, Mobile Calendar (“mobcal”) is extremely sharp and low motion, serving to test interpolation capabilities. It will bias incredibly heavily towards whatever encoder uses more B-frames and/or has higher-precision motion compensation. Other test clips are almost completely static, such as the classic “akiyo”. These are also not particularly representative of real content.

    c. Pick very noisy content. Noise is — by definition — not particularly compressible. Both in terms of PSNR and visual quality, a very noisy test clip will tend to reduce the differences between encoders dramatically.

    d. Pick a test clip to exercise a specific encoder feature. I’ve often used short clips from Touhou games to demonstrate the effectiveness of x264’s macroblock-tree algorithm. I’ve sometimes even used it to compare to other encoders as part of such a demonstration. I’ve also used the standard test clip “parkrun” as a demonstration of adaptive quantization. But claiming that either is representative of most real content — and thus can be used as a general determinant of how good encoders are — is of course insane.

    e. Simply encode a bunch of videos and pick the one your favorite encoder does best on.

    3. Preprocessing the source. An encoder test is a test of encoders, not preprocessing. Some encoding apps may apply preprocessing to the source, such as noise reduction. This may make the video look better — possibly even better than the source — but it’s not a fair part of comparing the actual encoders.

    4. Screw up decoding. People often forget that in addition to encoding, a test also involves decoding — a step which is equally possible to do wrong. One common error caused by this is in tests of Theora on content whose resolution isn’t divisible by 16. Decoding is often done with ffmpeg — which doesn’t crop the edges properly in some cases. This isn’t really a big deal visually, but in a PSNR comparison, misaligning the entire frame by 4 or 8 pixels is a great way of completely invalidating the results.
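
    The size of that effect is easy to demonstrate without touching a real codec. In the sketch below the “source” and “decode” are synthetic NumPy frames (pure noise exaggerates the drop compared with real footage, but the direction is the point): shifting the decode by eight pixels destroys an otherwise healthy PSNR.

    import numpy as np

    def psnr(a, b, peak=255.0):
        """Global PSNR in dB between two 8-bit frames."""
        mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)

    rng = np.random.default_rng(0)
    # Synthetic stand-ins: a textured "source" and a lightly distorted "decode".
    source = rng.integers(0, 256, size=(720, 1280)).astype(np.uint8)
    decoded = np.clip(source.astype(np.int16) + rng.integers(-3, 4, source.shape),
                      0, 255).astype(np.uint8)

    aligned = psnr(source, decoded)                    # a healthy score (~42 dB here)
    misaligned = psnr(source[:, :-8], decoded[:, 8:])  # same decode, 8 pixels off
    print(round(aligned, 1), round(misaligned, 1))     # the misaligned score collapses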

    The greatest mistake of all

    Above all, the biggest and most common mistake — and the one that leads to many of the problems mentioned here — is the mistaken belief that one test, or even a few, can really represent all usage fairly. Any comparison has to have some specific goal — to compare something in some particular case, whether it be “maximum offline compression ignoring encoding speed” or “real-time high-speed video streaming” or whatnot. And even then, no comparison can represent all use-cases in that category alone. An encoder comparison can only be honest if it’s aware of its limitations.

  • Privacy in Business: What Is It and Why Is It Important?

    13 July 2022, by Erin — Privacy

    Privacy concerns loom large among consumers. Yet, businesses remain reluctant to change the old ways of doing things until they become an operational nuisance. 

    More and more businesses are slowly starting to feel the pressure to incorporate privacy best practices. But what exactly does privacy mean in business? And why is it important for businesses to protect users’ privacy?

    In this blog, we’ll answer all of these questions and more. 

    What is Privacy in Business?

    In the corporate world, privacy refers to a business’s decision to use the consumer data it collects in a safe, secure and compliant way.

    Companies with a privacy-centred culture:

    • Get explicit user consent to tracking, opt-ins and data sharing 
    • Collect strictly necessary data in compliance with regulations 
    • Ask for permissions to collect, process and store sensitive data 
    • Provide transparent explanations about data operationalisation and usage 
    • Have mechanisms for data collection opt-outs and data removal requests 
    • Implement security controls for storing collected data and limit access permissions to it 

    In other words: they treat consumers’ data with the utmost integrity and security, and provide reassurances of ethical data usage.

    What Are the Ethical Business Issues Related to Privacy?

    Consumer data analytics has been around for decades. But digital technologies – ubiquitous connectivity, social media networks, data science and machine learning – increased the magnitude and sophistication of customer profiling.

    Big Tech companies like Google and Facebook, among others, capture millions of data points about users. These include general demographics data like “age” or “gender”, as well as more granular insights such as “income”, “past browsing history” or “recently visited geo-locations”. 

    When combined, such personally identifiable information (PII) can be used to approximate the user’s exact address, frequently purchased goods, political beliefs or past medical conditions. Then such information is shared with third parties such as advertisers. 

    That’s when ethical issues arise. 

    The Cambridge Analytica data scandal is a prime example of consumer data that was unethically exploited. 

    Over the years, Google also faced a series of regulatory issues surrounding consumer privacy breaches:

    • In 2021, a Google Chrome browser update put some 2.6 billion users at risk of “surveillance, manipulation and abuse” by providing third parties with data on device usage. 
    • The same year, Google was taken to court for failing to provide full disclosures on tracking performed in Google Chrome incognito mode. A $5 billion lawsuit is still pending.
    • As of 2022, Google Analytics 4 is considered GDPR non-compliant and was branded “illegal” by several European countries. 

    If you are curious, you can learn more about Google Analytics privacy issues.

    The bigger issue? Big Tech companies make the businesses that use their technologies (unknowingly) complicit in consumer data violations.

    In 2022, the Belgian data regulator found the official IAB Europe framework for gathering user consent in breach of the GDPR. The framework was used by all major AdTech platforms to issue pop-ups asking for user consent to tracking. Now ad platforms must delete all data gathered through these pop-ups. The biggest advertisers, such as Procter & Gamble, Unilever, IBM and Mastercard, among others, also received a notice about data removal and a regulatory warning of further repercussions if they fail to comply.

    Big Tech firms have given brands unprecedented access to granular consumer data. Unrestricted access, however, also opened the door to data abuse and unethical use. 

    Examples of Unethical Data Usage by Businesses 

    • Data hoarding means excessively harvesting all available consumer data simply because the possibility to do so exists, often via murky consent mechanisms. Yet 85% of collected Big Data is either dark or redundant, obsolete or trivial (ROT).
    • Invasive personalisation based on sensitive user information (or guesses about it), like a recent US marketing campaign congratulating women on pregnancy (even when they weren’t expecting). Overall, 75% of consumers find most forms of personalisation somewhat creepy. 22% also said they’d leave for another brand due to creepy experiences.
    • Hyper-targeted advertising campaigns based on data consumers would prefer not to share. A recent investigation found that advertising platforms often assign sensitive labels to users (as part of their ad profiles), indicative of their religion, mental issues, history with abuse and so on. This allows advertisers to target such consumers with dubious ads.

    Ultimately, excessive data collection, paired with poor data protection in business settings, results in major data breaches and costly damage control. Given that cyber attacks are on the rise, every business is vulnerable. 

    Why Should a Business Be Concerned About Protecting the Privacy of Its Customers?

    Businesses must prioritise customer privacy because that’s what is expected of them. Globally, 89% of consumers say they care about their privacy. 

    As stories about unethical data usage, excessive tracking and data breaches keep surfacing online, ever more consumers grow concerned about protecting their data. Many publicly urge companies to take action; others quietly curtail their relationships with brands.

    On average, 45% of consumers feel uncomfortable about sharing personal data. According to KPMG, 78% of American consumers have fears about the amount of data being collected. 40% of them also don’t trust companies to use their data ethically. Among Europeans, 41% are unwilling to share any personal data with businesses. 

    Because the demand for online privacy is rising, progressive companies now treat privacy as a competitive advantage. 

    For example, the encrypted messaging app Signal gained over 42 million active users in a year because it offers better data security and privacy protection. 

    ProtonMail, a privacy-centred email client, also amassed a 50 million user base in several years thanks to a “fundamentally stronger definition of privacy”.

    The growth of privacy-mindful businesses speaks volumes. And even more good things happen to them:

    • Higher consumer trust and loyalty 
    • Improved attractiveness to investors
    • Less complex compliance
    • Minimum cybersecurity exposure 
    • Better agility and innovation

    It’s time to start pursuing them! Learn how to embed privacy and security into your operations.