Keyword : - Tags - / Christian Nold

Other articles (66)

  • MediaSPIP 0.1 Beta version

25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • MediaSPIP version 0.1 Beta

16 April 2011, by

MediaSPIP 0.1 beta is the first version of MediaSPIP declared as "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in farm mode, you will also need to make other modifications (...)

Improvements to the base version

13 September 2013

Nicer multiple selection
The Chosen plugin improves the usability of multiple-selection fields. See the two images that follow for a comparison.
To use it, activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (8899)

  • What Is Incrementality & Why Is It Important in Marketing ?

26 March 2024, by Erin

    Imagine this : you just launched your latest campaign and it was a major success.

    You blew last month’s results out of the water.

    You combined a variety of tactics, channels and ad creatives to make it work.

    Now, it’s time to build the next campaign.

    The only issue ?

    You don’t know what made it successful or how much your recent efforts impacted the results.

You’ve been building your brand for years and have built up a variety of marketing pillars that are working for you. So, how do you know how much of your campaign’s success comes from those years of effort and how much from a new tactic you just implemented ?

    The key is incrementality.

    This is a way to properly attribute the right weight to your marketing tactics.

    In this article, we break down what incrementality is in marketing, how it differs from traditional attribution and how you can calculate and track it to grow your business.

    What is incrementality in marketing ?

    Incrementality in marketing is growth that can be directly credited to a marketing effort above and beyond the success of the branding.

    It looks at how much a specific tactic positively impacted a campaign on top of overall branding and marketing strategies.

What is incrementality in marketing?

    For example, this could be how much a specific tactic, campaign or channel helped increase conversions, email sign-ups or organic traffic.

The primary purpose of incrementality in marketing is to more accurately determine the impact a single marketing variable had on the success of a project.

    It removes every other factor and isolates the specific method to help marketers double down on that strategy or move on to new tactics.

    With Matomo, you can track conversions simply. With our last non-direct channel attribution system, you’ll be able to quickly see what channels are converting (and which aren’t) so you can gain insights into incrementality. 

    See why over 1 million websites choose Matomo today.


    How incrementality differs from attribution

    In marketing and advertising, it’s crucial to understand what tactics and activities drive growth.

    Incrementality and attribution help marketers and business owners understand what efforts impact their results.

    But they’re not the same.

    Here’s how they differ :

    Incrementality vs. attribution

    Incrementality explained

    Incrementality measures how much a specific marketing campaign or activity drives additional sales or growth.

    Simply put, it’s analysing the difference between having never implemented the campaign (or tactic or channel) in the first place versus the impact of the activity.

    In other words, how much revenue would you have generated this month without campaign A ?

    And how much additional revenue did you generate directly due to campaign A ?

    The reality is that dozens of factors impact revenue and growth.

    You aren’t just pouring your marketing into one specific channel or campaign at a time.

    Chances are, you’ve got your hands on several marketing initiatives like SEO, PPC, organic social media, paid search, email marketing and more.

    Beyond that, you’ve built a brand with a not-so-tangible impact on your recurring revenue.

    So, the question is, if you took away your new campaign, would you still be generating the same amount of revenue ?

    And, if you add in that campaign, how much additional revenue and growth did it directly create ?

    That is incrementality. It’s how much a campaign went above and beyond to add new revenue that wouldn’t have been there otherwise.

    So, how does attribution play into all of this ?

    Attribution explained

    Attribution is simply the process of assigning credit for a conversion to a particular marketing touchpoint.

    While incrementality is about narrowing down the overall revenue impact from a particular campaign, attribution seeks to point to a specific channel to attribute a sale.

    For example, in any given marketing campaign, you have a few marketing tactics.

    Let’s say you’re launching a limited-time product.

    You might have :

    • Paid ads via Facebook and Instagram
    • A blog post sharing how the product works
    • Organic social media posts on Instagram and TikTok
    • Email waitlist campaign building excitement around the upcoming product
    • SMS campaigns to share a limited-time discount

    So, when the time comes for the sale launch, and you generate $30,000 in revenue, what channel gets the credit ?

    Do you give credit to the paid ads on Facebook ? What about Instagram ? They got people to follow you and got them on the email waitlist.

Do you give credit to email for reminding people of the upcoming sale ? What about the organic social media posts that reminded them about it ?

    Or do you credit your SMS campaign that shared a limited-time discount ?

    Which channel is responsible for the sale ?

    This is what attribution is all about.

    It’s about giving credit where credit is due.

    The reason you want to attribute credit ? So you know what’s working and can double down your efforts on the high-impact marketing activities and channels.
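
To make that concrete, here is a minimal sketch (not Matomo’s code) of one common rule, last non-direct attribution, the model mentioned earlier in this article : the conversion is credited to the most recent touchpoint that was not a direct visit. The journey and channel names below are made up for the example.

```python
def last_non_direct_attribution(touchpoints):
    """Credit the conversion to the most recent non-direct touchpoint.

    `touchpoints` is one visitor's ordered list of channels, oldest first.
    Falls back to "direct" if every touchpoint was a direct visit.
    """
    for channel in reversed(touchpoints):
        if channel != "direct":
            return channel
    return "direct"


# Hypothetical journey from the limited-time product example above.
journey = ["instagram_ad", "email_waitlist", "sms_discount", "direct"]
print(last_non_direct_attribution(journey))  # -> "sms_discount"
```

Other attribution models (first touch, linear, position-based) split or assign the credit differently, which is exactly why the "which channel gets the credit ?" question has no single answer.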

    Leveraging incrementality and attribution together

    Incrementality and attribution aren’t competing methods of analysing what’s working.

    They’re complementary to one another and go hand in hand.

    You can (and should) use attribution and incrementality in your marketing to help understand what activities, campaigns and channels are making the biggest incremental impact on your business growth.

    Why it’s important to measure incrementality

    Incrementality is crucial to measure if you want to pour your time, money and effort into the right marketing channels and tactics.

    Here are a few reasons why you need to measure incrementality if you want to be successful with your marketing and grow your business :

    1. Accurate data

    If you want to be an effective marketer, you need to be accurate.

    You can’t blindly start marketing campaigns in hopes that you will sell many products or services.

    That’s not how it works.

    Sure, you’ll probably make some sales here and there. But to truly be effective with your work, you must measure your activities and channels correctly.

    Incrementality helps you see how each channel, tactic or campaign made a difference in your marketing.

Matomo gives you 100% accurate data on your website activities. Unlike Google Analytics, we don’t use data sampling, which limits how much data is analysed.

    Screenshot example of the Matomo dashboard

2. Helps you determine the right tactics for success

    How can you plan your marketing strategy if you don’t know what’s working ?

    Think about it.

    You’ll be blindly sailing the seas without a compass telling you where to go.

    Measuring incrementality in your marketing tactics and channels helps you understand the best tactics.

    It shows you what’s moving the needle (and what’s not).

    Once you can see the most impactful tactics and channels, you can forge future campaigns that you know will work.

    3. Allows you to get the most out of your marketing budget

    Since incrementality sheds light on what’s moving your business forward, you can confidently implement your efforts on the right tactics and channels.

    Guess what happens when you start doubling down on the most impactful activities ?

    You start increasing revenue, decreasing ad spend and getting a higher return on investment.

    The result is that you will get more out of your marketing budget.

    Not only will you boost revenue, but you’ll also be able to boost profit margins since you’re not wasting money on ineffective tactics.

    4. Increase traffic

When you see what’s truly working in your business, you can figure out what channels and tactics you should be working on.

    Incrementality helps you understand not only what your best revenue tactics are but also what channels and campaigns are bringing in the most traffic.

    When you can increase traffic, you can increase your overall marketing impact.

    5. Increase revenue

    Finally, with increased traffic, the inevitable result is more conversions.

    More conversions mean more revenue.

    Incrementality gives you a vision of the tactics and channels that are converting the best.

    If you can see that your SMS campaigns are driving the best ROI, then you know that you’ll grow your revenue by pouring more into acquiring SMS leads.

    By calculating incrementality regularly, you can rest assured that you’re only investing time and money into the most impactful activities in terms of revenue generation.

    How to calculate and test incrementality in marketing

    Now that you understand how incrementality works and why it’s important to calculate, the question is : 

    How do you calculate and conduct incrementality tests ?

Given the ever-changing marketing landscape, it’s crucial to understand how to calculate and test incrementality in your business.

    If you’re not sure how incrementality testing works, then follow these simple steps :

    How to test and analyze incrementality in marketing?

    Your first step to get an incrementality measurement is to conduct what’s referred to as a “holdout test.”

    It’s not a robust test, but it’s an easy way to get the ball rolling with incrementality.

    Here’s how it works :

    1. Choose your target audience.

    With Matomo’s segmentation feature, you can get pretty specific with your target audience, such as :

      • Visitors from the UK
      • Returning visitors
      • Mobile users
      • Visitors who clicked on a specific ad
2. Split your audience into two groups :
  • Control group (60% of the segment)
  • Test group (40% of the segment)
3. Target the control group with your existing (baseline) marketing tactics.
4. Target the test group with the baseline tactics plus the new tactic you want to measure.
5. Analyse the results. The difference between the control and test groups is the incremental lift attributable to the new tactic, which tells you whether it is more effective or not (a rough sketch of this flow follows this list).
6. Repeat the test with a new control group (with an updated tactic) and a new test group (with a new tactic).
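
As a rough illustration of those steps (this is not a Matomo feature ; the segment, the fixed random seed and the placeholder conversion data are all hypothetical), a minimal Python sketch of the 60/40 split and the rate comparison could look like this :

```python
import random


def holdout_split(user_ids, test_share=0.4, seed=42):
    """Randomly split a segment into a control group (60%) and a test group (40%)."""
    rng = random.Random(seed)            # fixed seed so the split is reproducible
    shuffled = list(user_ids)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * test_share)
    return set(shuffled[cut:]), set(shuffled[:cut])   # (control, test)


def conversion_rate(group, converters):
    """Fraction of users in `group` that converted."""
    return len(group & converters) / len(group) if group else 0.0


# Hypothetical segment of 10,000 visitors and placeholder conversion data.
segment = [f"user_{i}" for i in range(10_000)]
control, test = holdout_split(segment)
converters = set(random.sample(segment, 180))

print(f"control conversion rate : {conversion_rate(control, converters):.2%}")
print(f"test conversion rate    : {conversion_rate(test, converters):.2%}")
```

In a real test, the two groups would be targeted with different campaigns and the conversion data would come from your analytics goals rather than a placeholder set.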

    Matomo can help you analyse the results of your campaigns in our Goals feature. Set up business objectives so you can easily track different goals like conversions.


    Here’s an example of how this incrementality testing could look in real life.

    Imagine a fitness retailer wants to start showing Facebook ads in their marketing mix.

    The marketing manager decided to conduct a holdout test. If we match our example below with the steps above, this is how the holdout test might look.

    1. They choose people who’ve purchased free weights in the past as their target audience (see how that segmentation works ?).
    2. They split this segment into a control group and a test group.
    3. For this test, they direct their regular marketing campaign to the control group (60% of the segment). The campaign includes promoting a 20% off sale on organic social media posts, email marketing, and SMS.
    4. They direct their regular marketing campaign plus Facebook ads to the test group (40% of the segment).
5. They ran the campaign for three weeks with sale conversions as the goal and noticed :
      • The control group had a 1.5% conversion rate.
      • The test group (with Facebook ads) had a 2.1% conversion rate.
• In this scenario, they could see that the group who saw the Facebook ads converted better.
      • They created the following formula to measure the incremental lift of the Facebook ads :
    Calculation: Incrementality in marketing.
      • Here’s how the calculation works out : (2.1% – 1.5%) / 1.5% = 40%

    The Facebook ads had a positive 40% incremental lift in conversions during the sale.
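
Wrapped in a small helper (a minimal sketch ; the rates are just the hypothetical numbers from the example above), the lift calculation is a one-liner :

```python
def incremental_lift(test_rate, control_rate):
    """Relative lift of the test group over the control group."""
    if control_rate == 0:
        raise ValueError("control conversion rate must be non-zero")
    return (test_rate - control_rate) / control_rate


# Conversion rates from the fitness-retailer example above.
print(f"{incremental_lift(test_rate=0.021, control_rate=0.015):.0%}")  # -> 40%
```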

    Incrementality testing isn’t a one-and-done process, though.

    While this first test is a great sign for the marketing manager, it doesn’t mean they should immediately throw all their money into Facebook ads.

    They should continue conducting tests to verify the initial test.

    Use Matomo to track incrementality today

    Incrementality can give you insights into exactly what’s working in your marketing (and what’s not) so you can design proven strategies to grow your business.

    If you want more help tracking your marketing efforts, try Matomo today.

Our web analytics and behaviour analytics platform gives you firsthand data on your website visitors that you can use to craft effective marketing strategies.

    Matomo provides 100% accurate data. Unlike other major web analytics platforms, we don’t do data sampling. What you see is what’s really going on in your website. That way, you can make more informed decisions for better results.

    At Matomo, we take privacy very seriously and include several advanced privacy protections to ensure you are in full control.

As a web analytics solution, we’re fully compliant with some of the world’s strictest privacy regulations, like the GDPR. With Matomo, you get peace of mind knowing you can make data-driven decisions while staying compliant.

    If you’re ready to launch a data-driven marketing strategy today and grow your business, get started with our 21-day free trial now. No credit card required.

  • Anatomy of an optimization : H.264 deblocking

26 May 2010, by Dark Shikari — H.264, assembly, development, speed, x264

    As mentioned in the previous post, H.264 has an adaptive deblocking filter. But what exactly does that mean — and more importantly, what does it mean for performance ? And how can we make it as fast as possible ? In this post I’ll try to answer these questions, particularly in relation to my recent deblocking optimizations in x264.

H.264’s deblocking filter has two steps : strength calculation and the actual filter. The first step calculates the parameters for the second step. The filter runs on all the edges in each macroblock. That’s 4 vertical edges of length 16 pixels and 4 horizontal edges of length 16 pixels. The vertical edges are filtered first, from left to right, then the horizontal edges, from top to bottom (order matters !). The leftmost edge is the one between the current macroblock and the left macroblock, while the topmost edge is the one between the current macroblock and the top macroblock.

    Here’s the formula for the strength calculation in progressive mode. The highest strength that applies is always selected.

    If we’re on the edge between an intra macroblock and any other macroblock : Strength 4
    If we’re on an internal edge of an intra macroblock : Strength 3
    If either side of a 4-pixel-long edge has residual data : Strength 2
    If the motion vectors on opposite sides of a 4-pixel-long edge are at least a pixel apart (in either x or y direction) or the reference frames aren’t the same : Strength 1
    Otherwise : Strength 0 (no deblocking)
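
For illustration, here is a rough Python sketch of those rules for a single 4-pixel edge. It is a simplification, not x264’s deblock_strength_c (which is C and assembly and operates on whole macroblocks at once) ; the parameters are invented for the example, and motion vectors are assumed to be in quarter-pel units, so a difference of 4 corresponds to one full pixel.

```python
def edge_strength(mb_edge_with_intra, inside_intra_mb, has_residual,
                  mv_a, mv_b, ref_a, ref_b):
    """Pick the boundary strength for one 4-pixel edge.

    The highest strength that applies is selected (4 > 3 > 2 > 1 > 0),
    mirroring the rules listed above.
    """
    if mb_edge_with_intra:       # macroblock edge touching an intra macroblock
        return 4
    if inside_intra_mb:          # internal edge of an intra macroblock
        return 3
    if has_residual:             # residual data on either side of the edge
        return 2
    if (ref_a != ref_b                          # different reference frames
            or abs(mv_a[0] - mv_b[0]) >= 4      # MVs at least one pixel apart in x
            or abs(mv_a[1] - mv_b[1]) >= 4):    # MVs at least one pixel apart in y
        return 1
    return 0                     # no deblocking
```

This per-edge decision is what has to run for every edge in the macroblock, which is why the naive version is so branchy and why a branch-free SIMD version, as described below, pays off.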

    These values are then thrown into a lookup table depending on the quantizer : higher quantizers have stronger deblocking. Then the actual filter is run with the appropriate parameters. Note that Strength 4 is actually a special deblocking mode that performs a much stronger filter and affects more pixels.

    One can see somewhat intuitively why these strengths are chosen. The deblocker exists to get rid of sharp edges caused by the block-based nature of H.264, and so the strength depends on what exists that might cause such sharp edges. The strength calculation is a way to use existing data from the video stream to make better decisions during the deblocking process, improving compression and quality.

    Both the strength calculation and the actual filter (not described here) are very complex if naively implemented. The latter can be SIMD’d with not too much difficulty ; no H.264 decoder can get away with reasonable performance without such a thing. But what about optimizing the strength calculation ? A quick analysis shows that this can be beneficial as well.

Since we have to check both horizontal and vertical edges, we have to check up to 32 pairs of coefficient counts (for residual), 16 pairs of reference frame indices, and 128 motion vector values (counting x and y as separate values). This is a lot of calculation ; a naive implementation can take 500-1000 clock cycles on a modern CPU. Of course, there are a lot of shortcuts we can take. Here are some examples :

    • If the macroblock uses the 8×8 transform, we only need to check 2 edges in each direction instead of 4, because we don’t deblock inside of the 8×8 blocks.
    • If the macroblock is a P-skip, we only have to check the first edge in each direction, since there’s guaranteed to be no motion vector differences, reference frame differences, or residual inside of the macroblock.
    • If the macroblock has no residual at all, we can skip that check.
    • If we know the partition type of the macroblock, we can do motion vector checks only along the edges of the partitions.
    • If the effective quantizer is so low that no deblocking would be performed no matter what, don’t bother calculating the strength.

    But even all of this doesn’t save us from ourselves. We still have to iterate over a ton of edges, checking each one. Stuff like the partition-checking logic greatly complicates the code and adds overhead even as it reduces the number of checks. And in many cases decoupling the checks to add such logic will make it slower : if the checks are coupled, we can avoid doing a motion vector check if there’s residual, since Strength 2 overrides Strength 1.

    But wait. What if we could do this in SIMD, just like the actual loopfilter itself ? Sure, it seems more of a problem for C code than assembly, but there aren’t any obvious things in the way. Many years ago, Loren Merritt (pengvado) wrote the first SIMD implementation that I know of (for ffmpeg’s decoder) ; it is quite fast, so I decided to work on porting the idea to x264 to see if we could eke out a bit more speed here as well.

    Before I go over what I had to do to make this change, let me first describe how deblocking is implemented in x264. Since the filter is a loopfilter, it acts “in loop” and must be done in both the encoder and decoder — hence why x264 has it too, not just decoders. At the end of encoding one row of macroblocks, x264 goes back and deblocks the row, then performs half-pixel interpolation for use in encoding the next frame.

    We do it per-row for reasons of cache coherency : deblocking accesses a lot of pixels and a lot of code that wouldn’t otherwise be used, so it’s more efficient to do it in a single pass as opposed to deblocking each macroblock immediately after encoding. Then half-pixel interpolation can immediately re-use the resulting data.
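
As a structural sketch of that per-row pipeline (Python pseudocode with stub bodies, not x264’s actual code ; the function names are invented) :

```python
def encode_macroblock(mb):       # stub : analysis, mode decision, residual coding
    pass

def deblock_row(row):            # stub : strength calculation + actual filter for the row
    pass

def interpolate_halfpel(row):    # stub : half-pel planes reused for the next frame's motion estimation
    pass

def encode_frame(macroblock_rows):
    """Encode a row, then deblock it, then interpolate, so the row's pixels
    are touched in one pass while they are still hot in cache."""
    for row in macroblock_rows:
        for mb in row:
            encode_macroblock(mb)
        deblock_row(row)
        interpolate_halfpel(row)
```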

    Now to the change. First, I modified deblocking to implement a subset of the macroblock_cache_load function : spend an extra bit of effort loading the necessary data into a data structure which is much simpler to address — as an assembly implementation would need (x264_macroblock_cache_load_deblock). Then I massively cleaned up deblocking to move all of the core strength-calculation logic into a single, small function that could be converted to assembly (deblock_strength_c). Finally, I wrote the assembly functions and worked with Loren to optimize them. Here’s the result.

    And the timings for the resulting assembly function on my Core i7, in cycles :

    deblock_strength_c : 309
    deblock_strength_mmx : 79
    deblock_strength_sse2 : 37
    deblock_strength_ssse3 : 33

    Now that is a seriously nice improvement. 33 cycles on average to perform that many comparisons–that’s absurdly low, especially considering the SIMD takes no branchy shortcuts : it always checks every single edge ! I walked over to my performance chart and happily crossed off a box.

    But I had a hunch that I could do better. Remember, as mentioned earlier, we’re reloading all that data back into our data structures in order to address it. This isn’t that slow, but takes enough time to significantly cut down on the gain of the assembly code. And worse, less than a row ago, all this data was in the correct place to be used (when we just finished encoding the macroblock) ! But if we did the deblocking right after encoding each macroblock, the cache issues would make it too slow to be worth it (yes, I tested this). So I went back to other things, a bit annoyed that I couldn’t get the full benefit of the changes.

    Then, yesterday, I was talking with Pascal, a former Xvid dev and current video hacker over at Google, about various possible x264 optimizations. He had seen my deblocking changes and we discussed that a bit as well. Then two lines hit me like a pile of bricks :

    <_skal_> tried computing the strength at least ?
    <_skal_> while it’s fresh

    Why hadn’t I thought of that ? Do the strength calculation immediately after encoding each macroblock, save the result, and then go pick it up later for the main deblocking filter. Then we can use the data right there and then for strength calculation, but we don’t have to do the whole deblock process until later.
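
In the same sketch style as before (invented names, stub bodies, not the actual x264 code), the reorganisation looks roughly like this : the cheap strength calculation moves next to the encode, while the heavy filtering stays per-row.

```python
def encode_macroblock(mb):            # stub : encode one macroblock
    pass

def compute_strength(mb):             # stub : the bS rules, run while the MB data is still hot
    return 0

def filter_row(row, strengths):       # stub : the actual deblocking filter, using saved strengths
    pass

def interpolate_halfpel(row):         # stub : half-pel interpolation for the next frame
    pass

def encode_frame(macroblock_rows):
    """Strengths are computed per macroblock right after encoding it;
    the filter itself still runs once per row, as before."""
    for row in macroblock_rows:
        row_strengths = []
        for mb in row:
            encode_macroblock(mb)
            row_strengths.append(compute_strength(mb))  # data is already in the right place
        filter_row(row, row_strengths)
        interpolate_halfpel(row)
```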

    I went and implemented it and, after working my way through a horde of bugs, eventually got a working implementation. A big catch was that of slices : deblocking normally acts between slices even though normal encoding does not, so I had to perform extra munging to get that to work. By midday today I was able to go cross yet another box off on the performance chart. And now it’s committed.

    Sometimes chatting for 10 minutes with another developer is enough to spot the idea that your brain somehow managed to miss for nearly a straight week.

    NB : the performance chart is on a specific test clip at a specific set of settings (super fast settings) relevant to the company I work at, so it isn’t accurate nor complete for, say, default settings.

    Update : Here’s a higher resolution version of the current chart, as requested in the comments.
