
Media (91)

Other articles (56)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, as announced here.
    The zip file provided here contains only the MediaSPIP sources, in standalone form.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, further modifications are also required (...)

  • Making files available

    14 April 2011

    By default, when initialised, MediaSPIP does not let visitors download files, whether originals or the results of their transformation or encoding; it only lets them be viewed.
    However, it is easy to allow visitors to access these documents in various forms.
    All of this happens on the template configuration page: go to the channel’s administration area and choose, in the navigation (...)

On other sites (11249)

  • How to cheat on video encoder comparisons

    21 June 2010, by Dark Shikari — H.264, benchmark, stupidity, test sequences

    Over the past few years, practically everyone and their dog has published some sort of encoder comparison. Sometimes they’re actually intended to be something for the world to rely on, like the old Doom9 comparisons and the MSU comparisons. Other times, they’re just to scratch an itch — someone wants to decide for themselves what is better. And sometimes they’re just there to outright lie in favor of whatever encoder the author likes best. The latter is practically an expected feature on the websites of commercial encoder vendors.

    One thing almost all these comparisons have in common — particularly (but not limited to!) the ones done without consulting experts — is that they are horribly done. They’re usually easy to spot: for example, two videos at totally different bitrates are being compared, or the author complains about one of the videos being “washed out” (i.e. he screwed up his colorspace conversion). Or the results are simply nonsensical. Many of these problems result from the person running the test not “sanity checking” the results to catch mistakes that he made in his test. Others are just outright intentional.

    The result of all these mistakes, both intentional and accidental, is that the results of encoder comparisons tend to be all over the map, to the point of absurdity. For any pair of encoders, it’s practically a given that a comparison exists somewhere that will “prove” any result you want to claim, even if the result would be beyond impossible in any sane situation. This often results in the appearance of a “controversy” even if there isn’t any.

    Keep in mind that every single mistake I mention in this article has actually been done, usually in more than one comparison. And before I offend anyone, keep in mind that when I say “cheating”, I don’t mean to imply that everyone that makes the mistake is doing it intentionally. Especially among amateur comparisons, most of the mistakes are probably honest.

    So, without further ado, we will investigate a wide variety of ways, from the blatant to the subtle, with which you too can cheat on your encoder comparisons.

    Blatant cheating

    1. Screw up your colorspace conversions. A common misconception is that converting from YUV to RGB and back is a simple process where nothing can go wrong. This is quite untrue. There are two primary attributes of YUV: PC range (0-255) vs TV range (16-235) and BT.709 vs BT.601 conversion coefficients. That sums up to a total of 4 possible different types of YUV. When people compare encoders, they often use different frontends, some of which make incorrect assumptions about these attributes.

    Incorrect assumptions are so common that it’s often a matter of luck whether the tool gets it right or not. It doesn’t help that most videos don’t even properly signal which they are to begin with! Often even the tool that the person running the comparison is using to view the source material gets the conversion wrong.
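
    To see how easily this goes wrong, here is a minimal sketch (hand-rolled, using the standard BT.601/BT.709 luma weights; the pixel value is arbitrary) of how the same RGB pixel yields different luma values depending on which assumptions a tool makes:

    <?php
    // Luma (Y) of one RGB pixel under the two common coefficient sets.
    function luma_bt601(float $r, float $g, float $b): float {
        return 0.299 * $r + 0.587 * $g + 0.114 * $b;
    }
    function luma_bt709(float $r, float $g, float $b): float {
        return 0.2126 * $r + 0.7152 * $g + 0.0722 * $b;
    }
    // Squeeze a PC-range (0-255) luma value into TV range (16-235).
    function pc_to_tv(float $y): float {
        return 16.0 + $y * (235.0 - 16.0) / 255.0;
    }

    $y601 = luma_bt601(200, 100, 50); // ~124.2
    $y709 = luma_bt709(200, 100, 50); // ~117.7
    // Same pixel, two different "correct" luma values; if a tool then
    // misreads TV-range data as PC-range, the value shifts yet again:
    printf("BT.601 %.1f, BT.709 %.1f, TV-range %.1f\n",
           $y601, $y709, pc_to_tv($y601));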

    Subsampling YUV (aka what everyone uses) adds yet another dimension to the problem: the locations which the chroma data represents (“chroma siting”) aren’t constant. For example, JPEG and MPEG-2 define different positions. This is even worse because almost nobody actually handles this correctly — the best approach is to simply make sure none of your software is doing any conversion. A mistake in chroma siting is what created that infamous PSNR graph showing Theora beating x264, which has been cited ever since, despite the developers themselves retracting it after realizing their mistake.

    Keep in mind that the video encoder is not responsible for colorspace conversion — almost all video encoders operate in the YUV domain (usually subsampled 4:2:0 YUV, aka YV12). Thus any problem in colorspace conversion is usually the fault of the tools used, not the actual encoder.

    How to spot it: “The color is a bit off” or “the contrast of the video is a bit duller”. There were a staggering number of “H.264 vs Theora” encoder comparisons which came out in favor of one or the other solely based on “how well the encoder kept the color” — making the results entirely bogus.

    2. Don’t compare at the same (or nearly the same) bitrate. I saw a VP8 vs x264 comparison the other day that gave VP8 30% more bitrate and then proceeded to demonstrate that it got better PSNR. You would think this is blindingly obvious, but people still make this mistake! The most common cause of this is assuming that encoders will successfully reach the target bitrate you ask of them — particularly with very broken encoders that don’t. Always check the output filesizes of your encodes.

    How to spot it: The comparison lists perfectly round bitrates for every single test, as opposed to the actual bitrates achieved by the encoders, which will never match exactly in any real test.
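
    A quick way to sanity-check this is to compute the bitrate each encoder actually achieved from the output file size and the clip duration, instead of trusting the number you asked for. A minimal sketch (file names and the clip duration are placeholders):

    <?php
    // Average bitrate in kbps, from output file size and known duration.
    function actual_bitrate_kbps(string $path, float $duration_s): float {
        clearstatcache(true, $path); // don't trust PHP's stat cache
        return filesize($path) * 8.0 / $duration_s / 1000.0;
    }

    $duration_s = 60.0; // known clip length in seconds (placeholder)
    foreach (['out_x264.mp4', 'out_vp8.webm'] as $file) {
        printf("%s: %.0f kbps\n", $file, actual_bitrate_kbps($file, $duration_s));
    }
    // If one encode came out 30% larger, any quality comparison is void.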

    3. Use unfair encoding settings. This is a bit of a wide topic: there are many ways to do this. We’ll cover the more blatant ones in this part. Here are some common ones:

    a. Simply cheat. Intentionally pick awful settings for the encoder you don’t like.

    b. Don’t consider performance. Pick encoding settings without any regard for some particular performance goal. For example, it’s perfectly reasonable to say “use the best settings possible, regardless of speed”. It’s also reasonable to look for a particular encoding speed target. But what isn’t reasonable is to pick extremely fast settings for one encoder and extremely slow settings for another encoder.

    c. Don’t attempt to match comparable options when it’s reasonable to do so. Keyframe interval is a classic one of these: shorter values reduce compression but improve seeking. An easy way to cheat is to simply not set them to the same value, biasing towards whichever encoder has the longer interval. This is most common as an accidental mistake with comparisons involving ffmpeg, where the default keyframe interval is an insanely low 12 frames.

    How to spot it: The comparison doesn’t document its approach regarding choice of encoding settings.

    4. Use ratecontrol methods unfairly. Constant bitrate is not the same as average bitrate — using one instead of the other is a great way to completely ruin a comparison. Another method is to use 1-pass bitrate mode for one encoder and 2-pass or constant quality for another. A good general approach is that, for any given encoder, one should use 2-pass if available and constant quality if not (it may take a few runs to get the bitrate you want, of course).

    Of course, it’s also fine to run a comparison with a particular mode in mind — for example, a comparison targeted at streaming applications might want to test using 1-pass CBR. Of course, in such a case, if CBR is not available in an encoder, you can’t compare to that encoder.
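
    For a constant-quality encoder, “a few runs” usually means searching the quality setting until the achieved bitrate lands near the target. A rough sketch of that loop, assuming x264 driven through the ffmpeg command line (the file names, search bounds and 5% tolerance are arbitrary choices):

    <?php
    // Bisect x264's CRF (lower = more bits) until the achieved bitrate
    // is within 5% of the target. Illustrative sketch, not a test rig.
    $target_kbps = 1000.0;
    $duration_s  = 60.0;    // known clip length in seconds (placeholder)
    $lo = 15.0; $hi = 35.0; // plausible CRF search bounds

    for ($i = 0; $i < 8; $i++) {
        $crf = ($lo + $hi) / 2.0;
        shell_exec(sprintf(
            'ffmpeg -y -i source.y4m -c:v libx264 -crf %.1f out.mp4 2>/dev/null',
            $crf
        ));
        clearstatcache(true, 'out.mp4');
        $kbps = filesize('out.mp4') * 8.0 / $duration_s / 1000.0;
        if (abs($kbps - $target_kbps) / $target_kbps < 0.05) break;
        if ($kbps > $target_kbps) { $lo = $crf; } else { $hi = $crf; }
    }
    printf("CRF %.1f achieved %.0f kbps\n", $crf, $kbps);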

    How to spot it: It’s usually pretty obvious if the encoding settings are given.

    5. Use incredibly old versions of encoders. As it happens, Debian stable is not the best source for the most recent encoding software. Equally bad is using recent versions known to be buggy.

    6. Don’t distinguish between video formats and the software that encodes them. This is incredibly common: I’ve seen tests that claim to compare “H.264” against something else while in fact actually comparing “Quicktime” against something else. It’s impossible to compare all H.264 encoders at once, so don’t even try — just call the comparison “Quicktime versus X” instead of “H.264 versus X”. Or better yet, use a good H.264 encoder, like x264, and don’t bother testing awful encoders to begin with.

    Less-obvious cheating

    1. Pick a bitrate that’s way too low. Low bitrate testing is very effective at making differences between encoders obvious, particularly if doing a visual comparison. But past a certain point, it becomes impossible for some encoders to keep up. This is usually an artifact of the video format itself — a scalability limitation. Practically all DCT-based formats have this kind of limitation (wavelets are mostly immune).

    In reality, this is rarely a problem, because one could merely downscale the video to resolve the problem — lower resolutions need fewer bits. But people rarely do this in comparisons (it’s hard to do it fairly), so the best approach is to simply not use absurdly low bitrates. What is “absurdly low”? That’s a hard question — it ends up being a matter of using one’s best judgement.

    This tends to be less of a problem in larger-scale tests that use many different bitrates.

    How to spot it: At least one of the encoders being compared falls apart completely and utterly in the screenshots.

    Biases towards, a lot: Video formats with completely scalable coding methods (Dirac, Snow, JPEG-2000, SVC).

    Biases towards, a little: Video formats with coding methods that improve scalability, such as arithmetic coding, B-frames, and run-length coding. For example, H.264 and Theora tend to be more scalable than MPEG-4.

    2. Pick a bitrate that’s way too high. This is a staggeringly common mistake: pick a bitrate so high that all of the resulting encodes look absolutely perfect. The claim is then made that “there’s no significant difference” between any of the encoders tested. This is surprisingly easy to do inadvertently on sources like Big Buck Bunny, which looks transparent at relatively low bitrates. An equally common but similar mistake is to test at a bitrate that isn’t so high that the videos look perfect, but high enough that they all look very good. The claim is then made that “the difference between these encoders is small”. Well, of course, if you give everything tons of bitrate, the difference between encoders is small.

    How to spot it: You can’t tell which image is the source and which is the encode.

    3. Making invalid comparisons using objective metrics. I explained this earlier in the linked blog post, but in short, if you’re going to measure PSNR, make sure all the encoders are optimized for PSNR. Equally, if you’re going to leave the encoder optimized for visual quality, don’t measure PSNR — post screenshots instead. Same with SSIM or any other objective metric. Furthermore, don’t blindly do metric comparisons — always at least look at the output as a sanity test. Finally, do not claim that PSNR is particularly representative of visual quality, because it isn’t.

    How to spot it: Encoders with psy optimizations, such as x264 or Theora 1.2, do considerably worse than expected in PSNR tests, but look much better in visual comparisons.

    4. Lying with graphs. Using misleading scales on graphs is a great way to make the differences between encoders seem larger or smaller than they actually are. A common mistake is to scale SSIM linearly: in fact, 0.99 is about twice as good as 0.98, not 1% better. One solution for this is to use dB to compare SSIM values.
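
    The usual mapping (the one x264 itself reports) treats 1 - SSIM as the residual distortion and expresses it logarithmically; a few lines make the 0.99-versus-0.98 point concrete:

    <?php
    // SSIM on a dB scale: -10 * log10(1 - ssim).
    function ssim_db(float $ssim): float {
        return -10.0 * log10(1.0 - $ssim);
    }
    printf("SSIM 0.98 = %.2f dB\n", ssim_db(0.98)); // ~16.99 dB
    printf("SSIM 0.99 = %.2f dB\n", ssim_db(0.99)); // ~20.00 dB
    // 0.99 halves the residual error of 0.98: about 3 dB better, not 1%.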

    5. Using lossy screenshots. Posting screenshots as JPEG is a silly, pointless way to worsen an encoder comparison.

    Subtle cheating

    1. Unfairly pick screenshots for comparison. Comparing based on stills is not ideal, but it’s often vastly easier than comparing videos in motion. But it also opens up the door to unfairness. One of the most common mistakes is to pick a frame immediately after (or on) a keyframe for one encoder, but not for the other encoder. Particularly in the case of encoders that massively boost keyframe quality, this will unfairly bias in favor of the one with the recent keyframe.

    How to spot it: It’s very difficult, if not impossible, to tell unless they provide the video files to inspect.

    2. Cherry-pick source videos. Good source videos are incredibly hard to come by — almost everything is already compressed and what’s left is usually a very poor example of real content. Here are some common ways to bias unfairly using cherry-picking:

    a. Pick source videos that are already heavily compressed. Pre-compressed source isn’t much of an issue if your target quality level for testing is much lower than that of the source, since any compression artifacts in the source will be a lot smaller than those created by the encoders. But if the source is already very compressed, or you’re testing at a relatively high quality level, this becomes a significant issue.

    Biases towards: Anything that uses a transform similar to the source content. For MPEG-2 source material, this biases towards formats that use the 8x8 DCT or a very close approximation: MPEG-1/2/4, H.263, and Theora. For H.264 source material, this biases towards formats that use a 4x4 transform: H.264 and VP8.

    b. Pick standard test clips that were not intended for this purpose. There are a wide variety of uncompressed “standard test clips”. Some of these are not intended for general-purpose use, but rather exist to test specific encoder capabilities. For example, Mobile Calendar (“mobcal”) is extremely sharp and low motion, serving to test interpolation capabilities. It will bias incredibly heavily towards whatever encoder uses more B-frames and/or has higher-precision motion compensation. Other test clips are almost completely static, such as the classic “akiyo”. These are also not particularly representative of real content.

    c. Pick very noisy content. Noise is — by definition — not particularly compressible. Both in terms of PSNR and visual quality, a very noisy test clip will tend to reduce the differences between encoders dramatically.

    d. Pick a test clip to exercise a specific encoder feature. I’ve often used short clips from Touhou games to demonstrate the effectiveness of x264’s macroblock-tree algorithm. I’ve sometimes even used it to compare to other encoders as part of such a demonstration. I’ve also used the standard test clip “parkrun” as a demonstration of adaptive quantization. But claiming that either is representative of most real content — and thus can be used as a general determinant of how good encoders are — is of course insane.

    e. Simply encode a bunch of videos and pick the one your favorite encoder does best on.

    3. Preprocessing the source. An encoder test is a test of encoders, not preprocessing. Some encoding apps may add preprocessors to the source, such as noise reduction. This may make the video look better — possibly even better than the source — but it’s not a fair part of comparing the actual encoders.

    4. Screw up decoding. People often forget that in addition to encoding, a test also involves decoding — a step which is equally possible to do wrong. One common error caused by this is in tests of Theora on content whose resolution isn’t divisible by 16. Decoding is often done with ffmpeg — which doesn’t crop the edges properly in some cases. This isn’t really a big deal visually, but in a PSNR comparison, misaligning the entire frame by 4 or 8 pixels is a great way of completely invalidating the results.

    The greatest mistake of all

    Above all, the biggest and most common mistake — and the one that leads to many of the problems mentioned here — is the mistaken belief that one test, or even a few tests, can really represent all usage fairly. Any comparison has to have some specific goal — to compare something in some particular case, whether it be “maximum offline compression ignoring encoding speed” or “real-time high-speed video streaming” or whatnot. And even then, no comparison can represent all use-cases in that category alone. An encoder comparison can only be honest if it’s aware of its limitations.

  • error with ffmpeg, when trying to upload a video

    14 July 2021, by James

    I recently downloaded a script, similar to a social network. All the publishing features are included: buttons to publish images, videos and polls. The video upload button has a problem, though: the host I'm on doesn't support ffmpeg conversion. That should be fine, because the script lets you leave the ffmpeg field (in the admin panel) empty, and I did. The problem is that when I try to upload a video, I always get an error saying the operation could not be completed. Looking at the script's video-processing code, I found that the upload itself works (the video lands in the uploads folder), but nothing is written to the database and no preview appears on the publish page. In other words, the ffmpeg code runs even though the host doesn't allow it, and then errors out.

    I've tried everything: deleting the ffmpeg part of the code, changing the input names, and many other small tweaks, but the error persists.

    The code is below. Could someone tell me what to change so that it uploads the video normally? (This host allows video uploads, just without ffmpeg.) What could be wrong with it?

    I tested it on localhost and everything worked fine.

    The whole code:

    <?php 

if (empty($cl['is_logged'])) {
    $data         = array(
        'code'    => 401,
        'data'    => array(),
        'message' => 'Unauthorized Access'
    );
}

else {
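    // Resume the user's draft post, if any, and read the requested media type.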
    $post_data  = $me['draft_post'];
    $media_type = fetch_or_get($_POST["type"], false);

    if (empty($media_type) || in_array($media_type, array("image", "video")) != true) {
        $data['code']    = 400;
        $data['message'] = "Media file type is missing or invalid";
        $data['data']    = array();
    }

    else {
        if ($media_type == "image") {
            if (not_empty($_FILES['file']) && not_empty($_FILES['file']['tmp_name'])) {
                if (empty($post_data)) {
                    $post_id   = cl_create_orphan_post($me['id'], "image");
                    $post_data = cl_get_orphan_post($post_id);

                    cl_update_user_data($me['id'], array(
                        'last_post' => $post_id
                    ));
                }
                
                if (not_empty($post_data) && $post_data["type"] == "image") {
                    if (empty($post_data['media']) || count($post_data['media']) < 10) {
                        $file_info      =  array(
                            'file'      => $_FILES['file']['tmp_name'],
                            'size'      => $_FILES['file']['size'],
                            'name'      => $_FILES['file']['name'],
                            'type'      => $_FILES['file']['type'],
                            'file_type' => 'image',
                            'folder'    => 'images',
                            'slug'      => 'original',
                            'crop'      => array('width' => 300, 'height' => 300),
                            'allowed'   => 'jpg,png,jpeg,gif'
                        );

                        $file_upload = cl_upload($file_info);

                        if (not_empty($file_upload['filename'])) {
          
                            $img_id      = cl_db_insert(T_PUBMEDIA, array(
                                "pub_id" => $post_data["id"],
                                "type"   => "image",
                                "src"    => $file_upload['filename'],
                                "time"   => time(),
                                "json_data" => json(array(
                                    "image_thumb" => $file_upload['cropped']
                                ),true)
                            ));

                            if (is_posnum($img_id)) {
                                $data['message'] = 'Media file uploaded successfully';
                                $data['code']    = 200;
                                $data['data']    = array(
                                    "media_id"   => $img_id, 
                                    "url"        => cl_get_media($file_upload['cropped']),
                                    "type"       => "Image"
                                );
                            }
                        }
                        else {
                            $data['code']    = 400;
                            $data['message'] = "Something went wrong while saving an uploaded media file. Please check your details and try again";
                            $data['data']    = array();
                        }
                    }
                    else {
                        $data['code']    = 400;
                        $data['message'] = "You cannot attach more than 10 images to a post";
                        $data['data']    = array();
                    }
                }
                else {
                    cl_delete_orphan_posts($me['id']);
                    cl_update_user_data($me['id'],array(
                        'last_post' => 0
                    ));

                    $data['code']    = 500;
                    $data['message'] = "An error occurred while processing your request. Please try again later.";
                    $data['data']    = array();
                }
            }
            else {
                $data['code']    = 500;
                $data['message'] = "Media file is missing or invalid";
                $data['data']    = array();
            }
        }

        else if($media_type == "video") {
            if (not_empty($_FILES['file']) && not_empty($_FILES['file']['tmp_name'])) {
                if (empty($post_data)) {
                    $post_id   = cl_create_orphan_post($me['id'], "video");
                    $post_data = cl_get_orphan_post($post_id);

                    cl_update_user_data($me['id'],array(
                        'last_post' => $post_id
                    ));
                }

                if (not_empty($post_data) && $post_data["type"] == "video") {
                    if (empty($post_data['media'])) {
                        $file_info      =  array(
                            'file'      => $_FILES['file']['tmp_name'],
                            'size'      => $_FILES['file']['size'],
                            'name'      => $_FILES['file']['name'],
                            'type'      => $_FILES['file']['type'],
                            'file_type' => 'video',
                            'folder'    => 'videos',
                            'slug'      => 'original',
                            'allowed'   => 'mp4,mov,3gp,webm',
                        );

                        $file_upload = cl_upload($file_info);
                        $upload_fail = false;
                        $post_id     = $post_data['id'];

                        if (not_empty($file_upload['filename'])) {
                            try {
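                                // Poster-frame generation: this block runs even when no
                                // ffmpeg binary is configured, so constructing/running
                                // FFmpeg throws and the upload is rejected below.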
                                require_once(cl_full_path("core/libs/ffmpeg-php/vendor/autoload.php"));

                                $ffmpeg         = new FFmpeg(cl_full_path($config['ffmpeg_binary']));
                                $thumb_path     = cl_gen_path(array(
                                    "folder"    => "images",
                                    "file_ext"  => "jpeg",
                                    "file_type" => "image",
                                    "slug"      => "poster",
                                ));

                                $ffmpeg->input($file_upload['filename']);
                                $ffmpeg->set('-ss','3');      // seek 3 seconds in
                                $ffmpeg->set('-vframes','1'); // grab a single frame
                                $ffmpeg->set('-f','mjpeg');   // write it as JPEG
                                $ffmpeg->output($thumb_path)->ready();
                            } 

                            catch (Exception $e) {
                                $upload_fail = true;
                            }

                            if (empty($upload_fail)) {
                                $vid_id      = cl_db_insert(T_PUBMEDIA, array(
                                    "pub_id" => $post_id,
                                    "type"   => "video",
                                    "src"    => $file_upload['filename'],
                                    "time"   => time(),
                                    "json_data" => json(array(
                                        "poster_thumb" => $thumb_path
                                    ),true)
                                ));

                                if (is_posnum($vid_id)) {
                                    $data['message'] = 'Media file uploaded successfully';
                                    $data['code']    = 200;
                                    $data['data']    = array(
                                        "media_id"   => $vid_id, 
                                        "type"       => "Video",
                                        "source"     => cl_get_media($file_upload['filename']),
                                        "poster"     => cl_get_media($thumb_path),
                                    );
                                }
                            }

                            else {
                                $data['code']    = 400;
                                $data['message'] = "Something went wrong while saving an uploaded media file. Please check your details and try again";
                                $data['data']    = array();
                            }
                        }
                    }
                    else {
                        $data['code']    = 400;
                        $data['message'] = "You cannot attach more than 1 video to a post";
                        $data['data']    = array();
                    }
                }
                else {
                    cl_delete_orphan_posts($me['id']);
                    cl_update_user_data($me['id'], array(
                        'last_post' => 0
                    ));
                }
            }

            else {
                $data['code']    = 500;
                $data['message'] = "Media file is missing or invalid";
                $data['data']    = array();
            }
        }
    }
}


    


    I didn't write this code, so I don't know anything about it.
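
    A plausible repair, for reference, is to attempt the thumbnail step only when an ffmpeg binary is actually configured, and treat the poster as optional otherwise. This sketch reuses the script's own helpers (cl_full_path, cl_gen_path and the FFmpeg wrapper are specific to this script, not a standard library):

    // Sketch: replaces the thumbnail section inside the video branch, so
    // an unconfigured ffmpeg binary no longer aborts the whole upload.
    $thumb_path = "";

    if (!empty($config['ffmpeg_binary'])) {
        try {
            require_once(cl_full_path("core/libs/ffmpeg-php/vendor/autoload.php"));
            $ffmpeg     = new FFmpeg(cl_full_path($config['ffmpeg_binary']));
            $thumb_path = cl_gen_path(array(
                "folder"    => "images",
                "file_ext"  => "jpeg",
                "file_type" => "image",
                "slug"      => "poster",
            ));
            $ffmpeg->input($file_upload['filename']);
            $ffmpeg->set('-ss', '3');       // seek 3 seconds in
            $ffmpeg->set('-vframes', '1');  // grab one frame
            $ffmpeg->set('-f', 'mjpeg');    // write it as JPEG
            $ffmpeg->output($thumb_path)->ready();
        } catch (Exception $e) {
            $thumb_path = ""; // poster failed; keep the upload anyway
        }
    }
    // ...then run the cl_db_insert() block unconditionally, storing
    // $thumb_path (possibly empty) as the poster.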

    


  • What is Multi-Touch Attribution? (And How To Get Started)

    2 February 2023, by Erin — Analytics Tips

    Good marketing thrives on data. Or more precisely — its interpretation. Using modern analytics software, we can determine which marketing actions steer prospects towards the desired action (a conversion event). 

    An attribution model in marketing is a set of rules that determine how various marketing tactics and channels impact the visitor’s progress towards a conversion. 

    Yet, as customer journeys become more complicated and involve multiple “touches”, standard marketing reports no longer tell the full picture. 

    That’s when multi-touch attribution analysis comes to the fore. 

    What is Multi-Touch Attribution?

    Multi-touch attribution (also known as multi-channel attribution or cross-channel attribution) measures the impact that each touchpoint on the consumer journey has on conversion.

    Unlike single-touch reporting, multi-touch attribution models give credit to each marketing element — a social media ad, an on-site banner, an email link click, etc. By seeing impacts from every touchpoint and channel, marketers can avoid false assumptions or subpar budget allocations.

    To better understand the concept, let’s interpret the same customer journey using a standard single-touch report vs a multi-touch attribution model. 

    Picture this: Jammie is shopping around for a privacy-centred web analytics solution. She sees a recommendation on Twitter and ends up on the Matomo website. After browsing a few product pages and checking comparisons with other web analytics tools, she signs up for a webinar. One week after attending, Jammie is convinced that Matomo is the right tool for her business, goes directly to the Matomo website and starts a free trial.

    • A standard single-touch report would attribute 100% of the conversion to direct traffic, which doesn’t give an accurate view of the multiple touchpoints that led Jammie to start a free trial. 
    • A multi-channel attribution report would showcase all the channels involved in the free trial conversion — social media, website content, the webinar, and then the direct traffic source.

    In other words: multi-touch attribution helps you understand how prospects move through the sales funnel and which elements steer them towards the desired outcome.

    Types of Attribution Models

    As marketers, we know that multiple factors play into a conversion — channel type, timing, user’s stage on the buyer journey and so on. Various attribution models exist to reflect this variability. 


    First Interaction attribution model (otherwise known as first touch) gives all credit for the conversion to the first channel (for example — a referral link) and doesn’t report on all the other interactions a user had with your company (e.g., clicked a newsletter link, engaged with a landing page, or browsed the blog campaign).

    First-touch helps optimise the top of your funnel and establish which channels bring the best leads. However, it doesn’t offer any insight into other factors that persuaded a user to convert. 

    Last Interaction attribution model (also known as last touch) allocates 100% credit to the last channel before conversion — be it direct traffic, paid ad, or an internal product page.

    The data is useful for optimising the bottom-of-the-funnel (BoFU) elements. But you have no visibility into assisted conversions — interactions a user had prior to conversion. 

    Last Non-Direct attribution model excludes direct traffic and assigns 100% credit for a conversion to the last channel a user interacted with before converting. For instance, a social media post will receive 100% of the credit if a shopper buys a product three days later.

    This model is more telling about the other channels involved in the sales process. Yet, you’re seeing only one step backwards, which may not be sufficient for companies with longer sales cycles.

    Linear attribution model distributes credit equally between all tracked touchpoints.

    For instance, with a four touchpoint conversion (e.g., an organic visit, then a direct visit, then a social visit, then a visit and conversion from an ad campaign) each touchpoint would receive 25% credit for that single conversion.

    This is the simplest multi-channel attribution modelling technique many tools support. The nuance is that linear models don’t reflect the true impact of various events. After all, a paid ad that introduced your brand to the shopper and a time-sensitive discount code at the checkout page probably did more than the blog content a shopper browsed in between. 

    Position Based attribution model allocates a 40% credit to the first and the last touchpoints and then spreads the remaining 20% across the touchpoints between the first and last. 

    This attribution model comes in handy for optimising conversions across the top and the bottom of the funnel. But it doesn’t provide much insight into the middle, which can skew your decision-making. For instance, you may overlook cases when a shopper landed via a social media post, then was re-engaged via email, and proceeded to checkout after an organic visit. Without email marketing, that sale may not have happened.

    Time decay attribution model adjusts the credit based on the timing of the interactions. Touchpoints closest to the conversion get the highest score, while the earliest ones get less weight (e.g., 5%-5%-10%-15%-25%-30%).

    This multi-channel attribution model works great for tracking the bottom of the funnel, but it underestimates the impact of brand awareness campaigns or assisted conversions at mid-stage. 
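
    To make the differences concrete, here is a small illustrative sketch (not Matomo’s implementation; the channel names are placeholders, and each channel is assumed to appear once per journey) that spreads one conversion’s credit across an ordered list of touchpoints under the linear and position-based models described above:

    <?php
    // Illustrative credit allocation for a single conversion across
    // ordered touchpoints (first ... last). Not Matomo's implementation.
    function linear_credit(array $touches): array {
        return array_fill_keys($touches, 1.0 / count($touches));
    }

    function position_based_credit(array $touches): array {
        $n = count($touches);
        if ($n <= 2) return linear_credit($touches);          // no middle
        $credit = array_fill_keys($touches, 0.2 / ($n - 2));  // middle shares 20%
        $credit[$touches[0]]      = 0.4;                      // first touch: 40%
        $credit[$touches[$n - 1]] = 0.4;                      // last touch: 40%
        return $credit;
    }

    $journey = ['social', 'organic', 'email', 'direct'];
    print_r(linear_credit($journey));         // 0.25 each
    print_r(position_based_credit($journey)); // 0.4 / 0.1 / 0.1 / 0.4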

    Why Use Multi-Touch Attribution Modelling

    Multi-touch attribution provides you with the full picture of your funnel. With accurate data across all touchpoints, you can employ targeted conversion rate optimisation (CRO) strategies to maximise the impact of each campaign. 

    Most marketers and analysts prefer using multi-touch attribution modelling — and for some good reasons.

    Issues multi-touch attribution solves 

    • Funnel visibility. Understand which tactics play an important role at the top, middle and bottom of your funnel, instead of second-guessing what’s working or not. 
    • Budget allocations. Spend money on channels and tactics that bring a positive return on investment (ROI). 
    • Assisted conversions. Learn how different elements and touchpoints cumulatively contribute to the ultimate goal — a conversion event — to optimise accordingly. 
    • Channel segmentation. Determine which assets drive the most qualified and engaged leads to replicate them at scale.
    • Campaign benchmarking. Compare how different marketing activities from affiliate marketing to social media perform against the same metrics.

    How To Get Started With Multi-Touch Attribution 

    To make multi-touch attribution part of your analytics setup, follow these steps:

    1. Define Your Marketing Objectives 

    Multi-touch attribution helps you better understand what led people to convert on your site. But to capture that, you first need to map the standard purchase journeys, which include a series of touchpoints — instances when a prospect forms an opinion about your business.

    Touchpoints include:

    • On-site interactions (e.g., reading a blog post, browsing product pages, using an on-site calculator, etc.)
    • Off-site interactions (e.g., reading a review, clicking a social media link, interacting with an ad, etc.)

    Combined, these interactions make up your sales funnel — a designated path you’ve set up to lead people toward the desired action (aka a conversion).

    Depending on your business model, you can count any of the following as a conversion:

    • Purchase 
    • Account registration 
    • Free trial request 
    • Contact form submission 
    • Online reservation 
    • Demo call request 
    • Newsletter subscription

    So your first task is to create a set of conversion objectives for your business and add them as Goals or Conversions in your web analytics solution. Then brainstorm how various touchpoints contribute to these objectives. 

    Web analytics tools with multi-channel attribution, like Matomo, allow you to obtain an extra dimension of data on touchpoints via Tracked Events. Using Event Tracking, you can analyse how many people started doing a desired action (e.g., typing details into the form) but never completed the task. This way you can quickly identify “leaking” touchpoints in your funnel and fix them. 
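
    As a sketch of what such event tracking can look like from the server side (assuming the official matomo/matomo-php-tracker package; the site ID, endpoint URL and event labels are placeholders):

    <?php
    // Record "started" vs "completed" form events so abandoned
    // touchpoints show up in reports. Assumes matomo/matomo-php-tracker.
    require 'vendor/autoload.php';

    $tracker = new MatomoTracker(1, 'https://analytics.example.com/');
    $tracker->doTrackEvent('Signup Form', 'Started', 'free-trial');
    // ...later, when the form is actually submitted:
    $tracker->doTrackEvent('Signup Form', 'Completed', 'free-trial');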

    2. Select an Attribution Model 

    Multi-touch attribution models have inherent tradeoffs. The linear attribution model doesn’t always represent the role and importance of each channel. The position-based attribution model emphasises the role of the first and last channels while diminishing the importance of assisted conversions. The time-decay model, on the contrary, downplays the role that awareness-related campaigns played.

    To select the right attribution model for your business, consider your objectives. Is it more important for you to understand your best top-of-funnel channels to optimise customer acquisition costs (CAC)? Or would you rather maximise your on-site conversion rates?

    Your industry and the average cycle length should also guide your choice. Position-based models can work best for eCommerce and SaaS businesses where both CAC and on-site conversion rates play an important role. Manufacturing companies or educational services providers, on the contrary, will benefit more from a time-decay model as it better represents the lengthy sales cycles. 

    3. Collect and Organise Data From All Touchpoints 

    Multi-touch attribution models are based on available funnel data. So to get started, you will need to determine which data sources you have and how to best leverage them for attribution modelling. 

    Types of data you should collect:

    • General web analytics data: Insights on visitors’ on-site actions — visited pages, clicked links, form submissions and more.
    • Goals (Conversions): Reports on successful conversions across different types of assets. 
    • Behavioural user data: Some tools also offer advanced features such as heatmaps, session recordings and A/B tests. These too provide ample data on user behaviour, which you can use to map and optimise various touchpoints.

    You can also implement extra tracking, for instance for contact form submissions, live chat contacts or email marketing campaigns to identify repeat users in your system. Just remember to stay on the good side of data protection laws and respect your visitors’ privacy. 

    Separately, you can obtain top-of-the-funnel data by analysing referral traffic sources (channel, campaign type, keyword used, etc.). A Tag Manager comes in handy, as it allows you to zoom in on particular assets (e.g., a newsletter, an affiliate, a social campaign, etc.).

    Combined, these data points can be parsed by an app supporting multi-touch attribution (or a custom algorithm) and reported back to you as specific findings.

    Sounds easy, right? Well, the devil is in the details. Getting ample, accurate data for multi-touch attribution modelling isn’t easy.

    Marketing analytics has an accuracy problem, mainly for two reasons:

    • Cookie consent banner rejection 
    • Data sampling application

    Please note that we are not able to provide legal advice, so it’s important that you consult with your own DPO to ensure compliance with all relevant laws and regulations.

    If you’re collecting web analytics in the EU, you know that showing a cookie consent banner is a GDPR must-do. But many consumers don’t rush to accept cookie consent banners. The average consent rate for cookies in 2021 stood at 54% in Italy, 45% in France, and 44% in Germany. Consent rates are likely lower in 2023, as Google was forced to roll out a “reject all” button for cookie tracking in Europe, while privacy organisations lodge complaints against individual businesses for deceptive banners.

    For marketers, cookie rejection means substantial gaps in analytics data. The good news is that you can fill in those gaps by using a privacy-centred web analytics tool like Matomo. 

    Matomo takes extra safeguards to protect user privacy and supports fully cookieless tracking. Because of that, Matomo is legally exempt from tracking consent requirements in France. Plus, you can configure our analytics tool to run without consent banners in other markets outside of Germany and the UK. This way you retain the data you need for audience modelling without breaching any privacy regulations.

    Data sampling partially stems from the above. When a web analytics or multi-channel attribution tool cannot secure first-hand data, the “guessing game” begins. Google Analytics, as well as other tools, often relies on synthetic, AI-generated data to fill in the reporting gaps. As a result, your multi-touch attribution model doesn’t depict the real state of affairs; instead, it shows AI-produced guesstimates of what transpired wherever not enough real-world evidence is available.

    4. Evaluate and Select an Attribution Tool 

    Google Analytics (GA) offers several multi-touch attribution models for free (linear, time-decay and position-based). The disadvantage of GA’s multi-touch attribution is its lower accuracy, due to cookie rejection and data sampling.

    At the same time, you cannot create custom credit allocations for the proposed models unless you have the paid version of GA, Google Analytics 360. This version of GA comes with a custom Attribution Modeling Tool (AMT). The price tag, however, starts at USD 50,000 per year.

    Matomo Cloud offers multi-channel conversion attribution as a built-in feature; for Matomo On-Premise it is available as a plug-in on the marketplace. We support linear, position-based, first-interaction, last-interaction, last non-direct and time-decay modelling, based fully on first-hand data. You also get more precise insights because cookie consent isn’t an issue with us.

    Most multi-channel attribution tools, like Google Analytics and Matomo, provide out-of-the-box multi-touch attribution models. But other tools, like Matomo On-Premise, also provide full access to raw data so you can develop your own multi-touch attribution models and do custom attribution analysis. The ability to create custom attribution analysis is particularly beneficial for data analysts or organisations with complex and unique buyer journeys. 

    Conclusion

    Ultimately, multi-channel attribution gives marketers greater visibility into the customer journey. By analysing multiple touchpoints, you can establish how various marketing efforts contribute to conversions. Then use this information to inform your promotional strategy, budget allocations and CRO efforts. 

    The key to benefiting the most from multi-touch attribution is accurate data. If your analytics solution isn’t telling you the full story, your multi-touch model won’t either. 

    Collect accurate visitor data for multi-touch attribution modelling with Matomo. Start your free 21-day trial now