On other sites

  • Using PyAV to encode mono audio to file, params match docs, but still causes Errno 22

    20 February 2023, by andrew8088

    While trying to use PyAV to encode live mono audio from a microphone to a compressed audio stream (using mp2 or flac as the encoder), the program kept raising ValueError: [Errno 22] Invalid argument.

    To remove the live microphone source as a cause of the problem, and to make the problematic code easier for others to run/test, I have removed the mic source and now just generate a pure tone as a sequence of input buffers.

    All attempts to figure out the missing or mismatched or incorrect argument have just resulted in seeing documentation and examples that are the same as my code.

    I would like to know from someone who has used PyAV successfully for mono audio what the correct method and parameters are for encoding mono frames into the mono stream.

    The package used is av 10.0.0, installed with
pip3 install av --no-binary av
so that it uses my package-manager-provided ffmpeg library, which is version 4.2.7.

    The problematic Python code is:

    #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Recreating an error 22 when encoding sound with PyAV.

Created on Sun Feb 19 08:10:29 2023
@author: andrewm
"""
import typing
import sys
import math
import fractions

import av
from av import AudioFrame

""" Ensure some PyAudio constants are still defined without changing 
    the PyAudio recording callback function and without depending 
    on PyAudio simply for reproducing the PyAV bug [Errno 22] thrown in 
    File "av/filter/context.pyx", line 89, in av.filter.context.FilterContext.push
"""
class PA_Stub():
    paContinue = True
    paComplete = False

pyaudio = PA_Stub()


"""Generate pure tone at given frequency with amplitude 0...1.0 at 
   sampling frequency fs and beginning at phase offset 'phase'.
   Returns the new phase after the sinusoid has cycled over the 
   sampling window length.
"""
def generate_tone(
        freq:int, phase:float, amp:float, fs, samp_fmt, buffer:bytearray
) -> float:
    assert samp_fmt == "s16", "Only s16 supported atm"
    samp_size_bytes = 2
    n_samples = int(len(buffer)/samp_size_bytes)
    window = [int(0) for i in range(n_samples)]
    theta = phase
    phase_inc = 2*math.pi * freq / fs
    for i in range(n_samples):
        v = amp * math.sin(theta)
        theta += phase_inc
        s = int((2**15-1)*v)
        window[i] = s
    for sample_i in range(len(window)):
        byte_i = sample_i * samp_size_bytes
        enc = window[sample_i].to_bytes(
                2, byteorder=sys.byteorder, signed=True
        )
        buffer[byte_i] = enc[0]
        buffer[byte_i+1] = enc[1]
    return theta


channels = 1
fs = 44100  # Record at 44100 samples per second
fft_size_samps = 256
chunk_samps = fft_size_samps * 10  # Record in chunks that are multiples of fft windows.

# print(f"fft_size_samps={fft_size_samps}\nchunk_samps={chunk_samps}")

seconds = 3.0
out_filename = "testoutput.wav"

# Store data in chunks for 3 seconds
sample_limit = int(fs * seconds)
sample_len = 0
frames = []  # Initialize array to store frames

ffmpeg_codec_name = 'mp2'  # flac, mp3, or libvorbis make same error.

sample_size_bytes = 2
buffer = bytearray(int(chunk_samps*sample_size_bytes))
chunkperiod = chunk_samps / fs
total_chunks = int(math.ceil(seconds / chunkperiod))
phase = 0.0

### uncomment if you want to see the synthetic data being used as a mic input.
# with open("test.raw","wb") as raw_out:
#     for ci in range(total_chunks):
#         phase = generate_tone(2600, phase, 0.8, fs, "s16", buffer)
#         raw_out.write(buffer)
# print("finished gen test")
# sys.exit(0)
# #---- 

# Using mp2 or mkv as the container format gets the same error.
with av.open(out_filename+'.mp2', "w", format="mp2") as output_con:
    output_con.metadata["title"] = "My title"
    output_con.metadata["key"] = "value"
    channel_layout = "mono"
    sample_fmt = "s16p"

    ostream = output_con.add_stream(ffmpeg_codec_name, fs, layout=channel_layout)
    assert ostream is not None, "No stream!"
    cctx = ostream.codec_context
    cctx.sample_rate = fs
    cctx.time_base = fractions.Fraction(numerator=1,denominator=fs)
    cctx.format = sample_fmt
    cctx.channels = channels
    cctx.layout = channel_layout
    print(cctx, f"layout#{cctx.channel_layout}")
    
    # Define PyAudio-style callback for recording plus PyAV transcoding.
    def rec_callback(in_data, frame_count, time_info, status):
        global sample_len
        global ostream
        frames.append(in_data)
        nsamples = int(len(in_data) / (channels*sample_size_bytes))
        
        frame = AudioFrame(format=sample_fmt, layout=channel_layout, samples=nsamples)
        frame.sample_rate = fs
        frame.time_base = fractions.Fraction(numerator=1,denominator=fs)
        frame.pts = sample_len
        frame.planes[0].update(in_data)
        print(frame, len(in_data))
        
        for out_packet in ostream.encode(frame):
            output_con.mux(out_packet)
        for out_packet in ostream.encode(None):
            output_con.mux(out_packet)
        
        sample_len += nsamples
        retflag = pyaudio.paContinue if sample_len < sample_limit else pyaudio.paComplete
        return (in_data, retflag)

    # NOTE: the tail of the original listing was truncated by HTML escaping;
    # this driver loop is reconstructed from the stack trace (line 147 calls
    # rec_callback) and the console output shown below.
    print("Beginning")
    for ci in range(total_chunks):
        phase = generate_tone(2600, phase, 0.8, fs, "s16", buffer)
        ret_data, ret_flag = rec_callback(buffer, ci, {}, 1)

    If you uncomment the RAW output part you will find the generated data can be imported as PCM s16 Mono 44100Hz into Audacity and plays the expected tone, so the generated audio data does not seem to be the problem.

    The normal program console output up until the exception is:

<av.AudioCodecContext ... mp2 at 0x7f8e38202cf0> layout#4
Beginning
<av.AudioFrame ...> 5120
<av.AudioFrame ...> 5120

    The stack trace is:

Traceback (most recent call last):
  File "Dev/multichan_recording/av_encode.py", line 147, in <module>
    ret_data, ret_flag = rec_callback(buffer, ci, {}, 1)
  File "Dev/multichan_recording/av_encode.py", line 121, in rec_callback
    for out_packet in ostream.encode(frame):
  File "av/stream.pyx", line 153, in av.stream.Stream.encode
  File "av/codec/context.pyx", line 484, in av.codec.context.CodecContext.encode
  File "av/audio/codeccontext.pyx", line 42, in av.audio.codeccontext.AudioCodecContext._prepare_frames_for_encode
  File "av/audio/resampler.pyx", line 101, in av.audio.resampler.AudioResampler.resample
  File "av/filter/graph.pyx", line 211, in av.filter.graph.Graph.push
  File "av/filter/context.pyx", line 89, in av.filter.context.FilterContext.push
  File "av/error.pyx", line 336, in av.error.err_check
ValueError: [Errno 22] Invalid argument

    Edit: It's interesting that the error happens on the 2nd AudioFrame; the first one apparently encodes fine, even though both frames are given the same attribute values apart from the presentation timestamp (pts). Leaving the pts out and letting PyAV/ffmpeg generate it does not fix the error, so an incorrect PTS does not seem to be the cause.

    After a brief glance at av/filter/context.pyx, the exception must come from a bad return value from res = lib.av_buffersrc_write_frame(self.ptr, frame.ptr).

    Trying to dig into av_buffersrc_write_frame in the ffmpeg source, it is not clear what could be causing this error. The only obvious candidate is a mismatch between channel layouts, but my code sets the layout identically on the Stream and the Frame. That problem had already been found by the old question "pyav - cannot save stream as mono", and its answer (that one required parameter is undocumented) is the only reason the code now passes the layout='mono' argument when making the stream.

    The program output shows layout #4 is being used, and from https://github.com/FFmpeg/FFmpeg/blob/release/4.2/libavutil/channel_layout.h you can see this is the value of the symbol AV_CH_FRONT_CENTER, which is the only channel in the MONO layout.

    The mismatch is surely some other object property or an undocumented parameter requirement.

    How do you encode mono audio to a compressed stream with PyAV?
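    One detail in the listing stands out: ostream.encode(None) is the end-of-stream flush, and the callback calls it after every chunk, so the encoder and its internal resampler may already be closed when the second frame arrives, which would match the failure on the 2nd AudioFrame. Below is a hedged sketch of a conventional PyAV mono encode loop that flushes only once; this is an assumption rather than a verified fix, the input is placeholder silence, and the packed s16 frame format (instead of the post's s16p) is likewise an assumption.

# Hedged sketch, not a verified fix: encode s16 mono chunks with PyAV,
# calling the end-of-stream flush encode(None) exactly once.
import av
from av import AudioFrame

fs = 44100
chunks = [bytes(5120 * 2) for _ in range(10)]  # placeholder: s16 mono silence

with av.open("sketch_out.mp2", "w", format="mp2") as container:
    stream = container.add_stream("mp2", fs, layout="mono")
    samples_written = 0
    for chunk in chunks:
        frame = AudioFrame(format="s16", layout="mono", samples=len(chunk) // 2)
        frame.sample_rate = fs
        frame.pts = samples_written
        frame.planes[0].update(chunk)
        samples_written += frame.samples
        for packet in stream.encode(frame):  # encode this frame only
            container.mux(packet)
    for packet in stream.encode(None):  # flush once, after the last frame
        container.mux(packet)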

  • What permission does ffmpeg-static need in AWS Lambda?

    17 February 2023, by János

    I have this code. It downloads an image, makes a video from it and uploads it to S3. It runs on Lambda. I added the packages, installed, zipped, and uploaded:

npm install --production
zip -r my-lambda-function.zip ./

    But I get error code 126:

2023-02-17T09:27:55.236Z    5c845bb6-02c1-41b0-8759-4459591b57b0    INFO    Error: ffmpeg exited with code 126
    at ChildProcess.<anonymous> (/var/task/node_modules/fluent-ffmpeg/lib/processor.js:182:22)
    at ChildProcess.emit (node:events:513:28)
    at ChildProcess._handle.onexit (node:internal/child_process:291:12)

    Do I need to set a specific permission for ffmpeg?

import { PutObjectCommand, S3Client } from '@aws-sdk/client-s3'
import { fromNodeProviderChain } from '@aws-sdk/credential-providers'
import axios from 'axios'
import pathToFfmpeg from 'ffmpeg-static'
import ffmpeg from 'fluent-ffmpeg'
import fs from 'fs'

ffmpeg.setFfmpegPath(pathToFfmpeg)

const credentials = fromNodeProviderChain({
    clientConfig: {
        region: 'eu-central-1',
    },
})
const client = new S3Client({ credentials })

export const handler = async (event, context) => {
    try {
        let body
        let statusCode = 200
        const query = event?.queryStringParameters
        if (!query?.imgId && !query?.video1Id && !query?.video2Id) {
            return
        }

        const imgId = query?.imgId
        const video1Id = query?.video1Id
        const video2Id = query?.video2Id
        console.log(
            `Parameters received, imgId: ${imgId}, video1Id: ${video1Id}, video2Id: ${video2Id}`
        )
        const imgURL = getFileURL(imgId)
        const video1URL = getFileURL(`${video1Id}.mp4`)
        const video2URL = getFileURL(`${video2Id}.mp4`)
        const imagePath = `/tmp/${imgId}`
        const video1Path = `/tmp/${video1Id}.mp4`
        const video2Path = `/tmp/${video2Id}.mp4`
        const outputPath = `/tmp/${imgId}.mp4`
        await Promise.all([
            downloadFile(imgURL, imagePath),
            downloadFile(video1URL, video1Path),
            downloadFile(video2URL, video2Path),
        ])
        await new Promise((resolve, reject) => {
            console.log('Input files downloaded')
            ffmpeg()
                .input(imagePath)
                .inputFormat('image2')
                .inputFPS(30)
                .loop(1)
                .size('1080x1080')
                .videoCodec('libx264')
                .format('mp4')
                .outputOptions([
                    '-tune animation',
                    '-pix_fmt yuv420p',
                    '-profile:v baseline',
                    '-level 3.0',
                    '-preset medium',
                    '-crf 23',
                    '-movflags +faststart',
                    '-y',
                ])
                .output(outputPath)
                .on('end', () => {
                    console.log('Output file generated')
                    resolve()
                })
                .on('error', (e) => {
                    console.log(e)
                    reject()
                })
                .run()
        })
        await uploadFile(outputPath, imgId + '.mp4')
            .then((url) => {
                body = JSON.stringify({
                    url,
                })
            })
            .catch((error) => {
                console.error(error)
                statusCode = 400
                body = error?.message ?? error
            })
        console.log(`File uploaded to S3`)
        const headers = {
            'Content-Type': 'application/json',
            'Access-Control-Allow-Headers': 'Content-Type',
            'Access-Control-Allow-Origin': 'https://tikex.com, https://borespiac.hu',
            'Access-Control-Allow-Methods': 'GET',
        }
        return {
            statusCode,
            body,
            headers,
        }
    } catch (error) {
        console.error(error)
        return {
            statusCode: 500,
            body: JSON.stringify('Error fetching data'),
        }
    }
}

const downloadFile = async (url, path) => {
    try {
        console.log(`Download will start: ${url}`)
        const response = await axios(url, {
            responseType: 'stream',
        })
        if (response.status !== 200) {
            throw new Error(
                `Failed to download file, status code: ${response.status}`
            )
        }
        response.data
            .pipe(fs.createWriteStream(path))
            .on('finish', () => console.log(`File downloaded to ${path}`))
            .on('error', (e) => {
                throw new Error(`Failed to save file: ${e}`)
            })
    } catch (e) {
        console.error(`Error downloading file: ${e}`)
    }
}

const uploadFile = async (path, id) => {
    const buffer = fs.readFileSync(path)
    const params = {
        Bucket: 't44-post-cover',
        ACL: 'public-read',
        Key: id,
        ContentType: 'video/mp4',
        Body: buffer,
    }
    await client.send(new PutObjectCommand(params))
    return getFileURL(id)
}

const getFileURL = (id) => {
    const bucket = 't44-post-cover'
    const url = `https://${bucket}.s3.eu-central-1.amazonaws.com/${id}`
    return url
}

    I added the AWSLambdaBasicExecutionRole-16e770c8-05fa-4c42-9819-12c468cb5b49 permission, with this policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:eu-central-1:634617701827:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:eu-central-1:634617701827:log-group:/aws/lambda/promo-video-composer-2:*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:*:*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeNetworkInterfaces"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "sns:*"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "cloudwatch:*"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

    What am I missing?

janoskukoda@Janoss-MacBook-Pro promo-video-composer-2 % ls -l $(which ffmpeg)
lrwxr-xr-x  1 janoskukoda  admin  35 Feb 10 12:50 /opt/homebrew/bin/ffmpeg -> ../Cellar/ffmpeg/5.1.2_4/bin/ffmpeg
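    For context, exit code 126 from a shell means the command was found but could not be executed, which points at file permissions or an incompatible binary rather than IAM policy. Two hedged guesses, not verified fixes: the execute bit on the ffmpeg-static binary may have been lost while zipping, or, since the listing above shows a Homebrew install on macOS, the bundled binary may be a darwin build that Lambda's Linux runtime cannot execute (reinstalling the dependencies on Linux would rule that out). A sketch of the permissions workaround follows, written in Python for brevity even though the function itself is Node.js; the paths are placeholders.

# Hedged sketch (placeholder paths): make a bundled ffmpeg executable in Lambda.
# /var/task is read-only, so copy the binary to /tmp and set the execute bit there.
import os
import shutil
import stat

BUNDLED_FFMPEG = "/var/task/node_modules/ffmpeg-static/ffmpeg"  # placeholder path
RUNTIME_FFMPEG = "/tmp/ffmpeg"

def ensure_executable_ffmpeg() -> str:
    if not os.path.exists(RUNTIME_FFMPEG):
        shutil.copy2(BUNDLED_FFMPEG, RUNTIME_FFMPEG)
        mode = os.stat(RUNTIME_FFMPEG).st_mode
        os.chmod(RUNTIME_FFMPEG, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    return RUNTIME_FFMPEG

# Usage: point the ffmpeg wrapper at the copied binary instead of the bundled one,
# e.g. subprocess.run([ensure_executable_ffmpeg(), "-version"], check=True)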

  • What is Multi-Touch Attribution? (And How To Get Started)

    2 February 2023, by Erin — Analytics Tips

    Good marketing thrives on data. Or more precisely — its interpretation. Using modern analytics software, we can determine which marketing actions steer prospects towards the desired action (a conversion event). 

    An attribution model in marketing is a set of rules that determine how various marketing tactics and channels impact the visitor’s progress towards a conversion. 

    Yet, as customer journeys become more complicated and involve multiple “touches”, standard marketing reports no longer tell the full story.

    That’s when multi-touch attribution analysis comes to the fore. 

    What is Multi-Touch Attribution?

    Multi-touch attribution (also known as multi-channel attribution or cross-channel attribution) measures the impact of every touchpoint along the consumer journey on the conversion.

    Unlike single-touch reporting, multi-touch attribution models give credit to each marketing element — a social media ad, an on-site banner, an email link click, etc. By seeing impacts from every touchpoint and channel, marketers can avoid false assumptions or subpar budget allocations.

    To better understand the concept, let’s interpret the same customer journey using a standard single-touch report vs a multi-touch attribution model. 

    Picture this: Jammie is shopping around for a privacy-centred web analytics solution. She sees a recommendation on Twitter and ends up on the Matomo website. After browsing a few product pages and checking comparisons with other web analytics tools, she signs up for a webinar. One week after attending, Jammie is convinced that Matomo is the right tool for her business, goes directly to the Matomo website and starts a free trial.

    • A standard single-touch report would attribute 100% of the conversion to direct traffic, which doesn’t give an accurate view of the multiple touchpoints that led Jammie to start a free trial. 
    • A multi-channel attribution report would showcase all the channels involved in the free trial conversion — social media, website content, the webinar, and then the direct traffic source.

    In other words: multi-touch attribution helps you understand how prospects move through the sales funnel and which elements steer them towards the desired outcome.

    Types of Attribution Models

    As marketers, we know that multiple factors play into a conversion — channel type, timing, the user's stage in the buyer journey and so on. Various attribution models exist to reflect this variability.

    First Interaction attribution model (otherwise known as first touch) gives all credit for the conversion to the first channel (for example — a referral link) and doesn't report on all the other interactions a user had with your company (e.g., clicked a newsletter link, engaged with a landing page, or browsed the blog).

    First-touch helps optimise the top of your funnel and establish which channels bring the best leads. However, it doesn’t offer any insight into other factors that persuaded a user to convert. 

    Last Interaction attribution model (also known as last touch) allocates 100% credit to the last channel before conversion — be it direct traffic, paid ad, or an internal product page.

    The data is useful for optimising the bottom-of-the-funnel (BoFU) elements. But you have no visibility into assisted conversions — interactions a user had prior to conversion. 

    Last Non-Direct attribution model excludes direct traffic and assigns 100% credit for a conversion to the last channel a user interacted with before converting. For instance, a social media post will receive 100% of the credit if a shopper buys a product three days later.

    This model is more telling about the other channels involved in the sales process. Yet you're seeing only one step backwards, which may not be sufficient for companies with longer sales cycles.

    Linear attribution model distributes equal credit for a conversion across all tracked touchpoints.

    For instance, with a four-touchpoint conversion (e.g., an organic visit, then a direct visit, then a social visit, then a visit and conversion from an ad campaign), each touchpoint would receive 25% credit for that single conversion.

    This is the simplest multi-channel attribution modelling technique many tools support. The nuance is that linear models don’t reflect the true impact of various events. After all, a paid ad that introduced your brand to the shopper and a time-sensitive discount code at the checkout page probably did more than the blog content a shopper browsed in between. 

    Position-Based attribution model allocates 40% of the credit to each of the first and last touchpoints, then spreads the remaining 20% across the touchpoints between them.

    This attribution model comes in handy for optimising conversions across the top and the bottom of the funnel. But it doesn’t provide much insight into the middle, which can skew your decision-making. For instance, you may overlook cases when a shopper landed via a social media post, then was re-engaged via email, and proceeded to checkout after an organic visit. Without email marketing, that sale may not have happened.

    Time decay attribution model adjusts the credit based on the timing of the interactions. Touchpoints closest to the conversion get the highest score, while the earliest ones get less weight (e.g., 5%-5%-10%-15%-25%-40%).

    This multi-channel attribution model works great for tracking the bottom of the funnel, but it underestimates the impact of brand awareness campaigns or assisted conversions at mid-stage. 
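    To make the credit rules above concrete, here is an illustrative sketch (not from the article) that allocates credit for one conversion path under the linear, position-based and time-decay models. The channel names, the 7-day half-life and the day offsets are invented for the example.

# Illustrative sketch: credit allocation for one conversion path under three
# multi-touch attribution models. Touchpoint names are hypothetical and must
# be unique within a path (they are used as dict keys).

def linear(touchpoints):
    # Every touchpoint gets an equal share of the conversion.
    share = 1.0 / len(touchpoints)
    return {t: share for t in touchpoints}

def position_based(touchpoints, endpoint_share=0.4):
    # 40% each to the first and last touchpoints; the middle splits the rest.
    credits = {t: 0.0 for t in touchpoints}
    if len(touchpoints) == 1:
        credits[touchpoints[0]] = 1.0
        return credits
    middle = touchpoints[1:-1]
    remainder = 1.0 - 2 * endpoint_share
    bonus = 0.0 if middle else remainder / 2  # no middle: endpoints split the rest
    credits[touchpoints[0]] += endpoint_share + bonus
    credits[touchpoints[-1]] += endpoint_share + bonus
    for t in middle:
        credits[t] += remainder / len(middle)
    return credits

def time_decay(touchpoints, days_before_conversion, half_life_days=7.0):
    # Touchpoints closer to the conversion weigh more: weight halves every
    # half_life_days, then the weights are normalised to sum to 1.
    weights = [0.5 ** (d / half_life_days) for d in days_before_conversion]
    total = sum(weights)
    return {t: w / total for t, w in zip(touchpoints, weights)}

path = ["twitter_ad", "blog_post", "email_link", "direct_visit"]
print(linear(path))                      # 0.25 each
print(position_based(path))              # 0.4 / 0.1 / 0.1 / 0.4
print(time_decay(path, [21, 14, 3, 0]))  # most credit to the latest visits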

    Why Use Multi-Touch Attribution Modelling

    Multi-touch attribution provides you with the full picture of your funnel. With accurate data across all touchpoints, you can employ targeted conversion rate optimisation (CRO) strategies to maximise the impact of each campaign. 

    Most marketers and analysts prefer using multi-touch attribution modelling — and for some good reasons.

    Issues multi-touch attribution solves 

    • Funnel visibility. Understand which tactics play an important role at the top, middle and bottom of your funnel, instead of second-guessing what’s working or not. 
    • Budget allocations. Spend money on channels and tactics that bring a positive return on investment (ROI). 
    • Assisted conversions. Learn how different elements and touchpoints cumulatively contribute to the ultimate goal — a conversion event — to optimise accordingly. 
    • Channel segmentation. Determine which assets drive the most qualified and engaged leads to replicate them at scale.
    • Campaign benchmarking. Compare how different marketing activities from affiliate marketing to social media perform against the same metrics.

    How To Get Started With Multi-Touch Attribution 

    To make multi-touch attribution part of your analytics setup, follow these steps:

    1. Define Your Marketing Objectives 

    Multi-touch attribution helps you better understand what led people to convert on your site. But to capture that, you need to first map the standard purchase journeys, which include a series of touchpoints — instances when a prospect forms an opinion about your business.

    Touchpoints include:

    • On-site interactions (e.g., reading a blog post, browsing product pages, using an on-site calculator, etc.)
    • Off-site interactions (e.g., reading a review, clicking a social media link, interacting with an ad, etc.)

    Combined, these interactions make up your sales funnel — a designated path you've set up to lead people toward the desired action (aka a conversion).

    Depending on your business model, you can count any of the following as a conversion:

    • Purchase 
    • Account registration 
    • Free trial request 
    • Contact form submission 
    • Online reservation 
    • Demo call request 
    • Newsletter subscription

    So your first task is to create a set of conversion objectives for your business and add them as Goals or Conversions in your web analytics solution. Then brainstorm how various touchpoints contribute to these objectives. 

    Web analytics tools with multi-channel attribution, like Matomo, allow you to obtain an extra dimension of data on touchpoints via Tracked Events. Using Event Tracking, you can analyse how many people started doing a desired action (e.g., typing details into the form) but never completed the task. This way you can quickly identify “leaking” touchpoints in your funnel and fix them. 
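    As a sketch of what such Event Tracking can look like, here is a hedged example that records a “form started” touchpoint through Matomo's HTTP Tracking API (the usual route is the JavaScript tracker's trackEvent call; the endpoint, site id and event names below are placeholders).

# Hedged sketch: logging a funnel touchpoint as a Matomo Tracked Event via the
# HTTP Tracking API. Endpoint, site id and event names are placeholders.
import requests

MATOMO_ENDPOINT = "https://analytics.example.com/matomo.php"  # placeholder

def track_event(category: str, action: str, name: str, page_url: str) -> None:
    params = {
        "idsite": 1,      # placeholder site id
        "rec": 1,         # required flag: record this request
        "url": page_url,  # page the event happened on
        "e_c": category,  # event category
        "e_a": action,    # event action
        "e_n": name,      # event name
    }
    requests.get(MATOMO_ENDPOINT, params=params, timeout=5)

# e.g. a visitor starts typing into the signup form but never submits it:
track_event("Signup form", "started", "email field", "https://example.com/signup")

    Comparing the counts of “started” versus “completed” events then surfaces the leaking touchpoints mentioned above.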

    2. Select an Attribution Model 

    Multi-touch attribution models have inherent tradeoffs. The linear attribution model doesn't always represent the role and importance of each channel. The position-based attribution model emphasises the role of the last and first channels while diminishing the importance of assisted conversions. The time-decay model, on the contrary, downplays the role played by awareness-related campaigns.

    To select the right attribution model for your business, consider your objectives. Is it more important for you to understand your best top-of-funnel channels to optimise customer acquisition costs (CAC)? Or would you rather maximise your on-site conversion rates?

    Your industry and the average cycle length should also guide your choice. Position-based models can work best for eCommerce and SaaS businesses where both CAC and on-site conversion rates play an important role. Manufacturing companies or educational services providers, on the contrary, will benefit more from a time-decay model as it better represents the lengthy sales cycles. 

    3. Collect and Organise Data From All Touchpoints 

    Multi-touch attribution models are based on available funnel data. So to get started, you will need to determine which data sources you have and how to best leverage them for attribution modelling. 

    Types of data you should collect:

    • General web analytics data : Insights on visitors’ on-site actions — visited pages, clicked links, form submissions and more.
    • Goals (Conversions) : Reports on successful conversions across different types of assets. 
    • Behavioural user data : Some tools also offer advanced features such as heatmaps, session recording and A/B tests. These too provide ample data into user behaviours, which you can use to map and optimise various touchpoints.

    You can also implement extra tracking, for instance for contact form submissions, live chat contacts or email marketing campaigns to identify repeat users in your system. Just remember to stay on the good side of data protection laws and respect your visitors’ privacy. 

    Separately, you can obtain top-of-the-funnel data by analysing referral traffic sources (channel, campaign type, used keyword, etc). A Tag Manager comes in handy as it allows you to zoom in on particular assets (e.g., a newsletter, an affiliate, a social campaign, etc). 

    Combined, these data points can be parsed by an app, supporting multi-touch attribution (or a custom algorithm) and reported back to you as specific findings. 

    Sounds easy, right? Well, the devil is in the details. Getting ample, accurate data for multi-touch attribution modelling isn't easy.

    Marketing analytics has an accuracy problem, mainly for two reasons:

    • Cookie consent banner rejection 
    • Data sampling application

    Please note that we are not able to provide legal advice, so it’s important that you consult with your own DPO to ensure compliance with all relevant laws and regulations.

    If you're collecting web analytics in the EU, you know that showing a cookie consent banner is a GDPR must-do. But many consumers don't rush to accept cookie consent banners. The average consent rate for cookies in 2021 stood at 54% in Italy, 45% in France, and 44% in Germany. The consent rates are likely lower in 2023, as Google was forced to roll out a “reject all” button for cookie tracking in Europe, while privacy organisations lodge complaints against individual businesses for deceptive banners.

    For marketers, cookie rejection means substantial gaps in analytics data. The good news is that you can fill in those gaps by using a privacy-centred web analytics tool like Matomo. 

    Matomo takes extra safeguards to protect user privacy and supports fully cookieless tracking. Because of that, Matomo is legally exempt from tracking consent requirements in France. Plus, you can configure our analytics tool to run without consent banners in markets other than Germany and the UK. This way you retain the data you need for audience modelling without breaching any privacy regulations.

    Data sampling application partially stems from the above. When a web analytics or multi-channel attribution tool cannot secure first-hand data, the “guessing game” begins. Google Analytics, like other tools, often relies on synthetic, AI-generated data to fill in reporting gaps. As a result, your multi-touch attribution model doesn't depict the real state of affairs; instead, it shows AI-produced guesstimates of what transpired whenever not enough real-world evidence is available.

    4. Evaluate and Select an Attribution Tool 

    Google Analytics (GA) offers several multi-touch attribution models for free (linear, time-decay and position-based). The disadvantage of GA multi-touch attribution is its lower accuracy due to cookie rejection and data sampling application.

    At the same time, you cannot create custom credit allocations for the proposed models unless you have the paid version of GA, Google Analytics 360. This version of GA comes with a custom Attribution Modeling Tool (AMT). The price tag, however, starts at USD 50,000 per year.

    Matomo Cloud offers multi-channel conversion attribution as a feature and it is available as a plug-in on the marketplace for Matomo On-Premise. We support linear, position-based, first-interaction, last-interaction, last non-direct and time-decay modelling, based fully on first-hand data. You also get more precise insights because cookie consent isn’t an issue with us. 

    Most multi-channel attribution tools, like Google Analytics and Matomo, provide out-of-the-box multi-touch attribution models. But other tools, like Matomo On-Premise, also provide full access to raw data so you can develop your own multi-touch attribution models and do custom attribution analysis. The ability to create custom attribution analysis is particularly beneficial for data analysts or organisations with complex and unique buyer journeys. 

    Conclusion

    Ultimately, multi-channel attribution gives marketers greater visibility into the customer journey. By analysing multiple touchpoints, you can establish how various marketing efforts contribute to conversions. Then use this information to inform your promotional strategy, budget allocations and CRO efforts. 

    The key to benefiting the most from multi-touch attribution is accurate data. If your analytics solution isn’t telling you the full story, your multi-touch model won’t either. 

    Collect accurate visitor data for multi-touch attribution modelling with Matomo. Start your free 21-day trial now