
Other articles (89)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals, rapid deployment of multiple unique sites, and the creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is version 0.2 or higher. If necessary, contact your MediaSPIP administrator to find out.

On other sites (8932)

  • Benefits and Shortcomings of Multi-Touch Attribution

    13 March 2023, by Erin — Analytics Tips

    Few sales happen instantly. Consumers take their time to discover, evaluate and become convinced to go with your offer. 

    Multi-channel attribution (also known as multi-touch attribution or MTA) helps businesses better understand which marketing tactics impact consumers' decisions at different stages of their buying journey, so they can double down on what's working to secure more sales.

    Unlike standard analytics, multi-channel modelling combines data from various channels to determine their cumulative and independent impact on your conversion rates. 

    The main benefit of multi-touch attribution is obvious: you see the top-performing channels, as well as those involved in assisted conversions. The drawback: it comes with a more complex setup process.

    If you’re on the fence about getting started with multi-touch attribution, here’s a summary of the main arguments for and against it. 

    What Are the Benefits of Multi-Touch Attribution?

    Remember the old parable of the blind men and the elephant?

    Each one touched the elephant and drew conclusions about how it might look. The group ended up with different perceptions of the animal and thought the others were lying…until they decided to work together on establishing the truth.

    Multi-channel analytics works in a similar way: it reconciles data from various channels and campaign types into one complete picture, so you can get aligned on the efficacy of different campaign types and gain some other benefits too.

    Better Understanding of Customer Journeys 

    On average, it takes 8 interactions with a prospect to generate a conversion. These interactions happen in three stages:

    • Awareness: you need to introduce your company to the target buyers and pique their interest in your solution (top-of-the-funnel).
    • Consideration: the next step is to channel this casual interest into deliberate research and evaluation of your offer (middle-of-the-funnel).
    • Decision: finally, you need to get the buyer to commit to your offer and close the deal (bottom-of-the-funnel).

    You can analyse funnels using various attribution models — last-click, first-click, position-based attribution, etc. Each model, however, spotlights different elements of your sales funnel.

    For example, a single-touch attribution model like last-click zooms in on the bottom-of-the-funnel stage. You can evaluate which channels (or on-site elements) sealed the deal for the prospect. Say a site visitor arrived from an affiliate link and started a free trial: the affiliate (referral traffic) gets 100% of the credit for the conversion.

    This measurement tactic, however, doesn’t show which channels brought the customer to the very bottom of your funnel. For instance, they may have interacted with a social media post, your landing pages or a banner ad before that. 

    Multi-touch attribution modelling takes funnel analysis a notch further. In this case, you map more steps in the customer journey — actions, events, and pages that triggered a visitor’s decision to convert — in your website analytics tool.

    Funnels Report Matomo

    Then, select a multi-touch attribution model that provides more backward visibility, i.e. lets you track more than one channel preceding the conversion.

    For example, a Position Based attribution model reports back on all interactions a site visitor had between their first visit and conversion. 

    A prospect first lands on your website via search results (Search traffic), which gets a 40% credit in this model. Two days later, the same person discovers a mention of your website on another blog and visits again (Referral traffic). This time, they save the page as a bookmark and revisit it in two more days (Direct traffic). Each of these channels gets a 10% credit. A week later, the prospect lands on your site again via Twitter (Social) and requests a demo. Social then receives a 40% credit for this conversion. Last-click would have credited only social media, and first-click only search engines.
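    To make the arithmetic concrete, here is a minimal sketch (illustrative Python, not Matomo's implementation) of how a position-based model could allocate that 40%/10%/10%/40% credit across an ordered list of channels:

    # Minimal position-based (U-shaped) attribution sketch; channel names are illustrative.
    def position_based_credits(touchpoints, first=0.4, last=0.4):
        """Split conversion credit across an ordered list of channels."""
        if not touchpoints:
            return {}
        if len(touchpoints) == 1:
            return {touchpoints[0]: 1.0}
        if len(touchpoints) == 2:
            return {touchpoints[0]: 0.5, touchpoints[1]: 0.5}
        middle = (1.0 - first - last) / (len(touchpoints) - 2)  # split the rest evenly
        credits = {}
        for i, channel in enumerate(touchpoints):
            share = first if i == 0 else last if i == len(touchpoints) - 1 else middle
            credits[channel] = credits.get(channel, 0.0) + share
        return credits

    journey = ["Search", "Referral", "Direct", "Social"]
    print(position_based_credits(journey))
    # {'Search': 0.4, 'Referral': 0.1, 'Direct': 0.1, 'Social': 0.4}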

    The bottom line: multi-channel attribution models show how different channels (and marketing tactics) contribute to conversions at different stages of the customer journey. Without it, you get an incomplete picture.

    Improved Budget Allocation 

    Understanding causal relationships between marketing activities and conversion rates can help you optimise your budgets.

    First-click/last-click attribution models emphasise the role of a single channel, which can lead you to the wrong conclusions.

    For instance, your Facebook Ads campaigns look great according to a first-touch model, so you decide to increase the budget. What you might be missing, though, is that you could achieve an even higher conversion rate and more revenue by fixing "funnel leaks": addressing high drop-off rates during checkout, improving page layout and removing other possible reasons for exiting the page.

    Matomo Customisable Goal Funnels
    Funnel reports in Matomo allow you to see how many people proceed to the next conversion stage and investigate why they drop off.

    By knowing when and why people abandon their purchase journey, you can improve your marketing velocity (i.e., how quickly you see campaign results) and your marketing costs (i.e., the budgets you allocate toward different assets, touchpoints and campaign types).

    Or as one of the godfathers of marketing technology, Dan McGaw, explained in a webinar:

    “Once you have a multi-touch attribution model, you [can] actually know the return on ad spend on a per-campaign basis. Sometimes, you can get it down to keywords. Sometimes, you can get down to all kinds of other information, but you start to realise, “Oh, this campaign sucks. I should shut this off.” And then really, that’s what it’s about. It’s seeing those campaigns that suck and turning them off and then taking that budget and putting it into the campaigns that are working”.
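    For illustration (the campaign names and figures below are invented), the computation behind that advice reduces to return on ad spend per campaign once an attribution model has assigned revenue to each campaign:

    # Hypothetical per-campaign figures produced by an attribution model.
    campaigns = {
        "brand_search":   {"spend": 1200.0, "attributed_revenue": 5400.0},
        "display_retarg": {"spend": 800.0,  "attributed_revenue": 640.0},
    }

    for name, c in campaigns.items():
        roas = c["attributed_revenue"] / c["spend"]  # return on ad spend
        verdict = "keep" if roas >= 1.0 else "shut off"
        print(f"{name}: ROAS = {roas:.2f} -> {verdict}")
    # brand_search: ROAS = 4.50 -> keep
    # display_retarg: ROAS = 0.80 -> shut off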

    More Accurate Measurements 

    The big boon of multi-channel marketing attribution is that you can zoom in on various elements of your funnel and gain granular data on each asset's performance.

    In other words: you get more accurate insights into the different elements involved in customer journeys. But accurate analytics measurements require an accurate tracking configuration.

    Define your objectives first: how do you want a multi-touch attribution tool to help you? Multi-channel attribution analysis helps you answer important questions such as:

    • How many touchpoints are involved in the conversions?
    • How long does it take for a lead to convert on average?
    • When and where do different audience groups convert?
    • What is your average win rate for different types of campaigns?

    Your objectives will dictate which multi-channel modelling approach will work best for your business — as well as the data you’ll need to collect. 

    At the highest level, you need to collect two data points:

    • Conversions: desired actions from your prospects — a sale, a newsletter subscription, a form submission, etc. Record them as tracked Goals.
    • Touchpoints: specific interactions between your brand and targets — specific page visits, referral traffic from a particular marketing channel, etc. Record them as tracked Events.

    Your attribution modelling software will then establish correlation patterns between the actions (conversions) and the assets (touchpoints) that triggered them.
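    Conceptually (a simplified sketch with hypothetical event data, not any tool's actual schema), the software groups each visitor's ordered touchpoints and attaches them to the conversion that follows:

    from collections import defaultdict

    # Hypothetical event log: (visitor_id, timestamp, kind, detail).
    events = [
        ("v1", 1, "touchpoint", "Search"),
        ("v1", 3, "touchpoint", "Referral"),
        ("v1", 9, "conversion", "demo_request"),
        ("v2", 2, "touchpoint", "Social"),  # no conversion yet, so no journey emitted
    ]

    def journeys(events):
        """Yield (visitor, goal, ordered touchpoint channels) per conversion."""
        paths = defaultdict(list)
        for visitor, ts, kind, detail in sorted(events, key=lambda e: (e[0], e[1])):
            if kind == "touchpoint":
                paths[visitor].append(detail)
            elif kind == "conversion" and paths[visitor]:
                yield visitor, detail, list(paths[visitor])

    for visitor, goal, path in journeys(events):
        print(visitor, goal, path)  # v1 demo_request ['Search', 'Referral']

    Each emitted journey can then be fed to a credit model such as the position-based sketch above.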

    The accuracy of these measurements, however, will depend on the quality of data and the type of attribution modelling used. 

    Data quality stands for your ability to procure accurate, complete and comprehensive information from various touchpoints. For instance, some data won’t be available if the user rejected a cookie consent banner (unless you’re using a privacy-focused web analytics tool like Matomo). 

    Different attribution modelling techniques come with inherent shortcomings too, as they may not accurately represent the average sales cycle length or track visitor-level data, which is what lets you understand which customer segments convert best.

    Learn more about selecting the optimal multi-channel attribution model for your business.

    What Are the Limitations of Multi-Touch Attribution?

    Overall, multi-touch attribution offers a more comprehensive view of conversion paths. However, each attribution model (except for custom ones) comes with inherent assumptions about the contribution of different channels (e.g., 25%-25%-25%-25% in linear attribution or 40%-10%-10%-40% in position-based attribution). These credit allocations may not accurately represent the realities of your industry.

    Also, most attribution models don't reflect the incremental revenue you gain from existing customers who aren't converting through the analysed channels: for example, an account upgrade to a higher tier triggered via an in-app offer, or a warranty upsell made via a marketing email.

    In addition, you should keep in mind several other limitations of multi-touch attribution software.

    Limited Marketing Mix Analysis 

    Multi-touch attribution tools work in conjunction with your website analytics app (they draw most of their data from it). Because of that, such models inherit the same limits on visibility into your marketing mix — the combination of tactics you use to influence consumer decisions.

    Multi-touch attribution tools cannot evaluate the impact of:

    • Dark social channels 
    • Word-of-mouth 
    • Offline promotional events
    • TV or out-of-home ad campaigns 

    If you want to incorporate this data into your multi-touch attribution reporting, you'll have to procure extra data from other systems — CRM, ad measurement partners, etc. — and create complex custom analytics models to evaluate it.

    Time-Based Constraints 

    Most analytics apps provide a maximum 90-day lookback window for attribution. This can be too short for companies with longer sales cycles.

    Source: Marketing Charts

    Marketing channels can be overlooked or underappreciated when your attribution window is too short. Because of that, you may curtail spending on brand awareness campaigns, which, in turn, will reduce the number of people entering the later stages of your funnel. 
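    Mechanically, a lookback window simply discards touchpoints older than N days at attribution time, as in this toy sketch (dates and channels invented):

    from datetime import datetime, timedelta

    LOOKBACK = timedelta(days=90)

    touchpoints = [
        ("Search",   datetime(2023, 1, 5)),   # brand-awareness visit, 116 days before converting
        ("Referral", datetime(2023, 3, 20)),
        ("Social",   datetime(2023, 4, 28)),
    ]
    converted_at = datetime(2023, 5, 1)

    credited = [ch for ch, ts in touchpoints if converted_at - ts <= LOOKBACK]
    print(credited)  # ['Referral', 'Social'] -- the January visit gets no credit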

    At the same time, many businesses would also want to track a look-forward window — the revenue you’ll get from one customer over their lifetime. In this case, not all tools may allow you to capture accurate information on repeat conversions — through re-purchases, account tier updates, add-ons, upsells, etc. 

    Again, to get an accurate picture you'll need to understand how far into the future you should track conversions. Will you only record your first sale as a revenue number, or monitor customer lifetime value (CLV) over 3, 6 or 12 months?

    The latter is more challenging to do, but CLV data can add another layer of depth to your modelling accuracy. With Matomo, you set up this type of tracking using our visitor tracking feature. We can help you track select visitors with known identifiers (e.g. name or email address) to discover their visiting patterns over time.

    Visitor User IDs in Matomo
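    As a toy example (invented orders, not a Matomo report), a 12-month CLV view simply sums each identified visitor's revenue inside the look-forward window:

    from datetime import datetime

    # Hypothetical identified-visitor orders: (user_id, date, revenue).
    orders = [
        ("ada", datetime(2023, 1, 10), 49.0),
        ("ada", datetime(2023, 6, 2),  19.0),
        ("ada", datetime(2024, 3, 1),  19.0),  # falls outside the window below
        ("bob", datetime(2023, 2, 14), 99.0),
    ]

    start, end = datetime(2023, 1, 1), datetime(2023, 12, 31)

    clv = {}
    for user, when, revenue in orders:
        if start <= when <= end:
            clv[user] = clv.get(user, 0.0) + revenue

    print(clv)  # {'ada': 68.0, 'bob': 99.0}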

    Limited Access to Raw Data 

    In web analytics, raw data stands for unprocessed website visitor information, stripped of any filters, segmentation or sampling.

    Data sampling is the practice of analysing a data subset (instead of complete records) and extrapolating the findings to the entire data set. Google Analytics 4 applies data sampling once you hit over 500k sessions at the property level. So instead of accurate, real-life reporting, you receive approximations generated by machine learning models. Data sampling is one of the main reasons behind Google Analytics' accuracy issues.

    In multi-channel attribution modelling, the use of sampled data creates further inconsistencies between the reports and the actual state of affairs. For instance, if your website generates 5 million page views, GA multi-touch attribution reports are based on a 500K sample, i.e. only 10% of the collected information. This hardly represents the real effect of all marketing channels and can lead to subpar decision-making.
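    Following the article's own numbers, the sampling fraction is easy to check:

    collected = 5_000_000  # page views your site actually generated
    analysed  = 500_000    # GA4's property-level sampling threshold
    print(f"{analysed / collected:.0%} of the data backs the report")  # 10%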

    With Matomo, the above is never an issue. We don’t apply data sampling to any websites (no matter the volume of traffic) and generate all the reports, including multi-channel attribution ones, based on 100% real user data. 

    AI Application 

    On the other hand, websites with smaller traffic volumes often have datasets too small for building reliable attribution models. Some tracking data may also be unavailable because the visitor rejected a cookie banner, for instance. On average, fewer than 50% of users in Australia, France, Germany, Denmark and the US, among other countries, always consent to all cookies.

    To compensate for such scenarios, some multi-touch attribution solutions apply AI algorithms to "fill in the blanks", which impacts reporting accuracy. Once again, you get approximate data about what probably happened. Matomo, however, is legally exempt from showing a cookie consent banner in most EU markets, meaning you can collect 100% accurate data to make data-driven decisions.

    Difficult Technical Implementation 

    Ever since attribution modelling gained traction in digital marketing, more and more tools have emerged.

    Most web analytics apps include multi-touch attribution reports. Then there are standalone multi-channel attribution platforms, offering extra features for conversion rate optimisation, offline channel tracking, data-driven custom modelling, etc.

    Most advanced solutions aren't available out of the box. Instead, you have to install several applications, configure integrations with the requested data sources, and then use the provided interfaces to stitch together custom data models. Such solutions are great if you have a technical marketer or a data science team, but a steep learning curve and high setup costs make them less attractive for smaller teams.

    Conclusion 

    Multi-touch attribution modelling lifts the curtain on more of the steps involved in customer journeys. By understanding which touchpoints contribute to conversions, you can better plan your campaign types and budget allocations.

    That said, to benefit from multi-touch attribution modelling, marketers also need to do the preliminary work: determine the key goals, set up event and conversion tracking, and then select the optimal attribution model type and tool.

    Matomo combines simplicity with sophistication. We provide marketers with familiar, intuitive interfaces for setting up conversion tracking across the funnel, then generate attribution reports based on 100% accurate data (without any sampling or "guesstimation" applied). You can also access the raw analytics data to create custom attribution models or plug it into another tool!

    Start using accurate, easy-to-use multi-channel attribution with Matomo. Start your free 21-day trial now. No credit card required.

  • Google Optimize vs Matomo A/B Testing : Everything You Need to Know

    17 March 2023, by Erin — Analytics Tips

    Google Optimize is a popular A/B testing tool marketers use to validate the performance of different marketing assets, website design elements and promotional offers. 

    But by September 2023, Google will sunset both free and paid versions of the Optimize product. 

    If you're searching for an equally robust but GDPR-compliant, privacy-friendly alternative to Google Optimize, have a look at Matomo A/B Testing.

    Integrated with our analytics platform and conversion rate optimisation (CRO) tools, Matomo allows you to run A/B and A/B/n tests without any usage caps or compromises in user privacy.

    Disclaimer : Please note that the information provided in this blog post is for general informational purposes only and is not intended to provide legal advice. Every situation is unique and requires a specific legal analysis. If you have any questions regarding the legal implications of any matter, please consult with your legal team or seek advice from a qualified legal professional.

    Google Optimize vs Matomo: Key Capabilities Compared

    This guide shows how Matomo A/B Testing stacks up against Google Optimize in terms of features, reporting, integrations and pricing.

    Supported Platforms 

    Google Optimize supports experiments for dynamic websites and single-page mobile apps only. 

    If you want to run split tests in mobile apps, you'll have to do so via Firebase — Google's app development platform. It also has a free tier, but a paid usage-based subscription kicks in once your product(s) reach a certain usage threshold.

    Google Optimize also doesn't support CRO experiments for web or desktop applications, email campaigns or paid ad campaigns.

    Matomo A/B Testing, in contrast, allows you to run experiments in virtually every channel. We have three installation options — JavaScript, server-side technology, or our mobile tracking SDK. These allow you to run split tests in any type of web or mobile app (including games), in a desktop product, or on your website. You can also run different email marketing tests (e.g., compare subject line variants).

    A/B Testing 

    A/B testing (split testing) is the core feature of both products. Marketers use A/B testing to determine which creative elements, such as website microcopy, button placement and banner versions, resonate better with target audiences.

    You can benchmark different versions against one another to determine which variation resonates more with users. Or you can test an A version against B, C, D and beyond. This is called A/B/n testing. 

    Both Matomo A/B testing and Google Optimize let you test either separate page elements or two completely different landing page designs, using redirect tests. You can show different variants to different user groups (aka apply targeting criteria). For example, activate tests only for certain device types, locations or types of on-site behaviour. 

    The advantage of Matomo is that we don't limit the number of concurrent experiments you can run; with Google Optimize, you're limited to 5 simultaneous experiments. Likewise, Matomo lets you select an unlimited number of experiment objectives, whereas Google caps the choice at 3 predefined options per experiment.

    Objectives are the criteria the underlying statistical model uses to determine the best-performing version. Typically, marketers use metrics such as page views, session duration, bounce rate or generated revenue as conversion goals.

    Conversions Report Matomo

    Multivariate testing (MVT)

    Multivariate testing (MVT) allows you to "pack" several A/B tests into one active experiment. In other words: you create a stack of variants to determine which combination drives the best marketing outcomes.

    For example, an MVT experiment can include five versions of a web page, where each has a different slogan, product image, call-to-action, etc. Each visitor is served a different variation. The tracking code collects data on their behaviour and the desired outcomes (objectives) and reports the results.

    MVT saves marketers time as it’s a great alternative to doing separate A/B tests for each variable. Both Matomo and Google Optimize support this feature. However, Google Optimize caps the number of possible combinations at 16, whereas Matomo has no limits. 
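    To see how quickly that 16-combination cap is reached, consider a hypothetical experiment with two slogans, two hero images and four calls-to-action:

    from itertools import product

    slogans = ["Save time", "Save money"]
    images  = ["hero_a.png", "hero_b.png"]
    ctas    = ["Buy now", "Start trial", "Learn more", "Book a demo"]

    variants = list(product(slogans, images, ctas))
    print(len(variants))  # 2 * 2 * 4 = 16, already at Google Optimize's limit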

    Redirect Tests

    Redirect tests, also known as split URL tests, allow you to serve two entirely different web page versions to users and compare their performance. This option comes in handy when you’re redesigning your website or want to test a localised page version in a new market. 

    Also, redirect tests are a great way to validate the performance of bottom-of-the-funnel (BoFU) pages, such as a checkout page (for eCommerce websites), a pricing page (for SaaS apps) or a contact/booking form (for B2B service businesses).

    You can do split URL tests with Google Optimize and Matomo A/B Testing. 

    Experiment Design 

    Google Optimize provides a visual editor for making simple page changes to your website (e.g., changing button colour or adding several headline variations). You can then preview the changes before publishing an experiment. For more complex experiments (e.g., testing different page block sequences), you’ll have to codify experiments using custom JavaScript, HTML and CSS.

    In Matomo, all A/B tests are configured on the server-side (i.e., by editing your website’s raw HTML) or client-side via JavaScript. Afterwards, you use the Matomo interface to start or schedule an experiment, set objectives and view reports. 

    Experiment Configuration 

    Marketers know how complex customer journeys can be. Multiple factors — from location and device to time of the day and discount size — can impact your conversion rates. That’s why a great CRO app allows you to configure multiple tracking conditions. 

    Matomo A/B testing comes with granular controls. First of all, you can decide which percentage of total web visitors participate in any given experiment. By default, the number is set to 100%, but you can change it to any other option. 

    Likewise, you can change what percentage of traffic each variant gets in an experiment. For example, your original version can get 30% of traffic, while options A and B receive 40% each. We also allow users to specify custom parameters for experiment participation: for example, you can show your variants only to people in a specific geo-location, or to returning visitors only.
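    Under the hood, a 30/40/40 split like the one above is typically implemented by hashing the visitor ID into a stable bucket, so returning visitors keep seeing the same variant (a generic sketch, not Matomo's actual assignment code):

    import hashlib

    WEIGHTS = [("original", 0.30), ("variant_a", 0.40), ("variant_b", 0.40)]

    def assign_variant(visitor_id: str, experiment: str) -> str:
        """Deterministically map a visitor to a weighted variant."""
        digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
        bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform float in [0, 1]
        cumulative = 0.0
        for name, weight in WEIGHTS:
            cumulative += weight
            if bucket <= cumulative:
                return name
        return WEIGHTS[-1][0]  # guard against floating-point rounding

    print(assign_variant("visitor-123", "pricing-page-test"))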

    Finally, you can select any type of meaningful objective to evaluate each variant's performance. With Matomo, you can use either standard website analytics metrics (e.g., total page views, bounce rate, CTR, visit duration, etc.) or custom goals (e.g., form click, asset download, eCommerce order, etc.).

    In other words: you're in charge of deciding on your campaign targeting criteria, duration and evaluation objectives.

    A free Google Optimize account comes with three main types of user targeting options:

    • Geo-targeting at city, region, metro and country levels.
    • Technology targeting by browser, OS or device type, first-party cookie, etc.
    • Behavioural targeting based on metrics like "time since first arrival" and "page referrer" (referral traffic source).

    Users can also configure other types of tracking scenarios (for example, only serving tests to signed-in users) using condition-based rules.

    Reporting 

    Both Matomo and Google Optimize use different statistical models to evaluate which variation performs best. 

    Matomo relies on statistical hypothesis testing, which we use to count unique visitors and report on conversion rates. We analyse all user data (with no data sampling applied), meaning you get accurate reporting based on first-hand data rather than deductions. For that reason, we ask users to avoid drawing conclusions before their experiment participation numbers reach a statistically significant result. Typically, we recommend running an experiment for at least several business cycles to get a comprehensive report.
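    For intuition, a classic frequentist check of this kind is the two-proportion z-test (a generic sketch of the technique, not Matomo's internal implementation):

    from math import sqrt, erf

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """Return the z statistic and two-sided p-value for two conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF via erf
        return z, p_value

    # 120/2400 (5.0%) conversions for variant A vs 156/2400 (6.5%) for variant B
    z, p = two_proportion_z_test(120, 2400, 156, 2400)
    print(round(z, 2), round(p, 3))  # roughly z = 2.23, p = 0.026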

    Google Optimize, in turn, uses Bayesian inference — a statistical method that relies on a random sample of users to compare the performance rates of each creative against the others. While a Bayesian model generates CRO reports faster and at a bigger scale, it's based on inferences.

    Model developers need to have the necessary skills to translate subjective prior beliefs about the probability of a certain event into a mathematical formula. Since Google Optimize is a proprietary tool, you cannot audit the underlying model design and verify its accuracy. In other words, you trust that it was created with the right judgement. 

    In comparison, Matomo started as an open-source project, and our source code can be audited independently by anyone at any time. 

    Another reporting difference to keep in mind is the delay. Matomo Cloud generates A/B test reports within 6 hours, and Matomo On-Premise within only 1 hour. Google Optimize, in turn, requires 12 hours from the first experiment setup to start reporting on results.

    When you configure a test experiment and want to quickly verify that everything is set up correctly, this can be an inconvenience.

    User Privacy & GDPR Compliance 

    Google Optimize works in conjunction with Google Analytics, which isn't GDPR compliant.

    For all website traffic from the EU, you’re therefore obliged to show a cookie consent banner. The kicker, however, is that you can only show an Optimize experiment after the user gives consent to tracking. If the user doesn’t, they will only see an original page version. Considering that almost 40% of global consumers reject cookie consent banners, this can significantly affect your results.

    This renders Google Optimize mostly useless in the EU, since it would only let you run tests with a fraction (around 60%) of EU traffic — and even less if you apply any extra targeting criteria.

    In comparison, Matomo is fully GDPR compliant. Therefore, our users are legally exempt from displaying cookie consent banners in most EU markets (with Germany and the UK being exceptions). Since Matomo A/B Testing is part of Matomo web analytics, you don't have to worry about GDPR compliance or breaches of user privacy.

    Digital Experience Intelligence 

    You can get comprehensive statistical data on variants’ performance with Google Optimize. But you don’t get further insights on why some tests are more successful than others. 

    Matomo enables you to collect more insights with two extra features:

    • User session recordings: monitor how users behave on different page versions. Observe clicks, mouse movements, scrolls, page changes and form interactions to better understand users' cumulative digital experience.
    • Heatmaps: determine which elements attract the most user attention to fine-tune your split tests. With a standard CRO tool, you can only assume that a certain page element matters to most users; a heatmap can help you determine it for sure.

    Both of these features are bundled into your Matomo Cloud subscription.

    Integrations 

    Both Matomo and Google Optimize integrate with multiple other tools. 

    Google Optimize has native integrations with other products in the Google marketing family — GA, Google Ads, Google Tag Manager, Google BigQuery, Accelerated Mobile Pages (AMP) and Firebase. Separately, other popular marketing apps have created custom connectors for integrating Google Optimize data.

    Matomo A/B Testing, in turn, can be combined with other web analytics and CRO features such as Funnels, Multi-Channel Attribution, Tag Manager, Form Analytics, Heatmaps, Session Recording, and more ! 

    You can also conveniently export your website analytics or CRO data using Matomo Analytics API to analyse it in another app. 

    Pricing 

    Google Optimize is a free tool but has usage caps. If you want to schedule more than 5 concurrent experiments or test more than 16 variants at once, you'll have to upgrade to Optimize 360. Optimize 360 prices aren't listed publicly but are said to be close to six figures per year.

    Matomo A/B Testing is available with every Cloud subscription (starting from €19) and Matomo On-Premise users can also get A/B Testing as a plugin (starting from €199/year). In each case, there are no caps or data limits. 

    Google Optimize vs Matomo A/B Testing : Comparison Table

    Features/capabilities | Google Optimize | Matomo A/B Testing
    Supported channels | Web | Web, mobile, email, digital campaigns
    A/B testing | Yes | Yes
    Multivariate testing (MVT) | Yes | Yes
    Split URL tests | Yes | Yes
    Web analytics integration | Native with UA/GA4 | Native with Matomo (you can also migrate historical UA (GA3) data to Matomo)
    Audience segmentation | Basic | Advanced
    Geo-targeting | Yes | No
    Technology targeting | Yes | No
    Behavioural targeting | Basic | Advanced
    Reporting model | Bayesian analysis | Statistical hypothesis testing
    Report availability | Within 12 hours after setup | 6 hours for Matomo Cloud; 1 hour for Matomo On-Premise
    Heatmaps | No | Yes (included with Matomo Cloud)
    Session recordings | No | Yes (included with Matomo Cloud)
    GDPR compliance | No | Yes
    Support | Self-help desk on the free tier | Self-help guides, user forum, email
    Price | Free limited tier | From €19 for a Cloud subscription; from €199/year as a plugin for On-Premise

    Final Thoughts: Who Benefits the Most From an A/B Testing Tool?

    Split testing is an excellent method for validating various assumptions about your target customers. 

    With A/B testing tools, you get a data-backed answer to research hypotheses such as "How does different pricing affect purchases?", "Which contact button placement generates more clicks?", "Which registration form performs best with new app subscribers?" and more.

    Such insights can be game-changing when you're trying to improve your demand generation efforts or conversion rates at the BoFU stage. But to get meaningful results from CRO tests, you need to select measurable, representative objectives.

    For example, split testing different pricing strategies for low-priced, frequently purchased products makes sense as you can run an experiment for a couple of weeks to get a statistically relevant sample. 

    But if you sell a B2B SaaS product, where the average sales cycle takes weeks (or months) to close and things like "time-sensitive discounts" or "one-time promos" don't really work, getting adequate CRO data will be harder.

    To see tangible results from CRO, you'll need to spend more time on test ideation than implementation. Your team needs to figure out which elements to test, in what order, and why.

    Effective CRO tests are designed for a specific part of the funnel and assume that you’re capable of effectively identifying and tracking conversions (goals) at the selected stage. This alone can be a complex task since not all customer journeys are alike. For SaaS websites, using a goal like “free trial account registration” can be a good starting point.

    A good test also produces a meaningful difference between the proposed variant and the original version. As Nima Yassini, Partner at Deloitte Digital, rightfully argues:

    “I see people experimenting with the goal of creating an uplift. There’s nothing wrong with that, but if you’re only looking to get wins you will be crushed when the first few tests fail. The industry average says that only one in five to seven tests win, so you need to be prepared to lose most of the time”.

    In many cases, CRO tests don’t provide the data you expected (e.g., people equally click the blue and green buttons). In this case, you need to start building your hypothesis from scratch. 

    At the same time, it's easy to get caught up in optimising for "vanity metrics" — ones that look good in a report but don't quite match your marketing objectives. For example, better email headline variations can improve your open rates, but if users don't go on to engage with the email content (e.g. click through to your website or use a provided discount code), your efforts still fall short.

    That’s why developing a baseline strategy is important before committing to an A/B testing tool. Google Optimize appealed to many users because it’s free and allows you to test your split test strategy cost-effectively. 

    With its upcoming deprecation, many marketers are wary of committing to a more expensive A/B testing tool (especially when they're not fully sure about their CRO strategy and its results).

    Matomo A/B testing is a cost-effective, GDPR-compliant alternative to Google Optimize with a low learning curve and extra competitive features. 

    Discover if Matomo A/B Testing is the ideal Google Optimize alternative for your organization with our free 21-day trial. No credit card required.

  • Video encoding task not working with Django Celery Redis FFMPEG and GraphQL

    18 juin 2023, par phanio

    I'm having a hard time trying to understand how this FFMPEG encoding works while using Django, Celery, Redis, GraphQL and Docker.
    I have this video/courses platform project, and what I'm trying to do with FFMPEG, Celery and Redis is to create different video resolutions so I can display them the way YouTube does inside the video player (the video player is handled on the frontend by Next.js and Apollo Client). On the backend, I've just learned that in order to use FFMPEG properly to resize the original video, I need Celery and Redis to perform asynchronous tasks. I've found a few older posts here on Stack Overflow and Google, but it's not quite enough info for someone using FFMPEG, Celery and Redis for the first time (I've already worked step by step through the example that adds two numbers together with Celery, and that works well). Now I'm not sure what is wrong with my code, because first of all I'm not really sure where I should trigger the task from, I mean from which file, because at the end of the task I want to send the data through the API using GraphQL Strawberry.
    This is what I've tried so far:
    So first things first, my project structure looks like this:
    - backend #root directory
 --- backend
    -- __init__.py
    -- celery.py
    -- settings.py
    -- urls.py
      etc..

 --- static
   -- videos

 --- video
   -- models.py
   -- schema.py
   -- tasks.py
   -- types.py
   etc..

 --- .env

 --- db.sqlite3

 --- docker-compose.yml

 --- Dockerfile

 --- manage.py

 --- requirements.txt


    


    Here is my settings.py file:
    from pathlib import Path
import os

# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent

DEBUG = True

ALLOWED_HOSTS=["localhost", "0.0.0.0", "127.0.0.1"]

DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'


# Application definition

INSTALLED_APPS = [
    "corsheaders",
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',

    "strawberry.django",
    "video"
]

etc...

STATIC_URL = '/static/'
MEDIA_URL = '/videos/'

STATICFILES_DIRS = [
    BASE_DIR / 'static',
    # BASE_DIR / 'frontend/build/static',
]

MEDIA_ROOT = BASE_DIR / 'static/videos'

STATIC_ROOT = BASE_DIR / 'staticfiles'

STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'

CORS_ALLOW_ALL_ORIGINS = True


CELERY_BEAT_SCHEDULER = 'django_celery_beat.schedulers:DatabaseScheduler'

# REDIS CACHE
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": f"redis://127.0.0.1:6379/1",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    }
}

# Docker
CELERY_BROKER_URL = os.environ.get("CELERY_BROKER", "redis://redis:6379/0")
CELERY_RESULT_BACKEND = os.environ.get("CELERY_BROKER", "redis://redis:6379/0")


    


    This is my main urls.py file:
    from django.contrib import admin
from django.conf import settings
from django.conf.urls.static import static
from django.urls import path
from django.urls.conf import include
from strawberry.django.views import GraphQLView

from video.schema import schema

urlpatterns = [
    path('admin/', admin.site.urls),
    path("graphql", GraphQLView.as_view(schema=schema)),
]

if settings.DEBUG:
    urlpatterns += static(settings.MEDIA_URL,
                          document_root=settings.MEDIA_ROOT)
    urlpatterns += static(settings.STATIC_URL,
                          document_root=settings.STATIC_ROOT)


    


    This is my celery.py file:
    from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from django.conf import settings

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'backend.settings')

backend = Celery('backend')

backend.config_from_object('django.conf:settings', namespace="CELERY")

backend.autodiscover_tasks()

@backend.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))


    


    This is my __init__.py file:
    from .celery import backend as celery_backend

__all__ = ('celery_backend',)


    


    This is my Dockerfile:
    FROM python:3
ENV PYTHONUNBUFFERED=1

WORKDIR /usr/src/backend

RUN apt-get -y update
RUN apt-get -y upgrade
RUN apt-get install -y ffmpeg

COPY requirements.txt ./
RUN pip install -r requirements.txt


    


    This is my docker-compose.yml file:
    version: "3.8"

services:
  django:
    build: .
    container_name: django
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/usr/src/backend/
    ports:
      - "8000:8000"
    environment:
      - DEBUG=1
      - DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
      - CELERY_BROKER=redis://redis:6379/0
      - CELERY_BACKEND=redis://redis:6379/0
    depends_on:
      - pgdb
      - redis

  celery:
    build: .
    command: celery -A backend worker -l INFO
    volumes:
      - .:/usr/src/backend
    depends_on:
      - django
      - redis

  pgdb:
    image: postgres
    container_name: pgdb
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - pgdata:/var/lib/postgresql/data/

  redis:
    image: "redis:alpine"

volumes:
  pgdata:


    


    And now inside my video app folder:
    My models.py file:
      

    • Here I've created separate fields for every resolution size, from video_file_2k down to video_file_144; I was thinking that after the encoding process these fields would be populated.
    from django.db import models
from django.urls import reverse


class Video(models.Model):
    video_id = models.AutoField(primary_key=True, editable=False)
    slug = models.SlugField(max_length=255)
    title = models.CharField(max_length=150, blank=True, null=True)
    description = models.TextField(blank=True, null=True)
    video_file = models.FileField(null=False, blank=False)
    video_file_2k = models.FileField(null=True, blank=True)
    video_file_fullhd = models.FileField(null=True, blank=True)
    video_file_hd = models.FileField(null=True, blank=True)
    video_file_480 = models.FileField(null=True, blank=True)
    video_file_360 = models.FileField(null=True, blank=True)
    video_file_240 = models.FileField(null=True, blank=True)
    video_file_144 = models.FileField(null=True, blank=True)
    category = models.CharField(max_length=64, blank=False, null=False)
    created_at = models.DateTimeField(
        ("Created at"), auto_now_add=True, editable=False)
    updated_at = models.DateTimeField(("Updated at"), auto_now=True)

    class Meta:
        ordering = ("-created_at",)
        verbose_name = ("Video")
        verbose_name_plural = ("Videos")

    def get_absolute_url(self):
        return reverse("store:video_detail", args=[self.slug])

    def __str__(self):
        return self.title


    


    This is my schema.py file:
    import strawberry
from strawberry.file_uploads import Upload
from typing import List
from .types import VideoType
from .models import Video
from .tasks import task_video_encoding_1080p, task_video_encoding_720p


@strawberry.type
class Query:
    @strawberry.field
    def videos(self, category: str = None) -> List[VideoType]:
        if category:
            videos = Video.objects.filter(category=category)
            return videos
        return Video.objects.all()

    @strawberry.field
    def video(self, slug: str) -> VideoType:
        if slug == slug:
            video = Video.objects.get(slug=slug)
            return video

    @strawberry.field
    def video_by_id(self, video_id: int) -> VideoType:
        if video_id == video_id:
            video = Video.objects.get(pk=video_id)

          # Here I've tried to trigger my tasks: when I visited the 0.0.0.0:8000/graphql URL
          # and queried for a video by its id, I got the error from Celery
            task_video_encoding_1080p.delay(video_id)
            task_video_encoding_720p.delay(video_id)

            return video


@strawberry.type
class Mutation:
    @strawberry.field
    def create_video(self, slug: str, title: str, description: str, video_file: Upload, video_file_2k: str, video_file_fullhd: str, video_file_hd: str, video_file_480: str, video_file_360: str, video_file_240: str, video_file_144: str, category: str) -> VideoType:

        video = Video(slug=slug, title=title, description=description,
                      video_file=video_file, video_file_2k=video_file_2k, video_file_fullhd=video_file_fullhd, video_file_hd=video_file_hd, video_file_480=video_file_480, video_file_360=video_file_360, video_file_240=video_file_240, video_file_144=video_file_144,category=category)
        
        video.save()
        return video

    @strawberry.field
    def update_video(self, video_id: int, slug: str, title: str, description: str, video_file: str, category: str) -> VideoType:
        video = Video.objects.get(video_id=video_id)
        video.slug = slug
        video.title = title
        video.description = description
        video.video_file = video_file
        video.category = category
        video.save()
        return video

    @strawberry.field
    def delete_video(self, video_id: int) -> bool:
        video = Video.objects.get(video_id=video_id)
        video.delete()
        return True


schema = strawberry.Schema(query=Query, mutation=Mutation)


    


    This is my types.py file (Strawberry GraphQL related):
    import strawberry

from .models import Video


@strawberry.django.type(Video)
class VideoType:
    video_id: int
    slug: str
    title: str
    description: str
    video_file: str
    video_file_2k: str
    video_file_fullhd: str
    video_file_hd: str
    video_file_480: str
    video_file_360: str
    video_file_240: str
    video_file_144: str
    category: str


    


    And this is my tasks.py file:
    from __future__ import absolute_import, unicode_literals
import os, subprocess
from django.conf import settings
from django.core.exceptions import ValidationError
from celery import shared_task
from celery.utils.log import get_task_logger
from .models import Video
FFMPEG_PATH = os.environ["IMAGEIO_FFMPEG_EXE"] = "/opt/homebrew/Cellar/ffmpeg/6.0/bin/ffmpeg"

logger = get_task_logger(__name__)


# CELERY TASKS
@shared_task
def add(x,y):
    return x + y


@shared_task
def task_video_encoding_720p(video_id):
    logger.info('Video Processing started')
    try:
        video = Video.objects.get(video_id=video_id)
        input_file_path = video.video_file.path
        input_file_url = video.video_file.url
        input_file_name = video.video_file.name

        # get the filename (without extension)
        filename = os.path.basename(input_file_url)

        # path to the new file, change it according to where you want to put it
        output_file_name = os.path.join('videos', 'mp4', '{}.mp4'.format(filename))
        output_file_path = os.path.join(settings.MEDIA_ROOT, output_file_name)

        # intended 2-pass encoding (note: range(1) only yields i=0, and every
        # subprocess.call argument, including the pass number, must be a string)
        for i in range(1):
           new_video_720p = subprocess.call([FFMPEG_PATH, '-i', input_file_path, '-s', '1280x720', '-vcodec', 'mpeg4', '-acodec', 'libvo_aacenc', '-b', '10000k', '-pass', i, '-r', '30', output_file_path])
        #    new_video_720p = subprocess.call([FFMPEG_PATH, '-i', input_file_path, '-s', '{}x{}'.format(height * 16/9, height), '-vcodec', 'mpeg4', '-acodec', 'libvo_aacenc', '-b', '10000k', '-pass', i, '-r', '30', output_file_path])

        if new_video_720p == 0:
            # save the new file in the database
            # video.video_file_hd.name = output_file_name
            video.save(update_fields=['video_file_hd'])
            logger.info('Video Processing Finished')
            return video

        else:
            logger.info('Processing Failed.') # Just for now

    except:
        raise ValidationError('Something went wrong')


@shared_task
# def task_video_encoding_1080p(video_id, height):
def task_video_encoding_1080p(video_id):
    logger.info('Video Processing started')
    try:
        video = Video.objects.get(video_id=video_id)
        input_file_path = video.video_file.url
        input_file_name = video.video_file.name

        # get the filename (without extension)
        filename = os.path.basename(input_file_path)

        # path to the new file, change it according to where you want to put it
        output_file_name = os.path.join('videos', 'mp4', '{}.mp4'.format(filename))
        output_file_path = os.path.join(settings.MEDIA_ROOT, output_file_name)

        for i in range(1):
            new_video_1080p = subprocess.call([FFMPEG_PATH, '-i', input_file_path, '-s', '1920x1080', '-vcodec', 'mpeg4', '-acodec', 'libvo_aacenc', '-b', '10000k', '-pass', i, '-r', '30', output_file_path])

        if new_video_1080p == 0:
            # save the new file in the database
            # video.video_file_hd.name = output_file_name
            video.save(update_fields=['video_file_fullhd'])
            logger.info('Video Processing Finished')
            return video
        else:
            logger.info('Processing Failed.') # Just for now

    except:
        raise ValidationError('Something went wrong')


    


    In my first attempt I wasn't triggering the tasks anywhere; then I tried to trigger them from the schema.py file, inside the video_by_id resolver, but there I got this error:
    backend-celery-1  | django.core.exceptions.ValidationError: ['Something went wrong']
backend-celery-1  | [2023-06-18 16:38:52,859: ERROR/ForkPoolWorker-4] Task video.tasks.task_video_encoding_1080p[d33b1a42-5914-467c-ad5c-00565bc8be6f] raised unexpected: ValidationError(['Something went wrong'])
backend-celery-1  | Traceback (most recent call last):
backend-celery-1  |   File "/usr/src/backend/video/tasks.py", line 81, in task_video_encoding_1080p
backend-celery-1  |     new_video_1080p = subprocess.call([FFMPEG_PATH, '-i', input_file_path, '-s', '1920x1080', '-vcodec', 'mpeg4', '-acodec', 'libvo_aacenc', '-b', '10000k', '-pass', i, '-r', '30', output_file_path])
backend-celery-1  |                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-celery-1  |   File "/usr/local/lib/python3.11/subprocess.py", line 389, in call
backend-celery-1  |     with Popen(*popenargs, **kwargs) as p:
backend-celery-1  |          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-celery-1  |   File "/usr/local/lib/python3.11/subprocess.py", line 1026, in __init__
backend-celery-1  |     self._execute_child(args, executable, preexec_fn, close_fds,
backend-celery-1  |   File "/usr/local/lib/python3.11/subprocess.py", line 1883, in _execute_child
backend-celery-1  |     self.pid = _fork_exec(
backend-celery-1  |                ^^^^^^^^^^^
backend-celery-1  | TypeError: expected str, bytes or os.PathLike object, not int
backend-celery-1  | 
backend-celery-1  | During handling of the above exception, another exception occurred:
backend-celery-1  | 
backend-celery-1  | Traceback (most recent call last):
backend-celery-1  |   File "/usr/local/lib/python3.11/site-packages/celery/app/trace.py", line 477, in trace_task
backend-celery-1  |     R = retval = fun(*args, **kwargs)
backend-celery-1  |                  ^^^^^^^^^^^^^^^^^^^^
backend-celery-1  |   File "/usr/local/lib/python3.11/site-packages/celery/app/trace.py", line 760, in __protected_call__
backend-celery-1  |     return self.run(*args, **kwargs)
backend-celery-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^
backend-celery-1  |   File "/usr/src/backend/video/tasks.py", line 93, in task_video_encoding_1080p
backend-celery-1  |     raise ValidationError('Something went wrong')
backend-celery-1  | django.core.exceptions.ValidationError: ['Something went wrong']


    


    If anyone has done this kind of project or something similar, any suggestion or help is much appreciated.

    Thank you in advance!