
Media (91)

Other articles (65)

  • Managing object creation and editing rights

    8 February 2011

    By default, many features are restricted to administrators, but each one can be configured independently to change the minimum status required to use it, notably: writing content on the site, adjustable in the form template management; adding notes to articles; adding captions and annotations to images;

  • Uploading media and themes via FTP

    31 May 2013

    MediaSPIP also handles media transferred via FTP. If you prefer to upload this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
    From the start you will find the following folders in your FTP space: config/: the site's configuration folder; IMG/: media already processed and online on the site; local/: the website's cache directory; themes/: custom themes and style sheets; tmp/: working folder (...)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (6536)

  • Google Optimize vs Matomo A/B Testing: Everything You Need to Know

    17 March 2023, by Erin — Analytics Tips

    Google Optimize is a popular A/B testing tool marketers use to validate the performance of different marketing assets, website design elements and promotional offers. 

    But by September 2023, Google will sunset both free and paid versions of the Optimize product. 

    If you're searching for an equally robust, but GDPR compliant, privacy-friendly alternative to Google Optimize, have a look at Matomo A/B Testing.

    Integrated with our analytics platform and conversion rate optimisation (CRO) tools, Matomo allows you to run A/B and A/B/n tests without any usage caps or compromises in user privacy.

    Disclaimer: Please note that the information provided in this blog post is for general informational purposes only and is not intended to provide legal advice. Every situation is unique and requires a specific legal analysis. If you have any questions regarding the legal implications of any matter, please consult with your legal team or seek advice from a qualified legal professional.

    Google Optimize vs Matomo: Key Capabilities Compared

    This guide shows how Matomo A/B Testing stacks up against Google Optimize in terms of features, reporting, integrations and pricing.

    Supported Platforms 

    Google Optimize supports experiments for dynamic websites and single-page mobile apps only. 

    If you want to run split tests in mobile apps, you'll have to do so via Firebase — Google's app development platform. It also has a free tier, but a paid usage-based subscription kicks in once your product(s) reach a certain usage threshold.

    Google Optimize also doesn't support CRO experiments for web or desktop applications, email campaigns or paid ad campaigns.

    Matomo A/B Testing, in contrast, allows you to run experiments in virtually every channel. We have three installation options — using JavaScript, server-side technology, or our mobile tracking SDK. These allow you to run split tests in any type of web or mobile app (including games), a desktop product, or on your website. Also, you can do different email marketing tests (e.g., compare subject line variants).

    A/B Testing 

    A/B testing (split testing) is the core feature of both products. Marketers use A/B testing to determine which creative elements, such as website microcopy, button placements and banner versions, resonate better with target audiences.

    You can benchmark different versions against one another to determine which variation resonates more with users. Or you can test an A version against B, C, D and beyond. This is called A/B/n testing. 

    Both Matomo A/B testing and Google Optimize let you test either separate page elements or two completely different landing page designs, using redirect tests. You can show different variants to different user groups (aka apply targeting criteria). For example, activate tests only for certain device types, locations or types of on-site behaviour. 

    The advantage of Matomo is that we don't limit the number of concurrent experiments you can run. With Google Optimize, you're limited to 5 simultaneous experiments. Likewise, Matomo lets you select an unlimited number of experiment objectives, whereas Google caps the choice at 3 predefined options per experiment.

    Objectives are the criteria the underlying statistical model will use to determine the best-performing version. Typically, marketers use metrics such as page views, session duration, bounce rate or generated revenue as conversion goals.

    (Screenshot: conversions report in Matomo)

    Multivariate testing (MVT)

    Multivariate testing (MVT) allows you to “pack” several A/B tests into one active experiment. In other words: you create a stack of variants to determine which combination drives the best marketing outcomes.

    For example, an MVT experiment can include five versions of a web page, where each has a different slogan, product image, call-to-action, etc. Visitors are then served with a different variation. The tracking code collects data on their behaviours and desired outcomes (objectives) and reports the results.

    MVT saves marketers time as it’s a great alternative to doing separate A/B tests for each variable. Both Matomo and Google Optimize support this feature. However, Google Optimize caps the number of possible combinations at 16, whereas Matomo has no limits. 
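    To see why that 16-combination ceiling is reached quickly, note that the number of MVT combinations is simply the product of the option counts for each tested element. The short sketch below illustrates the arithmetic; the element names and option counts are hypothetical:

    # Illustrative only: counting variant combinations in a multivariate test.
    # The element names and option counts below are hypothetical.
    from math import prod

    elements = {
        "headline": 2,    # two headline variants
        "hero_image": 2,  # two image variants
        "cta_label": 4,   # four call-to-action labels
    }

    combinations = prod(elements.values())
    print(combinations)  # 2 * 2 * 4 = 16 -- already at the Google Optimize cap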

    Redirect Tests

    Redirect tests, also known as split URL tests, allow you to serve two entirely different web page versions to users and compare their performance. This option comes in handy when you’re redesigning your website or want to test a localised page version in a new market. 

    Also, redirect tests are a great way to validate the performance of bottom-of-the-funnel (BoFU) pages such as a checkout page (for eCommerce websites), a pricing page (for SaaS apps) or a contact/booking form (for B2B service businesses).

    You can do split URL tests with Google Optimize and Matomo A/B Testing. 

    Experiment Design 

    Google Optimize provides a visual editor for making simple page changes to your website (e.g., changing button colour or adding several headline variations). You can then preview the changes before publishing an experiment. For more complex experiments (e.g., testing different page block sequences), you’ll have to codify experiments using custom JavaScript, HTML and CSS.

    In Matomo, all A/B tests are configured on the server-side (i.e., by editing your website’s raw HTML) or client-side via JavaScript. Afterwards, you use the Matomo interface to start or schedule an experiment, set objectives and view reports. 

    Experiment Configuration 

    Marketers know how complex customer journeys can be. Multiple factors — from location and device to time of the day and discount size — can impact your conversion rates. That’s why a great CRO app allows you to configure multiple tracking conditions. 

    Matomo A/B testing comes with granular controls. First of all, you can decide which percentage of total web visitors participate in any given experiment. By default, the number is set to 100%, but you can change it to any other option. 

    Likewise, you can change which percentage of traffic each variant gets in an experiment. For example, your original version can get 30% of traffic, while variants A and B receive 40% each. We also allow users to specify custom parameters for experiment participation. For example, you can show your variants only to visitors in a specific geo-location, or only to returning visitors.
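    As an illustration of how such a weighted split can work under the hood, here is a minimal sketch (not Matomo's actual implementation) that deterministically assigns visitors to variants by hashing a visitor ID, using the hypothetical 30/40/40 split from the example above:

    # Minimal sketch of weighted, deterministic variant assignment.
    # This is an illustration only, not Matomo's code; the visitor IDs,
    # experiment name and weights are hypothetical.
    import hashlib

    WEIGHTS = {"original": 30, "variant_a": 40, "variant_b": 40}  # sums to 100

    def assign_variant(visitor_id: str, experiment: str = "pricing-page") -> str:
        # Hash the visitor ID together with the experiment name so the same
        # visitor always sees the same variant, while different experiments
        # split traffic independently.
        digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100  # a number from 0 to 99
        cumulative = 0
        for variant, weight in WEIGHTS.items():
            cumulative += weight
            if bucket < cumulative:
                return variant
        return "original"  # only reached if the weights do not sum to 100

    print(assign_variant("visitor-123"))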

    Finally, you can select any type of meaningful objective to evaluate each variant's performance. With Matomo, you can either use standard website analytics metrics (e.g., total page views, bounce rate, CTR, visit duration, etc.) or custom goals (e.g., form click, asset download, eCommerce order, etc.).

    In other words: you're in charge of deciding on your campaign targeting criteria, duration and evaluation objectives.

    A free Google Optimize account comes with three main types of user targeting options:

    • Geo-targeting at city, region, metro and country levels.
    • Technology targeting by browser, OS or device type, first-party cookie, etc.
    • Behavioural targeting based on metrics like “time since first arrival” and “page referrer” (referral traffic source).

    Users can also configure other types of tracking scenarios (for example, to only serve tests to signed-in users) using condition-based rules.

    Reporting 

    Matomo and Google Optimize use different statistical models to evaluate which variation performs best.

    Matomo relies on statistical hypothesis testing, which we use to count unique visitors and report on conversion rates. We analyse all user data (with no data sampling applied), meaning you get accurate reporting, based on first-hand data, rather than deductions. For that reason, we ask users to avoid drawing conclusions before their experiment participation numbers reach a statistically significant result. Typically, we recommend running an experiment for at least several business cycles to get a comprehensive report. 
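    For intuition, the sketch below shows the kind of frequentist significance check this approach relies on, here a two-proportion z-test with made-up conversion numbers. It illustrates the general method, not Matomo's exact implementation:

    # Two-proportion z-test on hypothetical A/B results.
    # Illustrates statistical hypothesis testing in general, not Matomo's
    # exact model; all numbers are made up.
    from math import sqrt
    from statistics import NormalDist

    visitors_a, conversions_a = 5000, 400   # original
    visitors_b, conversions_b = 5000, 460   # variant

    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)

    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

    print(f"z = {z:.2f}, p-value = {p_value:.4f}")
    # Draw conclusions only once enough visitors have been observed for the
    # result to be statistically significant.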

    Google Optimize, in turn, uses Bayesian inference — a statistical method, which relies on a random sample of users to compare the performance rates of each creative against one another. While a Bayesian model generates CRO reports faster and at a bigger scale, it’s based on inferences.
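    In general terms (and without implying anything about Google's proprietary model), a Bayesian evaluation of the same hypothetical data could look like the Beta-Binomial sketch below, which estimates the probability that the variant beats the original:

    # Beta-Binomial sketch of a Bayesian A/B evaluation.
    # Illustrates the general idea of Bayesian inference only; it is not
    # Google Optimize's implementation. Numbers are made up.
    import numpy as np

    rng = np.random.default_rng(42)

    a_conversions, a_visitors = 400, 5000   # original
    b_conversions, b_visitors = 460, 5000   # variant

    # Posterior of each conversion rate under a uniform Beta(1, 1) prior.
    samples = 100_000
    posterior_a = rng.beta(1 + a_conversions, 1 + a_visitors - a_conversions, samples)
    posterior_b = rng.beta(1 + b_conversions, 1 + b_visitors - b_conversions, samples)

    prob_b_beats_a = float((posterior_b > posterior_a).mean())
    print(f"P(variant beats original) ~ {prob_b_beats_a:.3f}")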

    Model developers need to have the necessary skills to translate subjective prior beliefs about the probability of a certain event into a mathematical formula. Since Google Optimize is a proprietary tool, you cannot audit the underlying model design and verify its accuracy. In other words, you trust that it was created with the right judgement. 

    In comparison, Matomo started as an open-source project, and our source code can be audited independently by anyone at any time. 

    Another reporting difference to keep in mind is the delay before results become available. Matomo Cloud generates A/B reports within 6 hours, and Matomo On-Premise within only 1 hour. Google Optimize, in turn, requires 12 hours from the initial experiment setup to start reporting results.

    When you configure a test experiment and want to quickly verify that everything is set up correctly, this can be an inconvenience.

    User Privacy & GDPR Compliance 

    Google Optimize works in conjunction with Google Analytics, which isn't GDPR compliant.

    For all website traffic from the EU, you’re therefore obliged to show a cookie consent banner. The kicker, however, is that you can only show an Optimize experiment after the user gives consent to tracking. If the user doesn’t, they will only see an original page version. Considering that almost 40% of global consumers reject cookie consent banners, this can significantly affect your results.

    This renders Google Optimize mostly useless in the EU, since it would only let you run tests with a fraction (roughly 60%) of EU traffic — and even less if you apply any extra targeting criteria.

    In comparison, Matomo is fully GDPR compliant. Therefore, our users are legally exempt from displaying cookie-consent banners in most EU markets (with Germany and the UK being exceptions). Since Matomo A/B Testing is part of Matomo web analytics, you don't have to worry about GDPR compliance or breaches of user privacy.

    Digital Experience Intelligence 

    You can get comprehensive statistical data on variants’ performance with Google Optimize. But you don’t get further insights on why some tests are more successful than others. 

    Matomo enables you to collect more insights with two extra features:

    • User session recordings: monitor how users behave on different page versions. Observe clicks, mouse movements, scrolls, page changes and form interactions to better understand users' cumulative digital experience.
    • Heatmaps: determine which elements attract the most user attention so you can fine-tune your split tests. With a standard CRO tool, you can only assume that a certain page element matters to most users; a heatmap helps you determine it for sure.

    Both of these features are bundled into your Matomo Cloud subscription.

    Integrations 

    Both Matomo and Google Optimize integrate with multiple other tools. 

    Google Optimize has native integrations with other products in the Google marketing family — GA, Google Ads, Google Tag Manager, Google BigQuery, Accelerated Mobile Pages (AMP), and Firebase. Separately, other popular marketing apps have created custom connectors for integrating Google Optimize data.

    Matomo A/B Testing, in turn, can be combined with other web analytics and CRO features such as Funnels, Multi-Channel Attribution, Tag Manager, Form Analytics, Heatmaps, Session Recording, and more ! 

    You can also conveniently export your website analytics or CRO data using Matomo Analytics API to analyse it in another app. 
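    For example, a report can be pulled over Matomo's HTTP Reporting API as in the minimal sketch below. The base URL, site ID and token are placeholders, and VisitsSummary.get is used as a generic example method; the A/B Testing plugin exposes its own reporting methods through the same API module.

    # Minimal sketch of exporting report data via the Matomo HTTP Reporting API.
    # The URL, idSite and token_auth are placeholders. VisitsSummary.get is a
    # generic reporting method used for illustration; experiment reports are
    # exposed by the A/B Testing plugin through the same API endpoint.
    import requests

    MATOMO_URL = "https://analytics.example.com/index.php"  # placeholder

    params = {
        "module": "API",
        "method": "VisitsSummary.get",
        "idSite": 1,
        "period": "week",
        "date": "last4",
        "format": "JSON",
        "token_auth": "YOUR_TOKEN",  # placeholder
    }

    response = requests.get(MATOMO_URL, params=params, timeout=30)
    response.raise_for_status()
    print(response.json())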

    Pricing 

    Google Optimize is a free tool but has usage caps. If you want to schedule more than 5 concurrent experiments or test more than 16 variants at once, you’ll have to upgrade to Optimize 360. Optimize 360 prices aren’t listed publicly but are said to be closer to six figures per year. 

    Matomo A/B Testing is available with every Cloud subscription (starting from €19) and Matomo On-Premise users can also get A/B Testing as a plugin (starting from €199/year). In each case, there are no caps or data limits. 

    Google Optimize vs Matomo A/B Testing: Comparison Table

    Feature / capability        | Google Optimize                 | Matomo A/B Testing
    Supported channels          | Web                             | Web, mobile, email, digital campaigns
    A/B testing                 | Yes                             | Yes
    Multivariate testing (MVT)  | Yes                             | Yes
    Split URL tests             | Yes                             | Yes
    Web analytics integration   | Native with UA/GA4              | Native with Matomo (historical UA/GA3 data can also be migrated to Matomo)
    Audience segmentation       | Basic                           | Advanced
    Geo-targeting               | Yes                             | No
    Technology targeting        | Yes                             | No
    Behavioural targeting       | Basic                           | Advanced
    Reporting model             | Bayesian analysis               | Statistical hypothesis testing
    Report availability         | Within 12 hours after setup     | 6 hours for Matomo Cloud; 1 hour for Matomo On-Premise
    Heatmaps                    | No                              | Yes (included with Matomo Cloud)
    Session recordings          | No                              | Yes (included with Matomo Cloud)
    GDPR compliance             | No                              | Yes
    Support                     | Self-help desk on the free tier | Self-help guides, user forum, email
    Price                       | Free limited tier               | From €19 for a Cloud subscription; from €199/year as an On-Premise plugin

    Final Thoughts: Who Benefits the Most From an A/B Testing Tool?

    Split testing is an excellent method for validating various assumptions about your target customers. 

    With A/B testing tools, you get data-backed answers to research questions such as “How does different pricing affect purchases?”, “Which contact button placement generates more clicks?”, “Which registration form performs best with new app subscribers?” and more.

    Such insights can be game-changing when you're trying to improve your demand-generation efforts or conversion rates at the BoFU stage. But to get meaningful results from CRO tests, you need to select measurable, representative objectives.

    For example, split testing different pricing strategies for low-priced, frequently purchased products makes sense as you can run an experiment for a couple of weeks to get a statistically relevant sample. 
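    To make “statistically relevant sample” concrete, here is a rough sample-size sketch using a standard two-proportion power calculation. The baseline rate, expected lift, significance level, power and weekly traffic in the comments are all hypothetical:

    # Rough sample-size estimate for an A/B test on conversion rates.
    # Standard two-proportion power calculation; all inputs are hypothetical.
    from statistics import NormalDist

    baseline = 0.10        # current conversion rate (10%)
    expected = 0.13        # rate we hope the variant achieves (30% relative lift)
    alpha, power = 0.05, 0.80

    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)

    variance = baseline * (1 - baseline) + expected * (1 - expected)
    n_per_variant = ((z_alpha + z_beta) ** 2 * variance) / (baseline - expected) ** 2

    print(f"~{n_per_variant:.0f} visitors needed per variant")
    # At roughly 1,000 eligible visitors per variant per week, about two weeks
    # of data would be needed before the result can be trusted.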

    But if you're selling a B2B SaaS product, where the average sales cycle takes weeks (or months) to finalise and things like “time-sensitive discounts” or “one-time promos” don't really work, getting adequate CRO data will be harder.

    To see tangible results from CRO, you'll need to spend more time on test ideation than on implementation. Your team needs to figure out which elements to test, in what order, and why.

    Effective CRO tests are designed for a specific part of the funnel and assume that you’re capable of effectively identifying and tracking conversions (goals) at the selected stage. This alone can be a complex task since not all customer journeys are alike. For SaaS websites, using a goal like “free trial account registration” can be a good starting point.

    A good test also produces a meaningful difference between the proposed variant and the original version. As Nima Yassini, Partner at Deloitte Digital, rightfully argues:

    “I see people experimenting with the goal of creating an uplift. There’s nothing wrong with that, but if you’re only looking to get wins you will be crushed when the first few tests fail. The industry average says that only one in five to seven tests win, so you need to be prepared to lose most of the time”.

    In many cases, CRO tests don't provide the data you expected (e.g., people click the blue and green buttons equally). Then you need to start building your hypothesis from scratch.

    At the same time, it's easy to get caught up in optimising for “vanity metrics” — metrics that look good in a report but don't quite match your marketing objectives. For example, better email headline variations can improve your email open rates. But if users don't proceed to engage with the email content (e.g. click through to your website or use a provided discount code), your efforts are still falling short.

    That’s why developing a baseline strategy is important before committing to an A/B testing tool. Google Optimize appealed to many users because it’s free and allows you to test your split test strategy cost-effectively. 

    With its upcoming deprecation, many marketers are wary of committing to a more expensive A/B testing tool (especially when they're not fully sure about their CRO strategy and its results).

    Matomo A/B testing is a cost-effective, GDPR-compliant alternative to Google Optimize with a low learning curve and extra competitive features. 

    Discover if Matomo A/B Testing is the ideal Google Optimize alternative for your organization with our free 21-day trial. No credit card required.

  • FFMPEG Output File is Empty, Nothing was Encoded (for a Picture)?

    4 March 2023, by Sarah Szabo

    I have a strange issue affecting one of my programs that does bulk media conversions using ffmpeg from the command line; it also affects me when using ffmpeg directly from the shell:

ffmpeg -i INPUT.mkv -ss 0:30 -y -qscale:v 2 -frames:v 1 -f image2 -huffman optimal "OUTPUT.png"

    fails on every run with the error message:

Output file is empty, nothing was encoded (check -ss / -t / -frames parameters if used)

    This only happens with very specific videos, and seemingly no others. The file type is usually .webm. These files have been downloaded properly (usually with yt-dlp), and I have tried re-downloading them just to verify their integrity.

    One such file from a colleague was: https://www.dropbox.com/s/xkucr2z5ra1p2oh/Triggerheart%20Execlica%20OST%20%28Arrange%29%20-%20Crueltear%20Ending.mkv?dl=0

    Is there a subtle issue with the command string?

    Notes:

    • Removing -huffman optimal had no effect.
    • Moving -ss to before -i had no effect.
    • Removing -f image2 had no effect.
    Full Log:

sarah@MidnightStarSign:~/Music/Playlists/Indexing/Indexing Temp$ ffmpeg -i Triggerheart\ Execlica\ OST\ \(Arrange\)\ -\ Crueltear\ Ending.mkv -ss 0:30 -y -qscale:v 2 -frames:v 1 -f image2 -huffman optimal "TEST.png"
ffmpeg version n5.1.2 Copyright (c) 2000-2022 the FFmpeg developers
  built with gcc 12.2.0 (GCC)
  configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-amf --enable-avisynth --enable-cuda-llvm --enable-lto --enable-fontconfig --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libdav1d --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libiec61883 --enable-libjack --enable-libmfx --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librav1e --enable-librsvg --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-libzimg --enable-nvdec --enable-nvenc --enable-opencl --enable-opengl --enable-shared --enable-version3 --enable-vulkan
  libavutil      57. 28.100 / 57. 28.100
  libavcodec     59. 37.100 / 59. 37.100
  libavformat    59. 27.100 / 59. 27.100
  libavdevice    59.  7.100 / 59.  7.100
  libavfilter     8. 44.100 /  8. 44.100
  libswscale      6.  7.100 /  6.  7.100
  libswresample   4.  7.100 /  4.  7.100
  libpostproc    56.  6.100 / 56.  6.100
[matroska,webm @ 0x55927f484740] Could not find codec parameters for stream 2 (Attachment: none): unknown codec
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, matroska,webm, from 'Triggerheart Execlica OST (Arrange) - Crueltear Ending.mkv':
  Metadata:
    title           : TriggerHeart Exelica PS2 & 360 Arrange ー 16 - Crueltear Ending
    PURL            : https://www.youtube.com/watch?v=zJ0bEa_8xEg
    COMMENT         : https://www.youtube.com/watch?v=zJ0bEa_8xEg
    ARTIST          : VinnyVynce
    DATE            : 20170905
    ENCODER         : Lavf59.27.100
  Duration: 00:00:30.00, start: -0.007000, bitrate: 430 kb/s
  Stream #0:0(eng): Video: vp9 (Profile 0), yuv420p(tv, bt709), 720x720, SAR 1:1 DAR 1:1, 25 fps, 25 tbr, 1k tbn (default)
    Metadata:
      DURATION        : 00:00:29.934000000
  Stream #0:1(eng): Audio: opus, 48000 Hz, stereo, fltp (default)
    Metadata:
      DURATION        : 00:00:30.001000000
  Stream #0:2: Attachment: none
    Metadata:
      filename        : cover.webp
      mimetype        : image/webp
Codec AVOption huffman (Huffman table strategy) specified for output file #0 (TEST.png) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
Stream mapping:
  Stream #0:0 -> #0:0 (vp9 (native) -> png (native))
Press [q] to stop, [?] for help
Output #0, image2, to 'TEST.png':
  Metadata:
    title           : TriggerHeart Exelica PS2 & 360 Arrange ー 16 - Crueltear Ending
    PURL            : https://www.youtube.com/watch?v=zJ0bEa_8xEg
    COMMENT         : https://www.youtube.com/watch?v=zJ0bEa_8xEg
    ARTIST          : VinnyVynce
    DATE            : 20170905
    encoder         : Lavf59.27.100
  Stream #0:0(eng): Video: png, rgb24, 720x720 [SAR 1:1 DAR 1:1], q=2-31, 200 kb/s, 25 fps, 25 tbn (default)
    Metadata:
      DURATION        : 00:00:29.934000000
      encoder         : Lavc59.37.100 png
frame=    0 fps=0.0 q=0.0 Lsize=N/A time=00:00:00.00 bitrate=N/A speed=   0x    
video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Output file is empty, nothing was encoded (check -ss / -t / -frames parameters if used)


    Manjaro OS System Specs:

System:
  Kernel: 6.1.12-1-MANJARO arch: x86_64 bits: 64 compiler: gcc v: 12.2.1
  Desktop: KDE Plasma v: 5.26.5 tk: Qt v: 5.15.8 wm: kwin_x11
  Distro: Manjaro Linux base: Arch Linux
Machine:
  Type: Desktop Mobo: ASUSTeK model: PRIME X570-PRO v: Rev X.0x UEFI: American Megatrends v: 4408
Memory:
  RAM: total: 62.71 GiB used: 27.76 GiB (44.3%)
CPU:
  Info: model: AMD Ryzen 9 5950X bits: 64 type: MT MCP arch: Zen 3+ cores: 16 threads: 32
Graphics:
  Device-1: NVIDIA GA104 [GeForce RTX 3070] driver: nvidia v: 525.89.02
  Device-2: AMD Cape Verde PRO [Radeon HD 7750/8740 / R7 250E] driver: radeon v: kernel
  Display: x11 server: X.Org v: 21.1.7 with: Xwayland v: 22.1.8
Info:
  Shell: Bash v: 5.1.16 inxi: 3.3.25

(The remainder of the inxi report, covering audio, network, bluetooth, storage, partitions, USB and sensors, is omitted here.)


  • Announcing Long Term Support in Piwik 2 – The analytics platform for your mission critical projects

    11 January 2016, by Matthieu Aubry — About, Development

    We are proud to announce our Long Term Support (LTS) for Piwik 2.X!

    Why Long Term Support (LTS)?

    Part of our mission is to ready Piwik for the enterprise — and ready the enterprise for Piwik. Our fast release cycle and our ability to innovate quickly have served us well for the past seven years and have led Piwik to become one of the most popular open source projects, used by over one million websites worldwide. But Piwik's success today has also shown us that this fast release cycle is not suited to all users and customers. Like most large open source projects (such as Ubuntu, Firefox, Debian, Symfony, Node.js, etc.), at Piwik we now also offer a Long Term Support release, which gives users the confidence that Piwik can be used for mission critical projects for months to come.

    What does LTS mean for Piwik?

    For the duration of the LTS period, Piwik 2.X will continue to receive the following fixes:

    • Critical bugs causing data loss or data corruption.
    • Major and Critical security issues.

    Our goal is to offer you a Piwik LTS release that you can trust for all your mission critical projects.

    How long will Piwik 2.X be supported?

    Piwik 2.X will be supported for at least 12 months after the initial release of Piwik 3.0.0.
    Piwik 3.0.0 is expected to be released in the second half of 2016.
    This means that Piwik 2.X will be supported at least until the second half of 2017.

    Which Piwik version is LTS?

    The latest Piwik 2.16.X release is our Long Term Support version.

    How do I benefit from the LTS version?

    To get the full benefits of Piwik LTS, please make sure you are using the latest LTS version. First, update to the latest Piwik 2.X version, then configure Piwik to use the LTS release channel, and finally update to the latest LTS version.

    How do I configure Piwik to use the LTS version?

    By default, Piwik will not use the LTS version. When you use the one-click update, your Piwik instance will be updated to the very latest release: once Piwik 3.0.0 is released, the one-click update will update your instance to 3.0.0. It is, however, possible to configure Piwik so that you stay on Piwik 2.X and keep using the Long Term Support (LTS) version:

    • Log in to Piwik as the Super User,
    • Go to Settings > General > Update settings,
    • Under “Release channel” click “Latest stable 2.X Long Term Support version”, and click “Save”.

    How do I get professional Piwik support?

    If you need professional support for your Piwik service, get in touch with the Piwik experts.


    For other questions, feedback or discussion, feel free to join our forums and comment on this LTS forum post.

    We wish you all a fantastic year 2016!