
Other articles (26)

  • Automatic backup of SPIP channels

    1 April 2010

    As part of setting up an open platform, it is important for hosts to have fairly regular backups available in order to guard against any potential problem.
    This task relies on two SPIP plugins: Saveauto, which performs regular backups of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (documents, elements (...)

  • Automatic installation script for MediaSPIP

    25 April 2011

    To work around installation difficulties caused mainly by server-side software dependencies, an "all-in-one" bash installation script was created to make this step easier on a server running a compatible Linux distribution.
    You must have SSH access to your server and a "root" account in order to use it, which will allow the dependencies to be installed. Contact your host if you do not have these.
    The documentation for using the installation script (...)

  • Automated installation script of MediaSPIP

    25 April 2011

    To overcome difficulties caused mainly by server-side software dependencies, an "all-in-one" installation script written in bash was created to make this step easier on a server running a compatible Linux distribution.
    You must have SSH access to your server and a root account to use it, so that the dependencies can be installed. Contact your provider if you do not have these.
    The documentation for using this installation script is available here.
    The code of this (...)

On other sites (4238)

  • Is Google Analytics Accurate? 6 Important Caveats

    8 November 2022, by Erin

    It’s no secret that accurate website analytics is crucial for growing your online business — and Google Analytics is often the go-to source for insights. 

    But is Google Analytics data accurate? Can you fully trust the provided numbers? Here’s a detailed explainer.

    How Accurate Is Google Analytics? A Data-Backed Answer 

    When properly configured, Google Analytics (Universal Analytics and Google Analytics 4) is moderately accurate for global traffic collection. That said, Google Analytics doesn’t accurately report European traffic. 

    According to GDPR provisions, sites using GA products must display a cookie consent banner. This consent is required to collect third-party cookies — a tracking mechanism for identifying users across web properties.

    Google Analytics (GA) cannot process data about the user’s visit if they rejected cookies. In such cases, your analytics reports will be incomplete.

    Cookie rejection refers to visitors declining or blocking cookies set by a specific website (or blocking them across their browser). It immediately affects the accuracy of all metrics in Google Analytics.

    Google Analytics is not accurate in locations where cookie consent to tracking is legally required. Most consumers don’t like disruptive cookie banners or harbour concerns about their privacy — and choose to reject tracking. 

    This leaves businesses with incomplete data, which, in turn, results in: 

    • Lower traffic counts, as you’re not collecting 100% of the visitor data. 
    • Loss of website optimisation capabilities. You can’t make data-backed decisions due to inconsistent reporting.

    For the above reasons, many companies now consider cookieless website tracking apps that don’t require consent screen displays. 

    Why Is Google Analytics Not Accurate? 6 Causes and Solutions 

    A high rejection rate of cookie banners is the main reason for inaccurate Google Analytics reporting. In addition, your account settings can also hinder Google Analytics’ accuracy.

    If your analytics data looks wonky, check for these six Google Analytics accuracy problems. 

    You Need to Secure Consent for Cookie Collection 

    To be GDPR-compliant, you must display a cookie consent screen to all European users. Likewise, other jurisdictions and industries require similar measures for user data collection. 

    This is a nuisance for many businesses, since cookie rejection undermines their remarketing capabilities. Hence, some try to maximise cookie acceptance rates with dark patterns: for example, hiding the option to decline tracking or making the text too small. 

    Cookie consent banner examples
    The banner on the left doesn’t provide an evident option to reject all cookies and nudges the user to accept tracking. The banner on the right does a better job of explaining the purpose of data collection and offers a straightforward yes/no selection.

    Sadly, not everyone’s treating users with respect. A joint study by German and American researchers found that only 11% of US websites (from a sample of 5,000+) use GDPR-compliant cookie banners.

    As a result, many users aren’t aware of the background data collection to which they have (or have not) given consent. Another analysis of 200,000 cookies discovered that 70% of third-party marketing cookies transfer user data outside of the EU — a practice in breach of GDPR.

    Naturally, data regulators and activists are after this issue. In April 2022, Google was pressured to introduce a ‘reject all’ cookies button to all of its products (a €150 million compliance fine likely helped with that). Meanwhile, noyb has lodged over 220 complaints against individual websites with deceptive cookie consent banners.

    The takeaway? Messing with the cookie consent mechanism can get you into legal trouble. Don’t use sneaky banners; there are better ways to collect website traffic statistics. 

    Solution: Try Matomo GDPR-Friendly Analytics 

    Fill in the gaps in your traffic analytics with Matomo – a fully GDPR-compliant product that doesn’t rely on third-party cookies for tracking web visitors. Because of how it is designed, the French data protection authority (CNIL) confirmed that Matomo can be used to collect data without tracking consent.

    With Matomo, you can track website users without asking for cookie consent. And when you do need a banner, we supply a compact, compliant, non-disruptive cookie banner design. 
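
    As a rough sketch of what this looks like in practice, Matomo’s standard JavaScript tracker can be switched to cookieless mode with its documented disableCookies call; the tracker URL and site ID below are placeholders for your own Matomo instance:

    var _paq = window._paq = window._paq || [];
    _paq.push(['disableCookies']);   // cookieless mode: no cookies set, so no consent banner is needed
    _paq.push(['trackPageView']);
    _paq.push(['enableLinkTracking']);
    (function () {
      var u = 'https://analytics.example.com/';   // placeholder: your Matomo URL
      _paq.push(['setTrackerUrl', u + 'matomo.php']);
      _paq.push(['setSiteId', '1']);              // placeholder: your site ID
      var d = document, g = d.createElement('script'), s = d.getElementsByTagName('script')[0];
      g.async = true; g.src = u + 'matomo.js';
      s.parentNode.insertBefore(g, s);
    })();

    Note that disableCookies must be pushed before trackPageView; the trade-off is that returning visitors are recognised less precisely.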

    Your Google Tag Isn’t Embedded Correctly 

    Google Tag (gtag.js) is a web tracking script that sends data to your Google Analytics, Google Ads and Google Marketing Platform.

    A corrupted gtag.js installation can create two accuracy issues: 

    • Duplicate page tracking 
    • Missing script installation 

    Is there a way to tell if you’re affected?

    Yes. You may have duplicate scripts installed if most of your website pages show a very low bounce rate (below 15%–20%). This can happen if you’re using a WordPress GA plugin and additionally embed gtag.js directly in your website code. 
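
    A quick way to check the duplicate-script case (assuming your pages load gtag.js from the usual googletagmanager.com URL) is to count the tracking scripts from the browser console:

    // Run in the browser console on a suspect page. A result greater than 1
    // means gtag.js is embedded more than once, e.g. once by a WordPress
    // plugin and once pasted into the theme by hand.
    document.querySelectorAll('script[src*="googletagmanager.com/gtag/js"]').length;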

    A tell-tale sign of a missing script on some pages is low or no traffic stats. Google alerts you about this with a banner: 

    Google Analytics alerts

    Solution: Use Available Troubleshooting Tools 

    Use the Google Analytics Debugger extension to analyse pages with low bounce rates. Use the search bar to locate duplicate tracking-code elements. 

    Alternatively, you can use Google Tag Assistant to diagnose snippet installation and troubleshoot issues on individual pages. 

    If the above didn’t work, re-install your analytics script.

    Machine Learning and Blended Data Are Applied

    Google Analytics 4 (GA4) relies a lot on machine learning and algorithmic predictions.

    By applying Google’s advanced machine learning models, the new Analytics can automatically alert you to significant trends in your data. [...] For example, it calculates churn probability so you can more efficiently invest in retaining customers.

    On the surface, the above sounds exciting. In practice, Google’s application of predictive algorithms means you’re not seeing actual data. 

    To offer a variation of cookieless tracking, Google algorithms close the gaps in reporting by creating models (i.e., data-backed predictions) instead of reporting on actual user behaviours. Therefore, your GA4 numbers may not be accurate.

    For bigger web properties (think websites with 1+ million users), Google also relies on data sampling — the practice of extrapolating analytics from a data subset rather than the entire dataset. Once again, this can lead to inconsistencies in reporting, with some numbers (e.g., average conversion rates) inflated or downplayed. 

    Solution: Try an Alternative Website Analytics App 

    Unlike GA4, Matomo reports consist of 100% unsampled data. All the aggregated reporting you see is based on real user data (not guesstimation). 

    Moreover, you can migrate from Universal Analytics (UA) to Matomo without losing access to your historical records. GA4 doesn’t yet have any backward compatibility.

    Spam and Bot Traffic Isn’t Filtered Out 

    Surprise! 42% of all Internet traffic is generated by bots, of which 27.7% are bad ones.

    Good bots (aka crawlers) do essential web “housekeeping” tasks like indexing web pages. Bad bots distribute malware, spam contact forms, hack user accounts and do other nasty stuff. 

    A lot of these spam bots are designed specifically for web analytics apps. The goal? Flood your dashboard with bogus data in hopes of getting some return action from your side. 

    Types of Google Analytics Spam:

    • Referral spam. Spambots hijack the referrer displayed in your GA referral traffic report to fake a page visit from some random website (one that didn’t actually occur). 
    • Event spam. Bots generate fake events with free-text entries enticing you to visit their website. 
    • Ghost traffic spam. Malicious parties can also inject fake pageviews containing URLs that they want you to click. 

    Obviously, such spammy entities distort the real website analytics numbers. 

    Solution: Set Up Bot/Spam Filters 

    Google Analytics 4 has automatic filtering of bot traffic enabled for all tracked Web and App properties. 

    But if you’re using Universal Analytics, you’ll have to configure spam filtering manually. First, create a new view, then set up a custom filter. Program it to exclude:

    • Filter Field: Request URI
    • Filter Pattern: Bot traffic URL

    Once you’ve configured everything, validate the results using the Verify this filter feature. Then repeat the process for other fishy URLs, hostnames and IP addresses. 

    You Don’t Filter Internal Traffic 

    Your team(s) spend a lot of time on your website — and their sporadic behaviours can distort your traffic counts and other website metrics.

    To keep your data “employee-free”, exclude traffic from: 

    • Your corporate IP addresses 
    • Known personal IPs of employees (for remote workers) 

    If you also run a separate staging version of your website, filter out all traffic coming from it as well. Your developers, contractors and marketing people spend a lot of time fiddling with your website, and this can cause a big discrepancy in average time on page and engagement rates. 

    Solution: Set Internal Traffic Filters 

    Google provides instructions for excluding internal traffic from your reports using IPv4/IPv6 address filters. 

    Google Analytics IP filters
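
    If IP filters are impractical for some of your staff (e.g., remote workers on dynamic addresses), GA4 can also exclude hits tagged with the traffic_type parameter that its internal-traffic data filters key on. The sketch below is only an illustration under assumptions: isInternalUser() is a hypothetical helper you would implement yourself, and the measurement ID is a placeholder.

    // Sketch: label hits from recognised staff as internal traffic so a
    // GA4 data filter can exclude them. isInternalUser() is hypothetical
    // (e.g., it could check for an intranet-issued cookie).
    if (isInternalUser()) {
      gtag('set', { traffic_type: 'internal' });
    }
    gtag('config', 'G-XXXXXXXXXX');   // placeholder measurement ID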

    Session Timeouts After 30 Minutes 

    After 30 minutes of inactivity, Google Analytics tracking sessions start over. Inactivity means no recorded interaction hits during this time. 

    Session timeouts can be a problem for some websites, as users often pin a tab to come back to it later. Because of this, you can count the same user twice or more — and this leads to skewed reporting. 

    Solution: Configure Custom Session Timeouts

    You can customise the relevant cookie lifetimes with a code snippet along the lines of the sketch below: 
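
    The article’s original snippets aren’t reproduced here, so this is a sketch rather than the exact code: gtag.js accepts a documented cookie_expires setting (in seconds) that controls how long the GA client-ID cookie lives, while the 30-minute session window itself is changed in the property’s admin settings rather than in code. The measurement ID is a placeholder.

    // Sketch: keep the GA client-ID cookie for 28 days (value in seconds).
    gtag('config', 'G-XXXXXXXXXX', {
      cookie_expires: 28 * 24 * 60 * 60
    });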

    Final Thoughts 

    Thanks to its scale and longevity, Google Analytics has some strong points, but its data accuracy isn’t 100% perfect.

    The inability to capture analytics data from users who don’t consent to cookie tracking and data sampling applied to bigger web properties may be a deal-breaker for your business. 

    If that’s the case, try Matomo — a GDPR-compliant, accurate web analytics solution. Start your 21-day free trial now. No credit card required.

  • What's the most desirable way to capture system display and audio in the form of individual encoded audio and video packets in Go (language)? [closed]

    11 January 2023, by Tiger Yang

    Question (read the context below first):

    For those of you familiar with the capabilities of Go, is there a better way to go about all this? Since ffmpeg is so ubiquitous, I'm sure it's been optimized to perfection, but what's the best way to capture system display and audio in the form of individual encoded audio and video packets in Go, so that they can then be sent via webtransport-go? I wish for it to prioritize efficiency and low latency, and ideally capture and encode the framebuffer directly like ffmpeg does.

    Thanks! I have many other questions about this, but I think it's best to ask as I go.

    Context and what I've done so far:

    I'm writing remote desktop software for my personal use because of grievances with current solutions out there. At the moment, it consists of a web app that uses the WebTransport API to send input datagrams and receive AV packets on two dedicated unidirectional streams, and the WebCodecs API to decode these packets. On the server side, I originally planned to use Python with the aioquic library as a WebTransport server. Upon connection and authentication, the server would start ffmpeg as a subprocess with this command:

    ffmpeg -init_hw_device d3d11va -filter_complex ddagrab=video_size=1920x1080:framerate=60 -vcodec hevc_nvenc -tune ll -preset p7 -spatial_aq 1 -temporal_aq 1 -forced-idr 1 -rc cbr -b:v 400K -no-scenecut 1 -g 216000 -f hevc -

    What I really appreciate about this is that it uses Windows' Desktop Duplication API to copy the framebuffer of my GPU and hand that directly to the on-die hardware encoder, with zero round trips to the CPU. I think it's about as efficient and elegant a solution as I can manage. It then outputs the encoded stream to stdout, which Python can read and send to the client.

    As for the audio, there is another ffmpeg instance:

    ffmpeg -f dshow -channels 2 -sample_rate 48000 -sample_size 16 -audio_buffer_size 15 -i audio="RD Audio (High Definition Audio Device)" -acodec libopus -vbr on -application audio -mapping_family 0 -apply_phase_inv true -b:a 25K -fec false -packet_loss 0 -map 0 -f data -

    which listens to a physical loopback interface, which is literally just a short wire bridging the front-panel headphone and microphone jacks (I'm aware of the quality loss of converting to analog and back, but the audio is then crushed down to 25 kbps so it's fine).

    Unfortunately, aioquic was not easy to work with IMO, and I found webtransport-go (https://github.com/adriancable/webtransport-go), which was a hell of a lot better in both simplicity and documentation. However, now I'm dealing with a whole new language, and I wanna ask the question above.

    EDIT: Here's the code for my server so far:

    package main

import (
    "bytes"
    "context"
    "fmt"
    "log"
    "net/http"
    "os/exec"
    "time"

    "github.com/adriancable/webtransport-go"
)

func warn(str string) {
    fmt.Printf("\n===== WARNING ===================================================================================================\n   %s\n=================================================================================================================\n", str)
}

func main() {

    password := []byte("abc")

    videoString := []string{
        "ffmpeg",
        "-init_hw_device", "d3d11va",
        "-filter_complex", "ddagrab=video_size=1920x1080:framerate=60",
        "-vcodec", "hevc_nvenc",
        "-tune", "ll",
        "-preset", "p7",
        "-spatial_aq", "1",
        "-temporal_aq", "1",
        "-forced-idr", "1",
        "-rc", "cbr",
        "-b:v", "500K",
        "-no-scenecut", "1",
        "-g", "216000",
        "-f", "hevc", "-",
    }

    audioString := []string{
        "ffmpeg",
        "-f", "dshow",
        "-channels", "2",
        "-sample_rate", "48000",
        "-sample_size", "16",
        "-audio_buffer_size", "15",
        "-i", "audio=RD Audio (High Definition Audio Device)",
        "-acodec", "libopus",
        "-mapping_family", "0",
        "-b:a", "25K",
        "-map", "0",
        "-f", "data", "-",
    }

    connected := false

    http.HandleFunc("/", func(writer http.ResponseWriter, request *http.Request) {
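        // webtransport-go delivers the established WebTransport session to the handler via request.Body.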
        session := request.Body.(*webtransport.Session)

        session.AcceptSession()
        fmt.Println("\nAccepted incoming WebTransport connection.")
        fmt.Println("Awaiting authentication...")

        authData, err := session.ReceiveMessage(session.Context()) // Waits here till first datagram
        if err != nil {                                            // if client closes connection before sending anything
            fmt.Println("\nConnection closed:", err)
            return
        }

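        // The first two bytes are a datagram header; everything after them is checked against the password.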
        if len(authData) >= 2 && bytes.Equal(authData[2:], password) {
            if connected {
                session.CloseSession()
                warn("Client has authenticated, but a session is already taking place! Connection closed.")
                return
            } else {
                connected = true
                fmt.Println("Client has authenticated!\n")
            }
        } else {
            session.CloseSession()
            warn("Client has failed authentication! Connection closed. (" + string(authData[2:]) + ")")
            return
        }

        videoStream, _ := session.OpenUniStreamSync(session.Context())

        videoCmd := exec.Command(videoString[0], videoString[1:]...)
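        // Pipe ffmpeg's encoded video from its stdout onto the dedicated video stream; the audio goroutine below follows the same pattern.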
        go func() {
            videoOut, _ := videoCmd.StdoutPipe()
            videoCmd.Start()

            buffer := make([]byte, 15000)
            for {
                n, err := videoOut.Read(buffer)
                if err != nil {
                    break
                }
                if n > 0 {
                    videoStream.Write(buffer[:n])
                }
            }
        }()

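        // Give the video stream a brief head start before opening the audio stream.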
        time.Sleep(50 * time.Millisecond)

        audioStream, err := session.OpenUniStreamSync(session.Context())

        audioCmd := exec.Command(audioString[0], audioString[1:]...)
        go func() {
            audioOut, _ := audioCmd.StdoutPipe()
            audioCmd.Start()

            buffer := make([]byte, 15000)
            for {
                n, err := audioOut.Read(buffer)
                if err != nil {
                    break
                }
                if n > 0 {
                    audioStream.Write(buffer[:n])
                }
            }
        }()

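        // Read input datagrams until the client disconnects, then tear down both ffmpeg processes.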
        for {
            data, err := session.ReceiveMessage(session.Context())
            if err != nil {
                videoCmd.Process.Kill()
                audioCmd.Process.Kill()

                connected = false

                fmt.Println("\nConnection closed:", err)
                break
            }

            if len(data) == 0 {
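                // Nothing to do for empty datagrams.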

            } else if data[0] == byte(0) {
                fmt.Printf("Received mouse datagram: %s\n", data)
            }
        }

    })

    server := &webtransport.Server{
        ListenAddr: ":1024",
        TLSCert:    webtransport.CertFile{Path: "SSL/fullchain.pem"},
        TLSKey:     webtransport.CertFile{Path: "SSL/privkey.pem"},
        QuicConfig: &webtransport.QuicConfig{
            KeepAlive:      false,
            MaxIdleTimeout: 3 * time.Second,
        },
    }

    fmt.Println("Launching WebTransport server at", server.ListenAddr)
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel() // log.Fatal exits the process, so cancel() after it would never run
    if err := server.Run(ctx); err != nil {
        log.Fatal(err)
    }

}

  • Announcing our latest open source project: DeviceDetector

    30 July 2014, by Stefan Giehl — Community, Development, Meta, DeviceDetector

    This blog post is an announcement of our latest open source project release: DeviceDetector! The Universal Device Detection library will parse any User Agent and detect the browser, operating system, device used (desktop, tablet, mobile, TV, car, console, etc.), brand and model.

    Read on to learn more about this exciting release.

    Why did we create DeviceDetector?

    Our previous library, UserAgentParser, could only detect operating systems and browsers. But as more and more traffic comes from mobile devices like smartphones and tablets, it is getting more and more important to know which devices a website’s visitors use.

    To ensure that device detection within Piwik gets the attention required to be as accurate as possible, we decided to move that part of Piwik into a separate project that we will maintain independently. As a standalone project, we hope DeviceDetector will gain better visibility as well as better support by and for the community!

    DeviceDetector is hosted on GitHub at piwik/device-detector. It is also available as a Composer package through Packagist (composer require piwik/device-detector).

    How DeviceDetector works

    Every client requesting data from a webserver identifies itself by sending a so-called User Agent within the request. A User Agent may contain several pieces of information, such as:

    • client name and version (clients can be browsers or other software like feed readers, media players, apps,…)
    • operating system name and version
    • device identifier, which can be used to detect the brand and model.

    For example:

    Mozilla/5.0 (Linux; Android 4.4.2; Nexus 5 Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.99 Mobile Safari/537.36

    This User Agent contains the following information:

    The operating system is Android 4.4.2, the client is the browser Chrome Mobile 32.0.1700.99, and the device is a Google Nexus 5 smartphone.

    What DeviceDetector currently detects

    DeviceDetector is able to detect:

    • bots, such as search engines, feed fetchers and site monitors
    • five different client types, including around 100 browsers, 15 feed readers, some media players, personal information managers (such as mail clients) and mobile apps using the AFNetworking framework
    • around 80 operating systems
    • nine different device types (smartphones, tablets, feature phones, consoles, TVs, car browsers, cameras, smart displays and desktop devices) from over 180 brands

    Note: Piwik itself currently does not use the full feature set of DeviceDetector. Client detection is currently not implemented in Piwik (only detected browsers are reported; other clients are marked as Unknown). Client detection will be implemented in Piwik in the future; follow #5413 to stay updated.

    Performance of DeviceDetector

    Detection is currently handled by an enormous number of regexes defined in several .yml files. As parsing these files is a bit slow, DeviceDetector can cache the parsed results. By default, DeviceDetector uses a static cache, which means that everything is cached in static variables. As that only speeds up repeated detections within one process, there are also adapters for caching in files or memcache to speed up detections across requests.

    How can users help contribute to DeviceDetector?

    Submit your devices that are not detected yet

    If you own a device that is currently not correctly detected by DeviceDetector, please create an issue on GitHub.
    To check whether your device is detected correctly, go to your Piwik server, click the ‘Settings’ link, then click ‘Device Detection’ under the Diagnostic menu. If the data does not match, please copy the displayed User Agent and use it, together with your device data, to create the ticket.

    Submit a list of your User Agents

    In order to create new detections or improve existing ones, we need lists of User Agents. If your website is visited mostly from non-desktop devices, it would be useful if you sent us a list of the User Agents that visited it. To do so, you need access to your access logs. The following command will extract the User Agents (it assumes Apache’s combined log format, where the User Agent is the sixth field when the line is split on double quotes):

    zcat ~/path/to/access/logs* | awk -F'"' '{print $6}' | sort | uniq -c | sort -rn | head -n20000 > /home/piwik/top-user-agents.txt

    If you want to help us with this data, please get in touch at devicedetector@piwik.org.

    Submit improvements on GitHub

    As DeviceDetector is a free/libre library, we invite you to help us improve the detections as well as the code. Please feel free to create tickets and pull requests on GitHub.

    What’s the next big thing for DeviceDetector?

    Please check out the list of issues in device-detector issue tracker.

    We hope the community will answer our call for help. Together, we can make DeviceDetector the most powerful device detection library!

    Happy Device Detection,