Other articles (30)

  • User profiles

    12 April 2011

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialised, visible only when the visitor is logged in on the site.
    The user can reach the profile editor from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

  • Permissions overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors can edit their information on the author page

  • Customising categories

    21 June 2013

    The category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not shown by default are: Descriptif rapide (short description)
    It is also in this configuration section that you can specify the (...)

On other sites (5898)

  • What Are Website KPIs (10 KPIs and Best Ways to Track Them)

    3 May 2024, by Erin

    Trying to improve your website’s performance?

    Have you ever heard the phrase, “What gets measured gets managed?”

    To improve, you need to start crunching your numbers.

    The question is, what numbers are you supposed to track?

    If you want to improve your conversions, then you need to track your website KPIs.

    In this guide, we’ll break down the top website KPIs you need to be tracking and how you can track them so you can double down on what’s working with your website (and ditch what’s not).

    Let’s begin.

    What are website KPIs?

    Before we dive into website KPIs, let’s define “KPI.”

    A KPI is a key performance indicator.

    You can use this measurable metric to track progress toward a specific objective.

    A website KPI is a metric to track progress towards a specific website performance objective.

    Website KPIs help your business identify strengths and weaknesses on your website: the activities you’re doing well (and those you’re struggling with).

    Web KPIs can give you and your team a target to reach with simple checkpoints to show you whether you’re on the right track toward your goals.

    By tracking website KPIs regularly, you can ensure your organisation performs consistently at a high level.

    Whether you’re looking to improve your traffic, leads or revenue, keeping a close eye on your website KPIs can help you reach your goals.

    10 Website KPIs to track

    If you want to improve your site’s performance, you need to track the right KPIs.

    While there are plenty of web analytics solutions on the market today, below we’ll cover KPIs that are automatically tracked in Matomo (and don’t require any configuration).

    Here are the top 10 website KPIs you need to track to improve site performance and grow your brand:

    1. Pageviews

    Website pageviews are one of the most important KPIs to track.

    What is it exactly?

    It’s simply the number of times a specific web page has been viewed on your site in a specific time period.

    For example, your homepage might have had 327 pageviews last month, and only 252 this month. 

    This is a drop of 23%. 

    A drop in pageviews could mean your search engine optimisation or traffic campaigns are weakening. Alternatively, if you see pageviews rise, it could mean your marketing initiatives are performing well.

    High or low pageviews could also indicate potential issues on specific pages. For example, your visitors might have trouble finding specific pages if you have poor website structure.

    [Screenshot: example of the Matomo dashboard]

    2. Average time on page

    Now that you understand pageviews, let’s talk about average time on page.

    This is simple: it’s the average amount of time your visitors spend on a particular web page on your site.

    This isn’t the average time they spend on your website but on a specific page.

    If you’re finding that you’re getting steady traffic to a specific web page, but the average time on the page is low, it may mean the content on the page needs to be updated or optimised.

    Tracking your average time on page is important, as the longer someone stays on a page, the better the experience.

    This isn’t a hard and fast rule, though. For specific types of content like knowledge base articles, you may want a shorter period of time on page to ensure someone gets their answer quickly.

    3. Bounce rate

    Bounce rate sounds fun, right?

    Well, it’s not usually a good thing for your website.

    Bounce rate is the percentage of users who entered your website but “bounced” away without clicking through to another page.

    Your bounce rate is a key KPI that helps you determine the quality of your content and the user experience on individual pages.

    You could be getting plenty of traffic to your site, but if the majority are bouncing out before heading to new pages, it could mean that your content isn’t engaging enough for your visitors.

    Remember, like average time on page, your bounce rate isn’t a black-and-white KPI.

    A higher bounce rate may mean your site visitors got exactly what they needed and are pleased.

    But, if you have a high bounce rate on a product page or a landing page, that is a sign you need to optimise the page.

    4. Exit rate

    Bounce rate is the percentage of people who left the website after visiting one page.

    Exit rate, on the other hand, is the percentage of website visits that ended on a specific page.

    For example, you may find that a blog post you wrote has a 19% exit rate and received 1,000 visits that month. This means out of the 1,000 people who viewed this page, 190 exited after visiting it.

    On the other hand, you may find that a second blog post has 1,000 pageviews, but a 10% exit rate, with only 100 people leaving the site after visiting this page.

    What could this mean?

    This means the second page did a better job keeping the person on your website longer. This could be because:

    • It had more engaging content, keeping the visitors’ interest high
    • It had better internal links to other relevant pieces of content
    • It had a better call to action, taking someone to another web page

    If you’re an e-commerce store and notice that your exit rate is higher on your product, cart or checkout pages, you may need to adjust those pages for better conversions.
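
    To make the arithmetic concrete, here is a minimal Python sketch using the hypothetical figures from the two example posts above:

        # Exit rate: the share of views of a page that ended the visit there.
        def exit_rate(exits, pageviews):
            return exits / pageviews * 100

        print(exit_rate(190, 1000))  # 19.0 (%): the first blog post
        print(exit_rate(100, 1000))  # 10.0 (%): the second blog post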

    [Screenshot: exit rates for the "diving" and "products" pages]

    5. Average page load time

    Want to know another reason you may have a high exit rate or bounce rate on a page?

    Your page load time.

    The average page load time is the average time it takes (in seconds) from the moment you click through to a page until it has fully rendered within your browser.

    In other words, it’s the time it takes after you click on a page for it to be fully functional.

    Your average load time is a crucial website KPI because it significantly impacts page performance and the user experience.

    How important is your page load time?

    Nearly 53% of website visitors expect e-commerce pages to load in 3 seconds or less.

    You will likely lose visitors if your pages take too long to load.

    You could have the best content on a web page, but if it takes too long to load, your visitors will bounce, exit, or simply be frustrated.

    6. Conversions

    Conversions.

    It’s one of the most popular words in digital marketing circles.

    But what does it mean?

    A conversion is simply the number of times someone takes a specific action on your website.

    For example, it could be getting someone to:

    • Read a blog post
    • Click an external link
    • Download a PDF guide
    • Sign up to your email list
    • Comment on your blog post
    • Watch a new video you uploaded
    • Purchase a limited-edition product
    • Sign up for a free trial of your software

    To start tracking conversions, you first need to decide what your business goals are for your website.

    With Matomo, you can set up conversions easily through the Goals feature. Define your website goals, and Matomo will automatically track conversions towards each objective (as goal completions).

    Choose which conversion you want to track, and you can then analyse when conversions occur in the Matomo platform.
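
    If you prefer to pull those numbers programmatically, goal conversions are also exposed through Matomo’s Reporting HTTP API. Here is a minimal Python sketch; the instance URL, site ID and token are placeholders to replace with your own:

        import requests

        # Ask the Matomo Reporting API for this month's goal conversions.
        params = {
            "module": "API",
            "method": "Goals.get",            # aggregate goal metrics
            "idSite": 1,                      # placeholder site ID
            "period": "month",
            "date": "today",
            "format": "JSON",
            "token_auth": "YOUR_TOKEN_AUTH",  # placeholder token
        }
        resp = requests.get("https://matomo.example.org/index.php", params=params)
        resp.raise_for_status()
        report = resp.json()
        print(report["nb_conversions"], "conversions this month")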

    7. Conversion rate

    [Screenshot: a graph showing evolution over a set period]

    Now that you know what a conversion is, it’s time to talk about conversion rate.

    This key website KPI will help you analyse your performance towards your goals.

    Conversion rate is simply the percentage of visitors who take a desired action, like completing a purchase, signing up for a newsletter, or filling out a form, out of the total number of visitors to your website or landing page.

    Understanding this percentage can help you plan your marketing strategy to improve your website and business performance.

    For instance, let’s say that 2% of your website visitors purchase a product on your digital storefront.

    Knowing this, you could tweak different levers to increase your sales.

    If your average order value is $50 and you get 100,000 visits monthly, that 2% conversion rate earns you about $100,000 a month (100,000 visits x 2% x $50).

    Let’s say you want to increase your revenue.

    One option is to increase your traffic by implementing campaigns to increase different traffic sources, such as social media ads, search ads, organic social traffic, and SEO.

    If you can get your traffic to 120,000 visitors monthly, you can increase your revenue to $120,000 — an additional $20,000 monthly for the extra 20,000 visits.

    Alternatively, you could leave traffic where it is and simply improve your website with conversion rate optimisation (CRO).

    CRO is the practice of making changes to your website or landing page to encourage more visitors to take the desired action.

    If you can get your conversion rate up to 2.5%, the calculation looks like this:

    100,000 visits x $50 average order value x 2.5% = $125,000/month.
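
    Both levers are easy to sanity-check in a few lines of Python, using the figures from this example:

        def monthly_revenue(visits, conversion_rate, avg_order_value):
            return visits * conversion_rate * avg_order_value

        print(monthly_revenue(100_000, 0.02, 50))   # 100000.0: the baseline
        print(monthly_revenue(120_000, 0.02, 50))   # 120000.0: more traffic
        print(monthly_revenue(100_000, 0.025, 50))  # 125000.0: better conversion rate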

    8. Average time spent on forms

    If you want more conversions, you need to analyse forms.

    Why?

    Form analysis is crucial because it helps you pinpoint where users might be facing obstacles. 

    By identifying these pain points, you can refine the form’s layout and fields to enhance the user experience, leading to higher conversion rates.

    In particular, you should track the average time spent on your forms to understand which ones might be causing frustration or confusion. 

    The average time a visitor spends on a form is calculated by measuring the duration between their first interaction with a form field (such as when they focus on it) and their final interaction.

    Find out how Concrete CMS tripled their leads using Form Analytics.

    9. Play rate

    One often overlooked website KPI you need to be tracking is play rate.

    What is it exactly?

    It’s the percentage of visitors who click “play” on a video or audio player embedded on a specific web page.

    For example, if you have a video on your homepage, and 50 people watched it out of the 1,000 people who visited your website today, you have a play rate of 5%.

    Play rate lets you track whenever someone consumes a particular piece of audio or video content on your website, like a video, podcast, or audiobook.

    Not all web analytics solutions offer media analytics. However, Matomo lets you track your media like audio and video without the need for configuration, saving you time and upkeep.

    10. Actions per visit

    Another crucial website KPI is actions per visit.

    This is the average number of interactions a visitor has with your website during a single visit.

    For example, someone may visit your website, resulting in a variety of actions:

    • Downloading content
    • Clicking external links
    • Visiting a number of pages
    • Conducting specific site searches

    Actions per visit is a core KPI that indicates how engaging your website and content are.

    The higher the actions per visit, the more engaged your visitors typically are, which can help them stay longer and eventually convert to paying customers.

    Track your website KPIs with Matomo today

    Running a website is no easy task.

    There are dozens of factors to consider and manage:

    • Copy
    • Design
    • Performance
    • Tech integrations
    • And more

    But, to improve your website and grow your business, you must also dive into your web analytics by tracking key website KPIs.

    Managing these metrics can be challenging, but Matomo simplifies the process by consolidating all your core KPIs into one easy-to-use platform.

    As a privacy-friendly and GDPR-compliant web analytics solution, Matomo tracks 20-40% more data than other solutions. So you gain access to 100% accurate, unsampled insights, enabling confident decision-making.

    Join over 1 million websites that trust Matomo as their web analytics solution. Try it free for 21 days — no credit card required.

  • avformat/dvdvideodec: fix seeking on multi-angle discs

    1 February, by Marth64
    avformat/dvdvideodec: fix seeking on multi-angle discs
    

    When seeking on multi-angle titles, libdvdnav does not lock on
    to the correct sectors initially as it seeks to find the right NAV packet.

    This manifests itself as two bugs:
    (1) When seeking on the first angle in a multi-angle segment,
    frames from another angle will appear (for example in intro
    or credits scenes). This issue is present in VLC also.

    (2) When seeking during a segment on angle n+1, the demuxer
    cannot deduce the right position from dvdnav and does not allow
    seeking within the segment (due to it maintaining a strict state).

    Correct the issue by switching to angle 1 before doing the seek
    operation, and skipping 3 VOBUs (NAV packet led segments) ahead
    where dvdnav will have positioned itself correctly.

    Reported-by: Kacper Michajlow <kasper93@gmail.com>
    Signed-off-by: Marth64 <marth64@proxyid.net>

    • [DH] libavformat/dvdvideodec.c
  • Processing Big Data Problems

    8 January 2011, by Multimedia Mike — Big Data

    I’m becoming more interested in big data problems, i.e., extracting useful information out of absurdly sized sets of input data. I know it’s a growing field and there is a lot to read on the subject. But you know how I roll— just think of a problem to solve and dive right in.

    Here’s how my adventure unfolded.

    The Corpus
    I need to run a command line program on a set of files I have collected. This corpus is on the order of 350,000 files. The files range from 7 bytes to 175 MB. Combined, they occupy around 164 GB of storage space.

    Oh, and said storage space resides on an external, USB 2.0-connected hard drive. Stop laughing.

    A file is named according to the SHA-1 hash of its data. The files are organized in a directory hierarchy according to the first 6 hex digits of the SHA-1 hash (e.g., a file named a4d5832f... is stored in a4/d5/83/a4d5832f...). All of this file hash, path, and size information is stored in an SQLite database.
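
    For illustration, here is a small Python helper that maps a file’s contents to its storage path under that convention (the function name is mine):

        import hashlib
        import os

        def corpus_path(data):
            """Return the relative storage path for a file's contents."""
            digest = hashlib.sha1(data).hexdigest()
            # The first 6 hex digits become 3 directory levels: a4/d5/83/a4d5832f...
            return os.path.join(digest[0:2], digest[2:4], digest[4:6], digest)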

    First Pass
    I wrote a Python script that read all the filenames from the database, fed them into a pool of worker processes using Python’s multiprocessing module, and wrote some resulting data for each file back to the SQLite database. My Eee PC has a single-core, hyperthreaded Atom which presents 2 CPUs to the system. Thus, 2 worker processes crunched the corpus. It took a while. It took somewhere on the order of 9 or 10 or maybe even 12 hours. It took long enough that I’m in no hurry to re-run the test and get more precise numbers.
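
    The shape of that script was roughly as follows. This is only a sketch: process_file() stands in for the real per-file work, the database name and schema are hypothetical, and all SQLite writes happen in the parent process:

        import sqlite3
        from multiprocessing import Pool

        def process_file(path):
            # Stand-in for the real per-file analysis.
            with open(path, "rb") as f:
                data = f.read()
            return path, len(data)

        def main():
            db = sqlite3.connect("corpus.sqlite")   # hypothetical DB name
            paths = [row[0] for row in db.execute("SELECT path FROM files")]
            with Pool(processes=2) as pool:         # 2 CPUs on the Atom
                results = pool.imap_unordered(process_file, paths)
                for i, (path, result) in enumerate(results):
                    db.execute("UPDATE files SET result = ? WHERE path = ?",
                               (result, path))
                    if i % 100 == 99:               # commit every 100 UPDATEs
                        db.commit()
            db.commit()

        if __name__ == "__main__":
            main()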

    At least I extracted my initial set of data from the corpus. Or did I?

    Think About The Future

    A few days later, I went back to revisit the data only to notice that the SQLite database was corrupted. To add insult to that bit of injury, the script I had written to process the data was also completely corrupted (overwritten with something unrelated to Python code). BTW, this was on a RAID brick configured for redundancy. So that’s strike 3 in my personal dealings with RAID technology.

    I moved the corpus to a different external drive and also verified the files after writing (easy to do since I already had the SHA-1 hashes on record).

    The corrupted script was pretty simple to rewrite, even a little better than before. Then I got to re-run it. However, this run was on a faster machine, a hyperthreaded, quad-core beast that exposes 8 CPUs to the system. The reason I wasn’t too concerned about the poor performance with my Eee PC is that I knew I was going to be able to run it on this monster later.

    So I let the rewritten script rip. The script gave me little updates regarding its progress. As it did so, I ran some rough calculations and realized that it wasn’t predicted to finish much sooner than it would have if I were running it on the Eee PC.

    Limiting Factors
    It had been suggested to me that I/O bandwidth of the external USB drive might be a limiting factor. This is when I started to take that idea very seriously.

    The first idea I had was to move the SQLite database to a different drive. The script records data to the database for every file processed, though it only commits once every 100 UPDATEs, so at least it’s not constantly syncing the disk. I ran before-and-after tests with a small subset of the corpus and noticed a substantial speedup thanks to this policy change.

    Then I remembered hearing something about "atime", which is access time. Linux filesystems, by default, record the time that a file was last accessed. You can watch this in action by running 'stat <file>; cat <file> > /dev/null; stat <file>' and observing that the "Access" field has been updated to NOW(). This also means that every single file that gets read from the external drive still causes an additional write. To avoid this, I started mounting the external drive with '-o noatime', which instructs Linux not to record "last accessed" times for files.

    On the limited subset test, this more than doubled script performance. I then wondered about mounting the external drive as read-only. This had the same performance as noatime. I thought about using both options together but verified that access times are not updated for a read-only filesystem.

    A Note On Profiling
    Once you start accessing files in Linux, those files start getting cached in RAM. Thus, if you profile, say, reading a gigabyte file from a disk and get 31 MB/sec, and then repeat the same test, you’re likely to see the test complete instantaneously. That’s because the file is already sitting in memory, cached. This is useful in general application use, but not if you’re trying to profile disk performance.

    Thus, in between runs, do (as root) 'sync; echo 3 > /proc/sys/vm/drop_caches' in order to wipe the caches.

    Even Better?
    I re-ran the test using these little improvements. Now it takes somewhere around 5 or 6 hours to run.

    I contrived an artificially large file on the external drive and did some 'dd' tests to measure what the drive could really do. The drive consistently measured a bit over 31 MB/sec. If I could read and process the data at 30 MB/sec, the script would be done in about 95 minutes.

    But it’s probably rather unreasonable to expect that kind of transfer rate for lots of smaller files scattered around a filesystem. However, it can’t be that helpful to have 8 different processes constantly asking the HD for 8 different files at any one time.

    So I wrote a script called stream-corpus.py which simply fetched all the filenames from the database and loaded the contents of each in turn, leaving the data to be garbage-collected at Python’s leisure. This test completed in 174 minutes, just shy of 3 hours. I computed an average read speed of around 17 MB/sec.
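
    In sketch form, stream-corpus.py would have been little more than this (same hypothetical schema as above):

        import sqlite3

        db = sqlite3.connect("corpus.sqlite")       # hypothetical DB name
        for (path,) in db.execute("SELECT path FROM files ORDER BY path"):
            with open(path, "rb") as f:
                f.read()   # load the contents, then let Python garbage-collect them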

    Single-Reader Script
    I began to theorize that if I only have one thread reading, performance should improve greatly. To test this hypothesis without having to do a lot of extra work, I cleared the caches and ran stream-corpus.py until 'top' reported that about half of the real memory had been filled with data. Then I let the main processing script loose on the data. As both scripts were using sorted lists of files, they iterated over the filenames in the same order.

    Result: The processing script tore through the files that had obviously been cached thanks to stream-corpus.py, then degraded drastically once it caught up to the streaming script.

    Thus, I was prompted to reorganize the processing script just slightly. Now, there is a reader thread which reads each file and stuffs the name of the file into an IPC queue that one of the worker threads can pick up and process. Note that no file data is exchanged between threads. No need— the operating system is already implicitly holding onto the file data, waiting in case someone asks for it again before something needs that bit of RAM. Technically, this approach accesses each file multiple times. But it makes little practical difference thanks to caching.
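
    In sketch form, the reorganized pipeline looks something like this (process_file() again stands in for the real work):

        from multiprocessing import Process, Queue

        NUM_WORKERS = 8

        def process_file(path):
            pass   # stand-in for running the external program on the file

        def worker(queue):
            while True:
                path = queue.get()
                if path is None:       # sentinel: no more files
                    break
                process_file(path)     # re-reads the file, now served from cache

        def run(paths):
            queue = Queue()
            workers = [Process(target=worker, args=(queue,))
                       for _ in range(NUM_WORKERS)]
            for w in workers:
                w.start()
            for path in paths:         # the single reader
                with open(path, "rb") as f:
                    f.read()           # pull the file into the page cache
                queue.put(path)        # hand only the filename to a worker
            for _ in range(NUM_WORKERS):
                queue.put(None)
            for w in workers:
                w.join()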

    Result: About 183 minutes to process the complete corpus (which works out to a little over 16 MB/sec).

    Why Multiprocess
    Is it even worthwhile to bother multiprocessing this operation? Monitoring the whole operation via 'top', most instances of the processing script are barely using any CPU time. Indeed, it’s likely that only one of the worker threads is doing any work most of the time, pulling a file out of the IPC queue as soon as the reader thread triggers its load into cache. Right now, the processing is usually pretty quick. There are cases where the processing (external program) might hang (one of the reasons I’m running this project is to find those cases); the multiprocessing architecture at least allows other processes to take over until a hanging process is timed out and killed by its monitoring process.

    Further, the processing is pretty simple now but is likely to get more intensive in future iterations. Plus, there’s the possibility that I might move everything onto a more appropriately-connected storage medium which should help alleviate the bottleneck bravely battled in this post.

    There’s also the theoretical possibility that the reader thread could read too far ahead of the processing threads. Obviously, that’s not too much of an issue in the current setup. But to guard against it, the processes could share a variable that tracks the total number of bytes that have been processed. The reader thread adds file sizes to the count while the processing threads subtract them. The reader thread would delay reading more if the number got above a certain threshold, as in the sketch below.
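
    A sketch of that guard, using a shared counter (the threshold is an arbitrary number picked for illustration):

        import os
        import time
        from multiprocessing import Value

        MAX_BYTES_AHEAD = 512 * 1024 * 1024   # arbitrary: half a gigabyte
        in_flight = Value("q", 0)             # shared signed 64-bit counter

        def reader_before_load(path):
            # Reader side: stall while too far ahead, then account for the file.
            while in_flight.value > MAX_BYTES_AHEAD:
                time.sleep(0.1)
            with in_flight.get_lock():
                in_flight.value += os.path.getsize(path)

        def worker_after_process(path):
            # Worker side: release the bytes once the file has been processed.
            with in_flight.get_lock():
                in_flight.value -= os.path.getsize(path)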

    Leftovers
    I wondered if the order of accessing the files mattered. I didn’t write them to the drive in any special order. The drive is formatted with Linux ext3. I ran stream-corpus.py on all the filenames sorted by filename (remember the SHA-1 naming convention described above) and also by sorting them randomly.

    Result: It helps immensely for the filenames to be sorted. The sorted variant was a little more than twice as fast as the random variant. Maybe it has to do with accessing all the files in a single directory before moving on to another directory.

    Further, I have long been under the impression that the best read speed you can expect from USB 2.0 is 27 Mbytes/sec (even though 480 Mbit/sec is bandied about in relation to the spec). This comes from profiling I performed with an external enclosure that supports both USB 2.0 and FireWire-400 (and eSATA). FW-400 was able to read the same file at nearly 40 Mbytes/sec that USB 2.0 could only read at 27 Mbytes/sec. Other sources I have read corroborate this number. But this test (using different hardware) achieved over 31 Mbytes/sec.