
Other articles (76)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP deemed “usable”.
    The zip file provided here contains only the MediaSPIP sources, in standalone form.
    To get a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, further modifications are also required (...)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, as announced here.
    The zip file provided here contains only the MediaSPIP sources, in standalone form.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, further modifications are also required (...)

On other sites (10289)

  • What is Google Analytics data sampling and what’s so bad about it?

    16 August 2019, by Joselyn Khor — Analytics Tips, Development


    Google (2019) explains what data sampling is:

    “In data analysis, sampling is the practice of analysing a subset of all data in order to uncover the meaningful information in the larger data set.”[1]

    In other words, instead of analysing all of the data, there is a threshold on how much data is analysed, and anything beyond that threshold is estimated based on patterns found in the sample.

    Google’s (2019) data sampling thresholds:

    “Ad-hoc queries of your data are subject to the following general thresholds for sampling:
    [Google] Analytics Standard: 500k sessions at the property level for the date range you are using
    [Google] Analytics 360: 100M sessions at the view level for the date range you are using” (para. 3) [2]

    These thresholds are limiting because your GA data becomes less accurate as the traffic to your website increases.

    Say you’re looking through all your traffic data from the last year and find you have 5 million page views. Only 500K of that 5 million is analysed directly! The data for the remaining 4.5 million (90%) is an estimate based on the 500K sample.
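
    Here’s a rough sketch of the arithmetic involved (illustrative numbers only; Google’s exact estimation method isn’t public):

    # Illustrative only: how a metric measured on a sample is scaled up
    # to estimate the full data set.
    total_sessions = 5_000_000    # sessions in the queried date range
    sample_size = 500_000         # Analytics Standard sampling threshold

    sampling_rate = sample_size / total_sessions   # 0.1 -> only 10% analysed
    conversions_in_sample = 1_234                  # counted within the sample
    estimated_conversions = conversions_in_sample / sampling_rate

    print(f"sampling rate: {sampling_rate:.0%}")                    # 10%
    print(f"estimated conversions: {estimated_conversions:,.0f}")   # 12,340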

    This threshold is a key lever Google uses when selling to large businesses. To raise it for more accurate reporting, upgrading to the premium Google Analytics 360, at approximately US$150,000 per year, seems to be the only option.

    What’s so bad about data sampling?

    It would be unfair to say sampled data should be disregarded completely. A calculation ensures the sample is representative and can give you good-enough insights. However, we don’t encourage it, because we don’t just want “good enough” data; we want the actual facts.

    In a recent survey sent to Matomo customers, we found a large proportion of users switched from GA to Matomo due to the data sampling issue.

    The two reasons why data sampling isn’t preferable:

    1. If the selected sample size is too small, you won’t get a good representation of all the data (illustrated by the sketch below).
    2. The bigger your website grows, the more inaccurate your reports will become.
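
    To illustrate the first point, here’s a small simulation with hypothetical numbers, showing how a conversion-rate estimate drifts as the sample shrinks:

    import random

    random.seed(42)
    # Hypothetical population: 1,000,000 sessions with a 2% conversion rate.
    population = [1] * 20_000 + [0] * 980_000
    true_rate = sum(population) / len(population)

    for sample_size in (500_000, 50_000, 5_000):
        sample = random.sample(population, sample_size)
        estimate = sum(sample) / sample_size
        print(f"sample={sample_size:>7,}  estimate={estimate:.3%}  true={true_rate:.3%}")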

    As an example of why we don’t fully trust sampled data: say you have an ecommerce store and see that your GA revenue reports don’t match your actual sales data, due to data sampling. In GA you may be seeing revenue for the month as $1 million, instead of actual sales of $800K.

    The sampling here has caused an inaccuracy that could have negative financial implications: what you get in the GA report is an estimated dollar figure rather than the actual sales. Making decisions based on inaccurate data like this can be costly.

    Another disadvantage of sampled data is that you might miss opportunities you would have noticed with a view of the whole data set, e.g. real patterns in the data that are hidden because the figures have already been extrapolated.

    Not getting the chance to see things as they are, and having to rely on the conclusions and assumptions GA jumps to, is risky. The bigger your business grows, the less you can afford to make business decisions based on assumptions that could be inaccurate.

    If you feel you could be missing out on opportunities because your GA data is sampled, get 100% accurately reported data.

    The benefits of 100% accurate data

    Matomo doesn’t use data sampling on any of its products or plans. You get to see all of your data, not a sampled data set.

    Data quality is necessary for high impact decision-making. It’s hard to make strategic changes if you don’t have confidence that your data is reliable and accurate.

    Learn about how Matomo is a serious contender to Google Analytics 360. 

    Now you can import your Google Analytics data directly into your Matomo

    If you want to make the switch to Matomo but are worried about losing all your historic Google Analytics data, you can now import it directly into your Matomo with the Google Analytics Importer tool.


    Take the challenge!

    Compare your Google Analytics data (sampled data) against your Matomo data, or if you don’t have Matomo data yet, sign up for our 30-day free trial and start tracking!

    References:

    [1, 2] Google. (2019). About data sampling. Analytics Help. Retrieved August 14, 2019, from https://support.google.com/analytics/answer/2637192

  • How to use google/zx to execute an ffmpeg shell command in Windows?

    26 August 2021, by radiorz

    I downloaded ffmpeg through scoop, and I can run it from a terminal. My script is below:

    #!/usr/bin/env zx
    // Restream a local MP4 file over RTMP to an SRS server using ffmpeg.
    const filePath = `./abc.mp4`;
    const srsServerURL = `192.168.30.100`;
    await $`ffmpeg -re -i ${filePath} -c copy -f mp4 -y rtmp://${srsServerURL}/live/livestream`;

    but zx tells me that ffmpeg was not found.


  • How to upload an object to a bucket in Google Cloud Platform from a Python script

    7 July 2016, by Bryan

    The goal of this script is to extract audio from a video file using ffmpeg and upload it into a bucket on Google Cloud Platform each time it is called. Eventually I will have to extract audio from a large list of videos, so ideally I would want my script to extract and subsequently upload it into the cloud.

    My confusion is how to use the GCP API to upload my object into a bucket. Any advice would be greatly appreciated!

    Link for reference : https://cloud.google.com/storage/docs/json_api/v1/json-api-python-samples#setup-code

    import subprocess
    import sys
    import re

    # googleapiclient provides MediaIoBaseUpload, used in upload_object() below.
    from googleapiclient import http

    fullVideo = sys.argv[1]
    # Keep everything before the first dot as the output base name.
    title = re.findall('^([^.]*).*', fullVideo)
    title = str(title[0])
    # Extract the audio track from the video into a FLAC file.
    subprocess.call('ffmpeg -i ' + fullVideo + ' -vn -ab 128k ' + title + '.flac', shell=True)

    def upload_object(bucket, filename, readers, owners):
       # create_service() is defined in the Google sample linked above; it
       # returns an authenticated Cloud Storage JSON API client.
       service = create_service()

       # This is the request body as specified:
       # http://g.co/cloud/storage/docs/json_api/v1/objects/insert#request
       body = {
           'name': filename,
       }

       # If specified, create the access control objects and add them to the
       # request body
       if readers or owners:
           body['acl'] = []

       for r in readers:
           body['acl'].append({
               'entity': 'user-%s' % r,
               'role': 'READER',
               'email': r
           })
       for o in owners:
           body['acl'].append({
               'entity': 'user-%s' % o,
               'role': 'OWNER',
               'email': o
           })

       # Now insert them into the specified bucket as a media insertion.
       # http://g.co/dev/resources/api-libraries/documentation/storage/v1/python/latest/storage_v1.objects.html#insert
       with open(filename, 'rb') as f:
           req = service.objects().insert(
               bucket=bucket, body=body,
           # You can also just set media_body=filename, but for the sake of
               # demonstration, pass in the more generic file handle, which could
               # very well be a StringIO or similar.
               media_body=http.MediaIoBaseUpload(f, 'application/octet-stream'))
           resp = req.execute()

       return resp
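
    For illustration, a minimal (hypothetical) way to call the function above, assuming a bucket named my-audio-bucket already exists and create_service() is available:

    # Upload the extracted FLAC with no extra ACL entries.
    response = upload_object('my-audio-bucket', title + '.flac', [], [])
    print(response)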