
Other articles (66)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document should be attached automatically; objet, the type of object to which (...)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Possibility of farm deployment

    12 April 2011

    MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
    This makes it possible, for example: to share setup costs between several projects/individuals; to deploy a multitude of unique sites quickly; and to avoid having to put all of the creations into a digital catch-all, as is the case with the large general-public platforms scattered across the (...)

On other sites (6538)

  • ffmpeg not working with filenames that have whitespace

    1 April 2017, by cmw

    I’m using FFmpeg to measure the duration of videos stored in an Amazon S3 bucket.

    I’ve read the FFmpeg docs, and they explicitly state that all whitespace and special characters need to be escaped for FFmpeg to handle them properly:

    See sections 2.1 and 2.1.1 of the docs: https://ffmpeg.org/ffmpeg-utils.html

    However, when dealing with files whose filenames contain whitespace, ffmpeg fails to produce a result.

    I’ve tried the following, with no success:

    ffmpeg -i "http://s3.mybucketname.com/videos/my\ video\ file.mov" 2>&1 | grep Duration | awk '{print $2}' | tr -d
    ffmpeg -i "http://s3.mybucketname.com/videos/my video file.mov" 2>&1 | grep Duration | awk '{print $2}' | tr -d
    ffmpeg -i "http://s3.mybucketname.com/videos/my'\' video'\' file.mov" 2>&1 | grep Duration | awk '{print $2}' | tr -d

    However, if I strip the whitespace out of the filename, all is well, and the duration of the video is returned.

    Any help is appreciated!
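
    One thing worth checking (an observation, not from the original thread): the escaping rules in those docs apply to ffmpeg’s own parsers (filter descriptions and the like), while an HTTP input is handed to ffmpeg’s URL protocol layer, so spaces in the path generally need to be percent-encoded rather than backslash-escaped. A minimal sketch, reusing the hypothetical bucket URL from the question:

    # Percent-encode the spaces as %20 and quote the URL so the shell
    # passes it through unchanged.
    ffmpeg -i "http://s3.mybucketname.com/videos/my%20video%20file.mov" 2>&1 \
      | grep Duration | awk '{print $2}'

    # Alternatively, ffprobe prints the duration (in seconds) directly:
    ffprobe -v error -show_entries format=duration -of csv=p=0 \
      "http://s3.mybucketname.com/videos/my%20video%20file.mov"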

  • ffmpeg - Record Server Desktop Without Connection

    21 January, by chrisp

    I set up an application that uses ffmpeg to record the desktop of an Amazon AWS EC2 instance running Windows Server 2012 R2. It records the desktop and writes the result to a file.

    This works as long as a Remote Desktop or TeamViewer connection is active for that particular Amazon AWS EC2 instance. As soon as I close the Remote Desktop or TeamViewer connection, the recording stops, and it continues as soon as I reconnect.
    I assume that this is because the GPU doesn’t deliver frames when no display is in use.

    How can I make sure that frames are constantly being rendered so that I can record them?
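
    A commonly suggested workaround for this situation (not from the original post) is to reattach the RDP session to the physical console before disconnecting, so that Windows keeps rendering the desktop. A minimal sketch, run from an elevated command prompt inside the RDP session:

    :: %SESSIONNAME% identifies the current RDP session (e.g. RDP-Tcp#0).
    :: tscon redirects that session to the physical console, which drops the
    :: RDP connection but leaves the desktop active and rendering.
    tscon %SESSIONNAME% /dest:console

    With the session back on the console, a screen grabber such as ffmpeg’s gdigrab device should keep receiving frames after the RDP client window closes.
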
  • Introducing the BigQuery & Data Warehouse Export feature

    30 January, by Matomo Core Team

    Matomo is built on a simple truth: your data belongs to you, and you should have complete control over it. That’s why we’re excited to launch our new BigQuery & Data Warehouse Export feature for Matomo Cloud, giving you even more ways to work with your analytics data.

    Until now, getting raw data from Matomo Cloud required APIs and custom scripts, or waiting for engineering help.  

    Our new BigQuery & Data Warehouse Export feature removes those barriers. You can now access your raw, unaggregated data and schedule regular exports straight to your data warehouse. 

    The feature works with all major data warehouses, including (but not limited to):

    • Google BigQuery 
    • Amazon Redshift 
    • Snowflake 
    • Azure Synapse Analytics 
    • Apache Hive 
    • Teradata 

    You can schedule exports, combine your Matomo data with other data sources in your data warehouse, and easily query data with SQL-like queries. 

    Direct raw data access for greater data portability 

    Waiting for engineering support can delay your work. Managing API connections and writing scripts can be time-consuming. This keeps you from focusing on what you do best—analysing data. 

    [Image: BigQuery create-table menu]

    With the BigQuery & Data Warehouse Export feature, you get direct access to your raw Matomo data without the technical setup. So, you can spend more time analysing data and finding insights that matter. 

    Bringing your data together 

    Answering business questions often requires data from multiple sources. A single customer interaction might span your CRM, web analytics, sales systems, and more. Piecing this data together manually is time-consuming—what starts as a seemingly simple question from stakeholders can turn into hours of work collecting and comparing data across different tools. 

    This feature lets you combine your Matomo data with data from other business systems in your data warehouse. Instead of switching between tools or manually comparing spreadsheets, you can analyse all your data in one place to better understand how customers interact with your business. 

    Easy, custom analysis with SQL-like queries 

    Standard, pre-built reports often don’t address the specific, detailed questions that analysts need to answer.  

    When you use the BigQuery & Data Warehouse Export feature, you can use SQL-like queries in your data warehouse to do detailed, customised analysis. This flexibility allows you to explore your data in depth and uncover specific insights that aren’t possible with pre-built reports. 

    Here is an example of how you might use an SQL-like query to compare the behaviours of paying vs. non-paying users:

    SELECT
      custom_dimension_value AS user_type,  -- assuming 'user_type' is stored in a custom dimension
      COUNT(*) AS total_visits,
      AVG(visit_total_time) AS avg_duration,
      SUM(conversion.revenue) AS total_spent
    FROM
      `your_project.your_dataset.matomo_log_visit` AS visit
    LEFT JOIN
      `your_project.your_dataset.matomo_log_conversion` AS conversion
    ON
      visit.idvisit = conversion.idvisit
    GROUP BY
      custom_dimension_value;

    This query helps you compare metrics such as the number of visits, average session duration, and total amount spent between paying and non-paying users. It provides a full view of behavioural differences between these groups. 
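
    As a usage sketch (not part of the original post): once an export has landed in BigQuery, a query like the one above can also be run from the command line with Google’s bq tool; your_project.your_dataset is still the placeholder dataset name from the example:

    # Requires the Google Cloud SDK to be installed and authenticated.
    bq query --use_legacy_sql=false \
      'SELECT custom_dimension_value AS user_type, COUNT(*) AS total_visits
       FROM `your_project.your_dataset.matomo_log_visit`
       GROUP BY custom_dimension_value'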

    Advanced data manipulation and visualisation 

    When you need to create detailed reports or dive deep into data analysis, working within the constraints of a fixed user interface (UI) can limit your ability to draw insights. 

    Exporting your Matomo data to a data warehouse like BigQuery provides greater flexibility for in-depth manipulation and advanced visualisations, enabling you to uncover deeper insights and tailor your reports more effectively. 

    Getting started 

    To set up data warehouse exports in your Matomo:

    1. Go to System Admin (cog icon in the top right corner) 
    2. Select ‘Export’ from the left-hand menu 
    3. Choose ‘BigQuery & Data Warehouse’ 

    You’ll find detailed instructions in our data warehouse exports guide.

    Please note that enabling this feature will cost an additional 10% of your current subscription. You can view the exact cost by following the steps above.

    New to Matomo? Start your 21-day free trial now (no credit card required), or request a demo.