
Media (91)

Other articles (40)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database, named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document should be attached automatically; objet, the type of object to which (...)

  • Writing a news item

    21 June 2013

    Present changes to your MédiaSPIP, or news about your projects, using the news section of your MédiaSPIP.
    In MédiaSPIP’s default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form For a document of the news type, the default fields are: Publication date (customise the publication date) (...)

On other sites (6717)

  • Introducing the BigQuery & Data Warehouse Export feature

    30 January, by Matomo Core Team

    Matomo is built on a simple truth: your data belongs to you, and you should have complete control over it. That’s why we’re excited to launch our new BigQuery & Data Warehouse Export feature for Matomo Cloud, giving you even more ways to work with your analytics data.

    Until now, getting raw data from Matomo Cloud required APIs and custom scripts, or waiting for engineering help.  

    Our new BigQuery & Data Warehouse Export feature removes those barriers. You can now access your raw, unaggregated data and schedule regular exports straight to your data warehouse. 

    The feature works with all major data warehouses, including (but not limited to):

    • Google BigQuery 
    • Amazon Redshift 
    • Snowflake 
    • Azure Synapse Analytics 
    • Apache Hive 
    • Teradata 

    You can schedule exports, combine your Matomo data with other data sources in your data warehouse, and easily query data with SQL-like queries. 

    Direct raw data access for greater data portability 

    Waiting for engineering support can delay your work. Managing API connections and writing scripts can be time-consuming. This keeps you from focusing on what you do best—analysing data. 

    [Image: BigQuery create-table menu]

    With the BigQuery & Data Warehouse Export feature, you get direct access to your raw Matomo data without the technical setup. So, you can spend more time analysing data and finding insights that matter. 

    Bringing your data together 

    Answering business questions often requires data from multiple sources. A single customer interaction might span your CRM, web analytics, sales systems, and more. Piecing this data together manually is time-consuming—what starts as a seemingly simple question from stakeholders can turn into hours of work collecting and comparing data across different tools. 

    This feature lets you combine your Matomo data with data from other business systems in your data warehouse. Instead of switching between tools or manually comparing spreadsheets, you can analyse all your data in one place to better understand how customers interact with your business. 

    Easy, custom analysis with SQL-like queries 

    Standard, pre-built reports often don’t address the specific, detailed questions that analysts need to answer.  

    When you use the BigQuery & Data Warehouse Export feature, you can use SQL-like queries in your data warehouse to do detailed, customised analysis. This flexibility allows you to explore your data in depth and uncover specific insights that aren’t possible with pre-built reports. 

    Here is an example of how you might use an SQL-like query to compare the behaviour of paying vs. non-paying users:

    				
    SELECT
        custom_dimension_value AS user_type,  -- Assuming 'user_type' is stored in a custom dimension
        COUNT(*) AS total_visits,
        AVG(visit_total_time) AS avg_duration,
        SUM(conversion.revenue) AS total_spent
    FROM
        `your_project.your_dataset.matomo_log_visit` AS visit
    LEFT JOIN
        `your_project.your_dataset.matomo_log_conversion` AS conversion
    ON
        visit.idvisit = conversion.idvisit
    GROUP BY
        custom_dimension_value;
                                   

    This query helps you compare metrics such as the number of visits, average session duration, and total amount spent between paying and non-paying users. It provides a full view of behavioural differences between these groups. 

    Advanced data manipulation and visualisation 

    When you need to create detailed reports or dive deep into data analysis, working within the constraints of a fixed user interface (UI) can limit your ability to draw insights. 

    Exporting your Matomo data to a data warehouse like BigQuery provides greater flexibility for in-depth manipulation and advanced visualisations, enabling you to uncover deeper insights and tailor your reports more effectively. 
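
    For illustration (not part of the original announcement), here is a minimal sketch of pulling an exported table into a pandas DataFrame with the google-cloud-bigquery client for further manipulation or plotting. The project, dataset and column names follow the placeholders used in the query above and Matomo’s standard log_visit schema, so treat them as assumptions rather than a prescribed workflow.

    # Minimal sketch: assumes google-cloud-bigquery (with pandas support) is installed
    # and that the exported tables use the placeholder names shown above.
    from google.cloud import bigquery

    client = bigquery.Client(project="your_project")

    sql = """
        SELECT visit_first_action_time, visit_total_time, visit_total_actions
        FROM `your_project.your_dataset.matomo_log_visit`
        WHERE DATE(visit_first_action_time) >= '2025-01-01'
    """

    # Run the query and load the result into a pandas DataFrame, ready for
    # custom aggregation or visualisation (e.g. in a notebook).
    df = client.query(sql).to_dataframe()
    print(df.describe())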

    Getting started 

    To set up data warehouse exports in your Matomo:

    1. Go to System Admin (cog icon in the top right corner) 
    2. Select ‘Export’ from the left-hand menu 
    3. Choose ‘BigQuery & Data Warehouse’ 

    You’ll find detailed instructions in our data warehouse exports guide.

    Please note that enabling this feature costs an additional 10% of your current subscription. You can view the exact cost by following the steps above.

    New to Matomo? Start your 21-day free trial now (no credit card required), or request a demo.

  • Flask send_file not sending file

    30 April 2021, by jackmerrill

    I'm using Flask with send_file() to have people download a file off the server.


    My current code is as follows:


    @app.route('/', methods=["GET", "POST"])
    def index():
        if request.method == "POST":
            link = request.form.get('Link')
            with youtube_dl.YoutubeDL(ydl_opts) as ydl:
                info_dict = ydl.extract_info(link, download=False)
                video_url = info_dict.get("url", None)
                video_id = info_dict.get("id", None)
                video_title = info_dict.get('title', None)
                ydl.download([link])
            print("sending file...")
            send_file("dl/"+video_title+".f137.mp4", as_attachment=True)
            print("file sent, deleting...")
            os.remove("dl/"+video_title+".f137.mp4")
            print("done.")
            return render_template("index.html", message="Success!")
        else:
            return render_template("index.html", message=message)


    The only reason I have .f137.mp4 appended is that I am using AWS C9 as my online IDE and I can't install FFmpeg on Amazon Linux to merge the audio and video. However, that is not the issue. The issue is that the download is never sent to the browser (a possible fix is sketched at the end of this question).


    Here is the console output:


    127.0.0.1 - - [12/Dec/2018 16:17:41] "POST / HTTP/1.1" 200 -
    [youtube] 2AYgi2wsdkE: Downloading webpage
    [youtube] 2AYgi2wsdkE: Downloading video info webpage
    [youtube] 2AYgi2wsdkE: Downloading webpage
    [youtube] 2AYgi2wsdkE: Downloading video info webpage
    WARNING: You have requested multiple formats but ffmpeg or avconv are not installed. The formats won't be merged.
    [download] Destination: dl/Meme Awards v244.f137.mp4
    [download] 100% of 73.82MiB in 00:02
    [download] Destination: dl/Meme Awards v244.f140.m4a
    [download] 100% of 11.63MiB in 00:00
    sending file...
    file sent, deleting...
    done.
    127.0.0.1 - - [12/Dec/2018 16:18:03] "POST / HTTP/1.1" 200 -


    Any and all help is appreciated. Thanks!

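    A minimal sketch of the likely fix, not from the original question: Flask's send_file() builds a Response object that must itself be returned from the view, otherwise nothing reaches the browser; deleting the file can then be deferred with after_this_request. The sketch assumes the question's existing app, ydl_opts and message objects.

    import os
    import youtube_dl
    from flask import request, render_template, send_file, after_this_request

    @app.route('/', methods=["GET", "POST"])
    def index():
        if request.method == "POST":
            link = request.form.get('Link')
            with youtube_dl.YoutubeDL(ydl_opts) as ydl:
                info_dict = ydl.extract_info(link, download=False)
                video_title = info_dict.get('title', None)
                ydl.download([link])
            path = "dl/" + video_title + ".f137.mp4"

            @after_this_request
            def cleanup(response):
                os.remove(path)  # delete only after the response has been sent
                return response

            # send_file() returns the download response; it has to be returned
            return send_file(path, as_attachment=True)
        return render_template("index.html", message=message)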

  • Capture camera + mic and encode to h264/aac on macOS

    16 December 2018, by Flock Dawson

    I’m having trouble capturing and encoding audio+video on-the-fly on macOS.

    I tried two options:

    1. ffmpeg

      ffmpeg -threads 0 -f avfoundation -s 1920x1080 -framerate 25 -i 0:0 -async 441 -c:v libx264 -preset medium -pix_fmt yuv420p -crf 22 -c:a libfdk_aac -aq 95 -y
    2. gstreamer

      gst-launch-1.0 -ve avfvideosrc device-index=0 ! video/x-raw,width=1920,height=1080,framerate=25/1 ! vtenc_h264 ! queue ! mp4mux name=mux ! filesink location=out.mp4  osxaudiosrc device=0 ! audio/x-raw ! faac midside=false ! queue ! mux.

    The ffmpeg option works, but only at lower resolutions. At higher resolutions, the Mac mini (2018 gen) can’t do the heavy lifting. I suspect this is because I installed ffmpeg with brew, so it wasn’t compiled on my machine, meaning it doesn’t use the Mac’s H.264 hardware encoder? (A possible workaround is sketched after this question.)

    The gstreamer option works as well, but there’s a slight audio/video sync issue (audio is 100ms ahead of the video). I can’t seem to add delay to the GStreamer queue (it ignores it):

    queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=100000000

    Anyone who has any experience with this? Thanks!
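
    A hedged follow-up sketch, not part of the original question: if the brew-installed ffmpeg includes VideoToolbox support (check with ffmpeg -encoders | grep videotoolbox), the Mac’s hardware H.264 encoder can be selected explicitly with h264_videotoolbox instead of libx264, offloading the encoding work; the bitrate values below are illustrative assumptions:

      ffmpeg -f avfoundation -framerate 25 -video_size 1920x1080 -i "0:0" -c:v h264_videotoolbox -b:v 6000k -pix_fmt yuv420p -c:a aac -b:a 160k out.mp4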