Advanced search

Media (91)

Other articles (68)

  • Customising by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Present changes to your MediaSPIP, or news about your projects, using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of the news type, the default fields are: publication date (customise the publication date) (...)

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
    The user can also reach profile editing from their author page; a "Modifier votre profil" link in the navigation is (...)

On other sites (6648)

  • Introducing the Data Warehouse Connector feature

    30 January, by Matomo Core Team

    Matomo is built on a simple truth: your data belongs to you, and you should have complete control over it. That’s why we’re excited to launch our new Data Warehouse Connector feature for Matomo Cloud, giving you even more ways to work with your analytics data.

    Until now, getting raw data from Matomo Cloud required APIs and custom scripts, or waiting for engineering help.  

    Our new Data Warehouse Connector feature removes those barriers. You can now access your raw, unaggregated data and schedule regular exports straight to your data warehouse. 

    The feature works with all major data warehouses, including (but not limited to):

    • Google BigQuery 
    • Amazon Redshift 
    • Snowflake 
    • Azure Synapse Analytics 
    • Apache Hive 
    • Teradata 

    You can schedule exports, combine your Matomo data with other data sources in your data warehouse, and easily query data with SQL-like queries. 

    Direct raw data access for greater data portability 

    Waiting for engineering support can delay your work. Managing API connections and writing scripts can be time-consuming. This keeps you from focusing on what you do best—analysing data. 

    [Screenshot: BigQuery create-table menu]

    With the Data Warehouse Connector feature, you get direct access to your raw Matomo data without the technical setup. So, you can spend more time analysing data and finding insights that matter. 

    Bringing your data together 

    Answering business questions often requires data from multiple sources. A single customer interaction might span your CRM, web analytics, sales systems, and more. Piecing this data together manually is time-consuming—what starts as a seemingly simple question from stakeholders can turn into hours of work collecting and comparing data across different tools. 

    This feature lets you combine your Matomo data with data from other business systems in your data warehouse. Instead of switching between tools or manually comparing spreadsheets, you can analyse all your data in one place to better understand how customers interact with your business. 
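
    For example, here is a sketch of what such a cross-source query could look like. The CRM table and its columns are hypothetical placeholders, and the join assumes you populate Matomo’s User ID:

    -- Hypothetical example: join exported Matomo visits with a CRM table in the same warehouse.
    -- The crm_customers table and account_tier/user_id columns are assumptions, not Matomo defaults.
    SELECT
        crm.account_tier,
        COUNT(DISTINCT visit.idvisit) AS visits,           -- visits per CRM tier
        AVG(visit.visit_total_time)   AS avg_visit_time    -- average session duration
    FROM
        `your_project.your_dataset.matomo_log_visit` AS visit
    JOIN
        `your_project.your_dataset.crm_customers` AS crm   -- hypothetical CRM export
        ON visit.user_id = crm.user_id                     -- requires Matomo User ID tracking
    GROUP BY
        crm.account_tier;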

    Easy, custom analysis with SQL-like queries 

    Standard, pre-built reports often don’t address the specific, detailed questions that analysts need to answer.  

    When you use the Data Warehouse Connector feature, you can use SQL-like queries in your data warehouse to do detailed, customised analysis. This flexibility allows you to explore your data in depth and uncover specific insights that aren’t possible with pre-built reports. 

    Here is an example of how you might use an SQL-like query to compare the behaviour of paying vs. non-paying users:

    				
    SELECT
        custom_dimension_value AS user_type,   -- Assuming 'user_type' is stored in a custom dimension
        COUNT(*) AS total_visits,
        AVG(visit_total_time) AS avg_duration,
        SUM(conversion.revenue) AS total_spent
    FROM
        `your_project.your_dataset.matomo_log_visit` AS visit
    LEFT JOIN
        `your_project.your_dataset.matomo_log_conversion` AS conversion
    ON
        visit.idvisit = conversion.idvisit
    GROUP BY
        custom_dimension_value;

    This query helps you compare metrics such as the number of visits, average session duration, and total amount spent between paying and non-paying users. It provides a full view of behavioural differences between these groups. 

    Advanced data manipulation and visualisation 

    When you need to create detailed reports or dive deep into data analysis, working within the constraints of a fixed user interface (UI) can limit your ability to draw insights. 

    Exporting your Matomo data to a data warehouse like BigQuery provides greater flexibility for in-depth manipulation and advanced visualisations, enabling you to uncover deeper insights and tailor your reports more effectively. 

    Getting started 

    To set up data warehouse exports in your Matomo:

    1. Go to System Admin (cog icon in the top right corner) 
    2. Select ‘Export’ from the left-hand menu 
    3. Choose ‘Data Warehouse Connector’ 

    You’ll find detailed instructions in our data warehouse exports guide.

    Please note, enabling this feature will cost an additional 10% of your current subscription. You can view the exact cost by following the steps above. 

    New to Matomo? Start your 21-day free trial now (no credit card required), or request a demo.

  • How can I build a custom version of opencv while enabling CUDA and opengl? [closed]

    10 February, by Josh

    I have a hard requirement of python3.7 for certain libraries (aeneas & afaligner). I've been using the regular opencv-python and ffmpeg libraries in my program and they've been working fine.


    Recently I wanted to adjust my program to use h264 instead of mpeg4 and ran down a licensing rabbit hole: opencv-python uses a build of ffmpeg with the GPL codecs turned off to avoid licensing issues. x264 is apparently GPL, and is disabled in the opencv-python library.


    In order to solve this issue, I built a custom build of opencv against another custom build of ffmpeg, both with the GPL components enabled. This allowed me to use the x264 encoder with the VideoWriter in my python program.


    Here's the Dockerfile I've been running it with:


    FROM python:3.7-slim

    # Set optimization flags and number of cores globally
    ENV CFLAGS="-O3 -march=native -ffast-math -flto -fno-fat-lto-objects -ffunction-sections -fdata-sections" \
        CXXFLAGS="-O3 -march=native -ffast-math -flto -fno-fat-lto-objects -ffunction-sections -fdata-sections" \
        LDFLAGS="-flto -fno-fat-lto-objects -Wl,--gc-sections" \
        MAKEFLAGS="-j\$(nproc)"

    # Combine all system dependencies in a single layer
    RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential \
        cmake \
        git \
        wget \
        unzip \
        yasm \
        pkg-config \
        libsm6 \
        libxext6 \
        libxrender-dev \
        libglib2.0-0 \
        libavcodec-dev \
        libavformat-dev \
        libswscale-dev \
        libavutil-dev \
        libswresample-dev \
        nasm \
        mercurial \
        libnuma-dev \
        espeak \
        libespeak-dev \
        libtiff5-dev \
        libjpeg62-turbo-dev \
        libopenjp2-7-dev \
        zlib1g-dev \
        libfreetype6-dev \
        liblcms2-dev \
        libwebp-dev \
        tcl8.6-dev \
        tk8.6-dev \
        python3-tk \
        libharfbuzz-dev \
        libfribidi-dev \
        libxcb1-dev \
        python3-dev \
        python3-setuptools \
        libsndfile1 \
        libavdevice-dev \
        libavfilter-dev \
        libpostproc-dev \
        && apt-get clean \
        && rm -rf /var/lib/apt/lists/*

    # Build x264 with optimizations
    RUN cd /tmp && \
        wget https://code.videolan.org/videolan/x264/-/archive/master/x264-master.tar.bz2 && \
        tar xjf x264-master.tar.bz2 && \
        cd x264-master && \
        ./configure \
            --enable-shared \
            --enable-pic \
            --enable-asm \
            --enable-lto \
            --enable-strip \
            --enable-optimizations \
            --bit-depth=8 \
            --disable-avs \
            --disable-swscale \
            --disable-lavf \
            --disable-ffms \
            --disable-gpac \
            --disable-lsmash \
            --extra-cflags="-O3 -march=native -ffast-math -fomit-frame-pointer -flto -fno-fat-lto-objects" \
            --extra-ldflags="-O3 -flto -fno-fat-lto-objects" && \
        make && \
        make install && \
        cd /tmp && \
        # Build FFmpeg with optimizations
        wget https://ffmpeg.org/releases/ffmpeg-7.1.tar.bz2 && \
        tar xjf ffmpeg-7.1.tar.bz2 && \
        cd ffmpeg-7.1 && \
        ./configure \
            --enable-gpl \
            --enable-libx264 \
            --enable-shared \
            --enable-nonfree \
            --enable-pic \
            --enable-asm \
            --enable-optimizations \
            --enable-lto \
            --enable-pthreads \
            --disable-debug \
            --disable-static \
            --disable-doc \
            --disable-ffplay \
            --disable-ffprobe \
            --disable-filters \
            --disable-programs \
            --disable-postproc \
            --extra-cflags="-O3 -march=native -ffast-math -fomit-frame-pointer -flto -fno-fat-lto-objects -ffunction-sections -fdata-sections" \
            --extra-ldflags="-O3 -flto -fno-fat-lto-objects -Wl,--gc-sections" \
            --prefix=/usr/local && \
        make && \
        make install && \
        ldconfig && \
        rm -rf /tmp/*

    # Install Python dependencies first
    RUN pip install --no-cache-dir --upgrade pip setuptools wheel && \
        pip install --no-cache-dir numpy py-spy

    # Build OpenCV with optimized configuration
    RUN cd /tmp && \
        # Download specific OpenCV version archives
        wget -O opencv.zip https://github.com/opencv/opencv/archive/4.8.0.zip && \
        wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.8.0.zip && \
        unzip opencv.zip && \
        unzip opencv_contrib.zip && \
        mv opencv-4.8.0 opencv && \
        mv opencv_contrib-4.8.0 opencv_contrib && \
        rm opencv.zip opencv_contrib.zip && \
        cd opencv && \
        mkdir build && cd build && \
        cmake \
            -D CMAKE_BUILD_TYPE=RELEASE \
            -D CMAKE_C_FLAGS="-O3 -march=native -ffast-math -flto -fno-fat-lto-objects -ffunction-sections -fdata-sections" \
            -D CMAKE_CXX_FLAGS="-O3 -march=native -ffast-math -flto -fno-fat-lto-objects -ffunction-sections -fdata-sections -Wno-deprecated" \
            -D CMAKE_EXE_LINKER_FLAGS="-flto -fno-fat-lto-objects -Wl,--gc-sections" \
            -D CMAKE_SHARED_LINKER_FLAGS="-flto -fno-fat-lto-objects -Wl,--gc-sections" \
            -D CMAKE_INSTALL_PREFIX=/usr/local \
            -D ENABLE_FAST_MATH=ON \
            -D CPU_BASELINE_DETECT=ON \
            -D CPU_BASELINE=SSE3 \
            -D CPU_DISPATCH=SSE4_1,SSE4_2,AVX,AVX2,AVX512_SKX,FP16 \
            -D WITH_OPENMP=ON \
            -D OPENCV_ENABLE_NONFREE=ON \
            -D WITH_FFMPEG=ON \
            -D FFMPEG_ROOT=/usr/local \
            -D OPENCV_EXTRA_MODULES_PATH=/tmp/opencv_contrib/modules \
            -D PYTHON_EXECUTABLE=/usr/local/bin/python3.7 \
            -D PYTHON3_EXECUTABLE=/usr/local/bin/python3.7 \
            -D PYTHON3_INCLUDE_DIR=/usr/local/include/python3.7m \
            -D PYTHON3_LIBRARY=/usr/local/lib/libpython3.7m.so \
            -D PYTHON3_PACKAGES_PATH=/usr/local/lib/python3.7/site-packages \
            -D PYTHON3_NUMPY_INCLUDE_DIRS=/usr/local/lib/python3.7/site-packages/numpy/core/include \
            -D BUILD_opencv_python3=ON \
            -D INSTALL_PYTHON_EXAMPLES=OFF \
            -D BUILD_TESTS=OFF \
            -D BUILD_PERF_TESTS=OFF \
            -D BUILD_EXAMPLES=OFF \
            -D BUILD_DOCS=OFF \
            -D BUILD_opencv_apps=OFF \
            -D WITH_OPENCL=OFF \
            -D WITH_CUDA=OFF \
            -D WITH_IPP=OFF \
            -D WITH_TBB=OFF \
            -D WITH_V4L=OFF \
            -D WITH_QT=OFF \
            -D WITH_GTK=OFF \
            -D BUILD_LIST=core,imgproc,imgcodecs,videoio,python3 \
            .. && \
        make && \
        make install && \
        ldconfig && \
        rm -rf /tmp/*

    # Set working directory and copy application code
    WORKDIR /app

    COPY requirements.txt .

    RUN apt-get update && apt-get install -y --no-install-recommends ffmpeg

    RUN pip install --no-cache-dir aeneas afaligner && \
        pip install --no-cache-dir -r requirements.txt

    COPY . .

    # Make entrypoint executable
    RUN chmod +x entrypoint.sh
    ENTRYPOINT ["./entrypoint.sh"]


    My trouble now is that I've been considering running parts of my program on my GPU (it's creating graphics for a video, after all). I have no idea how to edit my Dockerfile to make the opencv build run with CUDA enabled; every combination I try leads to issues.
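
    For context, this is the rough shape of what I've been attempting (an untested sketch, not a working recipe; the base image tag, CUDA arch, and cuDNN flags are guesses I've been adapting, and Python 3.7 still has to be installed separately in that image):

    # Untested sketch of a CUDA-enabled variant of the build above.
    FROM nvidia/cuda:11.8.0-cudnn8-devel-ubuntu22.04

    # ...install Python 3.7 and the same build dependencies as in the original Dockerfile...

    # In the OpenCV cmake step, the GPU-related options typically become:
    #   -D WITH_CUDA=ON \
    #   -D WITH_CUDNN=ON \
    #   -D OPENCV_DNN_CUDA=ON \
    #   -D CUDA_FAST_MATH=ON \
    #   -D CUDA_ARCH_BIN=7.5 \                  # guess: set to the GPU's compute capability
    #   -D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda \
    # and the container is run with the NVIDIA runtime, e.g. `docker run --gpus all ...`.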


    How can I tell which versions of CUDA, opencv and ffmpeg are compatible with python 3.7? I've tried so many combinations and they all lead to different issues; I've asked various AI agents and they all flounder. Where can I find a reliable source of information about this?


  • Zlib vs. XZ on 2SF

    21 July 2012, by Multimedia Mike — General, psf, saltygme, xz, zlib

    I recently released my Game Music Appreciation website. It allows users to play an enormous range of video game music directly in their browsers. To do this, the site has to host the music. And since I’m a compression bore, I have to know how small I can practically make these music files. I already published the results of my effort to see if XZ could beat RAR (RAR won, but only slightly, and I still went with XZ for the project) on the corpus of Super Nintendo chiptune sets. Next is the corpus of Nintendo DS chiptunes.

    Repacking Nintendo DS 2SF
    The prevailing chiptune format for storing Nintendo DS songs is the .2sf format. This is a subtype of the Portable Sound Format (PSF). The designers had the foresight to build compression directly into the format. Much of the payload data in a PSF file is compressed with zlib. Since I already incorporated Embedded XZ into the player project, I decided to try repacking the PSF payload data from zlib -> xz.

    In an effort to not corrupt standards too much, I changed the ’PSF’ file signature (seen in the first 3 bytes of a file) to ’psf’.
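
    For the curious, here is a rough Python sketch of the repacking idea, assuming the standard PSF layout (signature and version byte, reserved-area size, compressed-program size, CRC32, reserved area, zlib-compressed program, optional tag). It isn’t the site’s actual tooling; it only illustrates the transformation:

    import struct, zlib, lzma

    def repack_2sf(data: bytes) -> bytes:
        assert data[0:3] == b'PSF'
        version = data[3:4]
        reserved_size, program_size, _crc = struct.unpack_from('<III', data, 4)
        reserved = data[16:16 + reserved_size]
        program_z = data[16 + reserved_size:16 + reserved_size + program_size]
        tag = data[16 + reserved_size + program_size:]   # optional '[TAG]' block, carried over as-is

        program = zlib.decompress(program_z)             # original zlib-compressed payload
        program_xz = lzma.compress(program, preset=9 | lzma.PRESET_EXTREME)

        # lowercase 'psf' signature marks the repacked, xz-compressed variant
        header = b'psf' + version
        header += struct.pack('<III', reserved_size, len(program_xz), zlib.crc32(program_xz))
        return header + reserved + program_xz + tag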

    Results
    There are about 900 Nintendo DS games currently represented in my website’s archive. Total size of the original PSF archive, payloads packed with zlib: 2.992 GB. Total size of the same archive with payloads packed as xz: 2.059 GB.

    Using xz vs. zlib saved me nearly a gigabyte of storage. That extra storage doesn’t really impact my hosting plan very much (I have 1/2 TB, which is why I’m so nonchalant about hosting the massive MPlayer Samples Archive). However, smaller individual files translate to a better user experience since the files are faster to download.

    Here is a pretty picture to illustrate the space savings :



    The blue occasionally appears to dip below the orange but the data indicates that xz is always more efficient than zlib. Here’s the raw data (comes in vanilla CSV flavor too).

    Interface Impact
    So the good news for the end user is that the songs are faster to load up front. The downside is that there can be a noticeable delay when changing tracks. Even though all songs are packaged into one file for download, and the entire file is downloaded before playback begins, each song is individually compressed. Thus, changing tracks triggers another decompression operation. I’m toying with the possibility of some sort of background process that decompresses song (n+1) while playing song (n) in order to help compensate for this.

    I don’t like the idea of decompressing everything up front because A) it would take even longer to start playing; and B) it would take a huge amount of memory.

    Corner Case
    There was at least one case in which I found zlib to be better than xz. It looks like zlib’s minimum block size is smaller than xz’s. I think I discovered xz to be unable to compress a few bytes to a block any smaller than about 60-64 bytes while zlib got it down into the teens. However, in those cases, it was more efficient to just leave the data uncompressed anyway.
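
    A quick standalone Python check (using the standard zlib and lzma modules, not the player’s own code) shows the same container overhead on tiny inputs:

    import zlib, lzma

    blob = b'aaaa'                               # a few highly compressible bytes
    print(len(zlib.compress(blob, 9)))           # around a dozen bytes with zlib
    print(len(lzma.compress(blob, preset=9)))    # roughly 60+ bytes: the .xz container overhead dominates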