
Media (91)

Other articles (66)

  • Updating from version 0.1 to 0.2

    24 June 2013

    Explanations of the notable changes involved in moving from version 0.1 of MediaSPIP to version 0.2. What's new?
    Software dependencies: the latest versions of FFmpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Authorisations overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors can edit their information on the authors page

On other sites (7420)

  • Introducing the Data Warehouse Connector feature

    30 January, by Matomo Core Team

    Matomo is built on a simple truth: your data belongs to you, and you should have complete control over it. That's why we're excited to launch our new Data Warehouse Connector feature for Matomo Cloud, giving you even more ways to work with your analytics data.

    Until now, getting raw data from Matomo Cloud required APIs and custom scripts, or waiting for engineering help.  

    Our new Data Warehouse Connector feature removes those barriers. You can now access your raw, unaggregated data and schedule regular exports straight to your data warehouse. 

    The feature works with all major data warehouses, including (but not limited to):

    • Google BigQuery 
    • Amazon Redshift 
    • Snowflake 
    • Azure Synapse Analytics 
    • Apache Hive 
    • Teradata 

    You can schedule exports, combine your Matomo data with other data sources in your data warehouse, and easily query data with SQL-like queries. 

    Direct raw data access for greater data portability 

    Waiting for engineering support can delay your work. Managing API connections and writing scripts can be time-consuming. This keeps you from focusing on what you do best—analysing data. 

    With the Data Warehouse Connector feature, you get direct access to your raw Matomo data without the technical setup. So, you can spend more time analysing data and finding insights that matter. 

    Bringing your data together 

    Answering business questions often requires data from multiple sources. A single customer interaction might span your CRM, web analytics, sales systems, and more. Piecing this data together manually is time-consuming—what starts as a seemingly simple question from stakeholders can turn into hours of work collecting and comparing data across different tools. 

    This feature lets you combine your Matomo data with data from other business systems in your data warehouse. Instead of switching between tools or manually comparing spreadsheets, you can analyse all your data in one place to better understand how customers interact with your business. 
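
    For example, once your Matomo tables land in a warehouse such as BigQuery, joining them with a CRM table might look like the sketch below. This is an illustration only: the CRM table, its columns, and the project/dataset names are hypothetical, and the join assumes your visits carry a user_id (e.g. via Matomo's User ID feature).

    # Sketch: join exported Matomo visits with a hypothetical CRM table.
    # All project/dataset/table names are placeholders; adapt to your schema.
    # Requires: pip install google-cloud-bigquery
    from google.cloud import bigquery

    client = bigquery.Client()  # uses application-default credentials

    sql = """
    SELECT crm.customer_segment,
           COUNT(DISTINCT visit.idvisit) AS visits,
           AVG(visit.visit_total_time)   AS avg_duration
    FROM `your_project.your_dataset.matomo_log_visit` AS visit
    JOIN `your_project.your_dataset.crm_customers`    AS crm
      ON visit.user_id = crm.user_id
    GROUP BY crm.customer_segment
    """

    for row in client.query(sql).result():
        print(row.customer_segment, row.visits, row.avg_duration)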

    Easy, custom analysis with SQL-like queries 

    Standard, pre-built reports often don’t address the specific, detailed questions that analysts need to answer.  

    When you use the Data Warehouse Connector feature, you can use SQL-like queries in your data warehouse to do detailed, customised analysis. This flexibility allows you to explore your data in depth and uncover specific insights that aren’t possible with pre-built reports. 

    Here is an example of how you might use a SQL-like query to compare the behaviour of paying vs. non-paying users:

    SELECT
        custom_dimension_value AS user_type, -- assuming 'user_type' is stored in a custom dimension
        COUNT(*) AS total_visits,
        AVG(visit_total_time) AS avg_duration,
        SUM(conversion.revenue) AS total_spent
    FROM
        `your_project.your_dataset.matomo_log_visit` AS visit
    LEFT JOIN
        `your_project.your_dataset.matomo_log_conversion` AS conversion
    ON
        visit.idvisit = conversion.idvisit
    GROUP BY
        custom_dimension_value;

    This query helps you compare metrics such as the number of visits, average session duration, and total amount spent between paying and non-paying users. It provides a full view of behavioural differences between these groups. 

    Advanced data manipulation and visualisation 

    When you need to create detailed reports or dive deep into data analysis, working within the constraints of a fixed user interface (UI) can limit your ability to draw insights. 

    Exporting your Matomo data to a data warehouse like BigQuery provides greater flexibility for in-depth manipulation and advanced visualisations, enabling you to uncover deeper insights and tailor your reports more effectively. 
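
    As a sketch of that flexibility, here is a minimal example of pulling query results into pandas for further manipulation or plotting. It assumes a BigQuery export plus the google-cloud-bigquery and pandas packages; the table and column names follow the example query above.

    # Sketch: load exported Matomo data into pandas for ad-hoc analysis.
    # Requires: pip install google-cloud-bigquery pandas db-dtypes
    from google.cloud import bigquery

    client = bigquery.Client()

    sql = """
    SELECT custom_dimension_value AS user_type,
           COUNT(*)               AS total_visits,
           AVG(visit_total_time)  AS avg_duration
    FROM `your_project.your_dataset.matomo_log_visit`
    GROUP BY user_type
    """

    df = client.query(sql).to_dataframe()  # whole result set as a DataFrame
    print(df.sort_values("total_visits", ascending=False))

    # From here, any pandas/matplotlib workflow applies, e.g.:
    # df.plot.bar(x="user_type", y="avg_duration")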

    Getting started 

    To set up data warehouse exports in your Matomo:

    1. Go to System Admin (cog icon in the top right corner) 
    2. Select ‘Export’ from the left-hand menu 
    3. Choose ‘Data Warehouse Connector’ 

    You'll find detailed instructions in our data warehouse exports guide.

    Please note that enabling this feature costs an additional 10% of your current subscription. You can view the exact cost by following the steps above.

    New to Matomo? Start your 21-day free trial now (no credit card required), or request a demo.

  • AWS Lambda subprocess OSError: [Errno 2] No such file or directory

    11 September 2016, by Lev

    I'm trying to create a Lambda function that builds a collection of thumbnails from a video on Amazon S3 using ffmpeg. The ffmpeg binary is included in the function package.

    Function code:

    # -*- coding: utf-8 -*-

    import stat
    import shutil
    import boto3
    import logging
    import subprocess as sp
    import os
    import threading

    thumbnail_prefix = 'thumb_'
    thumbnail_ext = '.jpg'
    time_delta = 1
    video_frames_path = 'media/videos/frames'

    print('Loading function')
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    lambda_tmp_dir = '/tmp'  # Lambda function can use this directory.

    # ffmpeg is stored with this script.
    # When executing ffmpeg, execute permission is required.
    # But the Lambda source directory does not allow changing permissions.
    # So copy the ffmpeg binary to `/tmp` and add permission there.
    ffmpeg_bin = "{0}/ffmpeg.linux64".format(lambda_tmp_dir)
    shutil.copyfile('/var/task/ffmpeg.linux64', ffmpeg_bin)

    os.chmod(ffmpeg_bin, 777)

    # tried also:
    # os.chmod(ffmpeg_bin, os.stat(ffmpeg_bin).st_mode | stat.S_IEXEC)

    s3 = boto3.client('s3')


    def get_thumb_filename(num):
       return '{prefix}{num:03d}{ext}'.format(prefix=thumbnail_prefix, num=num, ext=thumbnail_ext)


    def create_thumbnails(video_url):
       i = 1
       filenames_list = []
       filename = None
       while i == 1 or os.path.isfile(os.path.join(os.getcwd(), get_thumb_filename(i-1))):
           if filename:
               filenames_list.append(filename)
           time = time_delta * (i - 1)
           filename = get_thumb_filename(i)
           print(ffmpeg_bin)
           if os.path.isfile(ffmpeg_bin):
               print('ok')
           sp.call(['sudo',
                    ffmpeg_bin,
                    '-ss',
                    str(time),
                    '-i',
                    video_url,
                    '-frames:v',
                    '1',
                    get_thumb_filename(i)])
           i += 1
       print(filenames_list)
       return filenames_list


    def s3_upload_file(file_path, key, bucket, acl, content_type):
       file = open(file_path, 'r')
       s3.put_object(
           Bucket=bucket,
           ACL=acl,
           Body=file,
           Key=key,
           ContentType=content_type
       )
       logger.info("file {0} moved to {1}/{2}".format(file_path, bucket, key))


    def s3_upload_files_in_threads(filenames_list, dir_path, bucket, s3path, acl, content_type):
       for filename in filenames_list:
           if os.path.isfile(os.path.join(dir_path, filename)):
               print(os.path.join(dir_path, filename))
           t = threading.Thread(target=s3_upload_file,
                                args=(os.path.join(dir_path, filename),
                                      '{0}/{1}'.format(s3path, filename),
                                      bucket,
                                      acl,
                                      content_type)).start()


    def lambda_handler(event, context):
       bucket = event['Records'][0]['s3']['bucket']['name']
       video_key = event['Records'][0]['s3']['object']['key']
       video_name = video_key.split('/')[-1].split('.')[0]
       video_url = 'http://{0}/{1}'.format(bucket, video_key)
       filenames_list = create_thumbnails(video_url)
       s3_upload_files_in_threads(filenames_list,
                                  os.getcwd(),
                                  bucket,
                                  '{0}/{1}'.format(video_frames_path, video_name),
                                  'public-read',
                                  'image/jpeg')
       return

    During execution I get the following logs:

    Loading function

    /tmp/ffmpeg.linux64

    ok

    [Errno 2] No such file or directory: OSError
    Traceback (most recent call last):
    File "/var/task/lambda_function.py", line 112, in lambda_handler
    filenames_list = create_thumbnails(video_url)
    File "/var/task/lambda_function.py", line 77, in create_thumbnails
    get_thumb_filename(i)])
    File "/usr/lib64/python2.7/subprocess.py", line 522, in call
    return Popen(*popenargs, **kwargs).wait()
    File "/usr/lib64/python2.7/subprocess.py", line 710, in __init__
    errread, errwrite)
    File "/usr/lib64/python2.7/subprocess.py", line 1335, in _execute_child
    raise child_exception
    OSError: [Errno 2] No such file or directory

    When I use the same sp.call() with the same ffmpeg binary on my EC2 instance, it works fine.
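
    For reference, a stripped-down, self-contained version of the same copy/chmod/call pattern looks like this; the input URL and paths are placeholders, the sudo wrapper is dropped, and the chmod mode is written in octal (0o755), the usual Python idiom for permission bits.

    # Minimal sketch of the copy/chmod/call pattern above, no sudo wrapper.
    import os
    import shutil
    import subprocess as sp

    src = '/var/task/ffmpeg.linux64'    # binary shipped with the package
    ffmpeg_bin = '/tmp/ffmpeg.linux64'  # /tmp is writable at runtime

    shutil.copyfile(src, ffmpeg_bin)
    os.chmod(ffmpeg_bin, 0o755)         # octal rwxr-xr-x

    # Note: subprocess raises OSError [Errno 2] whenever the *first* element
    # of the argument list cannot be found and executed.
    sp.call([ffmpeg_bin,
             '-ss', '1',
             '-i', 'http://example-bucket.s3.amazonaws.com/video.mp4',
             '-frames:v', '1',
             '/tmp/thumb_001.jpg'])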

  • ffmpeg stream chrome kiosk mode ubuntu 16.04 server

    21 December 2016, by Raul

    I have a weird out-of-sync issue while using ffmpeg to stream a Chrome browser to YouTube Live from an Ubuntu 16.04 server.

    Issue: the video streamed to YouTube has audio and video out of sync, sometimes by as much as 3s.

    Current flow:

    1) Start pulseaudio. We use something like this to start it:

    pulseaudio --start -vvv --disallow-exit --log-target=syslog --high-priority --exit-idle-time=-1 --daemonize

    2) Start Xvfb

    Xvfb :0 -ac -screen 0 1920x1080x24

    3) Start Chrome on Linux in kiosk mode

    google-chrome --kiosk --disable-gpu --incognito --no-first-run --disable-java --disable-plugins --disable-translate --disk-cache-size=$((1024 * 1024)) --disk-cache-dir=/tmp/chrome/ --user-data-dir=/tmp/chrome/ --force-device-scale-factor=1 --window-size=1920,1080 --window-position=0,0 LOCATION_URL

    4) Start ffmpeg

    ffmpeg -y \
     -thread_queue_size 8192 -rtbufsize 250M -f x11grab -video_size 1920x1080 -framerate 24 -i :0 \
     -thread_queue_size 8192 -channel_layout stereo -f alsa -i pulse \
     -c:v libx264 -pix_fmt yuv420p -c:v libx264 -g 48 -crf 24 -filter:v fps=24 -preset ultrafast -tune zerolatency \
     -c:a aac -strict -2 -channel_layout stereo -ab 96k -ac 2 -flags +global_header \
     -f flv YOUTUBE_LIVE_STREAMING_RTMP

    Note: this is running on an Amazon EC2 instance, meaning there is no sound card, so ALSA and PulseAudio create a dummy audio device. However, the latency does not come from there. Logs:

    Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Adjust latency mode enabled, configuring sink latency to half of overall latency.
    Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Requested latency=23.22 ms, Received latency=23.22 ms
    Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Final latency 69.66 ms = 23.22 ms + 2*11.61 ms + 23.22 ms

    At this point, here's what we observed:

    1. If we start ffmpeg immediately after issuing the command to start Chrome, we see DTS errors from ffmpeg. Audio is out of sync with the video, running 3-5 seconds AHEAD, and the offset stays the same for the full duration of the stream.

    2. If we start ffmpeg after around 10 seconds, audio and video are almost in sync. We then manually added -itsoffset -0.125 to the ffmpeg command and everything is perfect.

    Questions:

    1. Why would ffmpeg have so much lag if it's started right after Chrome?
    2. Is starting ffmpeg after 10s (or X seconds) the expected behaviour? That is, does the system need to wait for the audio/video signals to be "ready"?
    3. Is there a way to know reliably when Chrome is fully ready, so we can start ffmpeg? We found it sometimes takes 5s, sometimes 10, depending on the URL we load.
    4. Besides the DTS errors that ffmpeg throws, is there any other way to tell that audio/video is out of sync? Sometimes we have a delay of 0.5 to 1s, but ffmpeg reports nothing, and a restart is required to "re-balance" the audio/video inputs and get them back in sync.
    5. Could pulseaudio be the problem in this scenario?

    Thank you

    UPDATE Dec 20

    We were able to do some tricks to force Chrome to start the audio on page load, which makes it connect to PulseAudio right away. Doing that, plus adding a 3s delay before starting ffmpeg, there is no more delay when ffmpeg starts; a rough sketch of the sequencing is shown below.
    However, our app is a WebRTC app, and something even STRANGER happens: if we load the page with no webcam/audio, then once the webcam/audio is enabled, ffmpeg (while showing no errors) lags by about 2s. As we keep talking, that delay is GONE within 30s at most.
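
    Concretely, the startup sequencing now amounts to something like this. Polling pactl is just one way to detect that Chrome has attached to PulseAudio (any equivalent check would do); the ffmpeg arguments are abbreviated from the full command above, and the page URL and stream key are placeholders.

    # Sketch: start Chrome, wait until PulseAudio reports a sink input
    # (i.e. Chrome is actually producing audio), settle, then start ffmpeg.
    import os
    import subprocess
    import time

    env = dict(os.environ, DISPLAY=':0')
    subprocess.Popen(['google-chrome', '--kiosk', 'https://example.com'], env=env)

    # Poll until Chrome shows up as a PulseAudio sink input.
    for _ in range(120):
        if subprocess.check_output(['pactl', 'list', 'short', 'sink-inputs']).strip():
            break
        time.sleep(0.5)

    time.sleep(3)  # the extra delay that removed the startup offset for us

    subprocess.Popen(['ffmpeg', '-y',
                      '-f', 'x11grab', '-video_size', '1920x1080',
                      '-framerate', '24', '-i', ':0',
                      '-f', 'alsa', '-i', 'pulse',
                      '-c:v', 'libx264', '-c:a', 'aac',
                      '-f', 'flv', 'rtmp://a.rtmp.youtube.com/live2/STREAM_KEY'])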

    So the new questions are:

    1. Besides the DTS errors that ffmpeg throws, is there any other way to tell that audio/video is out of sync? Sometimes we have a delay of 0.5 to 1s, but ffmpeg reports nothing.
    2. What could cause the initial audio/video offset, and what makes it catch up afterwards?