
Media (91)

Other articles (79)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
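
    MediaSPIP's actual encoding pipeline is not shown in this excerpt, but the conversions it describes can be sketched with plain ffmpeg commands. The filenames below are hypothetical, and the codec choices are simply the usual ones for these containers (H.264/AAC for MP4, Theora/Vorbis for Ogv, VP8/Vorbis for WebM, LAME for MP3, Vorbis for Ogg):

    # Illustrative only - one web-friendly output per target container
    ffmpeg -i upload.mov -c:v libx264 -c:a aac -movflags +faststart video.mp4
    ffmpeg -i upload.mov -c:v libtheora -q:v 6 -c:a libvorbis -q:a 4 video.ogv
    ffmpeg -i upload.mov -c:v libvpx -b:v 1M -c:a libvorbis video.webm

    # Audio variants for HTML5 and Flash playback
    ffmpeg -i upload.wav -c:a libmp3lame -q:a 2 audio.mp3
    ffmpeg -i upload.wav -c:a libvorbis -q:a 4 audio.ogg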

  • MediaSPIP Player: potential problems

    22 February 2011, by

    The player does not work in Internet Explorer
    On Internet Explorer (8 and 7 at least), the plugin uses the Flash player flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate.
    If the configuration of that Apache module contains a line resembling the following, try removing it or commenting it out to see whether the player then works correctly: (...)
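
    The directive itself is elided in the excerpt above; it refers to a mod_deflate output filter rule. A hypothetical example of the kind of line to look for in the Apache configuration (not the actual line from the article):

    AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript

    Commenting such a line out and reloading Apache shows whether compressed responses were what broke the Flash player in Internet Explorer.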

On other sites (10950)

  • Revision 35243: follow [35095]

    16 February 2010, by brunobergot@… — Log

    follow [35095]

  • Matomo Celebrates 15 Years of Building an Open-Source & Transparent Web Analytics Solution

    30 June 2022, by Matthieu Aubry — About, Community

    Fifteen years ago, I realised that people (myself included) were increasingly integrating the internet into their everyday lives, and it was clear that it would only expand in the future. It was an exciting new world, but the amount of personal data shared online, the level of tracking and the lack of security were growing concerns. Google Analytics had just launched and was already gaining huge traction – so data from millions of websites started flowing into Google’s database, creating what was then the biggest centralised database about people worldwide and their actions online.

    So as a young engineering student, I decided we needed to build an open source and transparent solution that could help make the internet more secure and private while still providing organisations with powerful insights. I aimed to create a win-win solution for businesses and their digital consumers.

    And in 2007, I started developing Matomo with help from Scott Switzer and Jennifer Langdon (who offered me an internship and support).

    All thanks to the Matomo Community

    We have reached significant milestones and made major changes over the last 15 years, but we wouldn’t be where we are today without the Matomo Community.

    So I would like to celebrate and thank the hundreds of volunteer developers who have donated their time to develop Matomo, the thousands of contributors who provided feedback to improve Matomo, the countless supportive forum members, our passionate team of 40 at Matomo, the numerous translators who have translated Matomo and the 1.5 million websites that choose Matomo as their analytics platform.

    Matomo's Birthday
    Team Meetup in Paris in 2012

    Matomo has been a community effort built on the shoulders of many, and we will continue to work for you. 

    So let’s look at some milestones we have achieved over the last 15 years.

    Looking back on milestones in our timeline

    2007

    • Birth of Matomo
    • First alpha version released

    2008

    • Released the first public version, 0.1.0

    2009

    • 50,000 websites use Matomo

    2010

    • Matomo's first stable version 1.0.0 released
    • Mobile app launched

    2011

    • Released Ecommerce Analytics, Custom Variables, First Party Cookies

    • Released Privacy control features (first of many privacy features to come!)

    2012

    • Released Log Analytics feature
    • 1 Million Downloads!
    • 300,000 websites worldwide use Matomo

    2013

    • Matomo is now available in 50 languages!
    • Matomo brand redesign

    2016

    2017

    • Launched Matomo Cloud service 
    • Released the Multi Channel Conversion Attribution, Custom Reports, Login SAML, WooCommerce Analytics, and Heatmap & Session Recording premium features

    2018

    2019

    2020

    2021

    • 1,000,000 websites worldwide use Matomo
    • including 30,000 active Matomo for WordPress installations
    • Released the SEO Web Vitals, Advertising Conversion Export and Tracking Spam Prevention features

    2022

    • Released WP Statistics to Matomo importer

    Our efforts continue

    While we’ve seen incredible growth over the years, our work doesn’t stop there. In fact, we’re only just getting started.

    Today over 55% of the internet continues to use privacy-threatening web analytics solutions, while 1.5% uses Matomo. So there are still great strides to be made to create a more private internet, and joining the Matomo Community is one way to support this movement.

    There are many ways to get involved, too.

    So what comes next for Matomo?

    The future of Matomo is approachable, powerful and flexible. We’re strengthening the customers’ voice, expanding our resources internally (we’re continuously hiring!) and conducting rigorous customer research to craft a tool that balances usability and functionality.

    I look forward to the next 15 years and seeing what the future holds for Matomo and our community.

  • ffmpeg stops capturing a whole hour of HTTP stream after some time

    7 July 2020, by CompuChip

    First of all, sorry if I'm using the wrong terminology. I've been playing around with nginx and I'm still a bit confused about RTMP and HLS and other acronyms.

    I've managed to set up OBS to stream to an nginx server, which takes the RTMP stream and chops it into pieces for HLS. Here's the relevant part of the nginx configuration file.

    rtmp {
        server {
            listen 1935;
            chunk_size 4000;
            ping 30s;
            deny play all;

            application live {
                live on;
                hls on;
                hls_nested on; # Create a new folder for each stream
                hls_path /mnt/hls/live;
                hls_fragment 3s;
                hls_fragment_naming timestamp;
                hls_playlist_length 60s;
            }
        }
    }

    http {
        server {
            listen 81 ssl;

            # creates the http-location for our full-resolution (desktop) HLS stream - "http://localhost:8080/live/test/index.m3u8"
            location /live {
                # Elided caching and CORS for brevity

                alias /mnt/hls/live;
                add_header Cache-Control no-cache;
                index index.m3u8;
            }
        }
    }

    This works well; I can view the stream in VLC or on a website and it looks smooth. Now I wanted to add some logging: I'd like to write full hours (starting at xx:00:00 and ending at xx:59:59) to a file named log_yyyymmdd_hh.mp4, e.g. log_20200707_18.mp4 for the file covering 7 July 2020, 18:00 - 19:00 hrs. So I've set up an hourly cron job with the following ffmpeg command:

    ffmpeg -i https://stream.example.com:81/live/<streamkey> -preset veryfast -maxrate 2000k \
        -bufsize 2000k -g 60 -t 3600 -y /var/video/log/$(date +\%Y\%m\%d_\%H00).mp4 >/dev/null 2>&1
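
    A side note on the escaping: in a crontab entry the percent sign is special (it ends the command and feeds the remainder to stdin), which is why the date format is written with \%. The crontab line itself is not shown in the question, but it presumably looks something like this hypothetical entry:

    # Hypothetical crontab entry: run at minute 0 of every hour
    0 * * * * ffmpeg -i https://stream.example.com:81/live/<streamkey> -preset veryfast -maxrate 2000k -bufsize 2000k -g 60 -t 3600 -y /var/video/log/$(date +\%Y\%m\%d_\%H00).mp4 >/dev/null 2>&1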

    At first this seemed to work well, so I left it running happily for about 24 hours. When I checked, most of my hourly files were small (under 100 MB) and only about 10 to 15 minutes long. It seems like any small delay in the stream causes ffmpeg to stop writing to the file. I suspect such hiccups may, for example, be caused by an OBS plugin and I'll need to look into that, but I would prefer that ffmpeg retry for some time before giving up. What arguments should I be passing to ffmpeg so that it doesn't break when the stream is down for, say, up to a second every now and then?
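
    For reference, ffmpeg's HTTP(S) input protocol does have reconnect options that are commonly suggested for short dropouts like this. Whether they are enough for this particular HLS source is untested here, the delay value is only an example, and they must appear before -i so that they apply to the input:

    # Hypothetical variant of the capture command with HTTP reconnect options added
    ffmpeg -reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5 \
        -i https://stream.example.com:81/live/<streamkey> -preset veryfast -maxrate 2000k \
        -bufsize 2000k -g 60 -t 3600 -y /var/video/log/$(date +\%Y\%m\%d_\%H00).mp4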

    When I play back the HLS files there don't seem to be any noticeable gaps, so eventually all the data arrives. I went for the crontab solution with ffmpeg because when recording from nginx I could not figure out how to start recording exactly at the top of the hour.
