Advanced search

Media (3)

Word: - Tags -/image

Other articles (76)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Permissions overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors can edit their own information on the authors page

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

On other sites (11982)

  • A *hot* Piwik Community Meetup 2015!

    10 August 2015 — Community

    Last weekend I arrived in Germany to attend the Piwik Community Meetup 2015 and now I am in Poland. I joined Piwik PRO back in May as enterprise support project coordinator in North America. I am now writing this from the Piwik PRO main office in Wrocław, where I'll be working for the next two weeks.

    The meetup was HOT in every sense! Berlin temperatures reached 35 degrees (Celsius), and I finally met in person several long-time, dedicated Piwik community contributors.

    Meetup preparation in Berlin, photo by M. Zawadziński, licensed under CC-BY-SA 4.0

    Pictures from the meetup preparation sessions

    In the first leg of my trip I was in Berlin to meet Piwik community members and Piwik PRO staff to prepare for the 2015 annual Piwik community meetup. These are my notes, taken during the meeting at the request of one of my colleagues. I also relayed the event live on Framasphère, Twitter and IRC.

    Community discussion at the meetup, photo by D.Czajka, licensed under CC-BY-SA 4.0

    More pictures from the Piwik meetup

    This was harder than I expected: I took notes with my laptop, pictures with my phone, wrote live to social media (using the Android Diaspora Native Web App), and used my laptop to relay on IRC. Going forward this requires better preparation. I was glad I had a few links and pictures ready beforehand, but it really requires intense focus to pull off. I am glad the presenters were patient when I asked them to repeat some of the ideas they shared. I am also a bit disappointed that not much happened on IRC.

    Two-day preparation sessions

    The discussions and sessions we had during the two days prior to the meetup are available here.

    We gathered in rented apartments in Berlin, which reminded me very much of similar community gatherings, and perhaps of BarCamp and, at a much smaller scale, UDS sessions.

    Piwik Pizza!, photo by F. Rodríguez, licensed under CC-BY-SA 4.0

    A list of topic ideas was submitted initially, and we then proceeded to hold scheduled sessions for open discussion. Several people shared their concern that there was no way to participate remotely, which led to making the Trello boards used/linked here public.

    Note: The Trello links below still have action items and notes that are pending bug report / feature request filing, which should happen over the coming weeks. Most importantly, many action items will need leads identified for the different community teams, including Translations and Documentation, and better coordination of upcoming community engagement.

    Monday sessions consisted of the following subjects:

    On Tuesday we met again to discuss the following subjects:

    Some more details about individual preparation sessions

    What are Piwik values & how to communicate them?

    The main subjects in this session were important changes proposed to the project mission and values. These were edited directly on the wiki page on GitHub; some of the changes can be seen by comparing revisions.

    Piwik mission statement (bug #7376)

    “To create the leading Free and open source analytics platform, and to support global organisations and communities to keep full control over their data.”

    Our values

    • Openness
    • Freedom
    • Transparency
    • Data ownership
    • Privacy
    • Kaizen (改善): continuous improvement

    This was also presented by Matthieu Aubry at the meetup and is published on the Roadmap page. Giving it more visibility, perhaps with a dedicated top-level page for the Mission and Values, was also brought up.

    Meetup agenda and notes

    The official agenda is available here.

    Many Piwik PRO employees stayed in Berlin for the meetup, and we had good participation, although less than last year in Munich, as my colleagues told me. Some attendees were consultants, others staff from public organizations, universities, etc. In retrospect, considering the very hot weather and summer holidays, the attendance was good. I was very happy to arrive at the beautiful Kulturbrauerei and enter the air-conditioned Soda Club. T-shirts were waiting for all attendees, and free drinks (non-alcoholic!) were welcome.

  • What is Google Analytics data sampling and what's so bad about it?

    16 August 2019, by Joselyn Khor — Analytics Tips, Development

    What is Google Analytics data sampling, and what's so bad about it?

    Google (2019) explains what data sampling is:

    “In data analysis, sampling is the practice of analysing a subset of all data in order to uncover the meaningful information in the larger data set.”[1]

    In other words, instead of analysing all of the data, only a subset up to a threshold is analysed, and anything beyond that threshold is estimated from the patterns found in the sample.

    Google's (2019) data sampling thresholds:

    Ad-hoc queries of your data are subject to the following general thresholds for sampling:
    [Google] Analytics Standard : 500k sessions at the property level for the date range you are using
    [Google] Analytics 360 : 100M sessions at the view level for the date range you are using (para. 3) [2]

    This threshold is limiting because your data in GA may become more inaccurate as the traffic to your website increases.

    Say you're looking through all your traffic data from the last year and find you have 5 million page views. Only 500K of that 5 million is accurate! The data for the remaining 4.5 million (90%) is an assumption based on the 500K sample size.
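    To make the arithmetic concrete, here is a minimal sketch (in C++, with hypothetical numbers) of how a sampled report scales up what it actually measured. This illustrates extrapolation from a sample in general, not Google's actual algorithm:

    #include <cstdio>

    int main()
    {
        // Hypothetical figures matching the example above.
        const double threshold = 500000.0;   // GA Standard: 500k sessions per property/date range
        const double total     = 5000000.0;  // traffic actually recorded in the date range

        const double sampleRate = threshold / total;   // 0.10 -> only 10% is measured
        const double estimated  = 1.0 - sampleRate;    // 0.90 -> 90% is extrapolated

        // A metric observed within the sample is scaled up by 1/sampleRate.
        const double conversionsInSample = 1200.0;     // hypothetical
        const double reportedConversions = conversionsInSample / sampleRate;

        std::printf("measured: %.0f%%, estimated: %.0f%%\n", sampleRate * 100, estimated * 100);
        std::printf("reported (extrapolated) conversions: %.0f\n", reportedConversions);
        return 0;
    }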

    This is a key weapon Google uses to sell to large businesses. In order to increase that threshold for more accurate reporting, upgrading to premium Google Analytics 360 for approximately US$150,000 per year seems to be the only choice.

    What's so bad about data sampling?

    It’s unfair to say sampled data is to be disregarded completely. There is a calculation ensuring it is representative and can allow you to get good enough insights. However, we don’t encourage it as we don’t just want “good enough” data. We want the actual facts.

    In a recent survey sent to Matomo customers, we found a large proportion of users switched from GA to Matomo due to the data sampling issue.

    The two reasons why data sampling isn't preferable:

    1. If the selected sample size is too small, you won't get a good representation of all the data.
    2. The bigger your website grows, the more inaccurate your reports will become.

    As an example of why we don't fully trust sampled data: say you have an ecommerce store and notice that your GA revenue reports don't match your actual sales data because of data sampling. In GA you may be seeing revenue for the month as $1 million, instead of actual sales of $800K.

    The sampling here has caused an inaccuracy that could have negative financial implications. What you get in the GA report is an estimated dollar figure rather than the actual sales. Making decisions based on inaccurate data can be costly in this case. 

    Another disadvantage of sampled data is that you might be missing out on opportunities you would have noticed if you had a view of the whole. For example, you may not see real patterns in your traffic because part of the data has been estimated rather than measured.

    Not getting the chance to see things as they are, and only being able to rely on the conclusions and assumptions GA makes, is risky. The bigger your business grows, the less you can risk making business decisions based on assumptions that could be inaccurate.

    If you feel you could be missing out on opportunities because your GA data is sampled data, get 100% accurately reported data. 

    The benefits of 100% accurate data

    Matomo doesn’t use data sampling on any of our products or plans. You get to see all of your data and not a sampled data set.

    Data quality is necessary for high impact decision-making. It’s hard to make strategic changes if you don’t have confidence that your data is reliable and accurate.

    Learn about how Matomo is a serious contender to Google Analytics 360. 

    Now you can import your Google Analytics data directly into your Matomo

    If you want to make the switch to Matomo but are worried about losing all your historic Google Analytics data, you can now import it directly into your Matomo with the Google Analytics Importer tool.


    Take the challenge!

    Compare your Google Analytics data (sampled data) against your Matomo data, or if you don't have Matomo data yet, sign up for our 30-day free trial and start tracking!

    References:

    [1 & 2] About data sampling. (2019). In Analytics Help. Retrieved August 14, 2019, from https://support.google.com/analytics/answer/2637192

  • Capture and encode desktop with libav in real time not giving correct images

    3 September 2022, by thoxey

    As part of a larger project I want to be able to capture and encode the desktop frame by frame in real time. I have the following test code to reproduce the issue shown in the screenshot:

    


#include 
#include 
#include <iostream>
#include <fstream>
#include <string>
#include 
#include 

extern "C"
{
#include "libavdevice/avdevice.h"
#include "libavutil/channel_layout.h"
#include "libavutil/mathematics.h"
#include "libavutil/opt.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
}


/* 5 seconds stream duration */
#define STREAM_DURATION   5.0
#define STREAM_FRAME_RATE 25 /* 25 images/s */
#define STREAM_NB_FRAMES  ((int)(STREAM_DURATION * STREAM_FRAME_RATE))
#define STREAM_PIX_FMT    AV_PIX_FMT_YUV420P /* default pix_fmt */

int videoStreamIndx;
int framerate = 30;

int width = 1920;
int height = 1080;

int encPacketCounter;

AVFormatContext* ifmtCtx;
AVCodecContext* avcodecContx;
AVFormatContext* ofmtCtx;
AVStream* videoStream;
AVCodecContext* avCntxOut;
AVPacket* avPkt;
AVFrame* avFrame;
AVFrame* outFrame;
SwsContext* swsCtx;

std::ofstream fs;


AVDictionary* ConfigureScreenCapture()
{

    AVDictionary* options = NULL;
    //Try adding "-rtbufsize 100M" as in https://stackoverflow.com/questions/6766333/capture-windows-screen-with-ffmpeg
    av_dict_set(&options, "rtbufsize", "100M", 0);
    av_dict_set(&options, "framerate", std::to_string(framerate).c_str(), 0);
    char buffer[16];
    sprintf(buffer, "%dx%d", width, height);
    av_dict_set(&options, "video_size", buffer, 0);
    return options;
}

AVCodecParameters* ConfigureAvCodec()
{
    AVCodecParameters* av_codec_par_out = avcodec_parameters_alloc();
    av_codec_par_out->width = width;
    av_codec_par_out->height = height;
    av_codec_par_out->bit_rate = 40000;
    av_codec_par_out->codec_id = AV_CODEC_ID_H264; //AV_CODEC_ID_MPEG4; //Try H.264 instead of MPEG4
    av_codec_par_out->codec_type = AVMEDIA_TYPE_VIDEO;
    av_codec_par_out->format = 0;
    return av_codec_par_out;
}

int GetVideoStreamIndex()
{
    int VideoStreamIndx = -1;
    avformat_find_stream_info(ifmtCtx, NULL);
    /* find the first video stream index.
       Also there is an API available to do the below operations */
    for (int i = 0; i < (int)ifmtCtx->nb_streams; i++) // find video stream position/index.
    {
        if (ifmtCtx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
        {
            VideoStreamIndx = i;
            break;
        }
    }

    if (VideoStreamIndx == -1)
    {
    }

    return VideoStreamIndx;
}

void CreateFrames(AVCodecParameters* av_codec_par_in, AVCodecParameters* av_codec_par_out)
{

    avFrame = av_frame_alloc();
    avFrame->width = avcodecContx->width;
    avFrame->height = avcodecContx->height;
    avFrame->format = av_codec_par_in->format;
    av_frame_get_buffer(avFrame, 0);

    outFrame = av_frame_alloc();
    outFrame->width = avCntxOut->width;
    outFrame->height = avCntxOut->height;
    outFrame->format = av_codec_par_out->format;
    av_frame_get_buffer(outFrame, 0);
}

bool Init()
{
    AVCodecParameters* avCodecParOut = ConfigureAvCodec();

    AVDictionary* options = ConfigureScreenCapture();

    AVInputFormat* ifmt = av_find_input_format("gdigrab");
    auto ifmtCtxLocal = avformat_alloc_context();
    if (avformat_open_input(&ifmtCtxLocal, "desktop", ifmt, &options) < 0)
    {
        return false;
    }
    ifmtCtx = ifmtCtxLocal;

    videoStreamIndx = GetVideoStreamIndex();

    AVCodecParameters* avCodecParIn = avcodec_parameters_alloc();
    avCodecParIn = ifmtCtx->streams[videoStreamIndx]->codecpar;

    AVCodec* avCodec = avcodec_find_decoder(avCodecParIn->codec_id);
    if (avCodec == NULL)
    {
        return false;
    }

    avcodecContx = avcodec_alloc_context3(avCodec);
    if (avcodec_parameters_to_context(avcodecContx, avCodecParIn) < 0)
    {
        return false;
    }

    //av_dict_set
    int value = avcodec_open2(avcodecContx, avCodec, NULL); //Initialize the AVCodecContext to use the given AVCodec.
    if (value < 0)
    {
        return false;
    }

    AVOutputFormat* ofmt = av_guess_format("h264", NULL, NULL);

    if (ofmt == NULL)
    {
        return false;
    }

    auto ofmtCtxLocal = avformat_alloc_context();
    avformat_alloc_output_context2(&ofmtCtxLocal, ofmt, NULL, NULL);
    if (ofmtCtxLocal == NULL)
    {
        return false;
    }
    ofmtCtx = ofmtCtxLocal;

    AVCodec* avCodecOut = avcodec_find_encoder(avCodecParOut->codec_id);
    if (avCodecOut == NULL)
    {
        return false;
    }

    videoStream = avformat_new_stream(ofmtCtx, avCodecOut);
    if (videoStream == NULL)
    {
        return false;
    }

    avCntxOut = avcodec_alloc_context3(avCodecOut);
    if (avCntxOut == NULL)
    {
        return false;
    }

    if (avcodec_parameters_copy(videoStream->codecpar, avCodecParOut) < 0)
    {
        return false;
    }

    if (avcodec_parameters_to_context(avCntxOut, avCodecParOut) < 0)
    {
        return false;
    }

    avCntxOut->gop_size = 30; //3; //Use I-Frame frame every 30 frames.
    avCntxOut->max_b_frames = 0;
    avCntxOut->time_base.num = 1;
    avCntxOut->time_base.den = framerate;

    //avio_open(&ofmtCtx->pb, "", AVIO_FLAG_READ_WRITE);

    if (avformat_write_header(ofmtCtx, NULL) < 0)
    {
        return false;
    }

    value = avcodec_open2(avCntxOut, avCodecOut, NULL); //Initialize the AVCodecContext to use the given AVCodec.
    if (value < 0)
    {
        return false;
    }

    if (avcodecContx->codec_id == AV_CODEC_ID_H264)
    {
        av_opt_set(avCntxOut->priv_data, "preset", "ultrafast", 0);
        av_opt_set(avCntxOut->priv_data, "zerolatency", "1", 0);
        av_opt_set(avCntxOut->priv_data, "tune", "ull", 0);
    }

    if ((ofmtCtx->oformat->flags & AVFMT_GLOBALHEADER) != 0)
    {
        avCntxOut->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }

    CreateFrames(avCodecParIn, avCodecParOut);

    swsCtx = sws_alloc_context();
    if (sws_init_context(swsCtx, NULL, NULL) < 0)
    {
        return false;
    }

    swsCtx = sws_getContext(avcodecContx->width, avcodecContx->height, avcodecContx->pix_fmt,
        avCntxOut->width, avCntxOut->height, avCntxOut->pix_fmt, SWS_FAST_BILINEAR,
        NULL, NULL, NULL);
    if (swsCtx == NULL)
    {
        return false;
    }

    return true;
}

void Encode(AVCodecContext* enc_ctx, AVFrame* frame, AVPacket* pkt)
{
    int ret;

    /* send the frame to the encoder */
    ret = avcodec_send_frame(enc_ctx, frame);
    if (ret < 0)
    {
        return;
    }

    while (ret >= 0)
    {
        ret = avcodec_receive_packet(enc_ctx, pkt);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return;
        if (ret < 0)
        {
            return;
        }

        fs.write((char*)pkt->data, pkt->size);
        av_packet_unref(pkt);
    }
}

void EncodeFrames(int noFrames)
{
    int frameCount = 0;
    avPkt = av_packet_alloc();
    AVPacket* outPacket = av_packet_alloc();
    encPacketCounter = 0;

    while (av_read_frame(ifmtCtx, avPkt) >= 0)
    {
        if (frameCount++ == noFrames)
            break;
        if (avPkt->stream_index != videoStreamIndx) continue;

        avcodec_send_packet(avcodecContx, avPkt);

        if (avcodec_receive_frame(avcodecContx, avFrame) >= 0) // Frame successfully decoded :)
        {
            outPacket->data = NULL; // packet data will be allocated by the encoder
            outPacket->size = 0;

            outPacket->pts = av_rescale_q(encPacketCounter, avCntxOut->time_base, videoStream->time_base);
            if (outPacket->dts != AV_NOPTS_VALUE)
                outPacket->dts = av_rescale_q(encPacketCounter, avCntxOut->time_base, videoStream->time_base);

            outPacket->dts = av_rescale_q(encPacketCounter, avCntxOut->time_base, videoStream->time_base);
            outPacket->duration = av_rescale_q(1, avCntxOut->time_base, videoStream->time_base);

            outFrame->pts = av_rescale_q(encPacketCounter, avCntxOut->time_base, videoStream->time_base);
            outFrame->pkt_duration = av_rescale_q(encPacketCounter, avCntxOut->time_base, videoStream->time_base);
            encPacketCounter++;

            int sts = sws_scale(swsCtx,
                avFrame->data, avFrame->linesize, 0, avFrame->height,
                outFrame->data, outFrame->linesize);

            /* make sure the frame data is writable */
            auto ret = av_frame_make_writable(outFrame);
            if (ret < 0)
                break;
            Encode(avCntxOut, outFrame, outPacket);
        }
        av_frame_unref(avFrame);
        av_packet_unref(avPkt);
    }
}

void Dispose()
{
    fs.close();

    auto ifmtCtxLocal = ifmtCtx;
    avformat_close_input(&ifmtCtx);
    avformat_free_context(ifmtCtx);
    avcodec_free_context(&avcodecContx);

}

int main(int argc, char** argv)
{
    avdevice_register_all();

    fs.open("out.h264");

    if (Init())
    {
        EncodeFrames(300);
    }
    else
    {
        std::cout << "Failed to Init \n";
    }

    Dispose();

    return 0;
}


    As far as I can tell, the setup of the encoding process is correct, as it is largely unchanged from the working example given in the official documentation: https://libav.org/documentation/doxygen/master/encode__video_8c_source.html
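    For comparison, the core of that reference example reduces to the standard send/receive pattern, with the frame's pts kept as a simple counter in the encoder's own time_base. Below is a minimal sketch of that loop, reusing variable names from the code above and with error handling trimmed; it illustrates the reference pattern rather than being a drop-in fix:

    AVPacket* pkt = av_packet_alloc();
    for (int i = 0; i < STREAM_NB_FRAMES; i++)
    {
        // The reference example makes the frame writable before filling it.
        if (av_frame_make_writable(outFrame) < 0)
            break;

        /* ... fill outFrame->data here, e.g. via sws_scale() ... */

        outFrame->pts = i; // a plain frame counter, in the encoder's (avCntxOut) time_base units

        if (avcodec_send_frame(avCntxOut, outFrame) < 0)
            break;

        int ret;
        while ((ret = avcodec_receive_packet(avCntxOut, pkt)) >= 0)
        {
            fs.write(reinterpret_cast<char*>(pkt->data), pkt->size);
            av_packet_unref(pkt);
        }
        // ret == AVERROR(EAGAIN): encoder needs more input; AVERROR_EOF: fully flushed.
    }
    // Flushing the encoder (avcodec_send_frame(avCntxOut, NULL)) is omitted here for brevity.
    av_packet_free(&pkt);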


    However, there is limited documentation around desktop capture online, so I am not sure if I have set that part up correctly.
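    One thing that can help here is to dump what gdigrab actually delivers after avformat_open_input()/avformat_find_stream_info(), and compare it with the formats the SwsContext and the encoder were configured for. A small debugging sketch (not part of the original program) using only standard libav* calls:

    extern "C" {
    #include "libavcodec/avcodec.h"  // avcodec_get_name
    #include "libavutil/pixdesc.h"   // av_get_pix_fmt_name
    }

    static void DumpInputStreamInfo(AVFormatContext* inCtx, int streamIndex)
    {
        AVCodecParameters* par = inCtx->streams[streamIndex]->codecpar;
        const char* pixName = av_get_pix_fmt_name(static_cast<AVPixelFormat>(par->format));

        std::cout << "input codec: " << avcodec_get_name(par->codec_id)
                  << ", size: "      << par->width << "x" << par->height
                  << ", pix_fmt: "   << (pixName ? pixName : "unknown") << '\n';
    }

    // e.g. call DumpInputStreamInfo(ifmtCtx, videoStreamIndx) right after GetVideoStreamIndex()
    // and check that the reported pix_fmt matches the source format passed to sws_getContext().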


    Bad image
