Other articles (71)

  • MediaSPIP Player: the controls

    26 May 2010

    The player's mouse controls
    In addition to clicking the visible buttons of the player interface, other actions can be performed with the mouse: Click: clicking on the video, or on the sound logo, toggles playback or pause depending on the current state; Scroll wheel: when the mouse hovers over the area occupied by the media, the wheel no longer scrolls the page but instead decreases or (...)

  • Enhancing it visually

    10 April 2011

    MediaSPIP is based on a system of themes and templates (squelettes). Templates define how information is laid out on the page, defining a specific use of the platform, while themes define the overall graphic design.
    Anyone can contribute a new graphic theme or template and make it available to the community.

  • Writing a news item

    21 June 2013

    Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP via the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form For a document of type news, the fields offered by default are: Publication date (customise the publication date) (...)

On other sites (5232)

  • Encode h264 to match existing stream

    6 June 2013, by Diego Sánchez

    In short: I have to encode a small number of video frames and stitch them in front of a much larger h.264 stream without re-encoding that stream.

    The details: I receive a multi-GB transport stream containing an h.264 elementary stream and an audio elementary stream. Currently the h.264 streams are always generated with x264, and I can assume this will remain the case. Now I have to prepend some video frames to this stream, but I am not allowed to decode/re-encode the whole stream, which leaves me with only one option: find out the exact parameters I need to pass to x264_encoder_open so that both streams match.

    Currently what I'm doing is:

    1. Demux the original ts and extract h.264 NAL packets.
    2. When I find the first "user data unregistered" SEI packet, I parse it and find a bunch of x264 parameters.
    3. Use libavcodec to start decoding the video. That gives me the dimensions of the picture and the h264 profile and level in the AVCodecContext structure.
    4. Match all of that as best I can in the x264_param_t structure (a sketch of this step follows below).

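    One possible way to implement step 4, sketched roughly (it assumes the decoded SEI payload is available as a C string, here called sei_text, and that the x264_param_t has already been initialised, e.g. with x264_param_default_preset), is to feed the key=value pairs from the SEI's "options:" section back into x264 through x264_param_parse:

    #include <cstdint>
    #include <cstdio>
    #include <cstring>
    #include <x264.h>

    // Sketch only: sei_text is assumed to hold the decoded "user data unregistered"
    // SEI payload, which x264 fills with something like
    // "x264 - core 142 ... - options: cabac=1 ref=3 deblock=1:0:0 ...".
    static void apply_sei_options(x264_param_t* param, const char* sei_text)
    {
        const char* opts = std::strstr(sei_text, "options: ");
        if (!opts)
            return;
        opts += std::strlen("options: ");

        char buf[4096];
        std::strncpy(buf, opts, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';

        // Tokens look like "cabac=1", "ref=3", "deblock=1:0:0", ...
        for (char* tok = std::strtok(buf, " "); tok; tok = std::strtok(nullptr, " ")) {
            char* eq = std::strchr(tok, '=');
            if (!eq)
                continue;
            *eq = '\0';
            if (x264_param_parse(param, tok, eq + 1) < 0)
                std::fprintf(stderr, "could not apply %s=%s\n", tok, eq + 1);
        }
    }

    Not every token in that string is guaranteed to round-trip through x264_param_parse (some options are printed in a different form than the parser accepts), which is why failures are logged rather than silently ignored.
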
    I can do some encoding with that, and the encoded video plays correctly up to the join point. When VLC reaches the stitch point it starts emitting the following sequence of messages and soon after stops playing:

    [h264 @ 0x7fe36cd75be0] decode_slice_header error
    [h264 @ 0x7fe36cd75be0] no frame!
    [h264 @ 0x7fe36ccc9080] Width/height changing with threads is not implemented. Update your Libav version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.

    which clearly shows that my encoded frames do not match the original ones. I've been browsing the source code and can't seem to find a way of doing this. What I currently have, besides not working, involves a lot of guesswork, so even if I could make it work with the handful of example files I have, I would be scared to deploy it on production servers.

    So the obvious question is: is there a safe, formal way of doing this?

    Thanks in Advance

  • Capture from multiple streams concurrently, best way to do it and how to reduce CPU usage

    19 June 2019, by DRONE_6969

    I am currently writing an application that captures a number of RTSP streams (in my case 12) and displays them in Qt widgets. The problem arises when I go beyond around 6-7 streams: CPU usage spikes and there is visible stutter.

    The reason I don't think the Qt draw function is at fault is that I measured how long it takes to draw an incoming camera image (and some sample images I had), and it is always well under 33 milliseconds, even with 12 widgets being updated.

    I also ran the OpenCV capture loop without drawing anything and got pretty much the same CPU consumption as when drawing the frames (at most about 10% less CPU, and GPU usage went to zero).

    IMPORTANT: the RTSP streams are h.264 streams.

    IF IT MATTERS, MY SPECS:

    Intel Core i7-6700 @ 3.40 GHz (8 logical CPUs)
    Memory: 16 GB
    GPU: Intel HD Graphics 530

    (I also ran my code on a computer with a dedicated graphics card; it eliminated some stutter, but CPU usage was still quite high.)

    I am currently using OpenCV 4.1.0 built with GStreamer enabled; I also have the opencv_world build, and there is no difference in performance.

    I have created a class called Camera that holds the frame size constraints, various control functions and the stream function. The stream function runs on a separate thread; whenever stream() is done with the current frame it sends the ready Mat via an onNewFrame event I created, which converts it to a QPixmap and updates the widget's lastImage variable. This way I can update the image in a more thread-safe way.

    I have tried to tweak various VideoCapture.set() values, but it didn't really help.
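
    For reference, a minimal sketch of the kind of VideoCapture.set() tweaks meant here (the URL is a placeholder, and whether the FFmpeg backend honours a given property varies, so treat these as experiments rather than a fix):

    #include <opencv2/opencv.hpp>

    int main()
    {
        // Hypothetical stream URL; substitute a real RTSP endpoint.
        cv::VideoCapture cap("rtsp://user:pass@host:554/stream", cv::CAP_FFMPEG);
        if (!cap.isOpened())
            return 1;

        // Keep the internal queue short so stale frames are dropped rather than piling up
        // (not every backend honours this property).
        cap.set(cv::CAP_PROP_BUFFERSIZE, 2);
        // Ask the source for a lower frame rate; many RTSP sources ignore this,
        // in which case frames have to be skipped or resized after decoding.
        cap.set(cv::CAP_PROP_FPS, 15);

        cv::Mat frame;
        while (cap.read(frame)) {
            // ... hand the frame off to the display thread ...
        }
        return 0;
    }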

    This is my stream function (ignore the bool return, it doesn't do anything; it is a remnant from a couple of minutes ago when I was trying to use std::async):

    bool Camera::stream() {
        /* This function is meant to run on a separate thread and fill up the buffer independently of
        the main stream thread */
       //cv::setNumThreads(100);
       /* Rules for these slightly changed! */
       Mat pre;  // Grab initial undoctored frame
       //pre = Mat::zeros(size, CV_8UC1);
       Mat frame; // Final modified frame
       frame = Mat::zeros(size, CV_8UC1);
       if (!pre.isContinuous()) pre = pre.clone();

       ipCam.open(streamUrl, CAP_FFMPEG);


       while (ipCam.isOpened() && capture) {
            // If the camera is opened we'll need to capture and process the frame
           try {
               auto start = std::chrono::system_clock::now();

               ipCam >> pre;

               if (pre.empty()) {
                   /* Check for blank frame, return error if there is a blank frame*/
                   cerr << id << ": ERROR! blank frame grabbed\n";
                   for (FrameListener* i : clients) {
                       i->onNotification(1); // Notify clients about this shit
                   }
                   break;
               }

               else {
                   // Only continue if frame not empty

                   if (pre.cols != size.width && pre.rows != size.height) {
                       resize(pre, frame, size);
                       pre.release();
                   }
                   else {
                       frame = pre;
                   }

                   dPacket* pack = new dPacket{id,&frame};
                   for (auto i : clients) {
                       i->onPNewFrame(pack);
                   }
                   frame.release();
                   delete pack;
               }
           }

           catch (int e) {
               cout << endl << "-----Exception during capture process! CODE " << e << endl;
           }
           // End camera manipulations
       }

       cout << "Camera timed out, or connection is closed..." << endl;
       if (tryResetConnection) {
           cout << "Reconnection flag is set, retrying after 3 seconds..." << endl;
           for (FrameListener* i : clients) {
               i->onNotification(-1); // Notify clients about this shit
           }
           this_thread::sleep_for(chrono::milliseconds(3000));
           stream();
       }

       return true;
    }

    This is my onPNewFrame function. The conversion is still done on the camera's thread, because it is called from within stream() and therefore runs in that scope (I also checked):

    void GLWidget::onPNewFrame(dPacket* inPack) {
       lastFlag = 0;

       if (bufferEnabled) {
           buffer.push(QPixmap::fromImage(toQImageFromPMat(inPack->frame)));
       }
       else {
           if (playing) {
               /* Only process if this widget is playing */
               frameProcessing = true;
               lastImage.convertFromImage(toQImageFromPMat(inPack->frame));
               frameProcessing = false;
           }
       }

       if (lastFlag != -1 && !lastImage.isNull()) {
           connecting = false;
       }
       else {
           connecting = true;
       }
    }

    This is my Mat-to-QImage conversion:

    QImage GLWidget::toQImageFromPMat(cv::Mat* mat) {
        return QImage(mat->data, mat->cols, mat->rows, QImage::Format_RGB888).rgbSwapped();
    }
    NOTE: skipping the conversion does not reduce CPU usage (at least not significantly).

    Minimal verifiable example

    This program is large, so I am going to paste GLWidget.cpp and GLWidget.h as well as Camera.h and Camera.cpp. You can put GLWidget into anything, as long as you spawn more than 6 of them. Camera relies on CamUtils, but it is possible to just paste a URL into the VideoCapture.

    I also supplied CamUtils, just in case
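
    FrameListener.h and dPacket.h are not pasted; below is a minimal sketch of what they presumably look like, reconstructed only from how they are used in the code above (assumed, not the original files), so the example is self-contained:

    // Hypothetical reconstruction of dPacket.h and FrameListener.h (assumed, from usage).
    #include <string>
    #include <opencv2/opencv.hpp>

    // stream() builds these as dPacket{ id, &frame }.
    struct dPacket {
        std::string id;
        cv::Mat* frame;
    };

    // Camera notifies its clients through this interface; GLWidget implements it.
    class FrameListener {
    public:
        virtual ~FrameListener() = default;
        virtual void onPNewFrame(dPacket* pack) = 0;
        virtual void onNotification(int alertCode) = 0;
    };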

    Camera.h:

    #pragma once
    #include <iostream>
    #include <vector>
    #include <fstream>
    #include <map>
    #include <string>
    #include <sstream>
    #include <algorithm>
    #include "FrameListener.h"
    #include <opencv2/opencv.hpp> // assumed; original header name lost in formatting
    #include <thread>
    #include "CamUtils.h"
    #include <ctime>
    #include "dPacket.h"

    using namespace std;
    using namespace cv;

    class Camera
    {

       /*
           CLEANED UP!
           Camera now is only responsible for streaming and echoing captured frames.
           Frames are now wrapped into dPacket struct.
       */


    private:
       string id;
        vector<FrameListener*> clients;
       VideoCapture ipCam;
       string streamUrl;
       Size size;
       bool tryResetConnection = false;

       //TODO: Remove these as they are not going to be used going on:
       bool isPlaying = true;
       bool capture = true;

       //SECRET FEATURES:
       bool detect = false;


    public:
       Camera(string url, int width = 480, int height = 240, bool detect_=false);
       bool stream();
       void setReconnectable(bool newReconStatus);
       void addListener(FrameListener* client);
        vector<bool> getState();    // Returns current state: vector[0] play state; vector[1] stream (connection) state; TODO: remove this as it should no longer control behaviour
       void killStream();
       bool getReconnectable();
    };


    Camera.cpp

    #include "Camera.h"


    Camera::Camera(string url, int width, int height, bool detect_) // Default 240p
    {
       streamUrl = url; // Prepare url
       size = Size(width, height);
       detect = detect_;

    }

    void Camera::addListener(FrameListener* client) {
       clients.push_back(client);
    }


    /*
                   TEST CAMERAS(Paste into cameras.dViewer):
                   {"id":"96a73796-c129-46fc-9c01-40acd8ed7122","ip":"176.57.73.231","password":"null","username":"null"},
                   {"id":"96a73796-c129-46fc-9c01-40acd8ed7122","ip":"176.57.73.231","password":"null","username":"null"},
                   {"id":"96a73796-c129-46fc-9c01-40acd8ed7144","ip":"172.20.101.13","password":"admin","username":"root"}
                   {"id":"96a73796-c129-46fc-9c01-40acd8ed7144","ip":"172.20.101.13","password":"admin","username":"root"}

    */



    bool Camera::stream() {
        /* This function is meant to run on a separate thread and fill up the buffer independently of
        the main stream thread */
       //cv::setNumThreads(100);
       /* Rules for these slightly changed! */
       Mat pre;  // Grab initial undoctored frame
       //pre = Mat::zeros(size, CV_8UC1);
       Mat frame; // Final modified frame
       frame = Mat::zeros(size, CV_8UC1);
       if (!pre.isContinuous()) pre = pre.clone();

       ipCam.open(streamUrl, CAP_FFMPEG);

        while (ipCam.isOpened() && capture) {
            // If the camera is opened we'll need to capture and process the frame
           try {
               auto start = std::chrono::system_clock::now();

               ipCam >> pre;

               if (pre.empty()) {
                   /* Check for blank frame, return error if there is a blank frame*/
                    cerr << id << ": ERROR! blank frame grabbed\n";
                   for (FrameListener* i : clients) {
                       i->onNotification(1); // Notify clients about this shit
                   }
                   break;
               }

               else {
                   // Only continue if frame not empty

                    if (pre.cols != size.width && pre.rows != size.height) {
                       resize(pre, frame, size);
                       pre.release();
                   }
                   else {
                       frame = pre;
                   }

                   auto end = std::chrono::system_clock::now();
                   std::time_t ts = std::chrono::system_clock::to_time_t(end);
                    dPacket* pack = new dPacket{ id, &frame };
                   for (auto i : clients) {
                       i->onPNewFrame(pack);
                   }
                   frame.release();
                   delete pack;
               }
           }

           catch (int e) {
                cout << endl << "-----Exception during capture process! CODE " << e << endl;
           }
           // End camera manipulations
       }

        cout << "Camera timed out, or connection is closed..." << endl;
        if (tryResetConnection) {
            cout << "Reconnection flag is set, retrying after 3 seconds..." << endl;
           for (FrameListener* i : clients) {
               i->onNotification(-1); // Notify clients about this shit
           }
           this_thread::sleep_for(chrono::milliseconds(3000));
           stream();
       }

       return true;
    }


    void Camera::killStream(){
       tryResetConnection = false;
       capture = false;
       ipCam.release();
    }

    void Camera::setReconnectable(bool reconFlag) {
       tryResetConnection = reconFlag;
    }

    bool Camera::getReconnectable() {
       return tryResetConnection;
    }

    vector<bool> Camera::getState() {
       vector<bool> states;
       states.push_back(isPlaying);
       states.push_back(ipCam.isOpened());
       return states;
    }




    GLWidget.h:

    #ifndef GLWIDGET_H
    #define GLWIDGET_H

    #include <QOpenGLWidget>
    #include <QMouseEvent>
    #include "FrameListener.h"
    #include "Camera.h"
    #include "CamUtils.h"
    #include <opencv2/opencv.hpp> // assumed; original header name lost in formatting
    #include "dPacket.h"
    #include <chrono>
    #include <ctime>
    #include <thread>             // assumed; original header name lost in formatting
    #include <queue>              // assumed; original header name lost in formatting
    #include "FullScreenVideo.h"
    #include <QMovie>
    #include "helper.h"
    #include <iostream>
    #include <QPainter>
    #include <QTimer>

    class Helper;

    class GLWidget : public QOpenGLWidget, public FrameListener
    {
       Q_OBJECT

    public:
       GLWidget(std::string camId, CamUtils *cUtils, int width, int height, bool denyFullScreen_ = false, bool detectFlag_=false, QWidget* parent = nullptr);
       void killStream();
       ~GLWidget();

    public slots:
       void animate();
       void setBufferEnabled(bool setState);
       void setCameraRetryConnection(bool setState);
       void GLUpdate();            // Call to update the widget
        void onRightClickMenu(const QPoint& point);

    protected:
       void paintEvent(QPaintEvent* event) override;
       void onPNewFrame(dPacket* frame);
       void onNotification(int alert_code);


    private:
        // Objects and resources
       Helper* helper;
       Camera* cam;
       CamUtils* camUtils;
       QTimer* timer; // Keep track of update
       QPixmap lastImage;
       QMovie* connMov;
       QMovie* test;

       QPixmap logo;

       // Control fields
       int width;
       int height;
       int camUtilsAddr;
       int elapsed;
       std::thread* camThread;
       std::string camId;
       bool denyFullScreen = false;
       bool playing = true;
       bool streaming = true;
       bool debug = false;
       bool connecting = true;
       int lastFlag = 0;


       // Debug fields
       std::chrono::high_resolution_clock::time_point lastFrameAt;
       std::chrono::high_resolution_clock::time_point now;
       std::chrono::duration<double> painTime; // time took to draw last frame

       //Buffer stuff
        std::queue<QPixmap> buffer;
       bool bufferEnabled = false;
       bool initialBuffer = false;
       bool buffering = true;
       bool frameProcessing = false;



       //Functions
       QImage toQImageFromPMat(cv::Mat* inFrame);
       void mousePressEvent(QMouseEvent* event) override;
       void drawImageGLLatest(QPainter* painter, QPaintEvent* event, int elapsed);
       void drawOnPaused(QPainter* painter, QPaintEvent* event, int elapsed);
       void drawOnStatus(int statusFlag, QPainter* painter, QPaintEvent* event, int elapsed);
    };

    #endif


    GLWidget.cpp:

    #include "glwidget.h"
    #include <future>


    FullScreenVideo* fullScreen;

    GLWidget::GLWidget(std::string camId_, CamUtils* cUtils, int width_, int height_,  bool denyFullScreen_, bool detectFlag_, QWidget* parent)
       : QOpenGLWidget(parent), helper(helper)
    {
        cout << "Player for CAMERA " << camId_ << endl;

       /* Underlying properties */
       camUtils = cUtils;
        cout << "GLWidget Incoming CamUtils addr " << camUtils << endl;
        cout << "GLWidget Set CamUtils addr " << camUtils << endl;
       camId = camId_;
       elapsed = 0;
       width = width_ + 5;
       height = height_ + 5;
       helper = new Helper();
       setFixedSize(width, height);
       denyFullScreen = denyFullScreen_;

       /* Camera capture thread */
       cam = new Camera(camUtils->getCameraStreamURL(camId), width_, height_, detectFlag_);
       cam->addListener(this);

       /* Sync states */
       vector<bool> initState = cam->getState();
       playing = initState[0];
       streaming = initState[1];
        cout << "Initial states: " << playing << " " << streaming << endl;
        camThread = new std::thread(&Camera::stream, cam);
        cout << "================================================" << endl;

       // Right click set up
       setContextMenuPolicy(Qt::CustomContextMenu);


       /* Loading gif */
       connMov = new QMovie("establishingConnection.gif");
       connMov->start();
       QString url = R"(RLC-logo.png)";
       logo = QPixmap(url);
       QTimer* timer = new QTimer(this);
       connect(timer, SIGNAL(timeout()), this, SLOT(GLUpdate()));
       timer->start(1000/30);
       playing = true;

    }

    /* SYSTEM */
    void GLWidget::animate()
    {
        elapsed = (elapsed + qobject_cast<QTimer*>(sender())->interval()) % 1000;
        std::cout << elapsed << "\n";
    }


    void GLWidget::GLUpdate() {
        /* Process decisions before the update call */
       if (bufferEnabled) {
           /* Process buffer before update */
           now = chrono::high_resolution_clock::now();
            std::chrono::duration<double, std::milli> timeSinceLastUpdate = now - lastFrameAt;
            if (timeSinceLastUpdate.count() > 25) {
                if (buffer.size() > 1 && playing) {
                   lastImage.swap(buffer.front());
                   buffer.pop();
                   lastFrameAt = chrono::high_resolution_clock::now();
               }
           }
           //update(); // Update
       }
       else {
           /* No buffer */
       }
       repaint();
    }


    /* EVENTS */
    void GLWidget::onRightClickMenu(const QPoint& point) {
        cout << "Right click request got" << endl;

       QPoint globPos = this->mapToGlobal(point);
       QMenu myMenu;

       if (!denyFullScreen) {
           myMenu.addAction("Open Full Screen");
       }
       myMenu.addAction("Toggle Debug Info");


       QAction* selected = myMenu.exec(globPos);

       if (selected) {
           string optiontxt = selected->text().toStdString();

           if (optiontxt == "Open Full Screen") {
                cout << "Chose to open full screen of " << camId << endl;
               fullScreen = new FullScreenVideo(bufferEnabled, this);
               fullScreen->setUpView(camUtils, camId);
               fullScreen->show();
               playing = false;
           }

           if (optiontxt == "Toggle Debug Info") {
                cout << "Chose to toggle debug of " << camId << endl;
               debug = !debug;
           }
       }
       else {
            cout << "Chose nothing!" << endl;
       }


    }



    void GLWidget::onPNewFrame(dPacket* inPack) {
       lastFlag = 0;

       if (bufferEnabled) {
           buffer.push(QPixmap::fromImage(toQImageFromPMat(inPack->frame)));
       }
       else {
           if (playing) {
               /* Only process if this widget is playing */
               frameProcessing = true;
               lastImage.convertFromImage(toQImageFromPMat(inPack->frame));
               frameProcessing = false;
           }
       }

        if (lastFlag != -1 && !lastImage.isNull()) {
           connecting = false;
       }
       else {
           connecting = true;
       }
    }


    void GLWidget::onNotification(int alert) {
       lastFlag = alert;  
    }


    /* Paint events*/


    void GLWidget::paintEvent(QPaintEvent* event)
    {
       QPainter painter(this);

           if (lastFlag != 0 || connecting) {
                drawOnStatus(lastFlag, &painter, event, elapsed);
           }
           else {

               /* Actual frame drawing */
               if (playing) {
                   if (!frameProcessing) {
                        drawImageGLLatest(&painter, event, elapsed);
                   }
               }
               else {
                    drawOnPaused(&painter, event, elapsed);
               }
           }
       painter.end();

    }


    /* DRAWING STUFF */

    void GLWidget::drawOnStatus(int statusFlag, QPainter* bgPaint, QPaintEvent* event, int elapsed) {

       QString str;
       QFont font("times", 15);
       bgPaint->eraseRect(QRect(0, 0, width, height));
       if (!lastImage.isNull()) {
           bgPaint->drawPixmap(QRect(0, 0, width, height), lastImage);
       }
       /* Test background painting */
       if (connecting) {
           string k = "Connecting to " + camUtils->getIp(camId);
           str.append(k.c_str());
       }
       else {
           switch (statusFlag) {
           case 1:
               str = "Blank frame received...";
               break;

           case -1:
               if (cam->getReconnectable()) {
                   str = "Connection lost, will try to reconnect.";
                   bgPaint->setOpacity(0.3);
               }
               else {
                   str = "Connection lost...";
                   bgPaint->setOpacity(0.3);
               }

               break;
           }
       }

       bgPaint->drawPixmap(QRect(0, 0, width, height), QPixmap::fromImage(connMov->currentImage()));
       bgPaint->setPen(Qt::red);
       bgPaint->setFont(font);
       QFontMetrics fm(font);
       const QRect kek(0, 0, fm.width(str), fm.height());
       QRect bound;
       bgPaint->setOpacity(1);
       bgPaint->drawText(bgPaint->viewport().width()/2 - kek.width()/2, bgPaint->viewport().height()/2 - kek.height(), str);

       bgPaint->drawPixmap(bgPaint->viewport().width() / 2 - logo.width()/2, height - logo.width() - 15, logo);

    }



    void GLWidget::drawOnPaused(QPainter* painter, QPaintEvent* event, int elapsed) {
       painter->eraseRect(0, 0, width, height);
       QFont font = painter->font();
       font.setPointSize(18);
       painter->setPen(Qt::red);
       QFontMetrics fm(font);
       QString str("Paused");
       painter->drawPixmap(QRect(0, 0, width, height),lastImage);
       painter->drawText(QPoint(painter->viewport().width() - fm.width(str), 50), str);

       if (debug) {
           QFont font = painter->font();
           font.setPointSize(25);
           painter->setPen(Qt::red);
           string camMess = "CAMID: " + camId;
           QString mess(camMess.c_str());
           string camIp = "IP: " + camUtils->getIp(camId);
           QString ipMess(camIp.c_str());
           QString bufferSize("Buffer size: " + QString::number(buffer.size()));
           QString lastFrameText("Last frame draw time: " + QString::number(painTime.count()) + "s");
           painter->drawText(QPoint(10, 50), mess);
           painter->drawText(QPoint(10, 60), ipMess);
           QString bufferState;
           if (bufferEnabled) {
               bufferState = QString("Experimental BUFFER is enabled!");
               QString currentBufferSize("Current buffer load: " + QString::number(buffer.size()));
               painter->drawText(QPoint(10, 80), currentBufferSize);
           }
           else {
               bufferState = QString("Experimental BUFFER is disabled!");
           }
           painter->drawText(QPoint(10, 70), bufferState);
           painter->drawText(QPoint(10, height - 25), lastFrameText);
       }
    }


    void GLWidget::drawImageGLLatest(QPainter* painter, QPaintEvent* event, int elapsed) {
       auto start = chrono::high_resolution_clock::now();
       painter->drawPixmap(QRect(0, 0, width, height), lastImage);
       if (debug) {
           QFont font = painter->font();
           font.setPointSize(25);
           painter->setPen(Qt::red);
           string camMess = "CAMID: " + camId;
           QString mess(camMess.c_str());
           string camIp = "IP: " + camUtils->getIp(camId);
           QString ipMess(camIp.c_str());
           QString bufferSize("Buffer size: " + QString::number(buffer.size()));
           QString lastFrameText("Last frame draw time: " + QString::number(painTime.count()) + "s");
           painter->drawText(QPoint(10, 50), mess);
           painter->drawText(QPoint(10, 60), ipMess);
           QString bufferState;
           if(bufferEnabled){
               bufferState = QString("Experimental BUFFER is enabled!");
               QString currentBufferSize("Current buffer load: " + QString::number(buffer.size()));
               painter->drawText(QPoint(10,80), currentBufferSize);
           }
           else {
               bufferState = QString("Experimental BUFFER is disabled!");
               QString currentBufferSize("Current buffer load: " + QString::number(buffer.size()));
               painter->drawText(QPoint(10, 80), currentBufferSize);
           }
           painter->drawText(QPoint(10, 70), bufferState);
           painter->drawText(QPoint(10, height - 25), lastFrameText);

       }
       auto end = chrono::high_resolution_clock::now();
       painTime = end - start;
    }



    /* END DRAWING STUFF */



    /* UI EVENTS */

    void GLWidget::mousePressEvent(QMouseEvent* e) {

       if (e->button() == Qt::LeftButton) {
           if (fullScreen == nullptr || !fullScreen->isVisible()) { // Do not unpause if window is opened
               playing = !playing;
           }
       }

       if (e->button() == Qt::RightButton) {
           onRightClickMenu(e->pos());
       }
    }



    /* Utilities */
    QImage GLWidget::toQImageFromPMat(cv::Mat* mat) {



       return QImage(mat->data, mat->cols, mat->rows, QImage::Format_RGB888).rgbSwapped();



    }

    /* State control */

    void GLWidget::killStream() {
       cam->killStream();
       camThread->join();
    }

    void GLWidget::setBufferEnabled(bool newBufferState) {
        cout << "Player: " << camId << ", buffer state updated: " << newBufferState << endl;
       bufferEnabled = newBufferState;
        buffer = {}; // clear any queued frames (std::queue has no clear())
    }

    void GLWidget::setCameraRetryConnection(bool newState) {
       cam->setReconnectable(newState);
    }

    /* Destruction */
    GLWidget::~GLWidget() {
       cam->killStream();
       camThread->join();
    }

    CamUtils.h:

    #pragma once
    #include <iostream>
    #include <vector>
    #include <fstream>
    #include <map>
    #include <string>
    #include <sstream>
    #include <algorithm>
    #include <nlohmann/json.hpp>

    using namespace std;
    using json = nlohmann::json;

    class CamUtils
    {
    private:

       string camDb = "cameras.dViewer";
        map<string, vector<string>> cameraList; // Legacy
       json cameras;
       ofstream dbFile;
       bool dbExists(); // Always hard coded

       /* Old IMPLEMENTATION */
        void writeLineToDb_(const string& content, bool append = false);
       void loadCameras_();

       /* JSON based */
       void loadCameras();

    public:
       CamUtils();
       string generateRandomString(size_t length);
       string getCameraStreamURL(string cameraId) const;
       string saveCamera(string ip, string username, string pass); // Return generated id
       vector<string> listAllCameraIds();
       string getIp(string cameraId);
    };



    CamUtils.cpp:

    #include "CamUtils.h"
    #pragma comment(lib, "rpcrt4.lib")  // UuidCreate - Minimum supported OS Win 2000
    #include <rpc.h> // assumed; needed for UuidCreate / UuidToStringA
    #include <iostream>

    CamUtils::CamUtils()
    {
       if (!dbExists()) {
           ofstream dbFile;
           dbFile.open(camDb);
           cameras["cameras"] = json::array();
            dbFile << cameras << std::endl;
           dbFile.close();

       }
       else {
           loadCameras();
       }
    }




    vector<string> CamUtils::listAllCameraIds() {
       vector<string> ids;
        cout << "IN LIST " << endl;
        for (auto& cam : cameras["cameras"]) {
            ids.push_back(cam["id"].get<string>());
            //cout << cam["id"].get<string>() << std::endl;
       }
       return ids;
    }

    string CamUtils::getIp(string id) {
       vector<string> camDetails = cameraList[id];
        string ip = "NO IP WILL BE DISPLAYED UNTIL I FIGURE OUT A BUG";
        for (auto& cam : cameras["cameras"]) {
           if (id == cam["id"]) {
               ip = cam["ip"].get<string>();
           }
       }

       return ip;
    }

    string CamUtils::getCameraStreamURL(string id) const {
       string url = "err"; // err is the default, it will be overwritten in case id is found, dont forget to check for it

        for (auto& cam : cameras["cameras"]) {
           if (id == cam["id"]) {
               if (cam["username"].get<string>() == "null") {
                   url = "rtsp://" + cam["ip"].get<string>() + ":554/axis-media/media.amp?tcp";
               }
               else {
                   url = "rtsp://" + cam["username"].get<string>() + ":" + cam["password"].get<string>() + "@" + cam["ip"].get<string>() + ":554/axis-media/media.amp?streamprofile=720_30";
               }
           }
       }

       return url;  // Dont forget to check for err when using this shit
    }


    string CamUtils::saveCamera(string ip, string username, string password) {
       UUID uid;
        UuidCreate(&uid);
        char* str;
        UuidToStringA(&uid, (RPC_CSTR*)&str);
        string id = str;
        cout << "GEN: " << id << endl;
        json cam = json({}); // Create empty object
       cam["id"] = id;
       cam["ip"] = ip;
       cam["username"] = username;
       cam["password"] = password;
       cameras["cameras"].push_back(cam);
       std::ofstream out(camDb);
        out << cameras << std::endl;
        cout << cameras["cameras"] << endl;

        cout << "Saved camera as " << id << endl;
       return id;
    }


    bool CamUtils::dbExists() {
       ifstream dbFile(camDb);
       return (bool)dbFile;
    }





    void CamUtils::loadCameras() {
        cout << "Load call" << endl;
       ifstream dbFile(camDb);
       string line;
       string wholeFile;

       while (std::getline(dbFile, line)) {
            cout << line << endl;
           wholeFile += line;
       }
       try {
           cameras = json::parse(wholeFile);
            //cout << cameras["cameras"] << endl;

       }
        catch (const exception& e) {
            cout << e.what() << endl;
       }
       dbFile.close();
    }










    /*
       LEGACY CODE, TO BE REMOVED!

    */



    void CamUtils::loadCameras_() {
       /*
           LEGACY CODE:
           This used to be the way to load cameras, but I moved on to JSON based configuration so this is no longer needed and will be removed soon
       */

       ifstream dbFile(camDb);
       string line;
       while (std::getline(dbFile, line)) {
           /*
               This function load camera data to the map:
               The order MUST be the following: 0:ID, 1:IP, 2:USERNAME, 3:PASSWORD.
               Always delimited with | no spaces between!
           */
           if (!line.empty()) {
               stringstream ss(line);
               string item;
               vector<string> splitString;

               while (std::getline(ss, item, '|')) {
                   splitString.push_back(item);
               }
               if (splitString.size() > 0) {
                   /* Dont even parse if the program didnt split right*/
                   //cout &lt;&lt; "Split string: " &lt;&lt; splitString.size() &lt;&lt; "\n";
                    for (int i = 0; i < (splitString.size()); i++) cameraList[splitString[0]].push_back(splitString[i]);
               }
           }
       }
    }



    void CamUtils::writeLineToDb_(const string& content, bool append) {
        ofstream dbFile;
        cout << "Creating?";
       if (append) {
           dbFile.open(camDb, ios_base::app);
       }
       else {
           dbFile.open(camDb);
       }

        dbFile << content.c_str() << "\r\n";
       dbFile.flush();
    }

    /* JSON Reworx */




    string CamUtils::generateRandomString(size_t length)
    {
       const char* charmap = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
       const size_t charmapLength = strlen(charmap);
        auto generator = [&]() { return charmap[rand() % charmapLength]; };
       string result;
       result.reserve(length);
       generate_n(back_inserter(result), length, generator);
       return result;
    }

    End of example

    How would I go about decreasing CPU usage when dealing with a large number of streams?

  • Exceeded GA’s 10M hits data limit, now what?

    21 June 2019, by Joselyn Khor

    Exceeded GA’s 10M hits data limit, now what? Matomo has the answers

    “Your data volume (1XXM hits) exceeds the limit of 10M hits per month as outlined in our Terms of Service. If you continue to exceed the limit, we will stop processing new data on XXX 21, 2019. Learn more about possible solutions.”

    Yikes. Alarm bells were ringing when a Google Analytics free user came to us faced with this notice. Let’s call him ‘Mark’. Mark had reached the limits on the data he could collect through Google Analytics and was shocked by the limited options available to fix the problem without blowing the budget. The thoughts racing through his head were:

    • “What happens to all my data?”
    • “What if Google starts charging USD150K now?”

    Then he came across Matomo and decided to get in touch with our support team …

    “Can you fix this issue?” he asked us.

    “Absolutely!” we said.

    We’ll get back to helping Mark in a minute. For now let’s go over why this was such a dilemma for him.

    In order to resolve this data limits issue, one of the solutions was for him to upgrade to Google Analytics 360, which meant shelling out USD150,000 per year for their 1 billion hits per month option. Going from free to USD150,000 was too much of a stretch for a growing company.

    “Your data volume (1XXM hits) exceeds the limit of 10M hits per month …”, what did this message mean?

    With the free version, Mark could collect up to 10 million “hits” per month, per account. Going over meant Google Analytics could stop collecting any more data for free as outlined in their Terms.

    Google Analytics’ Terms of Service (2018, sec. 2) states, “Subject to Section 15, the Service is provided without charge to You for up to 10 million Hits per month per account.”[1]

    In general, what’s a "hit"?

    A "hit" is data being sent to Google Analytics. A transaction, an event, a social interaction or a pageview each produce what Google calls a "hit".

    Google Analytics data limits
    Google Analytics Terms of Service

    And their Analytics Help Data Limits (n.d.) support page makes clear that: “If a property sends more hits per month to Analytics than allowed by the Analytics Terms of Service, there is no assurance that the excess hits will be processed. If the property’s hit volume exceeds this limit, a warning may be displayed in the user interface and you may be prevented from accessing reports.”[2]

    Google Analytics data collection limit
    Google Analytics’ data limits support page

    Possible solutions

    So the possible solutions given by Google Analytics’ Data Limits support page were (also shown in the image below):

    • To pay USD150K to upgrade to Google Analytics 360
    • To send fewer hits by setting up sampling
    • Or choose the slightly less relevant option to upgrade mobile app tracking to Google Analytics for Firebase.

    Without the means to pay, the free version was fast becoming inaccessible for Mark as he was facing a future where he risked no longer having access to up-to-date data used in his business’ reporting.

    Mark was facing a problem that potentially didn’t have a cost-effective solution.

    Google Analytics data limits
    Google Analytics’ data limits support page

    So what can you really do about it?

    This is where we can help provide some assistance. If you’re reading this article, we’ll assume you can relate to Mark and share with you the advice on options we gave him.

    Options:

    Option 1: Send fewer hits by auditing your data collection processes (the option posed by Google)

    If you really don’t have the budget, you’ll need to reassess your data collection priorities and go over your strategies to see what is necessary to track, and what isn’t.

    • Make sure you know what you’re tracking and why. Look at what websites are being tracked by Google and into what properties.
    • Go through what data you’re tracking and decide what is or isn’t of value.
    • Set up data sampling; this, however, will lead to inaccurate data.

    From here you can start to course correct. If you’ve found data you’re not using for analysis, get rid of these events/pageviews in your Google Analytics.

    But the limitations here are that eventually, you’re going to run out of irrelevant metrics and everything you’re tracking will be essential. So you’ll hit another brick wall and return to the same situation.

    Option 2: Ignore and continue using the free version of Google Analytics

    With this option, you’ll have to bear the business risks involved by basing decisions off of analytics reports that may or may not be updated. In this case, you may still get contacted about exceeding the limits. As the free service is provided for only up to 10 million hits, once you’ve gone over them, you’re violating what’s stipulated in the Terms of Service. 

    There’s also the warning that “… you may be prevented from accessing reports” (Data limits, n.d.). So while we may not know for certain what Google Analytics will do, in this case it may be better to be safe rather than sorry by acting quickly to resolve it. 

    Option 3: The Matomo solution. Upgrade to a web analytics platform that can handle your demanding data requirements

    Save money while continuing to gain valuable insights by moving over to Matomo Analytics (recommended)

    This is where you can save up to USD130,000 a year. As well as that, the transition from Google Analytics to the Matomo Cloud is a seamless experience, as setup and maintenance are taken care of by our experts.

    For example, you can get up to 15M pageviews for USD1,612.50/month (or USD19,350/year) on the Essentials plan.

    Or even 25M pageviews for USD2,400/month (or USD24,000/year) on the Business plan – which offers additional web analytics and conversion optimization resources.

    Matomo Cloud is a great option if you’re looking for a secure, cost-effective and powerful analytics solution. You also get what Google Analytics could never offer you : full control and ownership of your own data and privacy. 

    No need to worry about losing your Google Analytics data because …

    Now you can import your historic Google Analytics data directly into your Matomo with the Google Analytics Importer tool. Simply follow the step-by-step guide to get started for free.

    Along with savings you can get:

    • A solution for the data limits issue forever. You choose the right plan to suit your data needs and adapt as you continue growing
    • 100% accurate data (no data sampling)
    • 100% data ownership of all your information without signing away your data to a third party
    • Powerful web analytics and conversion optimization features
    • Matomo Tag Manager
    • Easy setup
    • Support from Matomo’s specialists

    Learn more about Matomo Cloud pricing.

    Or go for Matomo On-Premise

    If you have the in-house infrastructure to support self-hosting Matomo on your own servers then there’s also the option of Matomo On-Premise. Here you’ll get full security knowing the data is on your own servers. 

    Setup will also require technical knowledge. There will also be costs associated with acquiring your own servers, and keeping up with regular maintenance and updates. With On-Premise you get maximum flexibility, with no data limits whatsoever. But if you’re coming over from Google Analytics and don’t have the infrastructure and team to host On-Premise, the Matomo Cloud could be right for you.

    Learn more about Matomo On-Premise.

    Where do you go from here?

    Getting 10 million hits per month is no small feat; it’s actually pretty fantastic. But if it means having to shell out USD150,000 just to be able to continue with Google Analytics, we feel your problem could be fixed with Matomo Cloud. You could then put the rest of the money you save to better use.

    If you choose Matomo, you now have the option to:

    • Raise your data limits for a fraction of Google Analytics 360’s price
    • Get a comprehensive range of analytics features for the most impactful insights to ensure your website continues excelling
    • Get data that’s not sampled – meaning 100% accuracy in your reports
    • Migrate your data easily with the help of Matomo’s support team

    We’ll have you covered. 

    By sharing with you the options and advice we gave to Mark, we hope you’ll be able to find a solution that makes your life easier and solves the issue of data restrictions forever.

    The team at Matomo is here to help you every step of the way to ensure a stress-free transition from Google Analytics if that is what works best for you.

    For next steps, why not check out our pricing page to see what could suit your needs!

    References:

    [1] Terms of Service. (2018, July 24). In Google Analytics Terms of Service. Retrieved June 12, 2019, from https://www.google.com/analytics/terms/us.html

    [2] Data limits. (n.d.). In Analytics Help Data limits. Retrieved June 12, 2019, from https://support.google.com/analytics/answer/1070983?hl=en