Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (44)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

On other sites (8189)

  • Capture from multiple streams concurrently, best way to do it and how to reduce CPU usage

    19 June 2019, by DRONE_6969

    I am currently writing an application that captures a lot of RTSP streams (12, in my case) and displays them in Qt widgets. The problem arises when I go beyond around 6-7 streams: CPU usage spikes and there is visible stutter.

    The reason I don't think it is the Qt draw function is that I measured how long it takes to draw an incoming camera image (and sample images I had on hand): it is always well under 33 milliseconds, even with all 12 widgets being updated.

    I also ran the OpenCV capture loop without drawing anything and got pretty much the same CPU consumption as when drawing the frames (CPU dropped by about 10% at most, and GPU usage went to zero).

    IMPORTANT: I am using RTSP streams, which are H.264 streams.

    IF IT MATTERS, MY SPECS:

    Intel Core i7-6700 @ 3.40 GHz (8 CPUs)
    Memory: 16 GB
    GPU: Intel HD Graphics 530

    (I also ran my code on a computer with a dedicated graphics card; it did eliminate some stutter, but CPU usage was still quite high.)

    I am currently using OpenCV 4.1.0 built with GStreamer enabled. I also have the opencv_world build; there is no difference in performance.
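
    Since the build has GStreamer enabled, it is worth noting for context that OpenCV can also be handed a full GStreamer pipeline string instead of a plain RTSP URL, which is how the H.264 decoding work can be steered to a particular decoder element. A minimal sketch, assuming a GStreamer-enabled build; the pipeline, the element choice and the camera URL below are illustrative and not taken from this project:

     #include <opencv2/opencv.hpp>
     #include <string>

     int main() {
         // Decode the RTSP/H.264 stream inside GStreamer and hand raw BGR frames
         // to OpenCV through appsink; a hardware decoder could replace avdec_h264.
         std::string pipeline =
             "rtspsrc location=rtsp://camera.example:554/stream latency=0 ! "
             "rtph264depay ! h264parse ! avdec_h264 ! "
             "videoconvert ! video/x-raw,format=BGR ! appsink drop=true";

         cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
         if (!cap.isOpened()) return 1;

         cv::Mat frame;
         while (cap.read(frame)) {
             // ... hand the frame off to the rest of the application ...
         }
         return 0;
     }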

    I have created a class called Camera that holds its frame size constraints and various control functions, as well as the stream function. The stream function is run on a separate thread; whenever stream() is done with the current frame, it sends the ready Mat via an onNewFrame event I created, which converts it to a QPixmap and updates the widget's lastImage variable. This way I can update the image in a more thread-safe way.

    I have tried to manipulate the VideoCapture.set() values, but it didn't really help.
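
    For reference, the kind of VideoCapture.set() tweaks meant here look roughly like the sketch below. The property values and the applyCaptureHints helper are illustrative rather than part of the project, and whether a given backend honours a property varies:

     #include <opencv2/opencv.hpp>

     // Typical capture properties people adjust on an RTSP source;
     // backends silently ignore properties they do not support.
     void applyCaptureHints(cv::VideoCapture& cap) {
         cap.set(cv::CAP_PROP_BUFFERSIZE, 2);      // keep the internal frame queue short
         cap.set(cv::CAP_PROP_FRAME_WIDTH, 480);   // request a smaller frame size
         cap.set(cv::CAP_PROP_FRAME_HEIGHT, 240);
         cap.set(cv::CAP_PROP_FPS, 15);            // request a lower frame rate
     }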

    This is my stream function (ignore the bool return; it doesn't do anything, it is a remnant from a couple of minutes ago when I was trying to use std::async):

    bool Camera::stream() {
        /* This function is meant to run on a separate thread and fill up the buffer independently of
        the main stream thread */
       //cv::setNumThreads(100);
       /* Rules for these slightly changed! */
       Mat pre;  // Grab initial undoctored frame
       //pre = Mat::zeros(size, CV_8UC1);
       Mat frame; // Final modified frame
       frame = Mat::zeros(size, CV_8UC1);
       if (!pre.isContinuous()) pre = pre.clone();

       ipCam.open(streamUrl, CAP_FFMPEG);


       while (ipCam.isOpened() && capture) {
            // If the camera is opened we'll need to capture and process the frame
           try {
               auto start = std::chrono::system_clock::now();

               ipCam >> pre;

               if (pre.empty()) {
                   /* Check for blank frame, return error if there is a blank frame*/
                   cerr << id << ": ERROR! blank frame grabbed\n";
                   for (FrameListener* i : clients) {
                       i->onNotification(1); // Notify clients about this shit
                   }
                   break;
               }

               else {
                   // Only continue if frame not empty

                   if (pre.cols != size.width && pre.rows != size.height) {
                       resize(pre, frame, size);
                       pre.release();
                   }
                   else {
                       frame = pre;
                   }

                   dPacket* pack = new dPacket{id,&frame};
                   for (auto i : clients) {
                       i->onPNewFrame(pack);
                   }
                   frame.release();
                   delete pack;
               }
           }

           catch (int e) {
               cout << endl << "-----Exception during capture process! CODE " << e << endl;
           }
           // End camera manipulations
       }

       cout << "Camera timed out, or connection is closed..." << endl;
       if (tryResetConnection) {
           cout << "Reconnection flag is set, retrying after 3 seconds..." << endl;
           for (FrameListener* i : clients) {
               i->onNotification(-1); // Notify clients about this shit
           }
           this_thread::sleep_for(chrono::milliseconds(3000));
           stream();
       }

       return true;
    }

    This is my onPNewFrame function. The conversion is still done on the camera's thread, because onPNewFrame is called from within stream() and therefore runs in that thread (I also checked):

    void GLWidget::onPNewFrame(dPacket* inPack) {
       lastFlag = 0;

       if (bufferEnabled) {
           buffer.push(QPixmap::fromImage(toQImageFromPMat(inPack->frame)));
       }
       else {
           if (playing) {
               /* Only process if this widget is playing */
               frameProcessing = true;
               lastImage.convertFromImage(toQImageFromPMat(inPack->frame));
               frameProcessing = false;
           }
       }

       if (lastFlag != -1 && !lastImage.isNull()) {
           connecting = false;
       }
       else {
           connecting = true;
       }
    }
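
    Since the conversion and the lastImage update happen on the camera thread, a common Qt-idiomatic alternative (not what the code above does) is to deep-copy the converted frame and queue the update onto the GUI thread, roughly as in this sketch (it relies on the Qt 5.10+ functor overload of QMetaObject::invokeMethod):

     // Sketch: convert on the camera thread, then hand a copy to the GUI thread
     // via a queued invocation so widget state is never touched off-thread.
     void GLWidget::onPNewFrame(dPacket* inPack) {
         QImage img = toQImageFromPMat(inPack->frame).copy();   // owns its own pixels
         QMetaObject::invokeMethod(this, [this, img]() {
             lastImage = QPixmap::fromImage(img);
             update();
         }, Qt::QueuedConnection);
     }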

    This is my Mat-to-QImage conversion:

     QImage GLWidget::toQImageFromPMat(cv::Mat* mat) {
        // Wrap the BGR Mat data (including its row stride) and swap to RGB;
        // rgbSwapped() returns a deep copy, so the QImage does not alias the Mat.
        return QImage(mat->data, mat->cols, mat->rows, static_cast<int>(mat->step), QImage::Format_RGB888).rgbSwapped();
     }
    NOTE: skipping the conversion does not reduce CPU usage (at least not significantly).

    Minimal verifiable example

    This program is large, so I am going to paste GLWidget.cpp and GLWidget.h as well as Camera.h and Camera.cpp. You can put GLWidget into anything, as long as you spawn more than 6 of them. Camera relies on CamUtils, but it is possible to just paste a URL into the VideoCapture.

    I also supplied CamUtils, just in case
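
    FrameListener.h and dPacket.h are not pasted below; judging from how they are used, they presumably look something like the following sketch (the real definitions may differ):

     // FrameListener.h / dPacket.h -- assumed shape, reconstructed from usage
     #pragma once
     #include <string>
     #include <opencv2/opencv.hpp>

     struct dPacket {
         std::string id;     // camera id
         cv::Mat* frame;     // frame owned by the Camera thread
     };

     class FrameListener {
     public:
         virtual ~FrameListener() = default;
         virtual void onPNewFrame(dPacket* frame) = 0;
         virtual void onNotification(int alert_code) = 0;
     };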

    Camera.h:

    #pragma once
    #include <iostream>
    #include <vector>
    #include <fstream>
    #include <map>
    #include <string>
    #include <sstream>
    #include <algorithm>
    #include "FrameListener.h"
     #include <opencv2/opencv.hpp>   // cv::Mat, cv::VideoCapture, cv::Size
    #include <thread>
    #include "CamUtils.h"
    #include <ctime>
    #include "dPacket.h"

    using namespace std;
    using namespace cv;

    class Camera
    {

       /*
           CLEANED UP!
           Camera now is only responsible for streaming and echoing captured frames.
           Frames are now wrapped into dPacket struct.
       */


    private:
       string id;
        vector<FrameListener*> clients;
       VideoCapture ipCam;
       string streamUrl;
       Size size;
       bool tryResetConnection = false;

       //TODO: Remove these as they are not going to be used going on:
       bool isPlaying = true;
       bool capture = true;

       //SECRET FEATURES:
       bool detect = false;


    public:
       Camera(string url, int width = 480, int height = 240, bool detect_=false);
       bool stream();
       void setReconnectable(bool newReconStatus);
       void addListener(FrameListener* client);
       vector<bool> getState();    // Returns current state: vector[0] stream state; vector[1] stream state; TODO: Remove this as this is no longer should control behaviour
       void killStream();
       bool getReconnectable();
     };


    Camera.cpp

    #include "Camera.h"


    Camera::Camera(string url, int width, int height, bool detect_) // Default 240p
    {
       streamUrl = url; // Prepare url
       size = Size(width, height);
       detect = detect_;

    }

    void Camera::addListener(FrameListener* client) {
       clients.push_back(client);
    }


    /*
                   TEST CAMERAS(Paste into cameras.dViewer):
                   {"id":"96a73796-c129-46fc-9c01-40acd8ed7122","ip":"176.57.73.231","password":"null","username":"null"},
                   {"id":"96a73796-c129-46fc-9c01-40acd8ed7122","ip":"176.57.73.231","password":"null","username":"null"},
                   {"id":"96a73796-c129-46fc-9c01-40acd8ed7144","ip":"172.20.101.13","password":"admin","username":"root"}
                   {"id":"96a73796-c129-46fc-9c01-40acd8ed7144","ip":"172.20.101.13","password":"admin","username":"root"}

    */



    bool Camera::stream() {
        /* This function is meant to run on a separate thread and fill up the buffer independently of
        the main stream thread */
       //cv::setNumThreads(100);
       /* Rules for these slightly changed! */
       Mat pre;  // Grab initial undoctored frame
       //pre = Mat::zeros(size, CV_8UC1);
       Mat frame; // Final modified frame
       frame = Mat::zeros(size, CV_8UC1);
       if (!pre.isContinuous()) pre = pre.clone();

       ipCam.open(streamUrl, CAP_FFMPEG);

        while (ipCam.isOpened() && capture) {
            // If the camera is opened we'll need to capture and process the frame
           try {
               auto start = std::chrono::system_clock::now();

               ipCam >> pre;

               if (pre.empty()) {
                   /* Check for blank frame, return error if there is a blank frame*/
                    cerr << id << ": ERROR! blank frame grabbed\n";
                   for (FrameListener* i : clients) {
                       i->onNotification(1); // Notify clients about this shit
                   }
                   break;
               }

               else {
                   // Only continue if frame not empty

                if (pre.cols != size.width && pre.rows != size.height) {
                       resize(pre, frame, size);
                       pre.release();
                   }
                   else {
                       frame = pre;
                   }

                   auto end = std::chrono::system_clock::now();
                   std::time_t ts = std::chrono::system_clock::to_time_t(end);
                    dPacket* pack = new dPacket{ id, &frame };
                   for (auto i : clients) {
                       i->onPNewFrame(pack);
                   }
                   frame.release();
                   delete pack;
               }
           }

           catch (int e) {
                cout << endl << "-----Exception during capture process! CODE " << e << endl;
           }
           // End camera manipulations
       }

        cout << "Camera timed out, or connection is closed..." << endl;
        if (tryResetConnection) {
            cout << "Reconnection flag is set, retrying after 3 seconds..." << endl;
           for (FrameListener* i : clients) {
               i->onNotification(-1); // Notify clients about this shit
           }
           this_thread::sleep_for(chrono::milliseconds(3000));
           stream();
       }

       return true;
    }


    void Camera::killStream(){
       tryResetConnection = false;
       capture = false;
       ipCam.release();
    }

    void Camera::setReconnectable(bool reconFlag) {
       tryResetConnection = reconFlag;
    }

    bool Camera::getReconnectable() {
       return tryResetConnection;
    }

    vector<bool> Camera::getState() {
       vector<bool> states;
       states.push_back(isPlaying);
       states.push_back(ipCam.isOpened());
       return states;
     }



    GLWidget.h:

    #ifndef GLWIDGET_H
    #define GLWIDGET_H

     #include <QOpenGLWidget>
     #include <QMouseEvent>
     #include "FrameListener.h"
     #include "Camera.h"
     #include <queue>             // std::queue buffer member
     #include "CamUtils.h"
     #include <thread>            // std::thread* camThread member
     #include "dPacket.h"
     #include <chrono>
     #include <ctime>
     #include <string>            // std::string camId member
     #include "FullScreenVideo.h"
     #include <QMovie>
     #include "helper.h"
     #include <iostream>
     #include <QPainter>
     #include <QTimer>

    class Helper;

    class GLWidget : public QOpenGLWidget, public FrameListener
    {
       Q_OBJECT

    public:
       GLWidget(std::string camId, CamUtils *cUtils, int width, int height, bool denyFullScreen_ = false, bool detectFlag_=false, QWidget* parent = nullptr);
       void killStream();
       ~GLWidget();

    public slots:
       void animate();
       void setBufferEnabled(bool setState);
       void setCameraRetryConnection(bool setState);
       void GLUpdate();            // Call to update the widget
        void onRightClickMenu(const QPoint& point);

    protected:
       void paintEvent(QPaintEvent* event) override;
       void onPNewFrame(dPacket* frame);
       void onNotification(int alert_code);


    private:
       // Objects and resourses
       Helper* helper;
       Camera* cam;
       CamUtils* camUtils;
       QTimer* timer; // Keep track of update
       QPixmap lastImage;
       QMovie* connMov;
       QMovie* test;

       QPixmap logo;

       // Control fields
       int width;
       int height;
       int camUtilsAddr;
       int elapsed;
       std::thread* camThread;
       std::string camId;
       bool denyFullScreen = false;
       bool playing = true;
       bool streaming = true;
       bool debug = false;
       bool connecting = true;
       int lastFlag = 0;


       // Debug fields
       std::chrono::high_resolution_clock::time_point lastFrameAt;
       std::chrono::high_resolution_clock::time_point now;
       std::chrono::duration<double> painTime; // time took to draw last frame

       //Buffer stuff
        std::queue<QPixmap> buffer;
       bool bufferEnabled = false;
       bool initialBuffer = false;
       bool buffering = true;
       bool frameProcessing = false;



       //Functions
       QImage toQImageFromPMat(cv::Mat* inFrame);
       void mousePressEvent(QMouseEvent* event) override;
       void drawImageGLLatest(QPainter* painter, QPaintEvent* event, int elapsed);
       void drawOnPaused(QPainter* painter, QPaintEvent* event, int elapsed);
       void drawOnStatus(int statusFlag, QPainter* painter, QPaintEvent* event, int elapsed);
    };

     #endif


    GLWidget.cpp:

    #include "glwidget.h"
    #include <future>


    FullScreenVideo* fullScreen;

    GLWidget::GLWidget(std::string camId_, CamUtils* cUtils, int width_, int height_,  bool denyFullScreen_, bool detectFlag_, QWidget* parent)
        : QOpenGLWidget(parent)
    {
        cout << "Player for CAMERA " << camId_ << endl;

       /* Underlying properties */
       camUtils = cUtils;
        cout << "GLWidget Incoming CamUtils addr " << camUtils << endl;
        cout << "GLWidget Set CamUtils addr " << camUtils << endl;
       camId = camId_;
       elapsed = 0;
       width = width_ + 5;
       height = height_ + 5;
       helper = new Helper();
       setFixedSize(width, height);
       denyFullScreen = denyFullScreen_;

       /* Camera capture thread */
       cam = new Camera(camUtils->getCameraStreamURL(camId), width_, height_, detectFlag_);
       cam->addListener(this);

       /* Sync states */
       vector<bool> initState = cam->getState();
       playing = initState[0];
       streaming = initState[1];
        cout << "Initial states: " << playing << " " << streaming << endl;
        camThread = new std::thread(&Camera::stream, cam);
        cout << "================================================" << endl;

       // Right click set up
       setContextMenuPolicy(Qt::CustomContextMenu);


       /* Loading gif */
       connMov = new QMovie("establishingConnection.gif");
       connMov->start();
       QString url = R"(RLC-logo.png)";
       logo = QPixmap(url);
       QTimer* timer = new QTimer(this);
       connect(timer, SIGNAL(timeout()), this, SLOT(GLUpdate()));
       timer->start(1000/30);
       playing = true;

    }

    /* SYSTEM */
    void GLWidget::animate()
    {
        elapsed = (elapsed + qobject_cast<QTimer*>(sender())->interval()) % 1000;
        std::cout << elapsed << "\n";
    }


    void GLWidget::GLUpdate() {
       /* Process descisions before update call */
       if (bufferEnabled) {
           /* Process buffer before update */
           now = chrono::high_resolution_clock::now();
            std::chrono::duration<double, std::milli> timeSinceLastUpdate = now - lastFrameAt;   // milliseconds
            if (timeSinceLastUpdate.count() > 25) {
                if (buffer.size() > 1 && playing) {
                   lastImage.swap(buffer.front());
                   buffer.pop();
                   lastFrameAt = chrono::high_resolution_clock::now();
               }
           }
           //update(); // Update
       }
       else {
           /* No buffer */
       }
       repaint();
    }


    /* EVENTS */
     void GLWidget::onRightClickMenu(const QPoint& point) {
        cout << "Right click request got" << endl;

       QPoint globPos = this->mapToGlobal(point);
       QMenu myMenu;

       if (!denyFullScreen) {
           myMenu.addAction("Open Full Screen");
       }
       myMenu.addAction("Toggle Debug Info");


       QAction* selected = myMenu.exec(globPos);

       if (selected) {
           string optiontxt = selected->text().toStdString();

           if (optiontxt == "Open Full Screen") {
                cout << "Chose to open full screen of " << camId << endl;
               fullScreen = new FullScreenVideo(bufferEnabled, this);
               fullScreen->setUpView(camUtils, camId);
               fullScreen->show();
               playing = false;
           }

           if (optiontxt == "Toggle Debug Info") {
                cout << "Chose to toggle debug of " << camId << endl;
               debug = !debug;
           }
       }
       else {
            cout << "Chose nothing!" << endl;
       }


    }



    void GLWidget::onPNewFrame(dPacket* inPack) {
       lastFlag = 0;

       if (bufferEnabled) {
           buffer.push(QPixmap::fromImage(toQImageFromPMat(inPack->frame)));
       }
       else {
           if (playing) {
               /* Only process if this widget is playing */
               frameProcessing = true;
               lastImage.convertFromImage(toQImageFromPMat(inPack->frame));
               frameProcessing = false;
           }
       }

        if (lastFlag != -1 && !lastImage.isNull()) {
           connecting = false;
       }
       else {
           connecting = true;
       }
    }


    void GLWidget::onNotification(int alert) {
       lastFlag = alert;  
    }


    /* Paint events*/


    void GLWidget::paintEvent(QPaintEvent* event)
    {
       QPainter painter(this);

           if (lastFlag != 0 || connecting) {
                drawOnStatus(lastFlag, &painter, event, elapsed);
           }
           else {

               /* Actual frame drawing */
               if (playing) {
                   if (!frameProcessing) {
                        drawImageGLLatest(&painter, event, elapsed);
                   }
               }
               else {
                    drawOnPaused(&painter, event, elapsed);
               }
           }
       painter.end();

    }


    /* DRAWING STUFF */

    void GLWidget::drawOnStatus(int statusFlag, QPainter* bgPaint, QPaintEvent* event, int elapsed) {

       QString str;
       QFont font("times", 15);
       bgPaint->eraseRect(QRect(0, 0, width, height));
       if (!lastImage.isNull()) {
           bgPaint->drawPixmap(QRect(0, 0, width, height), lastImage);
       }
       /* Test background painting */
       if (connecting) {
           string k = "Connecting to " + camUtils->getIp(camId);
           str.append(k.c_str());
       }
       else {
           switch (statusFlag) {
           case 1:
               str = "Blank frame received...";
               break;

           case -1:
               if (cam->getReconnectable()) {
                   str = "Connection lost, will try to reconnect.";
                   bgPaint->setOpacity(0.3);
               }
               else {
                   str = "Connection lost...";
                   bgPaint->setOpacity(0.3);
               }

               break;
           }
       }

       bgPaint->drawPixmap(QRect(0, 0, width, height), QPixmap::fromImage(connMov->currentImage()));
       bgPaint->setPen(Qt::red);
       bgPaint->setFont(font);
       QFontMetrics fm(font);
       const QRect kek(0, 0, fm.width(str), fm.height());
       QRect bound;
       bgPaint->setOpacity(1);
       bgPaint->drawText(bgPaint->viewport().width()/2 - kek.width()/2, bgPaint->viewport().height()/2 - kek.height(), str);

       bgPaint->drawPixmap(bgPaint->viewport().width() / 2 - logo.width()/2, height - logo.width() - 15, logo);

    }



    void GLWidget::drawOnPaused(QPainter* painter, QPaintEvent* event, int elapsed) {
       painter->eraseRect(0, 0, width, height);
       QFont font = painter->font();
       font.setPointSize(18);
       painter->setPen(Qt::red);
       QFontMetrics fm(font);
       QString str("Paused");
       painter->drawPixmap(QRect(0, 0, width, height),lastImage);
       painter->drawText(QPoint(painter->viewport().width() - fm.width(str), 50), str);

       if (debug) {
           QFont font = painter->font();
           font.setPointSize(25);
           painter->setPen(Qt::red);
           string camMess = "CAMID: " + camId;
           QString mess(camMess.c_str());
           string camIp = "IP: " + camUtils->getIp(camId);
           QString ipMess(camIp.c_str());
           QString bufferSize("Buffer size: " + QString::number(buffer.size()));
           QString lastFrameText("Last frame draw time: " + QString::number(painTime.count()) + "s");
           painter->drawText(QPoint(10, 50), mess);
           painter->drawText(QPoint(10, 60), ipMess);
           QString bufferState;
           if (bufferEnabled) {
               bufferState = QString("Experimental BUFFER is enabled!");
               QString currentBufferSize("Current buffer load: " + QString::number(buffer.size()));
               painter->drawText(QPoint(10, 80), currentBufferSize);
           }
           else {
               bufferState = QString("Experimental BUFFER is disabled!");
           }
           painter->drawText(QPoint(10, 70), bufferState);
           painter->drawText(QPoint(10, height - 25), lastFrameText);
       }
    }


    void GLWidget::drawImageGLLatest(QPainter* painter, QPaintEvent* event, int elapsed) {
       auto start = chrono::high_resolution_clock::now();
       painter->drawPixmap(QRect(0, 0, width, height), lastImage);
       if (debug) {
           QFont font = painter->font();
           font.setPointSize(25);
           painter->setPen(Qt::red);
           string camMess = "CAMID: " + camId;
           QString mess(camMess.c_str());
           string camIp = "IP: " + camUtils->getIp(camId);
           QString ipMess(camIp.c_str());
           QString bufferSize("Buffer size: " + QString::number(buffer.size()));
           QString lastFrameText("Last frame draw time: " + QString::number(painTime.count()) + "s");
           painter->drawText(QPoint(10, 50), mess);
           painter->drawText(QPoint(10, 60), ipMess);
           QString bufferState;
           if(bufferEnabled){
               bufferState = QString("Experimental BUFFER is enabled!");
               QString currentBufferSize("Current buffer load: " + QString::number(buffer.size()));
               painter->drawText(QPoint(10,80), currentBufferSize);
           }
           else {
               bufferState = QString("Experimental BUFFER is disabled!");
               QString currentBufferSize("Current buffer load: " + QString::number(buffer.size()));
               painter->drawText(QPoint(10, 80), currentBufferSize);
           }
           painter->drawText(QPoint(10, 70), bufferState);
           painter->drawText(QPoint(10, height - 25), lastFrameText);

       }
       auto end = chrono::high_resolution_clock::now();
       painTime = end - start;
    }



    /* END DRAWING STUFF */



    /* UI EVENTS */

    void GLWidget::mousePressEvent(QMouseEvent* e) {

       if (e->button() == Qt::LeftButton) {
           if (fullScreen == nullptr || !fullScreen->isVisible()) { // Do not unpause if window is opened
               playing = !playing;
           }
       }

       if (e->button() == Qt::RightButton) {
           onRightClickMenu(e->pos());
       }
    }



    /* Utilities */
     QImage GLWidget::toQImageFromPMat(cv::Mat* mat) {
        // Wrap the BGR Mat data (including its row stride) and swap to RGB;
        // rgbSwapped() returns a deep copy, so the QImage does not alias the Mat.
        return QImage(mat->data, mat->cols, mat->rows, static_cast<int>(mat->step), QImage::Format_RGB888).rgbSwapped();
     }

    /* State control */

    void GLWidget::killStream() {
       cam->killStream();
       camThread->join();
    }

    void GLWidget::setBufferEnabled(bool newBufferState) {
        cout << "Player: " << camId << ", buffer state updated: " << newBufferState << endl;
        bufferEnabled = newBufferState;
        buffer = {};   // discard any queued frames (std::queue has no clear())
    }

    void GLWidget::setCameraRetryConnection(bool newState) {
       cam->setReconnectable(newState);
    }

    /* Destruction */
    GLWidget::~GLWidget() {
       cam->killStream();
       camThread->join();
     }

    CamUtils.h:

    #pragma once
    #include <iostream>
    #include <vector>
    #include <fstream>
    #include <map>
    #include <string>
    #include <sstream>
    #include <algorithm>
     #include <nlohmann/json.hpp>

    using namespace std;
    using json = nlohmann::json;

    class CamUtils
    {
    private:

       string camDb = "cameras.dViewer";
        map<string, vector<string>> cameraList; // Legacy
       json cameras;
       ofstream dbFile;
       bool dbExists(); // Always hard coded

       /* Old IMPLEMENTATION */
        void writeLineToDb_(const string& content, bool append = false);
       void loadCameras_();

       /* JSON based */
       void loadCameras();

    public:
       CamUtils();
       string generateRandomString(size_t length);
       string getCameraStreamURL(string cameraId) const;
       string saveCamera(string ip, string username, string pass); // Return generated id
       vector<string> listAllCameraIds();
       string getIp(string cameraId);
    };


    </string></algorithm></sstream></string></map></fstream></vector></iostream>

    CamUtils.cpp:

    #include "CamUtils.h"
    #pragma comment(lib, "rpcrt4.lib")  // UuidCreate - Minimum supported OS Win 2000
     #include <rpc.h>   // UuidCreate, UuidToStringA
    #include <iostream>

    CamUtils::CamUtils()
    {
       if (!dbExists()) {
           ofstream dbFile;
           dbFile.open(camDb);
           cameras["cameras"] = json::array();
            dbFile << cameras << std::endl;
           dbFile.close();

       }
       else {
           loadCameras();
       }
    }




    vector<string> CamUtils::listAllCameraIds() {
       vector<string> ids;
        cout << "IN LIST " << endl;
        for (auto& cam : cameras["cameras"]) {
            ids.push_back(cam["id"].get<string>());
            //cout << cam["id"].get<string>() << std::endl;
       }
       return ids;
    }

    string CamUtils::getIp(string id) {
       vector<string> camDetails = cameraList[id];
        string ip = "NO IP WILL BE DISPLAYED UNTIL I FIGURE OUT A BUG";
        for (auto& cam : cameras["cameras"]) {
           if (id == cam["id"]) {
               ip = cam["ip"].get<string>();
           }
       }

       return ip;
    }

    string CamUtils::getCameraStreamURL(string id) const {
       string url = "err"; // err is the default, it will be overwritten in case id is found, dont forget to check for it

        for (auto& cam : cameras["cameras"]) {
           if (id == cam["id"]) {
               if (cam["username"].get<string>() == "null") {
                   url = "rtsp://" + cam["ip"].get<string>() + ":554/axis-media/media.amp?tcp";
               }
               else {
                   url = "rtsp://" + cam["username"].get<string>() + ":" + cam["password"].get<string>() + "@" + cam["ip"].get<string>() + ":554/axis-media/media.amp?streamprofile=720_30";
               }
           }
       }

       return url;  // Dont forget to check for err when using this shit
    }


    string CamUtils::saveCamera(string ip, string username, string password) {
       UUID uid;
        UuidCreate(&uid);
        char* str;
        UuidToStringA(&uid, (RPC_CSTR*)&str);
        string id = str;
        cout << "GEN: " << id << endl;
        json cam = json({}); // Create empty object
       cam["id"] = id;
       cam["ip"] = ip;
       cam["username"] = username;
       cam["password"] = password;
       cameras["cameras"].push_back(cam);
       std::ofstream out(camDb);
        out << cameras << std::endl;
        cout << cameras["cameras"] << endl;

        cout << "Saved camera as " << id << endl;
       return id;
    }


    bool CamUtils::dbExists() {
       ifstream dbFile(camDb);
       return (bool)dbFile;
    }





    void CamUtils::loadCameras() {
        cout << "Load call" << endl;
       ifstream dbFile(camDb);
       string line;
       string wholeFile;

       while (std::getline(dbFile, line)) {
            cout << line << endl;
           wholeFile += line;
       }
       try {
           cameras = json::parse(wholeFile);
            //cout << cameras["cameras"] << endl;

       }
        catch (const exception& e) {
            cout << e.what() << endl;
       }
       dbFile.close();
    }










    /*
       LEGACY CODE, TO BE REMOVED!

    */



    void CamUtils::loadCameras_() {
       /*
           LEGACY CODE:
           This used to be the way to load cameras, but I moved on to JSON based configuration so this is no longer needed and will be removed soon
       */

       ifstream dbFile(camDb);
       string line;
       while (std::getline(dbFile, line)) {
           /*
               This function load camera data to the map:
               The order MUST be the following: 0:ID, 1:IP, 2:USERNAME, 3:PASSWORD.
               Always delimited with | no spaces between!
           */
           if (!line.empty()) {
               stringstream ss(line);
               string item;
               vector<string> splitString;

               while (std::getline(ss, item, '|')) {
                   splitString.push_back(item);
               }
               if (splitString.size() > 0) {
                   /* Dont even parse if the program didnt split right*/
                //cout << "Split string: " << splitString.size() << "\n";
                for (int i = 0; i < (splitString.size()); i++) cameraList[splitString[0]].push_back(splitString[i]);
               }
           }
       }
    }



     void CamUtils::writeLineToDb_(const string& content, bool append) {
       ofstream dbFile;
        cout << "Creating?";
       if (append) {
           dbFile.open(camDb, ios_base::app);
       }
       else {
           dbFile.open(camDb);
       }

        dbFile << content.c_str() << "\r\n";
       dbFile.flush();
    }

    /* JSON Reworx */




    string CamUtils::generateRandomString(size_t length)
    {
       const char* charmap = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
       const size_t charmapLength = strlen(charmap);
        auto generator = [&]() { return charmap[rand() % charmapLength]; };
       string result;
       result.reserve(length);
       generate_n(back_inserter(result), length, generator);
       return result;
     }

    End of example

    How would I go about decreasing CPU usage when dealing with a large number of streams?

  • The 7 GDPR Principles: A Guide to Compliance

    11 August 2023, by Erin — Analytics Tips, GDPR

    We all knew it was coming. It’s all anyone could talk about — the General Data Protection Regulation (GDPR) took effect on 25 May 2018. 

    You might think five years would have been plenty of time for organisations to achieve compliance, yet many have failed to do so. As of 2022, 81% of French businesses and 95% of American companies were still not compliant.

    If you’re one of these organisations still working on compliance, this blog will provide valuable information about the seven GDPR principles and guide you on your way to compliance. It will also explore how web analytics tools can help organisations improve transparency, ensure data security and achieve GDPR compliance.

    What is GDPR?

    The European Union (EU) created the General Data Protection Regulation (GDPR) to grant individuals greater control over their data and promote transparency in data processing. 

    Known by many other names across Europe (e.g., RGPD, DSGVO, etc.), the GDPR created a set of rules surrounding the handling of personal data of EU citizens and residents, to make sure organisations aren’t being irresponsible with user names, locations, IP addresses, information gleaned from cookies, and so on. 

    Organisations must assume several responsibilities to achieve GDPR compliance, regardless of their physical location. These obligations include:

    • Respecting user rights
    • Implementing documentation and document retention policies
    • Ensuring data security 

    Why is GDPR compliance important?

    Data has become a valuable asset for businesses worldwide. The collection and use of data is a feature of almost every sector. However, with increased data usage comes a greater responsibility to protect individuals’ privacy and rights. 

    A YouGov study conducted in 17 key markets found that two in three adults worldwide believe tech corporations across all markets have too much control over their data.

    GDPR is the most extensive government framework aiming to tackle the increasing concern over data collection and handling. GDPR safeguards personal data from misuse, unauthorised access and data breaches. It ensures that businesses handle information responsibly and with respect for individual privacy. It also provided a foundation for similar laws to be created in other countries, including China, which is among the least concerned regions (56%), along with Sweden (54%) and Indonesia (56%).

    GDPR has been pivotal in safeguarding personal data and empowering individuals with more control over their information. Compliance with GDPR builds trust between businesses and their customers. Currently, 71% of the countries in the world are covered by data protection and privacy legislation.

    What are the risks of non-compliance?

    We've established the significance of GDPR, but what about the implications — what does it mean for your business? The consequences of non-compliance can be severe and are not worth being lax about.

    According to Article 83 of the GDPR, you can be penalised up to 4% of your annual global revenue or €20 million, whichever is higher, for violations. For smaller businesses, such substantial fines could be devastating. Non-compliance could even result in legal action from individuals or data protection authorities, leading to further financial losses.

    The potential outcomes are not just legal and financial. GDPR violations can significantly damage your reputation as a company. Non-compliance could also cost you business opportunities if your policies and processes do not comply and therefore do not align with those of potential partners. Customers trust businesses that take data protection seriously over those that do not.

    Finally, and perhaps the mildest outcome on the surface, individuals have the right to complain to data protection authorities if they believe you violate their data rights. These complaints can trigger an investigation, and if your business is found to be breaking the rules, you could face all of the consequences mentioned above.

    You may think it couldn’t happen to you, but GDPR fines have collectively reached over €4 billion and are growing at a notable rate. Fines grew 92% from H1 2021 compared with H1 2022. A record-breaking €1.2 billion fine to Meta in 2023 is the biggest we’ve seen, so far. But smaller businesses can be fined, too. A bank in Hungary was fined €1,560 for not erasing and correcting data when the subject requested it. (Individuals can also be fined in flagrant cases, like a police officer fined €1,400 for using police info for private purposes.)

    The 7 GDPR principles and how to comply

    You should now have a good understanding of GDPR, why it’s important and the consequences of not being compliant. 

    Your first step to compliance is to identify the personal data your organisation processes and determine the legal basis for processing each type. You then need to review your data processing activities to ensure they align with the GDPR’s purpose and principles.

    There are seven key principles in Article 5 of the GDPR that govern the lawful processing of personal data:

    Lawfulness, fairness and transparency

    This principle ensures you collect and use data in a legal and transparent way. It must be collected with consent, and you must tell your customers why you need their data. Data processing must be conducted fairly and transparently. 

    How to comply

    • Review your data practices and identify if and why you collect personal data from customers.
    • Update your website and forms to include a clear and easy-to-understand explanation of why you need their data and what you’ll use it for.
    • Obtain explicit consent from individuals when processing their sensitive data.
    • Add a cookie consent banner to your website, informing users about the cookies you use and why.
    • Privacy notices must be accessible at all times. 
    • To ensure your cookies are GDPR compliant, you must:
      • Get consent before using any cookies (except strictly necessary cookies). 
      • Clearly explain what each cookie tracks and its purpose.
      • Document and store user consent.
      • Don’t refuse access to services if users do not consent to the use of certain cookies.
      • Make the consent withdrawal process simple. 

    Use tools like Matomo that can be configured to automatically anonymise data so you don’t process any personal data.

    Purpose limitation

    You can only use data for the specific, legitimate purposes you told your visitors, prospects or customers about at the time of collection. You can’t use it for anything else without asking again. 

    How to comply

    • Define the specific purposes for collecting personal data (e.g., processing orders, sending newsletters).
    • Ensure you don’t use the data for any other purposes without getting explicit consent from the individuals.

    Data minimisation

    Data minimisation means you should only collect the data you need, aligned with the stated purpose. You shouldn’t gather or store more data than necessary. Implementing data minimisation practices ensures compliance and protects against data breaches.

    How to comply

    • Identify the minimum data required for each purpose.
    • Conduct a data audit to identify and eliminate unnecessary data collection points.
    • Don’t ask for unnecessary information or store data that’s not essential for your business operations.
    • Implement data retention policies to delete data when it is no longer required.

    Accuracy

    You are responsible for keeping data accurate and up-to-date at all times. You should have processes to promptly erase or correct any data if you have incorrect information for your customers.

    How to comply

    • Implement a process to regularly review and update customer data.
    • Provide an easy way for customers to request corrections to their data if they find any errors.

    Storage limitation

    Data should not be kept longer than necessary. You should only hold onto it for as long as you have a valid reason, which should be the purpose stated and consented to. Securely dispose of data when it is no longer needed. There is no upper time limit on data storage. 

    How to comply

    • Set clear retention periods for the different types of data you collect.
    • Develop data retention policies and adhere to them consistently.
    • Delete data when it’s no longer needed for the purposes you specified.

    Integrity and confidentiality

    You must take measures to protect data from unauthorised or unlawful access, like keeping it locked away and secure.

    How to comply

    • Securely store personal data with encryption and access controls, and keep it either within the EU or somewhere with similar privacy protections. 
    • Train your staff on data protection and restrict access to data only to those who need it for their work.
    • Conduct regular security assessments and address vulnerabilities promptly.

    Accountability

    Accountability means that you are responsible for complying with the other principles. You must demonstrate that you are following the rules and taking data protection seriously.

    How to comply

    • Appoint a Data Protection Officer (DPO) or someone responsible for data privacy in your company.
    • Maintain detailed records of data processing activities and any data breaches.
    • Data breaches must be reported within 72 hours.

    Compliance with GDPR is an ongoing process, and it’s vital to review and update your practices regularly. 

    What are GDPR rights?

    Individuals are granted various rights under the GDPR. These rights give them more control over their personal data.

    A diagram with the GDPR consumer rights

    The right to be informed: People can ask why their data is required.

    What to do: Explain why personal data is required and how it will be used.

    The right to access: People can request and access the personal data you hold about them.
    What to do: Provide a copy of the data upon request, free of charge and within one month.

    The right to rectification: If data errors or inaccuracies are found, your customers can ask you to correct them.
    What to do: Promptly update any incorrect information to ensure it is accurate and up-to-date.

    The right to object to processing: Your customers have the right to object to the processing of their data for certain purposes, like direct marketing.
    What to do: Respect this objection unless you have legitimate reasons for processing the data.

    Rights in relation to automated decision-making and profiling: GDPR gives individuals the right not to be subject to decisions based solely on automated processing, including profiling, if it significantly impacts them.
    What to do: Offer individuals the right to human intervention and to express their point of view in such cases.

    The right to be forgotten: Individuals can request the deletion of their data under certain circumstances, such as when the data is no longer necessary or when they withdraw consent.
    What to do: Comply with such requests unless you have a legal obligation to keep the data.

    The right to data portability: People can request their personal data in a commonly used and machine-readable format.
    What to do: Provide the data to the individual if they want to transfer it to another service provider.

    The right to restrict processing: Customers can ask you to temporarily stop processing their data, for example, while they verify its accuracy or when they object to its usage.
    What to do: Store the data during this period but do not process it further.

    Are all website analytics tools GDPR compliant?

    Unfortunately, not all web analytics tools are built the same. No matter where you are located in the world, if you are processing the personal data of European citizens or residents, you need to fulfil GDPR obligations.

    While your web analytics tool helps you gain valuable insights from your user base and web traffic, they don’t all comply with GDPR. No matter how hard you work to adhere to the seven principles and GDPR rights, using a non-compliant tool means that you’ll never be fully GDPR compliant.

    When using website analytics tools and handling data, you should consider the following:

    Collection of data

    Aligned with the lawfulness, fairness and transparency principle, you must collect consent from visitors for tracking if you are using website analytics tools to collect visitor behavioural data — unless you anonymise data entirely with Matomo.

    A settings interface in the Matomo web analytics tool

    To provide transparency, you should also clarify the types of data you collect, such as IP addresses, device information and browsing behaviour. Note that data collection aims to improve your website’s performance and understand your audience better.

    Storage of data

    Assure your visitors that you securely store their data and only keep it for as long as necessary, following GDPR’s storage limitation principle. Clearly state the retention periods for different data types and specify when you’ll delete or anonymise it.

    Usage of data

    Make it clear that to comply with the purpose limitation principle, the data you collect will not be used for other purposes beyond website analytics. You should also promise not to share data with third parties for marketing or unrelated activities without their explicit consent. 

    Anonymisation and pseudonymisation

    Features like IP anonymisation to protect users’ privacy are available with GA4 (Google Analytics) and Matomo. Describe how you use these tools and mention that you may use pseudonyms or unique identifiers instead of real names to safeguard personal data further.

    Cookies and consent

    Inform visitors that your website uses cookies and other tracking technologies for analytics purposes. Matomo offers customisable cookie banners and opt-out options that allow users to choose their preferences regarding cookies and tracking, along with cookieless options that don’t require consent banners. 

    Right to access and correct data

    Inform visitors of their rights and provide instructions on requesting information. Describe how to correct inaccuracies in their data and update their preferences.

    Security measures

    Assure visitors that you take data security seriously and have implemented measures to protect their data from unauthorised access or breaches. You can also use this opportunity to highlight any encryption or access controls you use to safeguard data.

    Contact information

    Provide contact details for your company’s Data Protection Officer (DPO) and encourage users to reach out if they have any questions or concerns about their data and privacy.

    When selecting web analytics tools, consider how well they align with GDPR principles. Look for features like anonymisation, consent management options, data retention controls, security measures and data storage within the EU or a similarly privacy-protecting jurisdiction. 

    Matomo offers an advanced GDPR Manager. This is to make sure websites are fully GDPR compliant by giving users the ability to access, withdraw consent, object or erase their data, in addition to the anonymising features.

    And finally, when you use Matomo, you have 100% data ownership — stored with us in the EU if you’re using Matomo Cloud or on your own servers with Matomo On-Premise — so you can be data-driven and still be compliant with worldwide privacy laws. We are also trusted across industries as we provide accurate data (no trying to fill in the gaps with AI), a robust API that lets you connect your data to your other tools and cookieless tracking options so you don’t need a cookie consent banner. What’s more, our open-source nature allows you to explore the inner workings, offering the assurance of security firsthand. 

    Ready to become GDPR compliant?

    Whether you’re an established business or just starting out, if you work with data from EU citizens or residents, then achieving GDPR compliance is essential. It doesn’t need to cost you a fortune or five years to get to compliant status. With the right tools and processes, you can be on top of the privacy requirements in no time at all, avoiding any of those hefty penalties or the resulting damage to your reputation. 

    You don’t need to sacrifice powerful data insights to be GDPR compliant. While Google Analytics uses data for its ‘own purposes’, Matomo is an ethical alternative. Using our all-in-one web analytics platform means you own 100% of your data 100% of the time. 

    Start a 21-day free trial of Matomo — no credit card required.

    Disclaimer

    We are not lawyers and don’t claim to be. The information provided here is to help give an introduction to GDPR. We encourage every business and website to take data privacy seriously and discuss these issues with your lawyer if you have any concerns.

  • ProgressBar findViewById returns null

    1 April 2017, by ziad kiwan

    Help me please, it's getting frustrating!

    I'm doing a chat application where I'm adding views to a linear layout programmatically each time a user presses the send button. The code is below:

    public  void appendToMessageHistory(String id,String uname, String messa,String messageType, final String filepath, String DownloadStatus,boolean internet,int type) {


       //TextView tv=new TextView(Messaging.this);
       LinearLayout.LayoutParams lp = new LinearLayout.LayoutParams(Width, LinearLayout.LayoutParams.WRAP_CONTENT);
       lp.setMargins(0, 10, 0, 0);

       View v = getLayoutInflater().inflate(R.layout.message_entity, null);


       TextView mess;
       TextView time;
       final ImageView iv;
       final ProgressBar progressBar;
       ImageView statusiv;
       if (friend.userName.equals(uname)) {
           v = getLayoutInflater().inflate(R.layout.message_entity, null);
           v.setId(Integer.parseInt(id));
           mess = (TextView) v.findViewById(R.id.message_entity_message);
           time = (TextView) v.findViewById(R.id.message_entity_time);
           iv = (ImageView) v.findViewById(R.id.message_entity_imageview);
           statusiv = (ImageView) v.findViewById(R.id.imageView);
           progressBar = (ProgressBar) v.findViewById(R.id.message_entity_progressbar);
       } else {
           v = getLayoutInflater().inflate(R.layout.message_entity_right, null);
           lp.gravity = Gravity.RIGHT;
           v.setId(Integer.parseInt(id));
           mess = (TextView) v.findViewById(R.id.message_entity_right_message);
           time = (TextView) v.findViewById(R.id.message_entity_right_time);
           iv = (ImageView) v.findViewById(R.id.message_entity_right_imageview);
           statusiv = (ImageView)v.findViewById(R.id.imageViewleft);
           progressBar = (ProgressBar) v.findViewById(R.id.message_entity_Right_progressbar);
       }

       try {
           progressBar.setId(Integer.parseInt(id));

       } catch (Exception e) {
           e.printStackTrace();
       }

       if (messageType != null) {
           if (messageType.equals(MessageInfo.MESSAGE_TYPE_PIC)) {
               iv.setImageBitmap(ImageHandlet.GetBitmapFromPath(filepath));
               progressBar.setVisibility(View.GONE);
               iv.setOnClickListener(new OnClickListener() {
                   @Override
                   public void onClick(View v) {
                       Intent intent = new Intent();
                       intent.setAction(Intent.ACTION_VIEW);
                       intent.setDataAndType(Uri.parse("file://" + filepath), "image/png");
                       startActivity(intent);
                       Log.d("FIlePAth", filepath);
                   }
               });

           } else if (messageType.equals(MessageInfo.MESSAGE_TYPE_VIDEO)) {
               if (DownloadStatus.equals(LocalStorageHandler.DOWNLOADED)) {
                   if(!(friend.userName.equals(uname))) {
                       if (type != 3) {
                           Bitmap bitTh = ThumbnailUtils.createVideoThumbnail(filepath, MediaStore.Images.Thumbnails.MINI_KIND);
                           iv.setImageBitmap(bitTh);
                           progressBar.setVisibility(View.GONE);
                           iv.setOnClickListener(new OnClickListener() {
                               @Override
                               public void onClick(View v) {
                                   Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse(filepath));
                                   intent.setDataAndType(Uri.parse(filepath), "video/mp4");
                                   startActivity(intent);

                               }
                           });
                       } else {
                           Log.w("Zipping","Progress Bar");
                           progressBar.setVisibility(View.VISIBLE);
                       }
                   } else {
                       Bitmap bitTh = ThumbnailUtils.createVideoThumbnail(filepath, MediaStore.Images.Thumbnails.MINI_KIND);
                       iv.setImageBitmap(bitTh);
                       progressBar.setVisibility(View.GONE);
                       iv.setOnClickListener(new OnClickListener() {
                           @Override
                           public void onClick(View v) {
                               Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse(filepath));
                               intent.setDataAndType(Uri.parse(filepath), "video/mp4");
                               startActivity(intent);

                           }
                       });
                   }
               } else if (DownloadStatus.equals(LocalStorageHandler.NotDOWNLOADED)) {
                   iv.setTag(id);
                   iv.setImageResource(R.drawable.download);
                   iv.setOnClickListener(new OnClickListener() {
                       @Override
                       public void onClick(View v) {
                           progressBar.setVisibility(View.VISIBLE);
                           Cursor c = localstoragehandler.getIDnfo(iv.getTag().toString());
                           String filepath = "";
                           //Toast.makeText(getApplication(),iv.getTag().toString()+" iD " +iv.getId(),Toast.LENGTH_SHORT).show();

                           while (c.moveToNext()) {
                               String msg0 = c.getString(0);
                               String msg2 = c.getString(2);
                               String msg3 = c.getString(3);
                               String msg4 = c.getString(4);
                               String msg5 = c.getString(5);
                               String msg6 = c.getString(6);

                               filepath = msg5;

                               Log.d("-----------Vedio-----", "------------------");
                               Log.d("DATABASE---------", msg0);
                               Log.d("DATABASE-------", msg2);
                               Log.d("DATABASE---------", msg3);
                               Log.d("DATABASE-----", msg4 + "");
                               Log.d("DATABASE-------", msg5 + "");
                               Log.d("DATABASE----------", msg6 + "");
                               Log.d("--------END-------", "-------END-----------");


                           }

                           Toast.makeText(getApplicationContext(), filepath.toString() + iv.getTag().toString(), Toast.LENGTH_SHORT).show();

                           DownloadFileFromURL downloadFileFromURL = new DownloadFileFromURL(filepath, iv.getTag().toString());
                           downloadFileFromURL.execute("");
                       }
                   });
               }
           } else {
               iv.setVisibility(View.GONE);
               progressBar.setVisibility(View.GONE);
           }
       }
       if(!(messageType.equals(MessageInfo.MESSAGE_TYPE_VIDEO))) {
           if (!internet) {
               statusiv.setImageResource(R.drawable.noconnectionl);
           }
       }
       mess.setText(messa);
       //time.setText(sendt);
       v.setLayoutParams(lp);
       final View lastview = v;
       runOnUiThread(new Runnable() {
           @Override
           public void run() {
               mEssageBox.addView(lastview);

           }
       });
       scrollView.post(new Runnable() {
           @Override
           public void run() {
               scrollView.fullScroll(View.FOCUS_DOWN);
               messageText.requestFocus();
           }
       });
    }

     And here is the XML of the dialog box:

     <?xml version="1.0" encoding="utf-8"?>
     <RelativeLayout>

        <ImageView />

        <RelativeLayout>

            <TextView />

            <TextView />

            <ImageView />

            <ProgressBar style="?android:attr/progressBarStyleLarge" />

        </RelativeLayout>
     </RelativeLayout>

     And here is the code that sends the video message:
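
     // Flow: store the message locally and render it with type 3 (progress bar only),
     // then compress the recorded video with FFmpeg into ZippedVideoPath, and on success
     // swap the progress bar for a thumbnail and upload the compressed file with the
     // UploadVideo AsyncTask.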

    final String message = messageText.getText().toString();
       final Long rowid = localstoragehandler.insert(imService.getUsername(), friend.userName, message, MessageInfo.MESSAGE_TYPE_VIDEO, ZippedVideoPath, LocalStorageHandler.DOWNLOADED); //insert data into db and get to chat_id
       appendToMessageHistory(rowid + "", imService.getUsername(), message, MessageInfo.MESSAGE_TYPE_VIDEO, ZippedVideoPath, LocalStorageHandler.DOWNLOADED, internet,3);// append the video with type 3 to just display the progress
       messageText.setText("Video");
       String newstr = null;
        // initialize the AsyncTask that uploads the compressed video
        class UploadVideo extends AsyncTask<Void, Void, String> {
           ProgressDialog uploading;
           View V;
           UploadVideo(View v){
               V = v;
           }

           @Override
           protected void onPreExecute() {
               super.onPreExecute();
               //          uploading = ProgressDialog.show(Messaging.this, "Uploading File", "Please wait...", false, false);
           }

           @Override
           protected String doInBackground(Void... params) {
               return imService.sendVideoMessage(imService.getUsername(), friend.userName, message, MessageInfo.MESSAGE_TYPE_VIDEO, "", ZippedVideoPath);
           }

           @Override
           protected void onPostExecute(String s) {
               super.onPostExecute(s);
            //   RelativeLayout v = (RelativeLayout) V.findViewById(rowid.intValue());
           //    Log.w("Zipping",v.toString());
        //       ProgressBar progressBar1 = (ProgressBar) V.findViewById(R.id.message_entity_Right_progressbar);
         //      Log.w("Zipping",progressBar1.toString());
               //          ImageView iv = (ImageView) v.findViewById(R.id.message_entity_right_imageview);
          //     progressBar1.setVisibility(View.GONE);
           //  Log.d("POST Execute", s + "");

                //textViewResponse.setText(Html.fromHtml("<b>Uploaded at <a href=\"" + s + "\">" + s + "</a></b>"));
               //textViewResponse.setMovementMethod(LinkMovementMethod.getInstance());
       // here i am preparing the path so when the compress is done to save into.
           }
       }
        if (null != absolute && absolute.length() > 0) {
           int endIndex = absolute.lastIndexOf("/");
           if (endIndex != -1) {
                newstr = absolute.substring(endIndex, absolute.length()); // don't forget the (endIndex != -1) check
           }
       }
       Log.w("Zipping", newstr);
       final File videosdir = new File(Environment.getExternalStorageDirectory().getAbsolutePath() + "/videos/");
       if (!videosdir.exists()) {
           videosdir.mkdirs();
       }
       ZippedVideoPath = videosdir.getAbsolutePath() + "/" + newstr;
       Log.w("Zipping", ZippedVideoPath);
        // the FFmpeg library for Android (added as an external dependency) compresses the video; prepare the command arguments here
       String[] complexCommand = {"-y", "-i", absolute, "-strict", "experimental", "-r", "25", "-vcodec", "mpeg4", "-b:a", "150k", "-ab", "48000", "-ac", "2", "-ar", "22050", ZippedVideoPath};
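        // For reference: "-y" overwrites any existing output file, "-i" names the input path,
        // "-strict experimental" allows experimental features, "-r 25" forces 25 fps,
        // "-vcodec mpeg4" selects the MPEG-4 video encoder, "-ac 2" / "-ar 22050" set stereo
        // audio at a 22.05 kHz sample rate, and "-b:a 150k" / "-ab 48000" both set the audio
        // bitrate (they are redundant aliases), with the result written to ZippedVideoPath.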
       FFmpeg ffmpeg = FFmpeg.getInstance(getBaseContext());
       try {
           // to execute "ffmpeg -version" command you just need to pass "-version"
           ffmpeg.execute(complexCommand, new ExecuteBinaryResponseHandler() {

               @Override
               public void onStart() {
                   Log.w("Zipping", "started");
               }

               @Override
               public void onProgress(String message) {
               }

               @Override
               public void onFailure(String message) {
                   Log.w("Zipping", message);
               }

               @Override
               public void onSuccess(String message1) {
                   Log.w("Zipping","Success");
                   RelativeLayout v = (RelativeLayout) mEssageBox.findViewById(rowid.intValue());
                   Log.w("VideoView",v.toString());
                   ProgressBar progressBar = (ProgressBar) v.findViewById(R.id.message_entity_Right_progressbar);
                   ImageView iv = (ImageView) v.findViewById(R.id.message_entity_right_imageview);
               //  progressBar.setVisibility(View.GONE);
                   Bitmap bitTh = ThumbnailUtils.createVideoThumbnail(ZippedVideoPath, MediaStore.Images.Thumbnails.MINI_KIND);
                   iv.setImageBitmap(bitTh);
                   progressBar.setVisibility(View.GONE);
                   iv.setOnClickListener(new OnClickListener() {
                       @Override
                       public void onClick(View v) {
                           Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse(ZippedVideoPath));
                           intent.setDataAndType(Uri.parse(ZippedVideoPath), "video/mp4");
                           startActivity(intent);

                       }
                   });
                   UploadVideo uv = new UploadVideo(v);
                   uv.execute();
               }

               @Override
               public void onFinish() {
               }
           });
       } catch (FFmpegCommandAlreadyRunningException e) {
           Log.w("Zipping", e.toString());
       }
    }

     Okay, now to the problem: when I call findViewById for the ProgressBar inside the FFmpeg onSuccess callback of the send-video code, it returns null, even though findViewById for the ImageView, which sits in the same layout, returns the view. So please tell me what the problem is (a small diagnostic sketch follows the snippet below).

    public void onSuccess(String message1) {
                   Log.w("Zipping","Success");
                    RelativeLayout v = (RelativeLayout) mEssageBox.findViewById(rowid.intValue());
                   ProgressBar progressBar = (ProgressBar) v.findViewById(R.id.message_entity_Right_progressbar); // this keeps returning null!
                   ImageView iv = (ImageView) v.findViewById(R.id.message_entity_right_imageview); // returns the view
               //  progressBar.setVisibility(View.GONE);
                   Bitmap bitTh = ThumbnailUtils.createVideoThumbnail(ZippedVideoPath, MediaStore.Images.Thumbnails.MINI_KIND);
                   iv.setImageBitmap(bitTh);
                   progressBar.setVisibility(View.GONE);
                   iv.setOnClickListener(new OnClickListener() {
                       @Override
                       public void onClick(View v) {
                           Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse(ZippedVideoPath));
                           intent.setDataAndType(Uri.parse(ZippedVideoPath), "video/mp4");
                           startActivity(intent);

                       }
                   });
                   UploadVideo uv = new UploadVideo(v);
                   uv.execute();
               }
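
     For what it's worth, here is a minimal diagnostic sketch (dumpChildIds is a hypothetical helper, not part of the code above) that logs every child id under the row view returned by mEssageBox.findViewById(rowid.intValue()), so I can check whether R.id.message_entity_Right_progressbar is actually attached to that row:

     // needs android.util.Log, android.view.View, android.view.ViewGroup
     private void dumpChildIds(ViewGroup root, String indent) {
         for (int i = 0; i < root.getChildCount(); i++) {
             View child = root.getChildAt(i);
             // log the concrete class and the id of every child so a missing id is easy to spot
             Log.d("IdDump", indent + child.getClass().getSimpleName()
                     + " id=0x" + Integer.toHexString(child.getId()));
             if (child instanceof ViewGroup) {
                 dumpChildIds((ViewGroup) child, indent + "  ");
             }
         }
     }

     // inside onSuccess, right after the row lookup:
     // dumpChildIds(v, "");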

     Thank you!