
Other articles (71)
-
Sites built with MediaSPIP
2 May 2011. This page presents a few of the sites running MediaSPIP.
You can of course add your own via the form at the bottom of the page. -
Changing the publication date
21 June 2013. How do you change the publication date of a media item?
You first need to add a "Publication date" field to the appropriate form template:
Administer > Form template configuration > select "A media item"
In the "Fields to add" section, tick "Publication date"
Click Save at the bottom of the page -
Customizing by adding your logo, banner or background image
5 September 2013. Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
On other sites (10876)
-
ffmpeg - Video live feed
22 June 2019, by Rajeev. I am using an AMCREST security camera at home. My objective is to get a live feed from one of the IP cameras attached to my NVR onto a web portal over rtsp://. My environment is a Raspberry Pi.
I am able to start ffserver successfully, but the conversion fails when I try to pass the input video and stream it to video.ffm.
I have tried various combinations of parameters in the command; the one below seems to be the closest, where I get only one error (av_interleaved_write_frame(): Connection reset by peer):
$ ffmpeg -thread_queue_size 800 -i "rtsp://home:Home1234@192.168.1.32:554/cam/realmonitor?channel=4&subtype=0" -f lavfi -i aevalsrc=0 http://127.0.0.1:8090/video.ffm
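One way to narrow this down (a suggestion; the test command is the one already written in the configuration comments below) is to feed ffserver a local file first. If that also dies with a connection reset, the RTSP input is not the culprit:
$ ffmpeg -i videoname.mp4 http://localhost:8090/video.ffm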
ffserver configuration file content (/etc/livestream.conf):
# Default port
HTTPPort 8090
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 100000
CustomLog -
#############################################################
<Feed video.ffm>
File /tmp/video.ffm # this creates a temp video.ffm file where streams are read/written
FileMaxSize 0.5G
ACL allow localhost
ACL allow 127.0.0.1
ACL allow 192.168.0.0 192.168.255.255
</Feed>
<Stream stream>
# streaming for webm file
# run : ffserver -f /etc/ffserver.conf
# run : ffmpeg -i videoname.mp4 http://localhost:8090/video.ffm
# error : encoder setup failed
Feed video.ffm
Format webm
# Audio settings
AudioCodec vorbis
AudioBitRate 64 # Audio bitrate
# Video settings
VideoCodec libvpx
VideoSize 720x486 # Video resolution
VideoFrameRate 30 # Video FPS
AVOptionVideo flags +global_header # Parameters passed to encoder
AVOptionVideo cpu-used 0
AVOptionVideo qmin 10 # lower is better, min 0
AVOptionVideo qmax 42 # higher outputs bad quality, max 63
AVOptionVideo quality good
AVOptionAudio flags +global_header
PreRoll 15
StartSendOnKey
VideoBitRate 400 # Video bitrate
</Stream>
###########################################################################
# Audio only
# run : ffmpeg -i audio.mp3 http://localhost:8090/audio.ffm
# open http://localhost:8090/audio in VLC or a browser
<Feed audio.ffm>
File /tmp/audio.ffm
FileMaxSize 1G
ACL allow localhost
ACL allow 127.0.0.1
ACL allow 192.168.0.0 192.168.255.255
</Feed>
<Stream audio>
Feed audio.ffm
Format mp2 # audio format
AudioCodec libmp3lame # audio codec
AudioBitRate 64 # audio bitrate
AudioChannels 1 # audio channels, 1 for mono and 2 for stereo
AudioSampleRate 44100
NoVideo # discard video
</Stream>
####################################################################
# View status of ffserver
<Stream stat.html>
Format status
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
</Stream>
# Redirect index.html to the appropriate site
<Redirect index.html>
URL http://www.ffmpeg.org/
</Redirect>
Output of ffserver running successfully on a separate console:
$ ffserver -f /etc/livestream.conf
ffserver version 3.2.14-1~deb9u1+rpt1 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 6.3.0 (Raspbian 6.3.0-18+rpi1+deb9u1) 20170516
configuration: --prefix=/usr --extra-version='1~deb9u1+rpt1' --toolchain=hardened --libdir=/usr/lib/arm-linux-gnueabihf --incdir=/usr/include/arm-linux-gnueabihf --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libebur128 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-omx-rpi --enable-mmal --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libiec61883 --arch=armhf --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
libavutil 55. 34.101 / 55. 34.101
libavcodec 57. 64.101 / 57. 64.101
libavformat 57. 56.101 / 57. 56.101
libavdevice 57. 1.100 / 57. 1.100
libavfilter 6. 65.100 / 6. 65.100
libavresample 3. 1. 0 / 3. 1. 0
libswscale 4. 2.100 / 4. 2.100
libswresample 2. 3.100 / 2. 3.100
libpostproc 54. 1.100 / 54. 1.100
/etc/livestream.conf:45 : Setting default value for audio sample rate = 22050. Use NoDefaults to disable it.
/etc/livestream.conf:45 : Setting default value for audio channel count = 1. Use NoDefaults to disable it.
/etc/livestream.conf:45 : Setting default value for video bit rate tolerance = 100000. Use NoDefaults to disable it.
/etc/livestream.conf:45 : Setting default value for video rate control equation = tex^qComp. Use NoDefaults to disable it.
/etc/livestream.conf:45 : Setting default value for video max rate = 13749264. Use NoDefaults to disable it.
/etc/livestream.conf:45 : Setting default value for video buffer size = 800000. Use NoDefaults to disable it.
Fri Jun 21 19:43:59 2019 FFserver started.
2nd Console
$ ffmpeg -thread_queue_size 800 -i "rtsp://home:Home1234@192.168.1.32:554/cam/realmonitor?channel=4&subtype=0" -f lavfi -i aevalsrc=0 http://127.0.0.1:8090/video.ffm
Output:
Input #0, rtsp, from 'rtsp://home:Home1234@192.168.1.32:554/cam/realmonitor?channel=4&subtype=0':
Metadata:
title : Media Server
Duration: N/A, start: 0.290000, bitrate: N/A
Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 2304x1296 [SAR 1:1 DAR 16:9], 20 fps, 250 tbr, 90k tbn, 40 tbc
Input #1, lavfi, from 'aevalsrc=0':
Duration: N/A, start: 0.000000, bitrate: 2822 kb/s
Stream #1:0: Audio: pcm_f64le, 44100 Hz, mono, dbl, 2822 kb/s
[swscaler @ 0x256dd80] deprecated pixel format used, make sure you did set range correctly
[libvpx @ 0x2564190] v1.6.1
Output #0, ffm, to 'http://127.0.0.1:8090/video.ffm':
Metadata:
title : Media Server
creation_time : now
encoder : Lavf57.56.101
Stream #0:0: Audio: vorbis (libvorbis), 22050 Hz, mono, fltp, 64 kb/s
Metadata:
encoder : Lavc57.64.101 libvorbis
Stream #0:1: Video: vp8 (libvpx), yuv420p, 720x486 [SAR 6:5 DAR 16:9], q=10-42, 400 kb/s, 20 fps, 1000k tbn, 30 tbc
Metadata:
encoder : Lavc57.64.101 libvpx
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 800000 vbv_delay: -1
Stream mapping:
Stream #1:0 -> #0:0 (pcm_f64le (native) -> vorbis (libvorbis))
Stream #0:0 -> #0:1 (h264 (native) -> vp8 (libvpx))
Press [q] to stop, [?] for help
frame= 2 fps=1.2 q=0.0 size= 8kB time=00:00:00.03 bitrate=1966.0kbits/s dup=1 drop=0 speed=0.0
av_interleaved_write_frame(): Connection reset by peer
Error writing trailer of http://127.0.0.1:8090/video.ffm: Connection reset by peer
frame= 2 fps=1.1 q=0.0 Lsize= 40kB time=00:00:00.03 bitrate=9830.2kbits/s dup=1 drop=0 speed=0.0189x
video:29kB audio:0kB subtitle:0kB other streams:0kB global headers:4kB muxing overhead: 36.183796%
Conversion failed
My expectation is that ffmpeg will start writing data to video.ffm in the /tmp directory, so that I can read it from a browser or VLC media player by entering the following link:
http://localhost:8090/stream
Update after 2 hours:
I made a slight change to the command parameters and my output changed as well. It looks like the temporary video file generated in the /tmp folder is not being consumed by video.ffm (I may be wrong in my analysis).
ffmpeg -thread_queue_size 1200 -i "rtsp://home:Home1234@192.168.1.32:554/cam/realmonitor?channel=4&subtype=0" -f lavfi -i aevalsrc=0 -override_ffserver http://127.0.0.1:8090/video.ffm
Output:
Past duration 0.660332 too large 376kB time=00:00:01.03 bitrate=2978.9kbits/s dup=3 drop=5 speed=0.171x
Past duration 0.637352 too large 840kB time=00:00:03.72 bitrate=1847.5kbits/s dup=14 drop=5 speed=0.213x
Past duration 0.678307 too large
Past duration 0.713280 too large 1180kB time=00:00:05.52 bitrate=1749.0kbits/s dup=21 drop=5 speed=0.218x
Past duration 0.901085 too large 1372kB time=00:00:06.72 bitrate=1670.4kbits/s dup=26 drop=5 speed=0.22x
Past duration 0.948051 too large 2456kB time=00:00:12.97 bitrate=1551.1kbits/s dup=51 drop=5 speed=0.226x
[rtsp @ 0x1fd7670] Thread message queue blocking; consider raising the thread_queue_size option (current value: 1200)
Past duration 0.713280 too large 3336kB time=00:00:17.64 bitrate=1549.0kbits/s dup=70 drop=5 speed=0.228x
[rtsp @ 0x1fd7670] max delay reached. need to consume packetbitrate=1537.0kbits/s dup=76 drop=5 speed=0.229x
[rtsp @ 0x1fd7670] RTP: missed 30 packets
[rtsp @ 0x1fd7670] max delay reached. need to consume packetbitrate=1533.3kbits/s dup=78 drop=5 speed=0.228x
[rtsp @ 0x1fd7670] RTP: missed 134 packets
[rtsp @ 0x1fd7670] max delay reached. need to consume packetbitrate=1539.5kbits/s dup=82 drop=5 speed=0.229x
[rtsp @ 0x1fd7670] RTP: missed 111 packets
[rtsp @ 0x1fd7670] max delay reached. need to consume packetbitrate=1526.3kbits/s dup=84 drop=5 speed=0.229x
[rtsp @ 0x1fd7670] RTP: missed 19 packets
[rtsp @ 0x1fd7670] max delay reached. need to consume packetbitrate=1521.7kbits/s dup=92 drop=5 speed=0.23x
[rtsp @ 0x1fd7670] RTP: missed 626 packets
Past duration 0.651329 too large 4408kB time=00:00:23.64 bitrate=1527.0kbits/s dup=94 drop=5 speed=0.23x
[rtsp @ 0x1fd7670] max delay reached. need to consume packetbitrate=1516.7kbits/s dup=94 drop=5 speed=0.23x
[rtsp @ 0x1fd7670] RTP: missed 134 packets
[rtsp @ 0x1fd7670] max delay reached. need to consume packetbitrate=1521.3kbits/s dup=98 drop=5 speed=0.231x
[rtsp @ 0x1fd7670] RTP: missed 123 packets
Past duration 0.633354 too large 4608kB time=00:00:24.98 bitrate=1511.0kbits/s dup=99 drop=5 speed=0.23x
[rtsp @ 0x1fd7670] max delay reached. need to consume packetbitrate=1523.4kbits/s dup=99 drop=5 speed=0.231x
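Two things worth trying at this point (assumptions, not a confirmed fix): force TCP for the RTSP transport, since the repeated "RTP: missed N packets" lines are classic UDP loss, and delete the stale feed file before restarting ffserver, because an existing /tmp/video.ffm keeps the codec parameters it was created with:
$ rm /tmp/video.ffm
$ ffserver -f /etc/livestream.conf
$ ffmpeg -rtsp_transport tcp -thread_queue_size 1200 -i "rtsp://home:Home1234@192.168.1.32:554/cam/realmonitor?channel=4&subtype=0" -f lavfi -i aevalsrc=0 -override_ffserver http://127.0.0.1:8090/video.ffm
-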
Exceeded GA’s 10M hits data limit, now what?
21 June 2019, by Joselyn Khor -
Capture from multiple streams concurrently, best way to do it and how to reduce CPU usage
19 June 2019, by DRONE_6969. I am currently writing an application that captures a lot of RTSP streams (in my case 12) and displays them in QT widgets. The problem arises when I go beyond around 6-7 streams: CPU usage spikes and there is visible stutter.
The reason I think it is not the QT draw function is that I measured how long it takes to draw an incoming image, both from the cameras and from sample images I had: it is always well under 33 milliseconds, even with all 12 widgets being updated.
I also ran the opencv capture method without drawing and got pretty much the same CPU consumption as when drawing the frames (at most about 10% less CPU, and GPU usage went to zero).
IMPORTANT: I am using RTSP streams, which are h264 streams.
MY SPECS, IF IT MATTERS:
Intel Core i7-6700 @ 3.40GHZ (8 CPUs)
Memory: 16 GB
GPU: Intel HD Graphics 530 (I also ran my code on a computer with a dedicated graphics card; it eliminated some stutter, but CPU usage is still pretty high)
I am currently using OPENCV 4.1.0 with GSTREAMER enabled and built. I also have the OPENCV-WORLD version; there is no difference in performance.
I have created a class called Camera that holds its frame size constraints and various control functions as well as the stream function. The stream function runs on a separate thread; whenever stream() is done with the current frame, it sends the ready Mat via an onNewFrame event I created, which converts it to a QPixmap and updates the widget's lastImage variable. This way I can update the image in a more thread-safe way.
I have tried to manipulate those VideoCapture.set() values, but it didn't really help.
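A sketch of the kind of capture-side tweak meant here (an assumption, not verified on this setup: CAP_PROP_BUFFERSIZE is ignored by backends that do not support it, and grab() still decodes; it only skips the colour conversion and copy that retrieve() performs):
#include <opencv2/opencv.hpp>
#include <string>
// Sketch: cap the driver-side buffer and only convert every other frame.
void drainHalfRate(const std::string& streamUrl) {
cv::VideoCapture cap(streamUrl, cv::CAP_FFMPEG);
cap.set(cv::CAP_PROP_BUFFERSIZE, 2); // ignored where unsupported
cv::Mat frame;
int n = 0;
while (cap.grab()) { // grab() decodes but does not convert
if (++n % 2) continue; // cheaply drop every other frame
cap.retrieve(frame); // convert/copy only the frames we keep
// ... hand `frame` to the listeners as in stream() ...
}
}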
This is my stream function (ignore the bool return; it doesn't do anything, it is a remnant from a couple of minutes ago when I was trying to use std::async):
bool Camera::stream() {
/* This function is meant to run on a separate thread and fill up the buffer independently of
the main stream thread */
//cv::setNumThreads(100);
/* Rules for these slightly changed! */
Mat pre; // Grab initial undoctored frame
//pre = Mat::zeros(size, CV_8UC1);
Mat frame; // Final modified frame
frame = Mat::zeros(size, CV_8UC1);
if (!pre.isContinuous()) pre = pre.clone();
ipCam.open(streamUrl, CAP_FFMPEG);
while (ipCam.isOpened() && capture) {
// While the camera is open we need to capture and process the frame
try {
auto start = std::chrono::system_clock::now();
ipCam >> pre;
if (pre.empty()) {
/* Check for blank frame, return error if there is a blank frame*/
cerr << id << ": ERROR! blank frame grabbed\n";
for (FrameListener* i : clients) {
i->onNotification(1); // Notify clients about the blank frame
}
break;
}
else {
// Only continue if frame not empty
if (pre.cols != size.width || pre.rows != size.height) { // || so a single mismatched dimension still triggers the resize
resize(pre, frame, size);
pre.release();
}
else {
frame = pre;
}
dPacket* pack = new dPacket{id,&frame};
for (auto i : clients) {
i->onPNewFrame(pack);
}
frame.release();
delete pack;
}
}
catch (int e) {
cout << endl << "-----Exception during capture process! CODE " << e << endl;
}
// End camera manipulations
}
cout << "Camera timed out, or connection is closed..." << endl;
if (tryResetConnection) {
cout << "Reconnection flag is set, retrying after 3 seconds..." << endl;
for (FrameListener* i : clients) {
i->onNotification(-1); // Notify clients that the connection dropped
}
this_thread::sleep_for(chrono::milliseconds(3000));
stream();
}
return true;
}
This is my onPNewFrame function. The conversion is still being done on the camera's thread, because it is called from within stream() and is therefore within that scope (and I also checked):
void GLWidget::onPNewFrame(dPacket* inPack) {
lastFlag = 0;
if (bufferEnabled) {
buffer.push(QPixmap::fromImage(toQImageFromPMat(inPack->frame)));
}
else {
if (playing) {
/* Only process if this widget is playing */
frameProcessing = true;
lastImage.convertFromImage(toQImageFromPMat(inPack->frame));
frameProcessing = false;
}
}
if (lastFlag != -1 && !lastImage.isNull()) {
connecting = false;
}
else {
connecting = true;
}
}
This is my Mat to QImage conversion:
QImage GLWidget::toQImageFromPMat(cv::Mat* mat) {
// Pass the row stride explicitly: Mat rows can be padded, and QImage assumes
// tightly packed rows when bytesPerLine is omitted.
return QImage(mat->data, mat->cols, mat->rows, static_cast<int>(mat->step), QImage::Format_RGB888).rgbSwapped();
}
NOTE: not converting does not result in a CPU boost (at least not a significant one).
Minimal verifiable example
This program is large. I am going to paste GLWidget.cpp and GLWidget.h as well as Camera.h and Camera.cpp. You can put GLWidget into anything, as long as you spawn more than 6 of them. Camera relies on CamUtils, but it is possible to just paste a URL into the VideoCapture.
I also supplied CamUtils, just in case
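FrameListener.h and dPacket.h are referenced throughout but were not pasted; the following is a minimal sketch reconstructed from how they are used (an assumption, not the original headers), so the example can compile:
#pragma once
#include <string>
#include <opencv2/core.hpp>
// Packet handed from Camera to its listeners (reconstructed from usage).
struct dPacket {
std::string id; // camera id
cv::Mat* frame; // frame being handed to listeners
};
// Listener interface implemented by GLWidget (reconstructed from usage).
class FrameListener {
public:
virtual void onPNewFrame(dPacket* pack) = 0;
virtual void onNotification(int alertCode) = 0;
virtual ~FrameListener() = default;
};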
Camera.h:
#pragma once
#include <iostream>
#include <vector>
#include <fstream>
#include <map>
#include <string>
#include <sstream>
#include <algorithm>
#include "FrameListener.h"
#include <opencv2/opencv.hpp> // assumed include: this header uses cv::VideoCapture, cv::Mat and cv::Size
#include <thread>
#include "CamUtils.h"
#include <ctime>
#include "dPacket.h"
using namespace std;
using namespace cv;
class Camera
{
/*
CLEANED UP!
Camera now is only responsible for streaming and echoing captured frames.
Frames are now wrapped into dPacket struct.
*/
private:
string id;
vector<FrameListener*> clients;
VideoCapture ipCam;
string streamUrl;
Size size;
bool tryResetConnection = false;
//TODO: Remove these as they are not going to be used going on:
bool isPlaying = true;
bool capture = true;
//SECRET FEATURES:
bool detect = false;
public:
Camera(string url, int width = 480, int height = 240, bool detect_=false);
bool stream();
void setReconnectable(bool newReconStatus);
void addListener(FrameListener* client);
vector<bool> getState(); // Returns current state: vector[0] playing state; vector[1] capture state; TODO: Remove this as this should no longer control behaviour
void killStream();
bool getReconnectable();
};
Camera.cpp:
#include "Camera.h"
Camera::Camera(string url, int width, int height, bool detect_) // Default 240p
{
streamUrl = url; // Prepare url
size = Size(width, height);
detect = detect_;
}
void Camera::addListener(FrameListener* client) {
clients.push_back(client);
}
/*
TEST CAMERAS(Paste into cameras.dViewer):
{"id":"96a73796-c129-46fc-9c01-40acd8ed7122","ip":"176.57.73.231","password":"null","username":"null"},
{"id":"96a73796-c129-46fc-9c01-40acd8ed7122","ip":"176.57.73.231","password":"null","username":"null"},
{"id":"96a73796-c129-46fc-9c01-40acd8ed7144","ip":"172.20.101.13","password":"admin","username":"root"}
{"id":"96a73796-c129-46fc-9c01-40acd8ed7144","ip":"172.20.101.13","password":"admin","username":"root"}
*/
bool Camera::stream() {
/* This function is meant to run on a separate thread and fill up the buffer independently of
the main stream thread */
//cv::setNumThreads(100);
/* Rules for these slightly changed! */
Mat pre; // Grab initial undoctored frame
//pre = Mat::zeros(size, CV_8UC1);
Mat frame; // Final modified frame
frame = Mat::zeros(size, CV_8UC1);
if (!pre.isContinuous()) pre = pre.clone();
ipCam.open(streamUrl, CAP_FFMPEG);
while (ipCam.isOpened() && capture) {
// While the camera is open we need to capture and process the frame
try {
auto start = std::chrono::system_clock::now();
ipCam >> pre;
if (pre.empty()) {
/* Check for blank frame, return error if there is a blank frame*/
cerr << id << ": ERROR! blank frame grabbed\n";
for (FrameListener* i : clients) {
i->onNotification(1); // Notify clients about the blank frame
}
break;
}
else {
// Only continue if frame not empty
if (pre.cols != size.width || pre.rows != size.height) { // || so a single mismatched dimension still triggers the resize
resize(pre, frame, size);
pre.release();
}
else {
frame = pre;
}
auto end = std::chrono::system_clock::now();
std::time_t ts = std::chrono::system_clock::to_time_t(end);
dPacket* pack = new dPacket{ id,&frame};
for (auto i : clients) {
i->onPNewFrame(pack);
}
frame.release();
delete pack;
}
}
catch (int e) {
cout << endl << "-----Exception during capture process! CODE " << e << endl;
}
// End camera manipulations
}
cout << "Camera timed out, or connection is closed..." << endl;
if (tryResetConnection) {
cout << "Reconnection flag is set, retrying after 3 seconds..." << endl;
for (FrameListener* i : clients) {
i->onNotification(-1); // Notify clients that the connection dropped
}
this_thread::sleep_for(chrono::milliseconds(3000));
stream();
}
return true;
}
void Camera::killStream(){
tryResetConnection = false;
capture = false;
ipCam.release();
}
void Camera::setReconnectable(bool reconFlag) {
tryResetConnection = reconFlag;
}
bool Camera::getReconnectable() {
return tryResetConnection;
}
vector<bool> Camera::getState() {
vector<bool> states;
states.push_back(isPlaying);
states.push_back(ipCam.isOpened());
return states;
}
GLWidget.h:
#ifndef GLWIDGET_H
#define GLWIDGET_H
#include <QOpenGLWidget>
#include <QMouseEvent>
#include "FrameListener.h"
#include "Camera.h"
#include "FrameListener.h"
#include <thread> // assumed include: std::thread is used below
#include "Camera.h"
#include "CamUtils.h"
#include <queue> // assumed include: std::queue is used below
#include "dPacket.h"
#include <chrono>
#include <ctime>
#include <string> // assumed include
#include "FullScreenVideo.h"
#include <QMovie>
#include "helper.h"
#include <iostream>
#include <QPainter>
#include <QTimer>
class Helper;
class GLWidget : public QOpenGLWidget, public FrameListener
{
Q_OBJECT
public:
GLWidget(std::string camId, CamUtils *cUtils, int width, int height, bool denyFullScreen_ = false, bool detectFlag_=false, QWidget* parent = nullptr);
void killStream();
~GLWidget();
public slots:
void animate();
void setBufferEnabled(bool setState);
void setCameraRetryConnection(bool setState);
void GLUpdate(); // Call to update the widget
void onRightClickMenu(const QPoint& point);
protected:
void paintEvent(QPaintEvent* event) override;
void onPNewFrame(dPacket* frame);
void onNotification(int alert_code);
private:
// Objects and resourses
Helper* helper;
Camera* cam;
CamUtils* camUtils;
QTimer* timer; // Keep track of update
QPixmap lastImage;
QMovie* connMov;
QMovie* test;
QPixmap logo;
// Control fields
int width;
int height;
int camUtilsAddr;
int elapsed;
std::thread* camThread;
std::string camId;
bool denyFullScreen = false;
bool playing = true;
bool streaming = true;
bool debug = false;
bool connecting = true;
int lastFlag = 0;
// Debug fields
std::chrono::high_resolution_clock::time_point lastFrameAt;
std::chrono::high_resolution_clock::time_point now;
std::chrono::duration<double> painTime; // time taken to draw the last frame
//Buffer stuff
std::queue<QPixmap> buffer;
bool bufferEnabled = false;
bool initialBuffer = false;
bool buffering = true;
bool frameProcessing = false;
//Functions
QImage toQImageFromPMat(cv::Mat* inFrame);
void mousePressEvent(QMouseEvent* event) override;
void drawImageGLLatest(QPainter* painter, QPaintEvent* event, int elapsed);
void drawOnPaused(QPainter* painter, QPaintEvent* event, int elapsed);
void drawOnStatus(int statusFlag, QPainter* painter, QPaintEvent* event, int elapsed);
};
#endif
GLWidget.cpp:
#include "glwidget.h"
#include <future>
FullScreenVideo* fullScreen;
GLWidget::GLWidget(std::string camId_, CamUtils* cUtils, int width_, int height_, bool denyFullScreen_, bool detectFlag_, QWidget* parent)
: QOpenGLWidget(parent) // helper(helper) dropped from the initializer list: it initialized the member with itself; helper is assigned below
{
cout << "Player for CAMERA " << camId_ << endl;
/* Underlying properties */
camUtils = cUtils;
cout << "GLWidget Incoming CamUtils addr " << camUtils << endl;
cout << "GLWidget Set CamUtils addr " << camUtils << endl;
camId = camId_;
elapsed = 0;
width = width_ + 5;
height = height_ + 5;
helper = new Helper();
setFixedSize(width, height);
denyFullScreen = denyFullScreen_;
/* Camera capture thread */
cam = new Camera(camUtils->getCameraStreamURL(camId), width_, height_, detectFlag_);
cam->addListener(this);
/* Sync states */
vector<bool> initState = cam->getState();
playing = initState[0];
streaming = initState[1];
cout << "Initial states: " << playing << " " << streaming << endl;
camThread = new std::thread(&Camera::stream, cam);
cout << "================================================" << endl;
// Right click set up
setContextMenuPolicy(Qt::CustomContextMenu);
/* Loading gif */
connMov = new QMovie("establishingConnection.gif");
connMov->start();
QString url = R"(RLC-logo.png)";
logo = QPixmap(url);
QTimer* timer = new QTimer(this);
connect(timer, SIGNAL(timeout()), this, SLOT(GLUpdate()));
timer->start(1000/30);
playing = true;
}
/* SYSTEM */
void GLWidget::animate()
{
elapsed = (elapsed + qobject_cast<QTimer*>(sender())->interval()) % 1000;
std::cout << elapsed << "\n";
}
void GLWidget::GLUpdate() {
/* Process descisions before update call */
if (bufferEnabled) {
/* Process buffer before update */
now = chrono::high_resolution_clock::now();
std::chrono::duration<double, std::milli> timeSinceLastUpdate = now - lastFrameAt;
if (timeSinceLastUpdate.count() > 25) {
if (buffer.size() > 1 && playing) {
lastImage.swap(buffer.front());
buffer.pop();
lastFrameAt = chrono::high_resolution_clock::now();
}
}
//update(); // Update
}
else {
/* No buffer */
}
repaint();
}
/* EVENTS */
void GLWidget::onRightClickMenu(const QPoint& point) {
cout << "Right click request got" << endl;
QPoint globPos = this->mapToGlobal(point);
QMenu myMenu;
if (!denyFullScreen) {
myMenu.addAction("Open Full Screen");
}
myMenu.addAction("Toggle Debug Info");
QAction* selected = myMenu.exec(globPos);
if (selected) {
string optiontxt = selected->text().toStdString();
if (optiontxt == "Open Full Screen") {
cout << "Chose to open full screen of " << camId << endl;
fullScreen = new FullScreenVideo(bufferEnabled, this);
fullScreen->setUpView(camUtils, camId);
fullScreen->show();
playing = false;
}
if (optiontxt == "Toggle Debug Info") {
cout << "Chose to toggle debug of " << camId << endl;
debug = !debug;
}
}
else {
cout << "Chose nothing!" << endl;
}
}
void GLWidget::onPNewFrame(dPacket* inPack) {
lastFlag = 0;
if (bufferEnabled) {
buffer.push(QPixmap::fromImage(toQImageFromPMat(inPack->frame)));
}
else {
if (playing) {
/* Only process if this widget is playing */
frameProcessing = true;
lastImage.convertFromImage(toQImageFromPMat(inPack->frame));
frameProcessing = false;
}
}
if (lastFlag != -1 && !lastImage.isNull()) {
connecting = false;
}
else {
connecting = true;
}
}
void GLWidget::onNotification(int alert) {
lastFlag = alert;
}
/* Paint events*/
void GLWidget::paintEvent(QPaintEvent* event)
{
QPainter painter(this);
if (lastFlag != 0 || connecting) {
drawOnStatus(lastFlag, &painter, event, elapsed);
}
else {
/* Actual frame drawing */
if (playing) {
if (!frameProcessing) {
drawImageGLLatest(&painter, event, elapsed);
}
}
else {
drawOnPaused(&painter, event, elapsed);
}
}
painter.end();
}
/* DRAWING STUFF */
void GLWidget::drawOnStatus(int statusFlag, QPainter* bgPaint, QPaintEvent* event, int elapsed) {
QString str;
QFont font("times", 15);
bgPaint->eraseRect(QRect(0, 0, width, height));
if (!lastImage.isNull()) {
bgPaint->drawPixmap(QRect(0, 0, width, height), lastImage);
}
/* Test background painting */
if (connecting) {
string k = "Connecting to " + camUtils->getIp(camId);
str.append(k.c_str());
}
else {
switch (statusFlag) {
case 1:
str = "Blank frame received...";
break;
case -1:
if (cam->getReconnectable()) {
str = "Connection lost, will try to reconnect.";
bgPaint->setOpacity(0.3);
}
else {
str = "Connection lost...";
bgPaint->setOpacity(0.3);
}
break;
}
}
bgPaint->drawPixmap(QRect(0, 0, width, height), QPixmap::fromImage(connMov->currentImage()));
bgPaint->setPen(Qt::red);
bgPaint->setFont(font);
QFontMetrics fm(font);
const QRect kek(0, 0, fm.width(str), fm.height());
QRect bound;
bgPaint->setOpacity(1);
bgPaint->drawText(bgPaint->viewport().width()/2 - kek.width()/2, bgPaint->viewport().height()/2 - kek.height(), str);
bgPaint->drawPixmap(bgPaint->viewport().width() / 2 - logo.width()/2, height - logo.width() - 15, logo);
}
void GLWidget::drawOnPaused(QPainter* painter, QPaintEvent* event, int elapsed) {
painter->eraseRect(0, 0, width, height);
QFont font = painter->font();
font.setPointSize(18);
painter->setPen(Qt::red);
QFontMetrics fm(font);
QString str("Paused");
painter->drawPixmap(QRect(0, 0, width, height),lastImage);
painter->drawText(QPoint(painter->viewport().width() - fm.width(str), 50), str);
if (debug) {
QFont font = painter->font();
font.setPointSize(25);
painter->setPen(Qt::red);
string camMess = "CAMID: " + camId;
QString mess(camMess.c_str());
string camIp = "IP: " + camUtils->getIp(camId);
QString ipMess(camIp.c_str());
QString bufferSize("Buffer size: " + QString::number(buffer.size()));
QString lastFrameText("Last frame draw time: " + QString::number(painTime.count()) + "s");
painter->drawText(QPoint(10, 50), mess);
painter->drawText(QPoint(10, 60), ipMess);
QString bufferState;
if (bufferEnabled) {
bufferState = QString("Experimental BUFFER is enabled!");
QString currentBufferSize("Current buffer load: " + QString::number(buffer.size()));
painter->drawText(QPoint(10, 80), currentBufferSize);
}
else {
bufferState = QString("Experimental BUFFER is disabled!");
}
painter->drawText(QPoint(10, 70), bufferState);
painter->drawText(QPoint(10, height - 25), lastFrameText);
}
}
void GLWidget::drawImageGLLatest(QPainter* painter, QPaintEvent* event, int elapsed) {
auto start = chrono::high_resolution_clock::now();
painter->drawPixmap(QRect(0, 0, width, height), lastImage);
if (debug) {
QFont font = painter->font();
font.setPointSize(25);
painter->setPen(Qt::red);
string camMess = "CAMID: " + camId;
QString mess(camMess.c_str());
string camIp = "IP: " + camUtils->getIp(camId);
QString ipMess(camIp.c_str());
QString bufferSize("Buffer size: " + QString::number(buffer.size()));
QString lastFrameText("Last frame draw time: " + QString::number(painTime.count()) + "s");
painter->drawText(QPoint(10, 50), mess);
painter->drawText(QPoint(10, 60), ipMess);
QString bufferState;
if(bufferEnabled){
bufferState = QString("Experimental BUFFER is enabled!");
QString currentBufferSize("Current buffer load: " + QString::number(buffer.size()));
painter->drawText(QPoint(10,80), currentBufferSize);
}
else {
bufferState = QString("Experimental BUFFER is disabled!");
QString currentBufferSize("Current buffer load: " + QString::number(buffer.size()));
painter->drawText(QPoint(10, 80), currentBufferSize);
}
painter->drawText(QPoint(10, 70), bufferState);
painter->drawText(QPoint(10, height - 25), lastFrameText);
}
auto end = chrono::high_resolution_clock::now();
painTime = end - start;
}
/* END DRAWING STUFF */
/* UI EVENTS */
void GLWidget::mousePressEvent(QMouseEvent* e) {
if (e->button() == Qt::LeftButton) {
if (fullScreen == nullptr || !fullScreen->isVisible()) { // Do not unpause if window is opened
playing = !playing;
}
}
if (e->button() == Qt::RightButton) {
onRightClickMenu(e->pos());
}
}
/* Utilities */
QImage GLWidget::toQImageFromPMat(cv::Mat* mat) {
return QImage(mat->data, mat->cols, mat->rows, QImage::Format_RGB888).rgbSwapped();
}
/* State control */
void GLWidget::killStream() {
cam->killStream();
camThread->join();
}
void GLWidget::setBufferEnabled(bool newBufferState) {
cout << "Player: " << camId << ", buffer state updated: " << newBufferState << endl;
bufferEnabled = newBufferState;
buffer = {}; // empty() only tests for emptiness; assigning a fresh queue actually clears it
}
void GLWidget::setCameraRetryConnection(bool newState) {
cam->setReconnectable(newState);
}
/* Destruction */
GLWidget::~GLWidget() {
cam->killStream();
camThread->join();
}
CamUtils.h:
#pragma once
#include <iostream>
#include <vector>
#include <fstream>
#include <map>
#include <string>
#include <sstream>
#include <algorithm>
#include <nlohmann/json.hpp>
using namespace std;
using json = nlohmann::json;
class CamUtils
{
private:
string camDb = "cameras.dViewer";
map<string, vector<string>> cameraList; // Legacy
json cameras;
ofstream dbFile;
bool dbExists(); // Always hard coded
/* Old IMPLEMENTATION */
void writeLineToDb_(const string& content, bool append = false);
void loadCameras_();
/* JSON based */
void loadCameras();
public:
CamUtils();
string generateRandomString(size_t length);
string getCameraStreamURL(string cameraId) const;
string saveCamera(string ip, string username, string pass); // Return generated id
vector<string> listAllCameraIds();
string getIp(string cameraId);
};
CamUtils.cpp:
#include "CamUtils.h"
#pragma comment(lib, "rpcrt4.lib") // UuidCreate - Minimum supported OS Win 2000
#include <rpc.h> // assumed include: provides UuidCreate / UuidToStringA
#include <iostream>
CamUtils::CamUtils()
{
if (!dbExists()) {
ofstream dbFile;
dbFile.open(camDb);
cameras["cameras"] = json::array();
dbFile << cameras << std::endl;
dbFile.close();
}
else {
loadCameras();
}
}
vector<string> CamUtils::listAllCameraIds() {
vector<string> ids;
cout << "IN LIST " << endl;
for (auto& cam : cameras["cameras"]) {
ids.push_back(cam["id"].get<string>());
//cout << cam["id"].get<string>() << std::endl;
}
return ids;
}
string CamUtils::getIp(string id) {
vector<string> camDetails = cameraList[id];
string ip = "NO IP WILL DISPLAYED UNTIL I FIGURE OUT A BUG";
for (auto& cam : cameras["cameras"]) {
if (id == cam["id"]) {
ip = cam["ip"].get<string>();
}
}
return ip;
}
string CamUtils::getCameraStreamURL(string id) const {
string url = "err"; // err is the default, it will be overwritten in case id is found, dont forget to check for it
for (auto& cam : cameras["cameras"]) {
if (id == cam["id"]) {
if (cam["username"].get<string>() == "null") {
url = "rtsp://" + cam["ip"].get<string>() + ":554/axis-media/media.amp?tcp";
}
else {
url = "rtsp://" + cam["username"].get<string>() + ":" + cam["password"].get<string>() + "@" + cam["ip"].get<string>() + ":554/axis-media/media.amp?streamprofile=720_30";
}
}
}
return url; // Don't forget to check for err when using this
}
string CamUtils::saveCamera(string ip, string username, string password) {
UUID uid;
UuidCreate(&uid);
char* str;
UuidToStringA(&uid, (RPC_CSTR*)&str);
string id = str;
cout << "GEN: " << id << endl;
json cam = json({}); // Create empty object
cam["id"] = id;
cam["ip"] = ip;
cam["username"] = username;
cam["password"] = password;
cameras["cameras"].push_back(cam);
std::ofstream out(camDb);
out << cameras << std::endl;
cout << cameras["cameras"] << endl;
cout << "Saved camera as " << id << endl;
return id;
}
bool CamUtils::dbExists() {
ifstream dbFile(camDb);
return (bool)dbFile;
}
void CamUtils::loadCameras() {
cout << "Load call" << endl;
ifstream dbFile(camDb);
string line;
string wholeFile;
while (std::getline(dbFile, line)) {
cout << line << endl;
wholeFile += line;
}
try {
cameras = json::parse(wholeFile);
//cout << cameras["cameras"] << endl;
}
catch (const exception& e) {
cout << e.what() << endl;
}
dbFile.close();
}
/*
LEGACY CODE, TO BE REMOVED!
*/
void CamUtils::loadCameras_() {
/*
LEGACY CODE:
This used to be the way to load cameras, but I moved on to JSON based configuration so this is no longer needed and will be removed soon
*/
ifstream dbFile(camDb);
string line;
while (std::getline(dbFile, line)) {
/*
This function load camera data to the map:
The order MUST be the following: 0:ID, 1:IP, 2:USERNAME, 3:PASSWORD.
Always delimited with | no spaces between!
*/
if (!line.empty()) {
stringstream ss(line);
string item;
vector<string> splitString;
while (std::getline(ss, item, '|')) {
splitString.push_back(item);
}
if (splitString.size() > 0) {
/* Dont even parse if the program didnt split right*/
//cout << "Split string: " << splitString.size() << "\n";
for (int i = 0; i < (splitString.size()); i++) cameraList[splitString[0]].push_back(splitString[i]);
}
}
}
}
void CamUtils::writeLineToDb_(const string & content, bool append) {
ofstream dbFile;
cout << "Creating?";
if (append) {
dbFile.open(camDb, ios_base::app);
}
else {
dbFile.open(camDb);
}
dbFile << content.c_str() << "\r\n";
dbFile.flush();
}
/* JSON Reworx */
string CamUtils::generateRandomString(size_t length)
{
const char* charmap = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
const size_t charmapLength = strlen(charmap);
auto generator = [&]() { return charmap[rand() % charmapLength]; };
string result;
result.reserve(length);
generate_n(back_inserter(result), length, generator);
return result;
}
End of example
How would I go about decreasing CPU usage when dealing with a large number of streams?
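One direction that seems worth trying, sketched under assumptions rather than verified on this machine: since the build already has GSTREAMER enabled, move decoding and downscaling into a GStreamer pipeline so OpenCV only ever receives widget-sized BGR frames, and, if the cameras expose a lower-resolution substream, open that instead of the 2304x1296 main stream. The URL layout and sizes below are placeholders:
#include <opencv2/opencv.hpp>
#include <string>
// Sketch: decode and scale inside GStreamer, deliver small BGR frames to
// OpenCV through appsink; all element names are standard GStreamer plugins.
cv::VideoCapture openScaledStream(const std::string& rtspUrl, int w, int h) {
std::string pipe =
"rtspsrc location=" + rtspUrl + " latency=100 protocols=tcp ! "
"rtph264depay ! h264parse ! avdec_h264 ! "
"videoscale ! video/x-raw,width=" + std::to_string(w) +
",height=" + std::to_string(h) + " ! "
"videoconvert ! video/x-raw,format=BGR ! "
"appsink drop=true max-buffers=2";
return cv::VideoCapture(pipe, cv::CAP_GSTREAMER);
}
On hardware with a working VA-API or OMX decoder, swapping avdec_h264 for vaapih264dec (or omxh264dec on a Pi) would move the H.264 decode off the CPU entirely, where those plugins are available.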