Advanced search

Media (0)

Word: - Tags -/presse-papier

No media matching your criteria is available on the site.

Other articles (95)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
    After it is activated, a preconfiguration is automatically put in place by MediaSPIP init so that the new feature is immediately operational. It is therefore not necessary to go through a configuration step for this.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (8553)

  • ffmpeg produces duplicate pts with "wallclock_as_timestamps 1" option on MKV

    15 April 2024, by Jax2171

    I need a real-time reference for every keyframe captured by an IP camera. The -use_wallclock_as_timestamps 1 option seems to do the trick for us; however, we are forced to replace the TS output container with MKV to get a correct epoch PTS value such as 1712996356.833000.

    Here is the ffmpeg command used:

    ffmpeg -report -use_wallclock_as_timestamps 1 -rtsp_transport tcp -i rtsp://user:password1@192.168.5.21/cam/realmonitor?channel=1channel1[1]=1subtype=0 -c:v copy -c:a aac -copyts -f matroska -y rec.mkv

    The capture process runs without any relevant warning or error messages.

    However, playing the captured video with any player shows very short but quite noticeable and annoying lags. Upon investigation I discovered that many frames' PTS values are identical. The command I used to list the duplicate PTS values is as follows:

    ffprobe -v error -show_entries frame=pkt_pts_time -select_streams v -of csv=p=0 rec.mkv | sort | uniq -d
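
    Note: if pkt_pts_time is not recognised on newer FFmpeg builds, pts_time is the equivalent frame field there, so the same check becomes:

    ffprobe -v error -show_entries frame=pts_time -select_streams v -of csv=p=0 rec.mkv | sort | uniq -d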

    On a recording of about 10 minutes, the duplicate PTS values found were the following:

    1713086493.367000
1713086493.368000
1713086493.370000
1713086493.372000
1713086543.714000
1713086558.793000
1713086558.817000
1713086558.872000
1713086561.780000
1713086564.642000
1713086564.657000
1713086564.778000
1713086565.794000
...

    I'm not sure whether the lag problem is caused by this; however, the problem does not occur with the TS container, which I nevertheless cannot use because MPEG-TS limits PTS values to 33 bits.
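
    As a rough sanity check on that limit (assuming the standard 90 kHz MPEG-TS clock): an epoch timestamp such as 1712996356.833 s corresponds to about 1.54 × 10^14 ticks, while a 33-bit PTS holds at most 2^33 ≈ 8.6 × 10^9 ticks and wraps roughly every 26.5 hours, so wallclock epoch values cannot be represented in TS.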

    The -vsync 0 or -vsync 2 options on input or output didn't help.
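
    For reference, a representative variant I tried looked like this (my reconstruction, with -vsync as an output option):

    ffmpeg -use_wallclock_as_timestamps 1 -rtsp_transport tcp -i rtsp://... -c:v copy -c:a aac -copyts -vsync 0 -f matroska -y rec.mkv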

    This is the log using the -report option:

        ffmpeg started on 2024-04-15 at 09:04:38
Report written to "ffmpeg-20240415-090438.log"
Log level: 48
Command line:
ffmpeg -report -stats -hide_banner -use_wallclock_as_timestamps 1 -rtsp_transport tcp -i "rtsp://user:password1@192.168.5.21/cam/realmonitor?channel=1channel1[1]=1subtype=0" -c:v copy -c:a aac -copyts -f matroska -y rec.mkv
Splitting the commandline.
Reading option '-report' ... matched as option 'report' (generate a report) with argument '1'.
Reading option '-stats' ... matched as option 'stats' (print progress report during encoding) with argument '1'.
Reading option '-hide_banner' ... matched as option 'hide_banner' (do not show program banner) with argument '1'.
Reading option '-use_wallclock_as_timestamps' ... matched as AVOption 'use_wallclock_as_timestamps' with argument '1'.
Reading option '-rtsp_transport' ... matched as AVOption 'rtsp_transport' with argument 'tcp'.
Reading option '-i' ... matched as input url with argument 'rtsp://user:password1@192.168.5.21/cam/realmonitor?channel=1channel1[1]=1subtype=0'.
Reading option '-c:v' ... matched as option 'c' (codec name) with argument 'copy'.
Reading option '-c:a' ... matched as option 'c' (codec name) with argument 'aac'.
Reading option '-copyts' ... matched as option 'copyts' (copy timestamps) with argument '1'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'matroska'.
Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
Reading option 'rec.mkv' ... matched as output url.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option report (generate a report) with argument 1.
Applying option stats (print progress report during encoding) with argument 1.
Applying option hide_banner (do not show program banner) with argument 1.
Applying option copyts (copy timestamps) with argument 1.
Applying option y (overwrite output files) with argument 1.
Successfully parsed a group of options.
Parsing a group of options: input url rtsp://user:password1@192.168.5.21/cam/realmonitor?channel=1channel1[1]=1subtype=0.
Successfully parsed a group of options.
Opening an input file: rtsp://user:password1@192.168.5.21/cam/realmonitor?channel=1channel1[1]=1subtype=0.
[tcp @ 0x1646660] No default whitelist set
[tcp @ 0x1646660] Original list of addresses:
[tcp @ 0x1646660] Address 192.168.5.21 port 554
[tcp @ 0x1646660] Interleaved list of addresses:
[tcp @ 0x1646660] Address 192.168.5.21 port 554
[tcp @ 0x1646660] Starting connection attempt to 192.168.5.21 port 554
[tcp @ 0x1646660] Successfully connected to 192.168.5.21 port 554
[rtsp @ 0x1645e70] SDP:
v=0
o=- 2251950012 2251950012 IN IP4 0.0.0.0
s=Media Server
c=IN IP4 0.0.0.0
t=0 0
a=control:*
a=packetization-supported:DH
a=rtppayload-supported:DH
a=range:npt=now-
a=x-packetization-supported:IV
a=x-rtppayload-supported:IV
m=video 0 RTP/AVP 96
a=control:trackID=0
a=framerate:25.000000
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1;profile-level-id=4D4028;sprop-parameter-sets=Z01AKKaAeAIn5ZuAgICgAAADACAAAAZQgAA=,aO48gAA=
a=recvonly
m=audio 0 RTP/AVP 97
a=control:trackID=1
a=rtpmap:97 MPEG4-GENERIC/16000
a=fmtp:97 streamtype=5;profile-level-id=1;mode=AAC-hbr;sizelength=13;indexlength=3;indexdeltalength=3;config=1408
a=recvonly

[rtsp @ 0x1645e70] video codec set to: h264
[rtsp @ 0x1645e70] RTP Packetization Mode: 1
[rtsp @ 0x1645e70] RTP Profile IDC: 4d Profile IOP: 40 Level: 28
[rtsp @ 0x1645e70] Extradata set to 0x164af98 (size: 39)
[rtsp @ 0x1645e70] audio codec set to: aac
[rtsp @ 0x1645e70] audio samplerate set to: 16000
[rtsp @ 0x1645e70] audio channels set to: 1
[rtsp @ 0x1645e70] setting jitter buffer size to 0
[rtsp @ 0x1645e70] setting jitter buffer size to 0
[rtsp @ 0x1645e70] hello state=0
Failed to parse interval end specification ''
[h264 @ 0x164ab30] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x164ab30] nal_unit_type: 8(PPS), nal_ref_idc: 3
[h264 @ 0x164ab30] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x164ab30] nal_unit_type: 8(PPS), nal_ref_idc: 3
[h264 @ 0x164ab30] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x164ab30] nal_unit_type: 8(PPS), nal_ref_idc: 3
[h264 @ 0x164ab30] nal_unit_type: 5(IDR), nal_ref_idc: 3
[h264 @ 0x164ab30] Format yuvj420p chosen by get_format().
[h264 @ 0x164ab30] Reinit context to 1920x1088, pix_fmt: yuvj420p
[h264 @ 0x164ab30] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 3
[h264 @ 0x164ab30] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 3
[h264 @ 0x164ab30] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 3
[h264 @ 0x164ab30] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 3
[h264 @ 0x164ab30] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 3
[h264 @ 0x164ab30] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 3
[rtsp @ 0x1645e70] All info found
Input #0, rtsp, from 'rtsp://user:password1@192.168.5.21/cam/realmonitor?channel=1channel1[1]=1subtype=0':
  Metadata:
    title           : Media Server
  Duration: N/A, start: 1713164678.794625, bitrate: N/A
    Stream #0:0, 22, 1/90000: Video: h264 (Main), yuvj420p(pc, bt709, progressive), 1920x1080, 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:1, 15, 1/16000: Audio: aac (LC), 16000 Hz, mono, fltp
Successfully opened the file.
Parsing a group of options: output url rec.mkv.
Applying option c:v (codec name) with argument copy.
Applying option c:a (codec name) with argument aac.
Applying option f (force format) with argument matroska.
Successfully parsed a group of options.
Opening an output file: rec.mkv.
[file @ 0x1699f30] Setting default whitelist 'file,crypto,data'
Successfully opened the file.
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:1 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
detected 4 logical cores
[graph_0_in_0_1 @ 0x1682bb0] Setting 'time_base' to value '1/16000'
[graph_0_in_0_1 @ 0x1682bb0] Setting 'sample_rate' to value '16000'
[graph_0_in_0_1 @ 0x1682bb0] Setting 'sample_fmt' to value 'fltp'
[graph_0_in_0_1 @ 0x1682bb0] Setting 'channel_layout' to value '0x4'
[graph_0_in_0_1 @ 0x1682bb0] tb:1/16000 samplefmt:fltp samplerate:16000 chlayout:0x4
[format_out_0_1 @ 0x187f2e0] Setting 'sample_fmts' to value 'fltp'
[format_out_0_1 @ 0x187f2e0] Setting 'sample_rates' to value '96000|88200|64000|48000|44100|32000|24000|22050|16000|12000|11025|8000|7350'
[AVFilterGraph @ 0x164fd70] query_formats: 4 queried, 9 merged, 0 already done, 0 delayed
[matroska @ 0x169c330] get_metadata_duration returned: 0
Output #0, matroska, to 'rec.mkv':
  Metadata:
    title           : Media Server
    encoder         : Lavf58.45.100
    Stream #0:0, 0, 1/1000: Video: h264 (Main) (H264 / 0x34363248), yuvj420p(pc, bt709, progressive), 1920x1080, q=2-31, 25 fps, 25 tbr, 1k tbn, 90k tbc
    Stream #0:1, 0, 1/1000: Audio: aac (LC) ([255][0][0][0] / 0x00FF), 16000 Hz, mono, fltp, 69 kb/s
    Metadata:
      encoder         : Lavc58.91.100 aac
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:1 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[matroska @ 0x169c330] Starting new cluster with timestamp 1713164678731 at offset 770 bytes
[matroska @ 0x169c330] Writing block of size 581 with pts 1713164678731, dts 1713164678731, duration 64 at relative offset 14 in cluster at offset 770. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 517 with pts 1713164678795, dts 1713164678795, duration 64 at relative offset 602 in cluster at offset 770. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 376900 with pts 1713164678872, dts 1713164678872, duration 40 at relative offset 1126 in cluster at offset 770. TrackNumber 1, keyframe 1
[matroska @ 0x169c330] Writing block of size 8172 with pts 1713164678912, dts 1713164678912, duration 40 at relative offset 378034 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 672 with pts 1713164678912, dts 1713164678912, duration 64 at relative offset 386213 in cluster at offset 770. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 550 with pts 1713164679177, dts 1713164679177, duration 64 at relative offset 386892 in cluster at offset 770. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 7654 with pts 1713164679178, dts 1713164679178, duration 40 at relative offset 387449 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 7483 with pts 1713164679213, dts 1713164679213, duration 40 at relative offset 395110 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 7703 with pts 1713164679242, dts 1713164679242, duration 40 at relative offset 402600 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 565 with pts 1713164679242, dts 1713164679242, duration 64 at relative offset 410310 in cluster at offset 770. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 7650 with pts 1713164679271, dts 1713164679271, duration 40 at relative offset 410882 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 585 with pts 1713164679271, dts 1713164679271, duration 64 at relative offset 418539 in cluster at offset 770. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 8682 with pts 1713164679301, dts 1713164679301, duration 40 at relative offset 419131 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 8888 with pts 1713164679330, dts 1713164679330, duration 40 at relative offset 427820 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 506 with pts 1713164679330, dts 1713164679330, duration 64 at relative offset 436715 in cluster at offset 770. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 8019 with pts 1713164679360, dts 1713164679360, duration 40 at relative offset 437228 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 7919 with pts 1713164679361, dts 1713164679361, duration 40 at relative offset 445254 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 7822 with pts 1713164679361, dts 1713164679361, duration 40 at relative offset 453180 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 699 with pts 1713164679361, dts 1713164679361, duration 64 at relative offset 461009 in cluster at offset 770. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 619 with pts 1713164679361, dts 1713164679361, duration 64 at relative offset 461715 in cluster at offset 770. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 7768 with pts 1713164679362, dts 1713164679362, duration 40 at relative offset 462341 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 8469 with pts 1713164679362, dts 1713164679362, duration 40 at relative offset 470116 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 601 with pts 1713164679362, dts 1713164679362, duration 64 at relative offset 478592 in cluster at offset 770. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 559 with pts 1713164679363, dts 1713164679363, duration 64 at relative offset 479200 in cluster at offset 770. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 8265 with pts 1713164679366, dts 1713164679366, duration 40 at relative offset 479766 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 7766 with pts 1713164679406, dts 1713164679406, duration 40 at relative offset 488038 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 531 with pts 1713164679415, dts 1713164679415, duration 64 at relative offset 495811 in cluster at offset 770. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 7753 with pts 1713164679446, dts 1713164679446, duration 40 at relative offset 496349 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 8274 with pts 1713164679486, dts 1713164679486, duration 40 at relative offset 504109 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 569 with pts 1713164679496, dts 1713164679496, duration 64 at relative offset 512390 in cluster at offset 770. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 8445 with pts 1713164679526, dts 1713164679526, duration 40 at relative offset 512966 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 522 with pts 1713164679535, dts 1713164679535, duration 64 at relative offset 521418 in cluster at offset 770. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 7922 with pts 1713164679566, dts 1713164679566, duration 40 at relative offset 521947 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 7954 with pts 1713164679606, dts 1713164679606, duration 40 at relative offset 529876 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 503 with pts 1713164679615, dts 1713164679615, duration 64 at relative offset 537837 in cluster at offset 770. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 11167 with pts 1713164679646, dts 1713164679646, duration 40 at relative offset 538347 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 503 with pts 1713164679655, dts 1713164679655, duration 64 at relative offset 549521 in cluster at offset 770. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 10534 with pts 1713164679686, dts 1713164679686, duration 40 at relative offset 550031 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 7607 with pts 1713164679726, dts 1713164679726, duration 40 at relative offset 560572 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 478 with pts 1713164679772, dts 1713164679772, duration 64 at relative offset 568186 in cluster at offset 770. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 7842 with pts 1713164679774, dts 1713164679774, duration 40 at relative offset 568671 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 9862 with pts 1713164679806, dts 1713164679806, duration 40 at relative offset 576520 in cluster at offset 770. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Starting new cluster with timestamp 1713164679815 at offset 587166 bytes
[matroska @ 0x169c330] Writing block of size 449 with pts 1713164679815, dts 1713164679815, duration 64 at relative offset 14 in cluster at offset 587166. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 379456 with pts 1713164679870, dts 1713164679870, duration 40 at relative offset 470 in cluster at offset 587166. TrackNumber 1, keyframe 1
[matroska @ 0x169c330] Writing block of size 415 with pts 1713164679903, dts 1713164679903, duration 64 at relative offset 379934 in cluster at offset 587166. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 7008 with pts 1713164679905, dts 1713164679905, duration 40 at relative offset 380356 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 6917 with pts 1713164679925, dts 1713164679925, duration 40 at relative offset 387371 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 513 with pts 1713164679935, dts 1713164679935, duration 64 at relative offset 394295 in cluster at offset 587166. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 7111 with pts 1713164679966, dts 1713164679966, duration 40 at relative offset 394815 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 753 with pts 1713164679975, dts 1713164679975, duration 64 at relative offset 401933 in cluster at offset 587166. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 7091 with pts 1713164680006, dts 1713164680006, duration 40 at relative offset 402693 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 7045 with pts 1713164680045, dts 1713164680045, duration 40 at relative offset 409791 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 659 with pts 1713164680055, dts 1713164680055, duration 64 at relative offset 416843 in cluster at offset 587166. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 6983 with pts 1713164680086, dts 1713164680086, duration 40 at relative offset 417509 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 6932 with pts 1713164680127, dts 1713164680127, duration 40 at relative offset 424499 in cluster at offset 587166. TrackNumber 1, keyframe 0
frame=   35 fps=0.0 q=-1.0 size=     512kB time=475879:04:40.20 bitrate=   0.0kbits/s speed=3.35e+09x    
[matroska @ 0x169c330] Writing block of size 691 with pts 1713164680135, dts 1713164680135, duration 64 at relative offset 431438 in cluster at offset 587166. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 6990 with pts 1713164680166, dts 1713164680166, duration 40 at relative offset 432136 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 651 with pts 1713164680176, dts 1713164680176, duration 64 at relative offset 439133 in cluster at offset 587166. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 7046 with pts 1713164680206, dts 1713164680206, duration 40 at relative offset 439791 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 7130 with pts 1713164680246, dts 1713164680246, duration 40 at relative offset 446844 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 601 with pts 1713164680255, dts 1713164680255, duration 64 at relative offset 453981 in cluster at offset 587166. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 7205 with pts 1713164680286, dts 1713164680286, duration 40 at relative offset 454589 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 561 with pts 1713164680295, dts 1713164680295, duration 64 at relative offset 461801 in cluster at offset 587166. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 6936 with pts 1713164680326, dts 1713164680326, duration 40 at relative offset 462369 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 6822 with pts 1713164680366, dts 1713164680366, duration 40 at relative offset 469312 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 621 with pts 1713164680375, dts 1713164680375, duration 64 at relative offset 476141 in cluster at offset 587166. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 6845 with pts 1713164680405, dts 1713164680405, duration 40 at relative offset 476769 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 6848 with pts 1713164680445, dts 1713164680445, duration 40 at relative offset 483621 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 588 with pts 1713164680455, dts 1713164680455, duration 64 at relative offset 490476 in cluster at offset 587166. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 6828 with pts 1713164680486, dts 1713164680486, duration 40 at relative offset 491071 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 546 with pts 1713164680495, dts 1713164680495, duration 64 at relative offset 497906 in cluster at offset 587166. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 6845 with pts 1713164680526, dts 1713164680526, duration 40 at relative offset 498459 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 6924 with pts 1713164680566, dts 1713164680566, duration 40 at relative offset 505311 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 508 with pts 1713164680576, dts 1713164680576, duration 64 at relative offset 512242 in cluster at offset 587166. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 6844 with pts 1713164680606, dts 1713164680606, duration 40 at relative offset 512757 in cluster at offset 587166. TrackNumber 1, keyframe 0
frame=   48 fps= 47 q=-1.0 size=     512kB time=475879:04:40.72 bitrate=   0.0kbits/s speed=1.66e+09x    
[matroska @ 0x169c330] Writing block of size 587 with pts 1713164680615, dts 1713164680615, duration 64 at relative offset 519608 in cluster at offset 587166. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 6859 with pts 1713164680645, dts 1713164680645, duration 40 at relative offset 520202 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 6855 with pts 1713164680686, dts 1713164680686, duration 40 at relative offset 527068 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 573 with pts 1713164680695, dts 1713164680695, duration 64 at relative offset 533930 in cluster at offset 587166. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 6881 with pts 1713164680726, dts 1713164680726, duration 40 at relative offset 534510 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 10773 with pts 1713164680766, dts 1713164680766, duration 40 at relative offset 541398 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 520 with pts 1713164680775, dts 1713164680775, duration 64 at relative offset 552178 in cluster at offset 587166. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 6923 with pts 1713164680805, dts 1713164680805, duration 40 at relative offset 552705 in cluster at offset 587166. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Starting new cluster with timestamp 1713164680815 at offset 1146808 bytes
[matroska @ 0x169c330] Writing block of size 580 with pts 1713164680815, dts 1713164680815, duration 64 at relative offset 14 in cluster at offset 1146808. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 380085 with pts 1713164680864, dts 1713164680864, duration 40 at relative offset 601 in cluster at offset 1146808. TrackNumber 1, keyframe 1
[matroska @ 0x169c330] Writing block of size 9916 with pts 1713164680896, dts 1713164680896, duration 40 at relative offset 380694 in cluster at offset 1146808. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 541 with pts 1713164680901, dts 1713164680901, duration 64 at relative offset 390617 in cluster at offset 1146808. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 5877 with pts 1713164680925, dts 1713164680925, duration 40 at relative offset 391165 in cluster at offset 1146808. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] Writing block of size 529 with pts 1713164680935, dts 1713164680935, duration 64 at relative offset 397049 in cluster at offset 1146808. TrackNumber 2, keyframe 1
[matroska @ 0x169c330] Writing block of size 6661 with pts 1713164680966, dts 1713164680966, duration 40 at relative offset 397585 in cluster at offset 1146808. TrackNumber 1, keyframe 0
[matroska @ 0x169c330] end duration = 1713164681006
[matroska @ 0x169c330] stream 0 end duration = 1713164681006
[matroska @ 0x169c330] stream 1 end duration = 1713164680999
frame=   54 fps= 42 q=-1.0 Lsize=    1515kB time=475879:04:40.99 bitrate=   0.0kbits/s speed=1.33e+09x    
video:1493kB audio:20kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.099897%
Input file #0 (rtsp://user:password1@192.168.5.21/cam/realmonitor?channel=1channel1[1]=1subtype=0):
  Input stream #0:0 (video): 54 packets read (1529156 bytes); 
  Input stream #0:1 (audio): 35 packets read (9268 bytes); 35 frames decoded (35840 samples); 
  Total: 89 packets (1538424 bytes) demuxed
Output file #0 (rec.mkv):
  Output stream #0:0 (video): 54 packets muxed (1529156 bytes); 
  Output stream #0:1 (audio): 35 frames encoded (35840 samples); 36 packets muxed (20446 bytes); 
  Total: 90 packets (1549602 bytes) muxed
35 frames successfully decoded, 0 decoding errors
[AVIOContext @ 0x1667620] Statistics: 2 seeks, 7 writeouts
[aac @ 0x1673880] Qavg: 142.738
Exiting normally, received signal 15.

    In this short 3-second capture, the duplicate timestamps are 1713164679.361000 and 1713164679.362000.

    How can I solve this problem? What different approach could I use to achieve this goal?

    Thanks in advance.

  • How to Choose the Optimal Multi-Touch Attribution Model for Your Organisation

    13 March 2023, by Erin — Analytics Tips

    If you struggle to connect the dots across your customer journeys, you are researching the right solution. 

    Multi-channel attribution models allow you to better understand the users’ paths to conversion and identify key channels and marketing assets that assist them.

    That said, each attribution model has inherent limitations, which make the selection process even harder.

    This guide explains how to choose the optimal multi-touch attribution model. We cover the pros and cons of popular attribution models, main evaluation criteria and how-to instructions for model implementation. 

    Pros and Cons of Different Attribution Models 

    Types of Attribution Models

    First Interaction 

    The First Interaction attribution model (also known as first touch) assigns full credit for the conversion to the first channel that brought in the lead. However, it doesn’t report the other interactions the visitor had before converting.

    Marketers who are primarily focused on demand generation and user acquisition find the first-touch attribution model useful for evaluating and optimising the top of the funnel (ToFU). 

    Pros 

    • Reflects the start of the customer journey
    • Shows channels that bring in the best-qualified leads 
    • Helps track brand awareness campaigns

    Cons 

    • Ignores the impact of later interactions at the middle and bottom of the funnel 
    • Doesn’t provide a full picture of users’ decision-making process 

    Last Interaction 

    The Last Interaction attribution model (also known as last touch) shifts the entire credit allocation to the last channel before conversion, but it doesn’t account for the contribution of any other channel. 

    If your focus is conversion optimization, the last-touch model helps you determine which channels, assets or campaigns seal the deal for the prospect. 

    Pros 

    • Reports bottom-of-the-funnel events
    • Requires minimal data and configurations 
    • Helps estimate cost-per-lead or cost-per-acquisition

    Cons 

    • No visibility into assisted conversions and prior visitor interactions 
    • Overemphasises the importance of the last channel (which can often be direct traffic) 

    Last Non-Direct Interaction 

    Last Non-Direct attribution excludes direct traffic from the calculation and assigns full conversion credit to the preceding channel. For example, a paid ad will receive 100% of the credit for a conversion if the visitor then goes directly to your website to buy the product. 

    Last Non-Direct attribution provides greater clarity into bottom-of-the-funnel (BoFU) events. Yet it still under-reports the role other channels played in the conversion. 

    Pros 

    • Improved channel visibility, compared to Last-Touch 
    • Avoids over-valuing direct visits
    • Reports on lead-generation efforts

    Cons 

    • Doesn’t work for account-based marketing (ABM) 
    • Favours lead quantity over lead quality 

    Linear Model

    The Linear attribution model assigns equal credit for a conversion to all tracked touchpoints, regardless of their impact on the visitor’s decision to convert.

    It helps you understand the full conversion path. But this model doesn’t distinguish between the importance of lead generation activities versus nurturing touches.

    Pros 

    • Focuses on all touch points associated with a conversion 
    • Reflects more steps in the customer journey 
    • Helps analyse longer sales cycles

    Cons 

    • Doesn’t accurately reflect the varying roles of each touchpoint 
    • Can dilute the credit if too many touchpoints are involved 

    Time Decay Model 

    The Time Decay model assumes that the closer a touchpoint is to the conversion, the greater its influence. Touchpoints right before the conversion get the highest credit, while the earliest ones are weighted lower (e.g., 5%-5%-10%-15%-25%-30%).

    This model better reflects real-life customer journeys. However, it devalues the impact of brand awareness and demand-generation campaigns. 

    Pros 

    • Helps track longer sales cycles and reports on each touchpoint involved 
    • Allows customising the half-life of decay to improve reporting 
    • Promotes conversion optimization at BoFU stages

    Cons 

    • Can prompt marketers to curtail ToFU spending, which would translate to fewer qualified leads at lower stages
    • Doesn’t reflect highly-influential events at earlier stages (e.g., a product demo request or free account registration, which didn’t immediately lead to conversion)

    Position-Based Model 

    The Position-Based attribution model (also known as the U-shaped model) allocates the biggest credit to the first and the last interaction (40% each), then distributes the remaining 20% across the other touches. 

    For many marketers, that’s the preferred multi-touch attribution model as it allows optimising both ToFU and BoFU channels. 

    Pros 

    • Helps establish the main channels for lead generation and conversion
    • Adds extra layers of visibility, compared to first- and last-touch attribution models 
    • Promotes budget allocation toward the most strategic touchpoints

    Cons 

    • Diminishes the importance of lead nurturing activities as more credit gets assigned to demand-gen and conversion-generation channels
    • Limited flexibility since it always assigns a fixed amount of credit to the first and last touchpoints, and the remaining credit is divided evenly among the other touchpoints
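
    To make the arithmetic behind these models concrete, here is a minimal C++ sketch (illustrative only, not tied to Matomo or any other tool; the half-life parameter and the 40/20/40 split are the assumptions described above) that distributes one conversion's credit across an ordered list of touchpoints:

    #include <cmath>
    #include <vector>

    // Linear model: equal share for every touchpoint.
    std::vector<double> linearCredit(int n) {
        return std::vector<double>(n, 1.0 / n);
    }

    // Time decay model: the weight halves every `halfLife` positions away
    // from the conversion, then weights are normalised to sum to 100%.
    std::vector<double> timeDecayCredit(int n, double halfLife) {
        std::vector<double> w(n);
        double total = 0.0;
        for (int i = 0; i < n; ++i) {
            w[i] = std::pow(0.5, (n - 1 - i) / halfLife);
            total += w[i];
        }
        for (double& x : w) x /= total;
        return w;
    }

    // Position-based (U-shaped) model: 40% to the first and last touches,
    // the remaining 20% spread evenly over the middle ones.
    std::vector<double> positionBasedCredit(int n) {
        if (n == 1) return {1.0};
        if (n == 2) return {0.5, 0.5};
        std::vector<double> w(n, 0.20 / (n - 2));
        w.front() = 0.40;
        w.back() = 0.40;
        return w;
    }

    For a six-touch journey, timeDecayCredit(6, 2.0) yields roughly 6%-8%-12%-17%-24%-33%, the same shape as the split quoted above.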

    How to Choose the Right Multi-Touch Attribution Model For Your Business 

    If you’re deciding which attribution model is best for your business, prepare for a heated discussion. Each one has its trade-offs as it emphasises or devalues the role of different channels and marketing activities.

    To reach a consensus, the best strategy is to evaluate each model against three criteria: your marketing objectives, sales cycle length and data availability. 

    Marketing Objectives 

    Businesses generate revenue in many ways: through direct sales, subscriptions, referral fees, licensing agreements, one-off or retainer services, or any combination of these activities. 

    In each case, your marketing strategy will look different. For example, SaaS and direct-to-consumer (DTC) eCommerce brands have to maximise both demand generation and conversion rates. In contrast, a B2B cybersecurity consulting firm is more interested in attracting qualified leads (as opposed to any type of traffic) and progressively nurturing them towards a big-ticket purchase. 

    When selecting a multi-touch attribution model, prioritise your objectives first. Create a simple scoreboard, where your team ranks various channels and campaign types you rely on to close sales. 

    Alternatively, you can survey your customers to learn how they first heard about your company and what eventually triggered their conversion. Having data from both sides can help you cross-validate your assumptions and eliminate some biases. 

    Then consider which model would best reflect the role and importance of different channels in your sales cycle. Speaking of which…

    Sales Cycle Length 

    As shoppers, we spend less time deciding on a new toothpaste brand than contemplating a new IT system purchase. Factors like industry, business model (B2C, DTC, B2B, B2B2C) and deal size determine the average cycle length in your industry. 

    Statistically, low-ticket B2C sales can happen within just a few interactions. The average B2B decision-making process can involve over 15 steps, spread over several months. 

    That’s why not all multi-touch attribution models work equally well for every business. Time decay suits B2B companies better, while B2C brands usually go for position-based or linear attribution. 

    Data Availability 

    Businesses struggle with multi-touch attribution model implementation due to incomplete analytics data. 

    Our web analytics tool captures more data than Google Analytics. That’s because we rely on a privacy-focused tracking mechanism, which allows you to collect analytics without showing a cookie consent banner in markets outside of Germany and the UK. 

    Cookie consent banners are mandatory with Google Analytics. Yet almost 40% of global consumers reject them. This results in gaps in your analytics and subsequent inconsistencies in multi-touch attribution reports. With Matomo, you can compliantly collect more data for accurate reporting. 

    Some companies also struggle to connect collected insights to individual shoppers. With Matomo, you can cross-attribute users across browsing sessions using our visitor tracking feature.

    When you already know a user’s identifier (e.g., full name or email address), you can track their on-site behaviour over time to better understand how they interact with your content and complete their purchases. A quick disclaimer, though: visitor tracking may not be considered compliant with certain data privacy laws. Please consult a local authority if you have doubts. 

    How to Implement Multi-Touch Attribution

    Implementing multi-touch attribution modelling is like a “seek and find” game. You have to identify all significant touchpoints in your customers’ journeys, and sometimes brainstorm new ways to uncover the missing parts. Then you figure out the best way to track users’ actions at those stages (aka conversion and event tracking). 

    Here’s a step-by-step walkthrough to help you get started. 

    Select a Multi-Touch Attribution Tool 

    The global marketing attribution software market is worth $3.1 billion, meaning there are plenty of tools, differing in accuracy, sophistication and price.

    To make the right call, prioritise five factors:

    • Available models: Look for a solution that offers multiple options and allows you to experiment with different modelling techniques or develop custom models. 
    • Implementation complexity: Some providers offer advanced data modelling tools for creating custom multi-touch attribution models, but offer few out-of-the-box modelling options. 
    • Accuracy: Check if the shortlisted tool collects the type of data you need. Prioritise providers who are less dependent on third-party cookies and allow you to identify repeat users. 
    • Your marketing stack: Some marketing attribution tools come with useful add-ons such as a tag manager, heatmaps, form analytics, user session recordings and A/B testing tools. This means you can collect more data for multi-channel modelling with them instead of investing in extra software. 
    • Compliance: Ensure that the selected multi-attribution analytics software wouldn’t put you at risk of GDPR non-compliance when it comes to user privacy and consent to tracking/analysis. 

    Finally, evaluate the adoption costs. Free multi-channel analytics tools come with data quality and consistency trade-offs. Premium attribution tools may have “hidden” licensing costs and bill you for extra data integrations. 

    Look for a tool that offers a good price-to-value ratio (i.e., one that offers extra perks for a transparent price). 

    Set Up Proper Data Collection 

    Multi-touch attribution requires ample user data. To collect the right type of insights, you need to set up: 

    • Website analytics: Ensure that you have all tracking codes installed (and working correctly!) to capture pageviews, on-site actions, referral sources and other data points about what users do on the page. 
    • Tags: Add tracking parameters to monitor different referral channels (e.g., “facebook”), campaign types (e.g., “final-sale”) and creative assets (e.g., “banner-1”). Tags help you get a clearer picture of different touchpoints. 
    • Integrations: To better identify on-site users and track their actions, you can also populate your attribution tool with data from your other tools – CRM system, A/B testing app, etc. 

    Finally, think about the ideal lookback window — a bounded time frame you’ll use to calculate conversions. For example, Matomo has default windows of 7, 30 or 90 days, but you can configure a custom period to better reflect your average sales cycle. For instance, if you’re selling makeup, a shorter window could yield better results; but if you’re selling CRM software for the manufacturing industry, consider extending it.
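
    As a minimal illustration of the mechanics (a sketch of the concept, not Matomo's implementation), a lookback window is just a filter that discards touchpoints older than the window relative to the conversion:

    #include <vector>

    // Keep only touchpoints that fall within `windowDays` before the conversion.
    // Times are expressed in days purely for illustration.
    std::vector<double> applyLookback(const std::vector<double>& touchTimes,
                                      double conversionTime, double windowDays) {
        std::vector<double> kept;
        for (double t : touchTimes)
            if (t <= conversionTime && conversionTime - t <= windowDays)
                kept.push_back(t);
        return kept;
    }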

    Configure Goals and Events 

    Goals indicate your main marketing objectives — more traffic, conversions and sales. In web analytics tools, you can measure these by tracking specific user behaviours. 

    For example, if your goal is lead generation, you can track:

    • Newsletter sign-ups 
    • Product demo requests 
    • Gated content downloads 
    • Free trial account registrations 
    • Contact form submissions 
    • On-site call bookings 

    In each case, you can set up a unique tag to monitor these types of requests. Then analyse conversion rates — the percentage of users who have successfully completed the action. 
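
    For example, 50 demo requests out of 2,000 tracked sessions is a 2.5% conversion rate for that goal.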

    To collect sufficient data for multi-channel attribution modelling, set up Goal Tracking for different types of touchpoints (MoFU & BoFU) and asset types (contact forms, downloadable assets, etc). 

    Your next task is to figure out how users interact with different on-site assets. That’s when Event Tracking comes in handy. 

    Event Tracking reports notify you about specific actions users take on your website. With Matomo Event Tracking, you can monitor where people click on your website, on which pages they click newsletter subscription links, or when they try to interact with static content elements (e.g., a non-clickable banner). 

    Using in-depth user behavioural reports, you can better understand which assets play a key role in the average customer journey. Using this data, you can locate “leaks” in your sales funnel and fix them to increase conversion rates.

    Test and Validate the Selected Model 

    A common challenge of multi-channel attribution modelling is determining the correct correlation and causality between exposure to touchpoints and purchases. 

    For example, a user who bought a discounted product from a Facebook ad would act differently than someone who purchased a full-priced product via a newsletter link. Their rate of pre- and post-sales exposure will also differ a lot — and your attribution model may not always accurately capture that. 

    That’s why you have to continuously test and tweak the selected model type. The best approach for that is lift analysis. 

    Lift analysis means comparing how your key metrics (e.g., revenue or conversion rates) change among users who were exposed to a certain campaign versus a control group. 
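
    For instance, if exposed users convert at 4.2% while the control group converts at 3.5%, the campaign's lift is (4.2 - 3.5) / 3.5 = 20%.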

    In the case of multi-touch attribution modelling, you have to monitor how your metrics change after you’ve acted on the model’s recommendations (e.g., invested more in a well-performing referral channel or tried a new brand-awareness Twitter ad). Compare the before-and-after ROI. If you see a positive trend, your model works great. 

    The downside of this approach is that you have to invest a lot upfront. But if your goal is to create a trustworthy attribution model, the best way to validate is to act on its suggestions and then test them against past results. 

    Conclusion

    A multi-touch attribution model helps you measure the impact of different channels, campaign types, and marketing assets on metrics that matter — conversion rate, sales volumes and ROI. 

    Using this data, you can invest budgets into the best-performing channels and confidently experiment with new campaign types. 

    As a Matomo user, you also get to do so without breaching customers’ privacy or compromising on analytics accuracy.

    Start using accurate multi-channel attribution in Matomo. Get your free 21-day trial now. No credit card required.

  • Capture from multiple streams concurrently, best way to do it and how to reduce CPU usage

    19 June 2019, by DRONE_6969

    I am currently writing an application that will capture a number of RTSP streams (in my case, 12) and display them in Qt widgets. The problem arises when I go beyond around 6-7 streams: the CPU usage spikes and there is visible stutter.

    The reason I think it is not the Qt draw function is that I measured how long it takes to draw an incoming camera image as well as sample images I had; it is always well under 33 milliseconds (even with 12 widgets being updated).

    I also ran the OpenCV capture method without drawing and got pretty much the same CPU consumption as when I was drawing the frames (about 10% less CPU at most, and GPU usage went to zero).

    IMPORTANT: I am using RTSP streams, which are H.264 streams.

    IF IT MATTERS, MY SPECS:

    Intel Core i7-6700 @ 3.40 GHz (8 CPUs)
    Memory: 16 GB
    GPU: Intel HD Graphics 530

    (I also ran my code on a computer with a dedicated graphics card; it eliminated some stutter, but CPU usage was still pretty high.)

    I am currently using OpenCV 4.1.0 built with GStreamer enabled; I also have the opencv-world build, and there is no difference in performance.

    I have created a special class called Camera that holds its frame size constraints and various control functions, as well as a stream function. The stream function runs on a separate thread; whenever stream() is done with the current frame, it sends the ready Mat via an onNewFrame event I created, which converts it to a QPixmap and updates the widget’s lastImage variable. This way I can update the image in a more thread-safe way.

    I have tried to manipulate those VideoCapture.set() values, but it didn’t really help.
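
    For reference, the kind of tweaks I mean look like this (CAP_PROP_BUFFERSIZE is backend-dependent, and these particular values are just examples):

    ipCam.set(cv::CAP_PROP_BUFFERSIZE, 2); // shrink the internal frame queue where the backend honours it
    ipCam.set(cv::CAP_PROP_FPS, 15);       // ask for a lower rate; many RTSP sources ignore this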

    This is my stream function (ignore the bool return; it doesn’t do anything, it is a remnant from a couple of minutes ago when I was trying to use std::async):

    bool Camera::stream() {
       /* This function is meant to run on a separate thread and fill up the buffer independently of
       the main stream thread */
       //cv::setNumThreads(100);
       /* Rules for these slightly changed! */
       Mat pre;  // Grab initial undoctored frame
       //pre = Mat::zeros(size, CV_8UC1);
       Mat frame; // Final modified frame
       frame = Mat::zeros(size, CV_8UC1);
       if (!pre.isContinuous()) pre = pre.clone();

       ipCam.open(streamUrl, CAP_FFMPEG);


       while (ipCam.isOpened() && capture) {
           // If the camera is opened we'll need to capture and process the frame
           try {
               auto start = std::chrono::system_clock::now();

               ipCam >> pre;

               if (pre.empty()) {
                   /* Check for blank frame, return error if there is a blank frame*/
                   cerr << id << ": ERROR! blank frame grabbed\n";
                   for (FrameListener* i : clients) {
                       i->onNotification(1); // Notify clients about this shit
                   }
                   break;
               }

               else {
                   // Only continue if frame not empty

                   if (pre.cols != size.width && pre.rows != size.height) {
                       resize(pre, frame, size);
                       pre.release();
                   }
                   else {
                       frame = pre;
                   }

                   dPacket* pack = new dPacket{id,&frame};
                   for (auto i : clients) {
                       i->onPNewFrame(pack);
                   }
                   frame.release();
                   delete pack;
               }
           }

           catch (int e) {
               cout << endl << "-----Exception during capture process! CODE " << e << endl;
           }
           // End camera manipulations
       }

       cout << "Camera timed out, or connection is closed..." << endl;
       if (tryResetConnection) {
           cout << "Reconnection flag is set, retrying after 3 seconds..." << endl;
           for (FrameListener* i : clients) {
               i->onNotification(-1); // Notify clients about this shit
           }
           this_thread::sleep_for(chrono::milliseconds(3000));
           stream();
       }

       return true;
    }

    This is my onPNewFrame function. The conversion is still done on the camera’s thread, because it is called from within stream() and therefore runs in that scope (I also checked):

    void GLWidget::onPNewFrame(dPacket* inPack) {
       lastFlag = 0;

       if (bufferEnabled) {
           buffer.push(QPixmap::fromImage(toQImageFromPMat(inPack->frame)));
       }
       else {
           if (playing) {
               /* Only process if this widget is playing */
               frameProcessing = true;
               lastImage.convertFromImage(toQImageFromPMat(inPack->frame));
               frameProcessing = false;
           }
       }

       if (lastFlag != -1 && !lastImage.isNull()) {
           connecting = false;
       }
       else {
           connecting = true;
       }
    }

    This is my Mat to QImage conversion:

    QImage GLWidget::toQImageFromPMat(cv::Mat* mat) {
       return QImage(mat->data, mat->cols, mat->rows, QImage::Format_RGB888).rgbSwapped();
    }

    NOTE: not converting does not result in a CPU boost (at least not a significant one).
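
    (A side note on that conversion: the QImage constructor used above assumes tightly packed rows; a stride-aware variant, my sketch rather than anything from the original code, would pass the Mat's step explicitly:)

    QImage GLWidget::toQImageFromPMat(cv::Mat* mat) {
       // Passing the row stride keeps widths that are not 4-byte aligned rendering correctly.
       return QImage(mat->data, mat->cols, mat->rows,
                     static_cast<int>(mat->step), QImage::Format_RGB888).rgbSwapped();
    }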

    Minimal verifiable example

    This program is large, so I am going to paste GLWidget.cpp and GLWidget.h as well as Camera.h and Camera.cpp. You can put GLWidget into anything, as long as you spawn more than 6 of them. Camera relies on CamUtils, but it is possible to just paste the URL into the VideoCapture call.

    I also supplied CamUtils, just in case.

    Camera.h :

    #pragma once
    #include <iostream>
    #include <vector>
    #include <fstream>
    #include <map>
    #include <string>
    #include <sstream>
    #include <algorithm>
    #include "FrameListener.h"
    #include <opencv2/opencv.hpp> // assumption: the original (lost) include here must provide OpenCV's Mat/VideoCapture
    #include <thread>
    #include "CamUtils.h"
    #include <ctime>
    #include "dPacket.h"

    using namespace std;
    using namespace cv;

    class Camera
    {

       /*
           CLEANED UP!
           Camera now is only responsible for streaming and echoing captured frames.
           Frames are now wrapped into dPacket struct.
       */


    private:
       string id;
       vector<FrameListener*> clients;
       VideoCapture ipCam;
       string streamUrl;
       Size size;
       bool tryResetConnection = false;

       //TODO: Remove these as they are not going to be used going on:
       bool isPlaying = true;
       bool capture = true;

       //SECRET FEATURES:
       bool detect = false;


    public:
       Camera(string url, int width = 480, int height = 240, bool detect_=false);
       bool stream();
       void setReconnectable(bool newReconStatus);
       void addListener(FrameListener* client);
       vector<bool> getState();    // Returns current state: vector[0] playing state; vector[1] connection state; TODO: Remove this as this should no longer control behaviour
       void killStream();
       bool getReconnectable();
    };


    Camera.cpp

    #include "Camera.h"


    Camera::Camera(string url, int width, int height, bool detect_) // Default 240p
    {
       streamUrl = url; // Prepare url
       size = Size(width, height);
       detect = detect_;

    }

    void Camera::addListener(FrameListener* client) {
       clients.push_back(client);
    }


    /*
                   TEST CAMERAS(Paste into cameras.dViewer):
                   {"id":"96a73796-c129-46fc-9c01-40acd8ed7122","ip":"176.57.73.231","password":"null","username":"null"},
                   {"id":"96a73796-c129-46fc-9c01-40acd8ed7122","ip":"176.57.73.231","password":"null","username":"null"},
                   {"id":"96a73796-c129-46fc-9c01-40acd8ed7144","ip":"172.20.101.13","password":"admin","username":"root"}
                   {"id":"96a73796-c129-46fc-9c01-40acd8ed7144","ip":"172.20.101.13","password":"admin","username":"root"}

    */



    bool Camera::stream() {
       /* This function is meant to run on a separate thread and fill up the buffer independently of
       the main stream thread */
       //cv::setNumThreads(100);
       /* Rules for these slightly changed! */
       Mat pre;  // Grab initial undoctored frame
       //pre = Mat::zeros(size, CV_8UC1);
       Mat frame; // Final modified frame
       frame = Mat::zeros(size, CV_8UC1);
       if (!pre.isContinuous()) pre = pre.clone();

       ipCam.open(streamUrl, CAP_FFMPEG);

       while (ipCam.isOpened() && capture) {
           // If the camera is opened we'll need to capture and process the frame
           try {
               auto start = std::chrono::system_clock::now();

               ipCam >> pre;

               if (pre.empty()) {
                   /* Check for blank frame, return error if there is a blank frame*/
                cerr << id << ": ERROR! blank frame grabbed\n";
                   for (FrameListener* i : clients) {
                       i->onNotification(1); // Notify clients about this shit
                   }
                   break;
               }

               else {
                   // Only continue if frame not empty

                if (pre.cols != size.width && pre.rows != size.height) {
                       resize(pre, frame, size);
                       pre.release();
                   }
                   else {
                       frame = pre;
                   }

                   auto end = std::chrono::system_clock::now();
                   std::time_t ts = std::chrono::system_clock::to_time_t(end);
                dPacket* pack = new dPacket{ id, &frame };
                   for (auto i : clients) {
                       i->onPNewFrame(pack);
                   }
                   frame.release();
                   delete pack;
               }
           }

           catch (int e) {
            cout << endl << "-----Exception during capture process! CODE " << e << endl;
           }
           // End camera manipulations
       }

       cout << "Camera timed out, or connection is closed..." << endl;
       if (tryResetConnection) {
           cout << "Reconnection flag is set, retrying after 3 seconds..." << endl;
           for (FrameListener* i : clients) {
               i->onNotification(-1); // Notify clients about this shit
           }
           this_thread::sleep_for(chrono::milliseconds(3000));
           stream();
       }

       return true;
    }


    void Camera::killStream(){
       tryResetConnection = false;
       capture = false;
       ipCam.release();
    }

    void Camera::setReconnectable(bool reconFlag) {
       tryResetConnection = reconFlag;
    }

    bool Camera::getReconnectable() {
       return tryResetConnection;
    }

    vector<bool> Camera::getState() {
       vector<bool> states;
       states.push_back(isPlaying);
       states.push_back(ipCam.isOpened());
       return states;
    }




    GLWidget.h :

    #ifndef GLWIDGET_H
    #define GLWIDGET_H

    #include <QOpenGLWidget>
    #include <QMouseEvent>
    #include "FrameListener.h"
    #include "Camera.h"
    #include "FrameListener.h"
    #include
    #include "Camera.h"
    #include "CamUtils.h"
    #include
    #include "dPacket.h"
    #include <chrono>
    #include <ctime>
    #include
    #include "FullScreenVideo.h"
    #include <qmovie>
    #include "helper.h"
    #include <iostream>
    #include <qpainter>
    #include <qtimer>

    class Helper;

    class GLWidget : public QOpenGLWidget, public FrameListener
    {
       Q_OBJECT

    public:
       GLWidget(std::string camId, CamUtils *cUtils, int width, int height, bool denyFullScreen_ = false, bool detectFlag_=false, QWidget* parent = nullptr);
       void killStream();
       ~GLWidget();

    public slots:
       void animate();
       void setBufferEnabled(bool setState);
       void setCameraRetryConnection(bool setState);
       void GLUpdate();            // Call to update the widget
       void onRightClickMenu(const QPoint&amp; point);

    protected:
       void paintEvent(QPaintEvent* event) override;
       void onPNewFrame(dPacket* frame);
       void onNotification(int alert_code);


    private:
       // Objects and resources
       Helper* helper;
       Camera* cam;
       CamUtils* camUtils;
       QTimer* timer; // Keep track of update
       QPixmap lastImage;
       QMovie* connMov;
       QMovie* test;

       QPixmap logo;

       // Control fields
       int width;
       int height;
       int camUtilsAddr;
       int elapsed;
       std::thread* camThread;
       std::string camId;
       bool denyFullScreen = false;
       bool playing = true;
       bool streaming = true;
       bool debug = false;
       bool connecting = true;
       int lastFlag = 0;


       // Debug fields
       std::chrono::high_resolution_clock::time_point lastFrameAt;
       std::chrono::high_resolution_clock::time_point now;
       std::chrono::duration<double> painTime; // time took to draw last frame

       //Buffer stuff
       std::queue<qpixmap> buffer;
       bool bufferEnabled = false;
       bool initialBuffer = false;
       bool buffering = true;
       bool frameProcessing = false;



       //Functions
       QImage toQImageFromPMat(cv::Mat* inFrame);
       void mousePressEvent(QMouseEvent* event) override;
       void drawImageGLLatest(QPainter* painter, QPaintEvent* event, int elapsed);
       void drawOnPaused(QPainter* painter, QPaintEvent* event, int elapsed);
       void drawOnStatus(int statusFlag, QPainter* painter, QPaintEvent* event, int elapsed);
    };

    #endif

    </qpixmap></double></qtimer></qpainter></iostream></qmovie></ctime></chrono></qmouseevent></qopenglwidget>

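    Constructing one of these players then looks roughly like this (camId, utils and parentWidget are placeholders):

        GLWidget* player = new GLWidget(camId, utils, 640, 480); // denyFullScreen_ / detectFlag_ / parent keep their defaults
        player->setBufferEnabled(true); // opt in to the experimental buffer
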
    GLWidget.cpp :

    #include "glwidget.h"
    #include <future>


    FullScreenVideo* fullScreen;

    GLWidget::GLWidget(std::string camId_, CamUtils* cUtils, int width_, int height_,  bool denyFullScreen_, bool detectFlag_, QWidget* parent)
       : QOpenGLWidget(parent), helper(nullptr) // initializing helper(helper) would read the uninitialized member; the real object is created below
    {
       cout &lt;&lt; "Player for CAMERA " &lt;&lt; camId_ &lt;&lt; endl;

       /* Underlying properties */
       camUtils = cUtils;
       cout &lt;&lt; "GLWidget Incoming CamUtils addr " &lt;&lt; camUtils &lt;&lt; endl;
       cout &lt;&lt; "GLWidget Set CamUtils addr " &lt;&lt; camUtils &lt;&lt; endl;
       camId = camId_;
       elapsed = 0;
       width = width_ + 5;
       height = height_ + 5;
       helper = new Helper();
       setFixedSize(width, height);
       denyFullScreen = denyFullScreen_;

       /* Camera capture thread */
       cam = new Camera(camUtils->getCameraStreamURL(camId), width_, height_, detectFlag_);
       cam->addListener(this);

       /* Sync states */
       vector<bool> initState = cam->getState();
       playing = initState[0];
       streaming = initState[1];
       cout &lt;&lt; "Initial states: " &lt;&lt; playing &lt;&lt; " " &lt;&lt; streaming &lt;&lt; endl;
       camThread = new std::thread(&amp;Camera::stream, cam);
       cout &lt;&lt; "================================================" &lt;&lt; endl;

       // Right click set up
       setContextMenuPolicy(Qt::CustomContextMenu);


       /* Loading gif */
       connMov = new QMovie("establishingConnection.gif");
       connMov->start();
       QString url = R"(RLC-logo.png)";
       logo = QPixmap(url);
       timer = new QTimer(this); // assign the member rather than shadowing it with a local
       connect(timer, SIGNAL(timeout()), this, SLOT(GLUpdate()));
       timer->start(1000/30);
       playing = true;

    }

    /* SYSTEM */
    void GLWidget::animate()
    {
       elapsed = (elapsed + qobject_cast<QTimer*>(sender())->interval()) % 1000;
       std::cout &lt;&lt; elapsed &lt;&lt; "\n";
    }


    void GLWidget::GLUpdate() {
       /* Process decisions before the update call */
       if (bufferEnabled) {
           /* Drain the buffer at a fixed pace before updating */
           now = chrono::high_resolution_clock::now();
           std::chrono::duration<double, std::milli> timeSinceLastUpdate = now - lastFrameAt;
           if (timeSinceLastUpdate.count() > 25) { // 25 ms between pops is a ~40 fps ceiling; the timer itself fires every ~33 ms
               if (buffer.size() > 1 && playing) {
                   lastImage.swap(buffer.front());
                   buffer.pop();
                   lastFrameAt = chrono::high_resolution_clock::now();
               }
           }
           //update(); // Update
       }
       else {
           /* No buffer: lastImage is written directly by onPNewFrame */
       }
       repaint();
    }


    /* EVENTS */
    void GLWidget::onRightClickMenu(const QPoint&amp; point) {
       cout &lt;&lt; "Right click request got" &lt;&lt; endl;

       QPoint globPos = this->mapToGlobal(point);
       QMenu myMenu;

       if (!denyFullScreen) {
           myMenu.addAction("Open Full Screen");
       }
       myMenu.addAction("Toggle Debug Info");


       QAction* selected = myMenu.exec(globPos);

       if (selected) {
           string optiontxt = selected->text().toStdString();

           if (optiontxt == "Open Full Screen") {
               cout &lt;&lt; "Chose to open full screen of " &lt;&lt; camId &lt;&lt; endl;
               fullScreen = new FullScreenVideo(bufferEnabled, this);
               fullScreen->setUpView(camUtils, camId);
               fullScreen->show();
               playing = false;
           }

           if (optiontxt == "Toggle Debug Info") {
               cout &lt;&lt; "Chose to toggle debug of " &lt;&lt; camId &lt;&lt; endl;
               debug = !debug;
           }
       }
       else {
           cout &lt;&lt; "Chose nothing!" &lt;&lt; endl;
       }


    }



    void GLWidget::onPNewFrame(dPacket* inPack) {
       lastFlag = 0;

       if (bufferEnabled) {
           // Note: nothing bounds this queue; if the painter falls behind the camera it
           // grows without limit (see the capped-push sketch after this function)
           buffer.push(QPixmap::fromImage(toQImageFromPMat(inPack->frame)));
       }
       else {
           if (playing) {
               /* Only process if this widget is playing */
               frameProcessing = true;
               lastImage.convertFromImage(toQImageFromPMat(inPack->frame));
               frameProcessing = false;
           }
       }

       if (lastFlag != -1 &amp;&amp; !lastImage.isNull()) {
           connecting = false;
       }
       else {
           connecting = true;
       }
    }
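    /* A cheap way to keep the queue above from ballooning when the painter falls
       behind the camera is to cap it and drop the oldest frame on overflow. A sketch
       of the idea (the cap value is arbitrary):

           static const size_t kMaxBuffered = 60; // roughly 2 s at 30 fps, tune to taste
           if (buffer.size() >= kMaxBuffered) {
               buffer.pop(); // discard the oldest frame first
           }
           buffer.push(QPixmap::fromImage(toQImageFromPMat(inPack->frame)));
    */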


    void GLWidget::onNotification(int alert) {
       lastFlag = alert;  
    }


    /* Paint events*/


    void GLWidget::paintEvent(QPaintEvent* event)
    {
       QPainter painter(this);

       if (lastFlag != 0 || connecting) {
           drawOnStatus(lastFlag, &painter, event, elapsed);
       }
       else {
           /* Actual frame drawing */
           if (playing) {
               if (!frameProcessing) {
                   drawImageGLLatest(&painter, event, elapsed);
               }
           }
           else {
               drawOnPaused(&painter, event, elapsed);
           }
       }
       painter.end();
    }


    /* DRAWING STUFF */

    void GLWidget::drawOnStatus(int statusFlag, QPainter* bgPaint, QPaintEvent* event, int elapsed) {

       QString str;
       QFont font("times", 15);
       bgPaint->eraseRect(QRect(0, 0, width, height));
       if (!lastImage.isNull()) {
           bgPaint->drawPixmap(QRect(0, 0, width, height), lastImage);
       }
       /* Test background painting */
       if (connecting) {
           string k = "Connecting to " + camUtils->getIp(camId);
           str.append(k.c_str());
       }
       else {
           switch (statusFlag) {
           case 1:
               str = "Blank frame received...";
               break;

           case -1:
               if (cam->getReconnectable()) {
                   str = "Connection lost, will try to reconnect.";
                   bgPaint->setOpacity(0.3);
               }
               else {
                   str = "Connection lost...";
                   bgPaint->setOpacity(0.3);
               }

               break;
           }
       }

       bgPaint->drawPixmap(QRect(0, 0, width, height), QPixmap::fromImage(connMov->currentImage()));
       bgPaint->setPen(Qt::red);
       bgPaint->setFont(font);
       QFontMetrics fm(font);
       const QRect textRect(0, 0, fm.width(str), fm.height());
       bgPaint->setOpacity(1);
       bgPaint->drawText(bgPaint->viewport().width()/2 - textRect.width()/2, bgPaint->viewport().height()/2 - textRect.height(), str);

       bgPaint->drawPixmap(bgPaint->viewport().width() / 2 - logo.width()/2, height - logo.width() - 15, logo);

    }



    void GLWidget::drawOnPaused(QPainter* painter, QPaintEvent* event, int elapsed) {
       painter->eraseRect(0, 0, width, height);
       QFont font = painter->font();
       font.setPointSize(18);
       painter->setFont(font); // without this the size change never takes effect
       painter->setPen(Qt::red);
       QFontMetrics fm(font);
       QString str("Paused");
       painter->drawPixmap(QRect(0, 0, width, height), lastImage);
       painter->drawText(QPoint(painter->viewport().width() - fm.width(str), 50), str);

       if (debug) {
           QFont debugFont = painter->font();
           debugFont.setPointSize(25);
           painter->setFont(debugFont);
           painter->setPen(Qt::red);
           string camMess = "CAMID: " + camId;
           QString mess(camMess.c_str());
           string camIp = "IP: " + camUtils->getIp(camId);
           QString ipMess(camIp.c_str());
           QString lastFrameText("Last frame draw time: " + QString::number(painTime.count()) + "s");
           painter->drawText(QPoint(10, 50), mess);
           painter->drawText(QPoint(10, 60), ipMess);
           QString bufferState;
           if (bufferEnabled) {
               bufferState = QString("Experimental BUFFER is enabled!");
               QString currentBufferSize("Current buffer load: " + QString::number(buffer.size()));
               painter->drawText(QPoint(10, 80), currentBufferSize);
           }
           else {
               bufferState = QString("Experimental BUFFER is disabled!");
           }
           painter->drawText(QPoint(10, 70), bufferState);
           painter->drawText(QPoint(10, height - 25), lastFrameText);
       }
    }


    void GLWidget::drawImageGLLatest(QPainter* painter, QPaintEvent* event, int elapsed) {
       auto start = chrono::high_resolution_clock::now();
       painter->drawPixmap(QRect(0, 0, width, height), lastImage);
       if (debug) {
           QFont font = painter->font();
           font.setPointSize(25);
           painter->setFont(font); // apply the size change; setting the pen alone is not enough
           painter->setPen(Qt::red);
           string camMess = "CAMID: " + camId;
           QString mess(camMess.c_str());
           string camIp = "IP: " + camUtils->getIp(camId);
           QString ipMess(camIp.c_str());
           QString lastFrameText("Last frame draw time: " + QString::number(painTime.count()) + "s");
           painter->drawText(QPoint(10, 50), mess);
           painter->drawText(QPoint(10, 60), ipMess);
           QString bufferState;
           if (bufferEnabled) {
               bufferState = QString("Experimental BUFFER is enabled!");
           }
           else {
               bufferState = QString("Experimental BUFFER is disabled!");
           }
           QString currentBufferSize("Current buffer load: " + QString::number(buffer.size()));
           painter->drawText(QPoint(10, 80), currentBufferSize);
           painter->drawText(QPoint(10, 70), bufferState);
           painter->drawText(QPoint(10, height - 25), lastFrameText);
       }
       auto end = chrono::high_resolution_clock::now();
       painTime = end - start;
    }



    /* END DRAWING STUFF */



    /* UI EVENTS */

    void GLWidget::mousePressEvent(QMouseEvent* e) {

       if (e->button() == Qt::LeftButton) {
           if (fullScreen == nullptr || !fullScreen->isVisible()) { // Do not unpause if window is opened
               playing = !playing;
           }
       }

       if (e->button() == Qt::RightButton) {
           onRightClickMenu(e->pos());
       }
    }



    /* Utilities */
    QImage GLWidget::toQImageFromPMat(cv::Mat* mat) {
       // The QImage constructor wraps the Mat's BGR data without copying; rgbSwapped()
       // converts BGR -> RGB and returns a deep copy, so the result outlives the Mat.
       // Passing the stride explicitly also keeps non-continuous Mats safe.
       return QImage(mat->data, mat->cols, mat->rows, static_cast<int>(mat->step), QImage::Format_RGB888).rgbSwapped();
    }

    /* State control */

    void GLWidget::killStream() {
       cam->killStream();
       if (camThread->joinable()) {
           camThread->join();
       }
    }

    void GLWidget::setBufferEnabled(bool newBufferState) {
       cout << "Player: " << camId << ", buffer state updated: " << newBufferState << endl;
       bufferEnabled = newBufferState;
       buffer = {}; // empty() only tests for emptiness; this actually clears the queue
    }

    void GLWidget::setCameraRetryConnection(bool newState) {
       cam->setReconnectable(newState);
    }

    /* Destruction */
    GLWidget::~GLWidget() {
       cam->killStream();
       if (camThread->joinable()) {
           camThread->join(); // guard: killStream() may already have joined this thread
       }
    }
    </bool></future>

    CamUtils.h :

    #pragma once
    #include <iostream>
    #include <vector>
    #include <fstream>
    #include <map>
    #include <string>
    #include <sstream>
    #include <algorithm>
    #include <nlohmann></nlohmann>json.hpp>

    using namespace std;
    using json = nlohmann::json;

    class CamUtils
    {
    private:

       string camDb = "cameras.dViewer";
       map> cameraList; // Legacy
       json cameras;
       ofstream dbFile;
       bool dbExists(); // Always hard coded

       /* Old IMPLEMENTATION */
       void writeLineToDb_(const string&amp; content, bool append = false);
       void loadCameras_();

       /* JSON based */
       void loadCameras();

    public:
       CamUtils();
       string generateRandomString(size_t length);
       string getCameraStreamURL(string cameraId) const;
       string saveCamera(string ip, string username, string pass); // Return generated id
       vector<string> listAllCameraIds();
       string getIp(string cameraId);
    };


    </string></algorithm></sstream></string></map></fstream></vector></iostream>

    CamUtils.cpp :

    #include "CamUtils.h"
    #pragma comment(lib, "rpcrt4.lib")  // UuidCreate - Minimum supported OS Win 2000
    #include <rpc.h>     // UuidCreate / UuidToStringA / RpcStringFreeA
    #include <iostream>
    #include <cstring>   // strlen (used by generateRandomString)
    #include <cstdlib>   // rand

    CamUtils::CamUtils()
    {
       if (!dbExists()) {
           ofstream dbFile;
           dbFile.open(camDb);
           cameras["cameras"] = json::array();
           dbFile &lt;&lt; cameras &lt;&lt; std::endl;
           dbFile.close();

       }
       else {
           loadCameras();
       }
    }




    vector<string> CamUtils::listAllCameraIds() {
       vector<string> ids;
       cout &lt;&lt; "IN LIST " &lt;&lt; endl;
       for (auto&amp; cam : cameras["cameras"]) {
           ids.push_back(cam["id"].get<string>());
           //cout &lt;&lt; cam["id"].get<string>() &lt;&lt; std::endl;
       }
       return ids;
    }

    string CamUtils::getIp(string id) {
       string ip = "NO IP WILL BE DISPLAYED UNTIL I FIGURE OUT A BUG";
       for (auto& cam : cameras["cameras"]) {
           if (id == cam["id"]) {
               ip = cam["ip"].get<string>();
           }
       }

       return ip;
    }

    string CamUtils::getCameraStreamURL(string id) const {
       string url = "err"; // err is the default, it will be overwritten in case id is found, dont forget to check for it

       for (auto&amp; cam : cameras["cameras"]) {
           if (id == cam["id"]) {
               if (cam["username"].get<string>() == "null") {
                   url = "rtsp://" + cam["ip"].get<string>() + ":554/axis-media/media.amp?tcp";
               }
               else {
                   url = "rtsp://" + cam["username"].get<string>() + ":" + cam["password"].get<string>() + "@" + cam["ip"].get<string>() + ":554/axis-media/media.amp?streamprofile=720_30";
               }
           }
       }

       return url;  // Remember to check for "err" before using this
    }
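    /* Since "err" is the sentinel above, call sites are expected to guard on it.
       A minimal sketch (camId is a hypothetical id string):

           string url = camUtils.getCameraStreamURL(camId);
           if (url == "err") {
               cerr << "No camera with id " << camId << endl;
               return; // never hand "err" to VideoCapture
           }
    */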


    string CamUtils::saveCamera(string ip, string username, string password) {
       UUID uid;
       UuidCreate(&uid);
       RPC_CSTR str = nullptr;
       UuidToStringA(&uid, &str);
       string id = reinterpret_cast<char*>(str);
       RpcStringFreeA(&str); // UuidToStringA allocates this buffer; free it once copied
       cout << "GEN: " << id << endl;
       json cam = json({}); // Create empty object
       cam["id"] = id;
       cam["ip"] = ip;
       cam["username"] = username;
       cam["password"] = password;
       cameras["cameras"].push_back(cam);
       std::ofstream out(camDb);
       out << cameras << std::endl;
       cout << cameras["cameras"] << endl;

       cout << "Saved camera as " << id << endl;
       return id;
    }
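    /* Example call, reusing the values from the sample entry at the top of the post
       (utils is a hypothetical CamUtils instance):

           string newId = utils.saveCamera("172.20.101.13", "root", "admin");
           // newId now holds the generated UUID and the camera is persisted to cameras.dViewer
    */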


    bool CamUtils::dbExists() {
       ifstream dbFile(camDb);
       return (bool)dbFile;
    }





    void CamUtils::loadCameras() {
       cout &lt;&lt; "Load call" &lt;&lt; endl;
       ifstream dbFile(camDb);
       string line;
       string wholeFile;

       while (std::getline(dbFile, line)) {
           cout &lt;&lt; line &lt;&lt; endl;
           wholeFile += line;
       }
       try {
           cameras = json::parse(wholeFile);
           //cout &lt;&lt; cameras["cameras"] &lt;&lt; endl;

       }
       catch (const exception& e) { // catch by const reference, not by value
           cout &lt;&lt; e.what() &lt;&lt; endl;
       }
       dbFile.close();
    }










    /*
       LEGACY CODE, TO BE REMOVED!

    */



    void CamUtils::loadCameras_() {
       /*
           LEGACY CODE:
           This used to be the way to load cameras, but I moved on to JSON based configuration so this is no longer needed and will be removed soon
       */

       ifstream dbFile(camDb);
       string line;
       while (std::getline(dbFile, line)) {
           /*
               This function loads camera data into the map.
               The order MUST be the following: 0:ID, 1:IP, 2:USERNAME, 3:PASSWORD.
               Fields are always delimited with | and there are no spaces in between!
           */
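           /* For illustration, a made-up line in this format would look like:
                  96a73796-c129-46fc-9c01-40acd8ed7144|172.20.101.13|root|admin   */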
           if (!line.empty()) {
               stringstream ss(line);
               string item;
               vector<string> splitString;

               while (std::getline(ss, item, '|')) {
                   splitString.push_back(item);
               }
               if (splitString.size() > 0) {
                   /* Dont even parse if the program didnt split right*/
                   //cout &lt;&lt; "Split string: " &lt;&lt; splitString.size() &lt;&lt; "\n";
                   for (int i = 0; i &lt; (splitString.size()); i++) cameraList[splitString[0]].push_back(splitString[i]);
               }
           }
       }
    }



    void CamUtils::writeLineToDb_(const string &amp; content, bool append) {
       ofstream dbFile;
       cout &lt;&lt; "Creating?";
       if (append) {
           dbFile.open(camDb, ios_base::app);
       }
       else {
           dbFile.open(camDb);
       }

       dbFile &lt;&lt; content.c_str() &lt;&lt; "\r\n";
       dbFile.flush();
    }

    /* JSON Reworx */




    string CamUtils::generateRandomString(size_t length)
    {
       const char* charmap = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
       const size_t charmapLength = strlen(charmap);
       // Note: rand() is never seeded anywhere in this class, so every run yields the same sequence
       auto generator = [&]() { return charmap[rand() % charmapLength]; };
       string result;
       result.reserve(length);
       generate_n(back_inserter(result), length, generator);
       return result;
    }
    </string></string></string></string></string></string></string></string></string></string></string></string></iostream>

    End of example

    How would I go about decreasing CPU usage when dealing with a large number of streams?