Other articles (58)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Permissions overridden by plugins

    27 April 2010

    Mediaspip core
    autoriser_auteur_modifier() so that visitors can edit their own information on the authors page

On other sites (9313)

  • Data Privacy Day 2021: Five ways to embrace privacy in your business

    27 January 2021, by Matomo Core Team — Community, Privacy

    Welcome to Data Privacy Day 2021!

    This year we are excited to announce that we are participating as a #PrivacyAware Champion for DPD21 through the National Cyber Security Alliance. This means that on this significant day we are in partnership with hundreds of other organisations and businesses to share a unified message that empowers individuals to “Own Your Privacy” and organisations to “Respect Privacy.”

    "Last year dawned a new era in the way many businesses operate from a traditional office work setting to a remote working from home environment for employees. This now means it’s more important than ever for your employees to understand how to take ownership of their privacy when working online."

    Matthieu - Founder of Matomo

    As a Data Privacy Day #PrivacyAware Champion we would like to provide some practical tips and share examples of how the Matomo team helps employees be privacy aware.

    Five ways to embrace privacy into your business

    1. Create a privacy aware culture within your business

    • Get leadership involved.
    • Appoint privacy ambassadors within your team. 
    • Create a privacy awareness campaign where you educate employees on your company privacy policy. 
    • Share messages about privacy around the office or in online meetings, on internal message boards, in company newsletters, or in emails.
    • Teach new employees their role in your privacy culture and reinforce it throughout their careers.

    2. Organise privacy awareness training for your employees

    • Invite outside speakers to talk to employees about why privacy matters. 
    • Engage staff by asking them to consider how privacy and data security applies to the work they do on a daily basis.
    • Encourage employees to complete online courses to gain a better understanding of how to avoid privacy risks.

    3. Help employees manage their individual privacy

    • Better security and privacy behaviours at home will translate to better security and privacy practices at work. 
    • Teach employees how to update their privacy and security settings on personal accounts.
    • Use NCSA’s privacy settings page to help them get started.

    4. Add privacy to the employee’s toolbox

    • Give your employees actual tools they can use to improve their privacy, such as company-branded camera covers or privacy screens for their devices, or virtual private networks (VPNs) to secure their connections.

    5. Join Matomo and we’ll be your web analytics experts

    • At Matomo, assuring our users and customers that their privacy is protected is not only a core component of the work we do, it’s why we do what we do! Find out how.

    Want to find out more about data privacy? Download your free DPD 2021 Champion Toolkit and read our post on “Why is privacy important”.

    Team Matomo

    2021 Data Privacy Day Toolkit

    Your guide to Data Privacy Day, January 28, 2021
  • Matomo Launches Global Partner Programme to Deepen Local Connections and Champion Ethical Analytics

    25 June, by Matomo Core Team — Press Releases

    Matomo introduces a global Partner Programme designed to connect organisations with trusted local experts, advancing its commitment to privacy, data sovereignty, and localisation.

    Wellington, New Zealand, 25 June 2025: Matomo, the leading web analytics platform, is proud to announce the launch of the Matomo Partner Programme. This new initiative marks a significant step in Matomo’s global growth strategy, bringing together a carefully selected network of expert partners to support customers with localised, high-trust analytics services rooted in shared values.

    As privacy concerns rise and organisations seek alternatives to mainstream analytics solutions, the need for regional expertise has never been more vital. The Matomo Partner Programme ensures that customers around the world are supported not just by a world-class platform, but by trusted local professionals who understand their specific regulatory, cultural, and business needs.

    “Matomo is evolving. As privacy regulations become more nuanced and the need for regional understanding grows, we’ve made localisation a central pillar of our strategy. Our partners are the key to helping customers navigate these complexities with confidence and care,” said Adam Taylor, Chief Operating Officer at Matomo.

    Local Experts, Global Values

    At the heart of the Matomo Partner Programme is a commitment to connect clients with local experts who live and breathe their markets. These partners are more than service providers: they’re trusted advisors who bring deep insight into their region’s privacy legislation, cultural norms, sector-specific requirements, and digital trends.

    The programme empowers partners to act as extensions of Matomo’s core teams:

    • As Customer Success allies, delivering personalised training, support, and technical services in local languages and time zones.
    • As Sales ambassadors, raising awareness of ethical analytics in both public and private sectors, where trust, compliance, and transparency are crucial.

    This decentralised, values-aligned approach ensures that every Matomo customer benefits from localised delivery with global consistency.

    A Programme Designed for Impactful Partnerships

    The Matomo Partner Programme is open to organisations who share a commitment to ethical, open-source analytics and can demonstrate:

    • Technical excellence in deploying, configuring, and supporting Matomo Analytics in diverse environments.
    • Deep market understanding, allowing them to tell the Matomo story in ways that resonate locally.
    • Commercial strength to position Matomo across key industries, particularly in sectors with complex compliance and data sovereignty demands.

    Partners who meet these standards will be recognised as ‘Official Matomo Partners’ — a symbol of excellence, credibility, and shared purpose. With this status, they gain access to:

    • Brand alignment and trust: Strengthen credibility with clients by promoting their connection to Matomo and its globally respected ethical stance.
    • Go-to-market support: Access to qualified leads, joint marketing, and tools to scale their business in a privacy-first market.
    • Strategic collaboration: Early insights into the product roadmap and direct engagement with Matomo’s core team.
    • Meaningful local impact: Help regional organisations reclaim control of their data and embrace ethical analytics with confidence.

    Ethical Analytics for Today’s World

    Matomo was founded in 2007 with the belief that people should have full control over their data. As the first open-source web analytics platform of its kind, Matomo continues to challenge the dominance of opaque, centralised tools by offering a transparent and flexible alternative that puts users first.

    In today’s landscape, marked by increased regulatory scrutiny, data protection concerns, and rapid advancements in AI, Matomo’s approach is more relevant than ever. Open-source technology provides the adaptability organisations need to respond to local expectations while reinforcing digital trust with users.

    Whether it’s a government department, healthcare provider, educational institution, or commercial business, Matomo partners are on the ground, ready to help organisations transition to analytics that are not only powerful but principled.
  • How to generate a fixed duration and fps for a video using FFmpeg C++ libraries? [closed]

    4 November 2024, by BlueSky Light Programmer

    I'm following the official mux example to write a C++ class that generates a video with a fixed duration (5s) and a fixed fps (60). For some reason, the duration of the output video is 3-4 seconds, although I call the function that writes frames 300 times and set the fps to 60.

    Can you take a look at the code below and spot what I'm doing wrong?
#include "ffmpeg.h"

#include <iostream>

static int writeFrame(AVFormatContext *fmt_ctx, AVCodecContext *c,
                      AVStream *st, AVFrame *frame, AVPacket *pkt);

static void addStream(OutputStream *ost, AVFormatContext *formatContext,
                      const AVCodec **codec, enum AVCodecID codec_id,
                      int width, int height, int fps);

static AVFrame *allocFrame(enum AVPixelFormat pix_fmt, int width, int height);

static void openVideo(AVFormatContext *formatContext, const AVCodec *codec,
                      OutputStream *ost, AVDictionary *opt_arg);

static AVFrame *getVideoFrame(OutputStream *ost,
                              const std::vector<GLubyte>& pixels,
                              int duration);

static int writeVideoFrame(AVFormatContext *formatContext,
                           OutputStream *ost,
                           const std::vector<GLubyte>& pixels,
                           int duration);

static void closeStream(AVFormatContext *formatContext, OutputStream *ost);

static void fillRGBImage(AVFrame *frame, int width, int height,
                         const std::vector<GLubyte>& pixels);

#ifdef av_err2str
#undef av_err2str
#include <string>
av_always_inline std::string av_err2string(int errnum) {
  char str[AV_ERROR_MAX_STRING_SIZE];
  return av_make_error_string(str, AV_ERROR_MAX_STRING_SIZE, errnum);
}
#define av_err2str(err) av_err2string(err).c_str()
#endif  // av_err2str

FFmpeg::FFmpeg(int width, int height, int fps, const char *fileName)
: videoStream{ 0 }
, formatContext{ nullptr } {
  const AVOutputFormat *outputFormat;
  const AVCodec *videoCodec{ nullptr };
  AVDictionary *opt{ nullptr };
  int ret{ 0 };

  av_dict_set(&opt, "crf", "17", 0);

  /* Allocate the output media context. */
  avformat_alloc_output_context2(&this->formatContext, nullptr, nullptr, fileName);
  if (!this->formatContext) {
    std::cout << "Could not deduce output format from file extension: using MPEG." << std::endl;
    avformat_alloc_output_context2(&this->formatContext, nullptr, "mpeg", fileName);

    if (!formatContext)
      exit(-14);
  }

  outputFormat = this->formatContext->oformat;

  /* Add the video stream using the default format codecs
   * and initialize the codecs. */
  if (outputFormat->video_codec == AV_CODEC_ID_NONE) {
    std::cout << "The output format doesn't have a default codec video." << std::endl;
    exit(-15);
  }

  addStream(
    &this->videoStream,
    this->formatContext,
    &videoCodec,
    outputFormat->video_codec,
    width,
    height,
    fps
  );
  openVideo(this->formatContext, videoCodec, &this->videoStream, opt);
  av_dump_format(this->formatContext, 0, fileName, 1);

  /* open the output file, if needed */
  if (!(outputFormat->flags & AVFMT_NOFILE)) {
    ret = avio_open(&this->formatContext->pb, fileName, AVIO_FLAG_WRITE);
    if (ret < 0) {
      std::cout << "Could not open '" << fileName << "': " << std::string{ av_err2str(ret) } << std::endl;
      exit(-16);
    }
  }

  /* Write the stream header, if any. */
  ret = avformat_write_header(this->formatContext, &opt);
  if (ret < 0) {
    std::cout << "Error occurred when opening output file: " << av_err2str(ret) << std::endl;
    exit(-17);
  }

  av_dict_free(&opt);
}

FFmpeg::~FFmpeg() {
  if (this->formatContext) {
    /* Close codec. */
    closeStream(this->formatContext, &this->videoStream);

    if (!(this->formatContext->oformat->flags & AVFMT_NOFILE)) {
      /* Close the output file. */
      avio_closep(&this->formatContext->pb);
    }

    /* free the stream */
    avformat_free_context(this->formatContext);
  }
}

void FFmpeg::Record(
  const std::vector<GLubyte>& pixels,
  unsigned frameIndex,
  int duration,
  bool isLastIndex
) {
  static bool encodeVideo{ true };
  if (encodeVideo)
    encodeVideo = !writeVideoFrame(this->formatContext,
                                   &this->videoStream,
                                   pixels,
                                   duration);

  if (isLastIndex) {
    av_write_trailer(this->formatContext);
    encodeVideo = false;
  }
}

int writeFrame(AVFormatContext *fmt_ctx, AVCodecContext *c,
               AVStream *st, AVFrame *frame, AVPacket *pkt) {
  int ret;

  // send the frame to the encoder
  ret = avcodec_send_frame(c, frame);
  if (ret < 0) {
    std::cout << "Error sending a frame to the encoder: " << av_err2str(ret) << std::endl;
    exit(-2);
  }

  while (ret >= 0) {
    ret = avcodec_receive_packet(c, pkt);
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
      break;
    else if (ret < 0) {
      std::cout << "Error encoding a frame: " << av_err2str(ret) << std::endl;
      exit(-3);
    }

    /* rescale output packet timestamp values from codec to stream timebase */
    av_packet_rescale_ts(pkt, c->time_base, st->time_base);
    pkt->stream_index = st->index;

    /* Write the compressed frame to the media file. */
    ret = av_interleaved_write_frame(fmt_ctx, pkt);
    /* pkt is now blank (av_interleaved_write_frame() takes ownership of
     * its contents and resets pkt), so that no unreferencing is necessary.
     * This would be different if one used av_write_frame(). */
    if (ret < 0) {
      std::cout << "Error while writing output packet: " << av_err2str(ret) << std::endl;
      exit(-4);
    }
  }

  return ret == AVERROR_EOF ? 1 : 0;
}

void addStream(OutputStream *ost, AVFormatContext *formatContext,
               const AVCodec **codec, enum AVCodecID codec_id,
               int width, int height, int fps) {
  AVCodecContext *c;
  int i;

  /* find the encoder */
  *codec = avcodec_find_encoder(codec_id);
  if (!(*codec)) {
    std::cout << "Could not find encoder for " << avcodec_get_name(codec_id) << "." << std::endl;
    exit(-5);
  }

  ost->tmpPkt = av_packet_alloc();
  if (!ost->tmpPkt) {
    std::cout << "Could not allocate AVPacket." << std::endl;
    exit(-6);
  }

  ost->st = avformat_new_stream(formatContext, nullptr);
  if (!ost->st) {
    std::cout << "Could not allocate stream." << std::endl;
    exit(-7);
  }

  ost->st->id = formatContext->nb_streams-1;
  c = avcodec_alloc_context3(*codec);
  if (!c) {
    std::cout << "Could not alloc an encoding context." << std::endl;
    exit(-8);
  }
  ost->enc = c;

  switch ((*codec)->type) {
  case AVMEDIA_TYPE_VIDEO:
    c->codec_id = codec_id;
    c->bit_rate = 6000000;
    /* Resolution must be a multiple of two. */
    c->width    = width;
    c->height   = height;
    /* timebase: This is the fundamental unit of time (in seconds) in terms
     * of which frame timestamps are represented. For fixed-fps content,
     * timebase should be 1/framerate and timestamp increments should be
     * identical to 1. */
    ost->st->time_base = { 1, fps };
    c->time_base       = ost->st->time_base;
    c->framerate       = { fps, 1 };

    c->gop_size      = 0; /* emit one intra frame every twelve frames at most */
    c->pix_fmt       = AV_PIX_FMT_YUV420P;
    if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
      /* just for testing, we also add B-frames */
      c->max_b_frames = 2;
    }
    if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
      /* Needed to avoid using macroblocks in which some coeffs overflow.
       * This does not happen with normal video, it just happens here as
       * the motion of the chroma plane does not match the luma plane. */
      c->mb_decision = 2;
    }
    break;

  default:
    break;
  }

  /* Some formats want stream headers to be separate. */
  if (formatContext->oformat->flags & AVFMT_GLOBALHEADER)
    c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}

AVFrame *allocFrame(enum AVPixelFormat pix_fmt, int width, int height) {
  AVFrame *frame{ av_frame_alloc() };
  int ret;

  if (!frame)
    return nullptr;

  frame->format = pix_fmt;
  frame->width  = width;
  frame->height = height;

  /* allocate the buffers for the frame data */
  ret = av_frame_get_buffer(frame, 0);
  if (ret < 0) {
    std::cout << "Could not allocate frame data." << std::endl;
    exit(-8);
  }

  return frame;
}

void openVideo(AVFormatContext *formatContext, const AVCodec *codec,
               OutputStream *ost, AVDictionary *opt_arg) {
  int ret;
  AVCodecContext *c{ ost->enc };
  AVDictionary *opt{ nullptr };

  av_dict_copy(&opt, opt_arg, 0);

  /* open the codec */
  ret = avcodec_open2(c, codec, &opt);
  av_dict_free(&opt);
  if (ret < 0) {
    std::cout << "Could not open video codec: " << av_err2str(ret) << std::endl;
    exit(-9);
  }

  /* Allocate and init a re-usable frame. */
  ost->frame = allocFrame(c->pix_fmt, c->width, c->height);
  if (!ost->frame) {
    std::cout << "Could not allocate video frame." << std::endl;
    exit(-10);
  }

  /* If the output format is not RGB24, then a temporary RGB24
   * picture is needed too. It is then converted to the required
   * output format. */
  ost->tmpFrame = nullptr;
  if (c->pix_fmt != AV_PIX_FMT_RGB24) {
    ost->tmpFrame = allocFrame(AV_PIX_FMT_RGB24, c->width, c->height);
    if (!ost->tmpFrame) {
      std::cout << "Could not allocate temporary video frame." << std::endl;
      exit(-11);
    }
  }

  /* Copy the stream parameters to the muxer. */
  ret = avcodec_parameters_from_context(ost->st->codecpar, c);
  if (ret < 0) {
    std::cout << "Could not copy the stream parameters." << std::endl;
    exit(-12);
  }
}

AVFrame *getVideoFrame(OutputStream *ost,
                       const std::vector<GLubyte>& pixels,
                       int duration) {
  AVCodecContext *c{ ost->enc };

  /* check if we want to generate more frames */
  if (av_compare_ts(ost->nextPts, c->time_base,
                    duration, { 1, 1 }) > 0) {
    return nullptr;
  }

  /* when we pass a frame to the encoder, it may keep a reference to it
   * internally; make sure we do not overwrite it here */
  if (av_frame_make_writable(ost->frame) < 0) {
    std::cout << "It wasn't possible to make frame writable." << std::endl;
    exit(-12);
  }

  if (c->pix_fmt != AV_PIX_FMT_RGB24) {
      /* as we only generate a YUV420P picture, we must convert it
       * to the codec pixel format if needed */
      if (!ost->swsContext) {
        ost->swsContext = sws_getContext(c->width, c->height,
                                         AV_PIX_FMT_RGB24,
                                         c->width, c->height,
                                         c->pix_fmt,
                                         SWS_BICUBIC, nullptr, nullptr, nullptr);
        if (!ost->swsContext) {
          std::cout << "Could not initialize the conversion context." << std::endl;
          exit(-13);
        }
      }

      fillRGBImage(ost->tmpFrame, c->width, c->height, pixels);
      sws_scale(ost->swsContext, (const uint8_t * const *) ost->tmpFrame->data,
                ost->tmpFrame->linesize, 0, c->height, ost->frame->data,
                ost->frame->linesize);
  } else
    fillRGBImage(ost->frame, c->width, c->height, pixels);

  ost->frame->pts = ost->nextPts++;

  return ost->frame;
}

int writeVideoFrame(AVFormatContext *formatContext,
                    OutputStream *ost,
                    const std::vector<GLubyte>& pixels,
                    int duration) {
  return writeFrame(formatContext,
                    ost->enc,
                    ost->st,
                    getVideoFrame(ost, pixels, duration),
                    ost->tmpPkt);
}

void closeStream(AVFormatContext *formatContext, OutputStream *ost) {
  avcodec_free_context(&ost->enc);
  av_frame_free(&ost->frame);
  av_frame_free(&ost->tmpFrame);
  av_packet_free(&ost->tmpPkt);
  sws_freeContext(ost->swsContext);
}

static void fillRGBImage(AVFrame *frame, int width, int height,
                         const std::vector<GLubyte>& pixels) {
  // Copy pixel data into the frame
  int inputLineSize{ 3 * width };  // 3 bytes per pixel for RGB
  for (int y{ 0 }; y < height; ++y) {
    memcpy(frame->data[0] + y * frame->linesize[0],
           pixels.data() + y * inputLineSize,
           inputLineSize);
  }
}
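
    For reference, below is a minimal, standalone sketch of the fixed-fps timestamp arithmetic the mux example relies on, using only libavutil; the file name, build command and variable names are illustrative assumptions, not part of the class above. With a time base of 1/fps, a frame whose pts is i starts at i/fps seconds, and the av_compare_ts() check stops frame generation once the next pts, expressed in seconds, passes the requested duration.

// Minimal sketch (not the class above) of fixed-fps timestamp bookkeeping.
// Hypothetical build line, assuming FFmpeg development packages are installed:
//   g++ pts_sketch.cpp $(pkg-config --cflags --libs libavutil) -o pts_sketch
extern "C" {
#include <libavutil/mathematics.h>  // av_compare_ts(), AVRational
}

#include <cstdint>
#include <iostream>

int main() {
  const int fps{ 60 };                  // illustrative values, not taken from the question
  const int duration{ 5 };              // target duration in seconds
  const AVRational timeBase{ 1, fps };  // same shape as ost->st->time_base = { 1, fps }
  const AVRational second{ 1, 1 };

  std::int64_t nextPts{ 0 };
  int frames{ 0 };

  // Mirror of the stop condition in getVideoFrame(): keep producing frames
  // while nextPts, expressed in seconds, has not passed `duration`.
  while (av_compare_ts(nextPts, timeBase, duration, second) <= 0) {
    ++nextPts;  // each frame advances pts by exactly one time-base unit (1/fps s)
    ++frames;
  }

  // Prints "301 frames -> 5.01667 s": pts values 0..300 all pass the check,
  // so the generated timestamps alone cover the full 5 seconds at 60 fps.
  std::cout << frames << " frames -> "
            << static_cast<double>(frames) / fps << " s\n";
  return 0;
}

    Comparing that expected pts progression with the timestamps reported for the generated file (for example via ffprobe -show_frames) is one way to see whether the shortfall comes from the pts values themselves or from elsewhere in the muxing path.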