
Media (9)

Keyword: Tags/soundtrack

Other articles (61)

  • The plugin: Mutualisation management

    2 March 2010

    The Mutualisation management plugin makes it possible to manage the various MediaSPIP channels from a master site. Its purpose is to provide a pure SPIP solution to replace the older one.
    Basic installation
    Install the SPIP files on the server.
    Then add the "mutualisation" plugin at the root of the site, as described here.
    Customise the central mes_options.php file as you wish. As an example, here is the one from the mediaspip.net platform:
    <?php (...)

  • Farm-mode installation

    4 February 2011

    Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which requires no real specific knowledge since SPIP's usual private area is no longer used.
    First of all, you must have installed the same files as the installation (...)

  • Customisable form

    21 June 2013

    This page presents the fields available in the media publication form and lists the fields that can be added.
    Media creation form
    For a media-type document, the default fields are: Text; Enable/disable the forum (the invitation to comment can be disabled for each article); Licence; Add/remove authors; Tags.
    This form can be modified under:
    Administration > Configuration des masques de formulaire. (...)

On other sites (7741)

  • FFmpeg api : combine Camera Stream and Screen Capture or Video File stream to one stream (C/C++)

    31 December 2016, by lostin2010

    I have a big question that has already cost me two full days, without success.

    I want to combine a camera stream with another stream (.flv, .mpg) into one stream, just like the picture below: the camera is one part of the live output, and the background is the other stream.


    My camera device is

    [dshow @ 000373e0]  "TTQ HD Camera"
    [dshow @ 000373e0]     Alternative name "@device_pnp_\\?\usb#vid_114d&pid_8455&mi_00#6&1e9bcf33&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global"

    I decode my camera stream, whose pixel format is YUYV422, and decode the FLV file, whose format is YUV420P.
    I build a separate avfilter input from each decoder: the camera is in0 and the flv file is in1, using this filter_spec:

    color=c=black@1:s=1920x1080[x0];[in0]null[ine0];[ine0]scale=w=960:h=540[inn0];[x0][inn0]overlay=1920*0/2:1080*0/2[x1];[in1]null[ine1];[ine1]scale=w=1160:h=740[inn1];[x1][inn1]overlay=1920*1/2:1080*0/2[x2];[x2]null[out]
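
    (Aside: the in0/in1/out labels of a spec like this have to be bound to previously created buffer sources and a buffer sink through AVFilterInOut lists before parsing. A minimal sketch of that wiring; it assumes src0, src1 and sink were already created on the same graph with avfilter_graph_create_filter(), with buffer-source arguments matching each decoder's output:)

    extern "C" {
    #include <libavfilter/avfilter.h>
    #include <libavutil/mem.h>
    }

    // Sketch only: bind [in0]/[in1]/[out] of the spec above to our filters.
    static int wire_graph(AVFilterGraph *graph, AVFilterContext *src0,
                          AVFilterContext *src1, AVFilterContext *sink,
                          const char *spec)
    {
        // "out0"/"out1" list the open output pads of our buffer sources; the
        // parser matches them by name against the [in0]/[in1] labels in the spec.
        AVFilterInOut *out0 = avfilter_inout_alloc();
        AVFilterInOut *out1 = avfilter_inout_alloc();
        AVFilterInOut *in   = avfilter_inout_alloc();

        out0->name = av_strdup("in0"); out0->filter_ctx = src0; out0->pad_idx = 0; out0->next = out1;
        out1->name = av_strdup("in1"); out1->filter_ctx = src1; out1->pad_idx = 0; out1->next = NULL;
        // The sink's open input pad is matched against the [out] label.
        in->name = av_strdup("out");   in->filter_ctx = sink;   in->pad_idx = 0;  in->next = NULL;

        int ret = avfilter_graph_parse_ptr(graph, spec, &in, &out0, NULL);
        if (ret >= 0)
            ret = avfilter_graph_config(graph, NULL);
        avfilter_inout_free(&in);     // frees the whole chained list
        avfilter_inout_free(&out0);
        return ret;
    }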

    I build a filter_graph, then read packets from each input separately and add the decoded frames to the filter:

    for (i = 0; i < video_num; i++) // i==0: camera packets, i==1: flv file packets
    {
       while ((read_frame_done = av_read_frame(ifmt_ctx[i], &packet)) >= 0)
       {
          ret = av_buffersrc_add_frame(filter_ctx[stream_index].buffersrc_ctx[i], frame[i]);
       }
    }
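
    (One thing that stands out in this loop, for what it's worth: it drains input 0 completely before touching input 1, and overlay only emits a frame once both of its pads have data; with a live camera, av_read_frame() may effectively never return < 0. A sketch of an interleaved variant; decode_to_frame() is a hypothetical helper wrapping avcodec_send_packet()/avcodec_receive_frame(), and dec_ctx[] the corresponding decoder contexts:)

    bool input_eof[2] = { false, false };
    while (!input_eof[0] || !input_eof[1]) {
        for (int i = 0; i < 2; i++) {                 // i==0 camera, i==1 flv
            if (input_eof[i])
                continue;
            if (av_read_frame(ifmt_ctx[i], &packet) < 0) {
                input_eof[i] = true;
                // Push EOF into this buffer source so the graph can flush it.
                av_buffersrc_add_frame(filter_ctx[stream_index].buffersrc_ctx[i], NULL);
                continue;
            }
            if (decode_to_frame(dec_ctx[i], &packet, frame[i]) == 0)
                av_buffersrc_add_frame(filter_ctx[stream_index].buffersrc_ctx[i], frame[i]);
            av_packet_unref(&packet);
        }
        // ...pull whatever the sink has ready here (see the loop below)...
    }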

    Then I pull the filtered frames out into picref:

    while (1) {
       ret = av_buffersink_get_frame_flags(filter_ctx[stream_index].buffersink_ctx, picref, 0);
    }
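
    (That sink loop usually needs explicit handling of AVERROR(EAGAIN), i.e. "feed me more input", and AVERROR_EOF, otherwise it spins or never notices the flush; a sketch:)

    while (1) {
        int ret = av_buffersink_get_frame_flags(filter_ctx[stream_index].buffersink_ctx, picref, 0);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            break;                  // need more input frames, or fully drained
        if (ret < 0)
            break;                  // real error: log/handle it
        // ...encode picref or hand it to SDL here...
        av_frame_unref(picref);     // release the reference the sink returned
    }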

    When I encode picref or display it with SDL, only the flv stream shows up; the camera stream is missing, and I don't know why.
    However, if I swap the camera stream for another flv file, i.e. two flv files as source streams, the result is correct, like the demo picture above. This confuses me a lot.
    Who can help me? I would be really grateful.

  • Using libavformat to mux H.264 frames into RTP

    22 November 2016, by DanielB6

    I have an encoder that produces a series of H.264 I-frames and P-frames. I’m trying to use libavformat to mux and transmit these frames over RTP, but I’m stuck.

    My program sends RTP data, but the RTP timestamp increments by 1 for each successive frame instead of by 90000/fps (e.g. 3000 per frame at 30 fps). It also doesn't look like it's doing the proper framing for H.264 NAL units, since I can't decode the stream as H.264 in Wireshark.

    I suspect that I'm not setting up the codec information properly, but it appears in many places in the output format context, so it's unclear what exactly needs to be set up. The examples all seem to copy the codec context info from an encoder, which isn't my use case.

    This is what I’m trying:

    int main() {
       AVFormatContext *context = avformat_alloc_context();

       if (!context) {
           printf("avformat_alloc_context failed\n");
           return -1;
       }

       AVOutputFormat *format = av_guess_format("rtp", NULL, NULL);

       if (!format) {
           printf("av_guess_format failed\n");
           return -1;
       }

       context->oformat = format;

       snprintf(context->filename, sizeof(context->filename), "rtp://%s:%d", "192.168.2.16", 10000);

       if (avio_open(&(context->pb), context->filename, AVIO_FLAG_READ_WRITE) < 0) {
           printf("avio_open failed\n");
           return -1;
       }

       // context and stream are also used by write_packet() below, so they are
       // assumed to be file-scope variables in the full program.
       stream = avformat_new_stream(context, NULL);

       if (!stream) {
           printf("avformat_new_stream failed\n");
           return -1;
       }

       stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
       stream->codecpar->codec_id = AV_CODEC_ID_H264;
       stream->codecpar->width = 1920;
       stream->codecpar->height = 1080;

       avformat_write_header(context, NULL);

       ...
       write packets
       ...
    }

    Example write packet:

    int write_packet(uint8_t *data, int size) {
       AVPacket p;
       av_init_packet(&p);
       p.data = data;   // was "buffer" in the original, but the parameter is "data"
       p.size = size;
       p.stream_index = stream->index;

       return av_interleaved_write_frame(context, &p);
    }
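
    (For what it's worth, a timestamp that advances by 1 per frame is consistent with packets carrying no pts at all: av_init_packet() leaves pts at AV_NOPTS_VALUE. A sketch of the same function with timestamps filled in; frame_index and fps are hypothetical names for a running frame counter and an assumed fixed frame rate:)

    int write_packet(uint8_t *data, int size) {
       AVPacket p;
       av_init_packet(&p);
       p.data = data;
       p.size = size;
       p.stream_index = stream->index;

       // Convert the frame count into the stream's time base. After
       // avformat_write_header() the RTP muxer uses the 90 kHz video clock,
       // so at 30 fps the pts should advance by 3000 per frame.
       AVRational src_tb; src_tb.num = 1; src_tb.den = fps;
       p.pts = av_rescale_q(frame_index++, src_tb, stream->time_base);
       p.dts = p.pts;               // assuming no B-frames from the encoder

       return av_interleaved_write_frame(context, &p);
    }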

    I’ve even gone so far as to build in libx264, find the encoder, and copy the codec context info from there into the stream codecpar, with the same result. My goal is to build without libx264 and any other libraries that aren’t required, but it isn’t clear whether libx264 is required for defaults such as the time base.

    How can the libavformat RTP muxer be initialized to properly send H.264 frames over RTCP+RTP?

  • QSharedMemory in Real-Time process

    21 November 2016, by Seungsoo Kim

    I’m trying to use the QSharedMemory class to share video data between two processes.

    I tried the following method, but it has a problem with simultaneous access by the two processes.

    Both processes crash when they access the same memory name (key), "SharedMemory", one after the other.

    I lock the memory while it is in use, but that doesn’t work well either.

    How can I avoid this crash?

    1. Writing to the shared memory; this function is called from a callback.

      QBuffer buffer;
      buffer.open(QBuffer::ReadWrite);
      QDataStream out(&buffer);

      QByteArray outArray = QByteArray::fromRawData(reinterpret_cast<const char*>(data), strlen(reinterpret_cast<const char*>(data)));
      out << width << height << step << cameraId << strlen(reinterpret_cast<const char*>(data));
      out.writeRawData(outArray.data(), outArray.size());

      int size = buffer.size();

      sharedMemory.setKey("SharedMemory");

      if (!sharedMemory.isAttached()) {
         printf("Cannot attach to shared memory to update!\n");
      }
      if (!sharedMemory.create(size))
      {
         printf("failed to allocate memory\n");
      }
      sharedMemory.lock();
      char *to = (char*)sharedMemory.data();
      const char *from = buffer.data().data();
      memcpy(to, from, qMin(sharedMemory.size(), size));
      sharedMemory.unlock();
    2. Using the data in the shared memory; this function is called from a QThread every 100 ms.

      QSharedMemory sharedMemory("SharedMemory");
      sharedMemory.lock();
      if (!sharedMemory.attach()) {
         printf("failed to attach to memory\n");
         return;
      }

      QBuffer buffer;
      QDataStream in(&buffer);

      sharedMemory.create(1920 * 1080);
      buffer.setData((char*)sharedMemory.constData(), sharedMemory.size());
      buffer.open(QBuffer::ReadOnly);
      sharedMemory.unlock();
      sharedMemory.detach();

      int r_width = 0;    
      int r_height = 0;
      int r_cameraId = 0;
      int r_step = 0;
      int r_strlen = 0;
      in >> r_width >> r_height >> r_step >> r_cameraId >> r_strlen;

      char* receive = new char[r_strlen];
      in.readRawData(receive, r_strlen);
      //unsigned char* r_receive = new unsigned char[r_strlen];
      //r_receive = (unsigned char*)receive;

      QPixmap backBuffer = QPixmap::fromImage(QImage((unsigned char*)receive, r_width, r_height, r_step, QImage::Format::Format_RGB888));
      ui.label->setPixmap(backBuffer.scaled(ui.label->size(), Qt::KeepAspectRatio));
      ui.label->show();

    Please share your ideas! Thank you!
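
    (A likely culprit, for what it's worth: the writer calls create() for every frame while the reader attaches, creates and detaches on every poll, so the segment is being created and torn down under both processes at once. The usual pattern is: create the segment once in the writer at a fixed maximum size, attach once in the reader, and touch the memory only between lock()/unlock(). A minimal sketch; MAX_FRAME_BYTES and the helper names are assumptions:)

      #include <QSharedMemory>
      #include <cstring>

      static const int MAX_FRAME_BYTES = 1920 * 1080 * 3 + 64; // assumed worst case + header

      // Writer: create once, then only lock/copy/unlock per frame.
      bool initWriter(QSharedMemory &shm) {
         shm.setKey("SharedMemory");
         if (shm.create(MAX_FRAME_BYTES))
            return true;
         // A segment left over from a previous run can simply be reused.
         return shm.error() == QSharedMemory::AlreadyExists && shm.attach();
      }

      void writeFrame(QSharedMemory &shm, const char *bytes, int size) {
         shm.lock();
         std::memcpy(shm.data(), bytes, qMin(shm.size(), size));
         shm.unlock();
      }

      // Reader: attach once (read-only), then only lock/copy/unlock per poll.
      bool initReader(QSharedMemory &shm) {
         shm.setKey("SharedMemory");
         return shm.attach(QSharedMemory::ReadOnly);
      }

      QByteArray readFrame(QSharedMemory &shm) {
         shm.lock();
         QByteArray copy(static_cast<const char*>(shm.constData()), shm.size());
         shm.unlock();
         return copy;   // parse width/height/step/etc. out of this copy
      }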