Media (0)

No media matching your criteria is available on the site.

Other articles (38)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player used has been created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • Contribute to translation

    13 April 2011

    You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, which allows it to spread to new linguistic communities.
    To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
    MediaSPIP is currently available in French and English (...)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals, rapid deployment of multiple unique sites, and the creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (4084)

  • Video encoding green screen

    22 August 2016, by Mohammad Abu Musa

    I am building a screen recorder. The input stream is formatted as PP_VIDEOFRAME_FORMAT_I420 and the output as AV_PIX_FMT_YUV420P; below is the code I use to do the conversion.

    const uint8_t* data = static_cast<const uint8_t*>(frame.GetDataBuffer());
    pp::Size size;
    frame.GetSize(&size);
    uint32_t buffersize = frame.GetDataBufferSize();

    if (is_recording_) {
        vpx_codec_iter_t iter = NULL;
        const vpx_codec_cx_pkt_t *pkt;
        // copy the pixels into our "raw input" container.
        int bytes_filled = avpicture_fill(&pic_raw, data, AV_PIX_FMT_YUV420P, out_width, out_height);
        if (!bytes_filled) {
            Logger::Log("Cannot fill the raw input buffer");
            return;
        }

        if (vpx_codec_encode(&codec, &raw, frame_cnt, 1, flags, VPX_DL_REALTIME))
            die_codec(&codec, "Failed to encode frame");

        while ((pkt = vpx_codec_get_cx_data(&codec, &iter))) {
            switch (pkt->kind) {
                case VPX_CODEC_CX_FRAME_PKT:
                    glb_app_thread.message_loop().PostWork(callback_factory_.NewCallback(&EncoderInstance::write_ivf_frame_header, pkt));
                    glb_app_thread.message_loop().PostWork(callback_factory_.NewCallback(&EncoderInstance::WriteFile, pkt));
                    break;
                default:
                    break;
            }
        }

        frame_cnt++;
    }

    I have three questions:

    1. Is the conversion done correctly? Do I have to investigate image formats more? Are the data channels mapped correctly?

    2. What causes the green screen to show? What does it mean?

    3. Is this a threading issue? That is, is the data passed and converted correctly, but the threads are racing?
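
    One possible explanation for the solid green output is that the posted code fills the FFmpeg-side pic_raw with avpicture_fill but then encodes the libvpx image raw, which may never receive the pixel data; an all-zero YUV buffer decodes as green. As a hedged sketch (not the poster's code), an explicit copy of the packed I420 buffer into the encoder image could look like this, assuming raw was allocated with vpx_img_alloc(&raw, VPX_IMG_FMT_I420, out_width, out_height, 1) and that the Pepper buffer is tightly packed Y, then U, then V; the helper name CopyI420ToVpxImage is hypothetical:

    #include <cstdint>
    #include <cstring>
    #include "vpx/vpx_image.h"

    // Copy a packed I420 buffer (Y plane, then U, then V) into a vpx_image_t.
    // Rows are copied one at a time because the vpx_image_t strides may be
    // padded beyond the visible width.
    static void CopyI420ToVpxImage(vpx_image_t *raw, const uint8_t *data,
                                   int out_width, int out_height) {
        const uint8_t *src_y = data;
        const uint8_t *src_u = src_y + out_width * out_height;
        const uint8_t *src_v = src_u + (out_width / 2) * (out_height / 2);

        for (int row = 0; row < out_height; ++row) {
            memcpy(raw->planes[VPX_PLANE_Y] + row * raw->stride[VPX_PLANE_Y],
                   src_y + row * out_width, out_width);
        }
        for (int row = 0; row < out_height / 2; ++row) {
            memcpy(raw->planes[VPX_PLANE_U] + row * raw->stride[VPX_PLANE_U],
                   src_u + row * (out_width / 2), out_width / 2);
            memcpy(raw->planes[VPX_PLANE_V] + row * raw->stride[VPX_PLANE_V],
                   src_v + row * (out_width / 2), out_width / 2);
        }
    }

    Calling such a helper on data before vpx_codec_encode(&codec, &raw, ...) would make the encoder read the captured pixels rather than an untouched buffer.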

  • Passing pointer to thread via message_loop

    21 August 2016, by Mohammad Abu Musa

    I am using FFmpeg to encode a video. I get a green screen for the packets, which I think means I am getting an empty buffer. I suspect I am passing the parameters incorrectly, and I want help getting them passed correctly.

    vpx_codec_iter_t iter = NULL;
    const vpx_codec_cx_pkt_t *pkt;
    // copy the pixels into our "raw input" container.
    int bytes_filled = avpicture_fill(&pic_raw, data, AV_PIX_FMT_YUV420P, out_width, out_height);
    if (!bytes_filled) {
        Logger::Log("Cannot fill the raw input buffer");
        return;
    }

    if (vpx_codec_encode(&codec, &raw, frame_cnt, 1, flags, VPX_DL_REALTIME))
        die_codec(&codec, "Failed to encode frame");

    while ((pkt = vpx_codec_get_cx_data(&codec, &iter))) {
        switch (pkt->kind) {
            case VPX_CODEC_CX_FRAME_PKT:
                glb_app_thread.message_loop().PostWork(callback_factory_.NewCallback(&EncoderInstance::write_ivf_frame_header, pkt));
                glb_app_thread.message_loop().PostWork(callback_factory_.NewCallback(&EncoderInstance::WriteFile, pkt));
                break;
            default:
                break;
        }
    }

    For the event handler:

    void EncoderInstance::WriteFile(int32_t result, const vpx_codec_cx_pkt_t *pkt) {
        lock_file_.Acquire();
        (void) fwrite(pkt->data.frame.buf, 1, pkt->data.frame.sz, outfile);
        Logger::Log("Packet written");
        lock_file_.Release();
    }

    My question is: am I passing the argument pkt correctly, or should I put these packets into a queue and then let the WriteFile function work on that queue?
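
    As a hedged sketch of the copy/queue approach (not the library's prescribed pattern), the packet could be snapshotted into an owned buffer on the encoder thread, since the pointer returned by vpx_codec_get_cx_data() is generally only valid until the encoder is used again; the names EncodedPacket, CopyPacket and WriteAndFreePacket are hypothetical:

    #include <cstdint>
    #include <cstdio>
    #include <vector>
    #include "vpx/vpx_encoder.h"

    // An owned copy of one encoder output packet, safe to hand to another thread.
    struct EncodedPacket {
        std::vector<uint8_t> payload;
        vpx_codec_pts_t pts;
        vpx_codec_frame_flags_t flags;
    };

    // Encoder thread: snapshot the packet while pkt is still valid.
    static EncodedPacket *CopyPacket(const vpx_codec_cx_pkt_t *pkt) {
        EncodedPacket *copy = new EncodedPacket();
        const uint8_t *buf = static_cast<const uint8_t *>(pkt->data.frame.buf);
        copy->payload.assign(buf, buf + pkt->data.frame.sz);
        copy->pts = pkt->data.frame.pts;
        copy->flags = pkt->data.frame.flags;
        return copy;
    }

    // File thread: write the owned copy and release it.
    static void WriteAndFreePacket(EncodedPacket *packet, FILE *outfile) {
        fwrite(packet->payload.data(), 1, packet->payload.size(), outfile);
        delete packet;
    }

    The encoder loop would then post the result of CopyPacket(pkt) through the message loop (for example as the bound argument of callback_factory_.NewCallback) instead of pkt itself, so the writer thread never touches memory the encoder may already have recycled.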

  • FFMPEG Green Screen from raw pictures

    21 August 2016, by Mohammad Abu Musa

    I am writing a screen recorder. I get raw data from Chrome and I want to convert it to WebM files. I managed to export videos with the correct durations, but the issue is that I get a green screen instead of the screen capture. I am not sure what could be wrong.

    Is the problem in the writing to file, or in the parameters passed to the fwrite function?

    void EncoderInstance::OnGetFrame(int32_t result, pp::VideoFrame frame) {
        if (result != PP_OK)
            return;

        const uint8_t* data = static_cast<const uint8_t*>(frame.GetDataBuffer());
        pp::Size size;
        frame.GetSize(&size);
        uint32_t buffersize = frame.GetDataBufferSize();

        if (is_recording_) {
            vpx_codec_iter_t iter = NULL;
            const vpx_codec_cx_pkt_t *pkt;
            // copy the pixels into our "raw input" container.
            int bytes_filled = avpicture_fill(&pic_raw, data, AV_PIX_FMT_YUV420P, out_width, out_height);
            if (!bytes_filled) {
                Logger::Log("Cannot fill the raw input buffer");
                return;
            }

            if (vpx_codec_encode(&codec, &raw, frame_cnt, 1, flags, VPX_DL_REALTIME))
                die_codec(&codec, "Failed to encode frame");

            while ((pkt = vpx_codec_get_cx_data(&codec, &iter))) {
                switch (pkt->kind) {
                    case VPX_CODEC_CX_FRAME_PKT:
                        glb_app_thread.message_loop().PostWork(callback_factory_.NewCallback(&EncoderInstance::write_ivf_frame_header, pkt));
                        glb_app_thread.message_loop().PostWork(callback_factory_.NewCallback(&EncoderInstance::WriteFile, pkt));
                        break;
                    default:
                        break;
                }
            }

            frame_cnt++;
        }

        video_track_.RecycleFrame(frame);
        if (need_config_) {
            ConfigureTrack();
            need_config_ = false;
        } else {
            video_track_.GetFrame(
                callback_factory_.NewCallbackWithOutput(
                    &EncoderInstance::OnGetFrame));
        }
    }
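
    For the file-writing half of the question, here is a minimal, hedged sketch of IVF frame framing (assuming the file already begins with the standard 32-byte IVF file header; WriteIvfFrame is a hypothetical name). Each frame is a 12-byte little-endian header (4-byte payload size, 8-byte pts) followed by the payload, and writing both in one call keeps the header and payload from drifting apart when two separate callbacks race on the file:

    #include <cstdint>
    #include <cstdio>
    #include "vpx/vpx_encoder.h"

    // Write one encoded frame in IVF framing: 12-byte little-endian header
    // (4-byte payload size, 8-byte pts) followed by the compressed payload.
    static void WriteIvfFrame(FILE *outfile, const vpx_codec_cx_pkt_t *pkt) {
        const uint32_t size = static_cast<uint32_t>(pkt->data.frame.sz);
        const uint64_t pts = static_cast<uint64_t>(pkt->data.frame.pts);
        uint8_t header[12];
        for (int i = 0; i < 4; ++i)
            header[i] = static_cast<uint8_t>((size >> (8 * i)) & 0xff);
        for (int i = 0; i < 8; ++i)
            header[4 + i] = static_cast<uint8_t>((pts >> (8 * i)) & 0xff);
        fwrite(header, 1, sizeof(header), outfile);
        fwrite(pkt->data.frame.buf, 1, pkt->data.frame.sz, outfile);
    }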