
Other articles (54)

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm involves running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of all the instances in the mutualisation on a regular basis. Combined with a system Cron on the central site of the mutualisation, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)
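The system Cron mentioned above can be as simple as a crontab entry on the central server that requests the site every minute and lets SPIP's internal scheduler take over. A sketch, where the domain and the `?action=cron` entry point are assumptions for illustration:

```shell
# Hypothetical crontab entry on the farm's central server (domain assumed).
# Every minute, silently request SPIP's cron entry point so the super cron
# task (gestion_mutu_super_cron) runs even without organic visits.
* * * * * curl -s -o /dev/null "https://central.example.org/spip.php?action=cron"
```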

  • The statuses of mutualisation instances

    13 March 2010, by

    For overall compatibility between the mutualisation management plugin and SPIP's original functions, instance statuses are the same as for any other object (articles, etc.); only their names in the interface differ slightly.
    The possible statuses are: prepa (requested), which corresponds to an instance requested by a user. If the site has already been created in the past, it is switched to disabled mode. publie (validated), which corresponds to an instance validated by a (...)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (7572)

  • hwcontext_vulkan : add support for allocating all planes in a single allocation

    7 December 2021, by Wenbin Chen
    hwcontext_vulkan : add support for allocating all planes in a single allocation
    

    VAAPI on Intel can import external frames, but the planes of the external
    frames must be in the same drm object. A new option "contiguous_planes"
    is added to the device. This flag tells the device to allocate the planes
    in one memory allocation. When the device is derived from vaapi this flag
    will be enabled. A new flag frame_flag is also added to
    AVVulkanFramesContext. Users can use this flag to force-enable or disable
    this behaviour. A new variable "offset" is added to AVVkFrame. It
    describes the offset from the memory currently bound to the VkImage.

    Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
    Further-modifications-by: Lynne <dev@lynne.ee>

    • [DH] libavutil/hwcontext_vulkan.c
    • [DH] libavutil/hwcontext_vulkan.h
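The layout the patch describes, all planes in one memory object with each plane addressed by an offset, can be sketched for a two-plane NV12 frame. The names below are illustrative only; the real code stores the offsets in AVVkFrame:

```cpp
#include <cassert>
#include <cstddef>

// Round v up to the next multiple of a (a power-of-two alignment in practice).
static size_t align_up(size_t v, size_t a) { return (v + a - 1) / a * a; }

// Illustrative layout computation for a two-plane NV12 image placed in a
// single contiguous allocation, each plane addressed by a byte offset, the
// idea behind the "contiguous_planes" option described in the patch above.
// Fills offsets[0..1] and returns the total allocation size.
static size_t nv12_plane_offsets(size_t w, size_t h, size_t alignment,
                                 size_t offsets[2])
{
    offsets[0] = 0;                        // luma plane: w x h bytes
    size_t total = align_up(w * h, alignment);
    offsets[1] = total;                    // chroma plane: w x h/2 bytes
    total += align_up(w * (h / 2), alignment);
    return total;
}
```

Importing such a frame into VAAPI then only needs the one drm object plus the per-plane offsets, which is exactly what the commit message says Intel's VAAPI import requires.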
  • mux / remux / box in memory h264 frames to mp4 via C / C++ (no ffmpeg cmdline)

    28 February 2019, by Vans S

    I have looked at various tutorials and I am still struggling to correctly frame raw h264 into an mp4 container. The problem is that the stream has no end (it is live) and lives in memory; many of the available muxer examples assume the streams are files located on disk.

    I have tried and looked at
    http://www.ffmpeg.org/doxygen/trunk/doc_2examples_2remuxing_8c-example.html
    https://ffmpeg.org/doxygen/trunk/muxing_8c-source.html
    and countless other examples.

    This is what I have now:

    AVFormatContext *outFmtCtx = NULL;
    AVFormatContext *inFmtCtx = NULL;
    std::vector<uint8_t> g_p;

    static int read_packet(void *opaque, uint8_t *pBuf, int nBuf) {
       printf("read_packet %d\n", g_p.size());
       memcpy(pBuf, &g_p[0], g_p.size());
       return g_p.size();
    }

    static int write_packet(void *opaque, uint8_t *buf, int buf_size) {
       printf("write_packet %d %lld\n", buf_size, timestamp_micro());
       return buf_size;
    }

    static int64_t seek_packet(void *opaque, int64_t offset, int whence) {
       printf("seek_packet\n");
       exit(0);
       return 0;
    }

    void create_mp4() {
       av_log_set_level(AV_LOG_DEBUG);

       //alloc memory buffer
       uint8_t *avioc_buffer = NULL;
       int avioc_buffer_size = 8 * 1024 * 1024;
       avioc_buffer = (uint8_t *)av_malloc(avioc_buffer_size);
       if (!avioc_buffer) {
           printf("failed make avio buffer\n");
           exit(1);
       }
       AVIOContext* pIOCtx = avio_alloc_context(avioc_buffer, avioc_buffer_size, 1,
           NULL/*outptr*/, &read_packet, &write_packet, &seek_packet);
       if (!pIOCtx) {
           printf("failed make avio context\n");
           exit(1);
       }

       inFmtCtx = avformat_alloc_context();
       inFmtCtx->pb = pIOCtx;
       inFmtCtx->iformat = av_find_input_format("h264");
       avformat_open_input(&inFmtCtx, "", inFmtCtx->iformat, NULL);

       AVOutputFormat* outFmt = av_guess_format("mp4", NULL, NULL);
       avformat_alloc_output_context2(&outFmtCtx, outFmt, NULL, NULL);

       //set the AIO buffer to the memory one
       outFmtCtx->pb = pIOCtx;
       //outFmtCtx->flags = AVFMT_FLAG_CUSTOM_IO;//pIOCtx;

       AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
       AVStream * outStrm = avformat_new_stream(outFmtCtx, codec);

       avcodec_get_context_defaults3(outStrm->codec, codec);
       outStrm->id = 0;
       outStrm->codec->coder_type = AVMEDIA_TYPE_VIDEO;
       outStrm->codec->codec_id = AV_CODEC_ID_H264;
       outStrm->codec->bit_rate = 8000000;
       outStrm->codec->width = 1280;
       outStrm->codec->height = 720;
       outStrm->codec->time_base.den = 60;
       outStrm->codec->time_base.num = 1;
       outStrm->codec->gop_size = 0xffffffff;
       outStrm->codec->pix_fmt = AV_PIX_FMT_NV12;
       outStrm->duration = 0;

       //Allow it to play immediately
       AVDictionary* options = nullptr;
       av_dict_set( &options, "movflags", "empty_moov+default_base_moof+frag_keyframe", 0 );

       avformat_write_header(outFmtCtx, &options);
       printf("mp4 muxer created\n");
    }

    //set the first raw h264 frame from capture
    g_p = raw_h264_frame;
    AVPacket pkt;
    //call av_read_frame (infinite loop here on read_packet)
    int wret = av_read_frame(inFmtCtx, &pkt);

    I get an infinite loop on read_packet after calling av_read_frame. I also tried to construct the packets myself by doing

    AVPacket pkt;
    av_init_packet(&pkt);
    if (nFrame == 0) {
       pkt.flags        |= AV_PKT_FLAG_KEY;
    }
    pkt.stream_index  = 0;
    pkt.data          = &raw_h264_frame[0];
    pkt.size          = raw_h264_frame.size();
    //pkt.dts = AV_NOPTS_VALUE;
    //pkt.pts = AV_NOPTS_VALUE;
    pkt.dts = nFrame;
    pkt.pts = nFrame;

    int ret = av_interleaved_write_frame(outFmtCtx, &pkt);
    if (ret < 0) {
       printf("error av_write_frame\n");
       exit(1);
    }

    But this does not work either. Any help, or guidance on where to look, would be greatly appreciated (perhaps drop libFFMPEG and look elsewhere, etc.).

    VLC errors look like:

    mp4 warning: out of bound child
    mp4 warning: out of bound child
    mp4 warning: no chunk defined
    mp4 warning: STTS table of 0 entries
    mp4 warning: cannot select track[Id 0x1]

    or

    mp4 warning: no chunk defined
    mp4 warning: STTS table of 0 entries
    mp4: Fragment sequence discontinuity detected 1 != 0
    avcodec warning: thread type 1: disabling hardware acceleration
    main warning: picture is too late to be displayed (missing 14 ms)
    main warning: picture is too late to be displayed (missing 14 ms)
    main warning: picture is too late to be displayed (missing 14 ms)
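A plausible cause of the infinite loop on read_packet described in this question: the callback always copies the same bytes and returns a positive count, so the h264 parser keeps asking for data that never changes and nothing ever signals end-of-data. An AVIO read callback is expected to consume its source and return AVERROR_EOF once the buffer is exhausted. A minimal sketch of a consuming callback (EOF_SENTINEL stands in for ffmpeg's AVERROR_EOF so the snippet builds without the ffmpeg headers; g_p and the signature mirror the question's code):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Stand-in for ffmpeg's AVERROR_EOF so this sketch compiles on its own.
static const int EOF_SENTINEL = -1;

static std::vector<uint8_t> g_p; // pending input bytes, as in the question
static size_t g_pos = 0;         // how much of g_p has been consumed so far

// AVIO-style read callback: copy at most nBuf bytes, advance the cursor,
// and signal end-of-data once the buffer is exhausted instead of handing
// the parser the same bytes forever.
static int read_packet(void *opaque, uint8_t *pBuf, int nBuf) {
    size_t remaining = g_p.size() - g_pos;
    if (remaining == 0)
        return EOF_SENTINEL;  // real code would return AVERROR_EOF
    size_t n = std::min(remaining, static_cast<size_t>(nBuf));
    std::memcpy(pBuf, g_p.data() + g_pos, n);
    g_pos += n;
    return static_cast<int>(n);
}
```

For a live stream the callback would typically block until more data has been appended to the buffer rather than returning EOF, since EOF permanently ends demuxing.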
  • ffmpeg avcodec_send_packet/avcodec_receive_frame memory leak

    23 January 2019, by G Hamlin

    I’m attempting to decode frames, but memory usage grows with every frame (more specifically, with every call to avcodec_send_packet) until finally the code crashes with a bad_alloc. Here’s the basic decode loop:

    int rfret = 0;
    while((rfret = av_read_frame(inctx.get(), &packet)) >= 0){
       if (packet.stream_index == vstrm_idx) {

           //std::cout << "Sending Packet" << std::endl;
           int ret = avcodec_send_packet(ctx.get(), &packet);
           if (ret < 0 || ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
               std::cout << "avcodec_send_packet: " << ret << std::endl;
               break;
           }

           while (ret >= 0) {
               //std::cout << "Receiving Frame" << std::endl;
               ret = avcodec_receive_frame(ctx.get(), fr);
               if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                   //std::cout << "avcodec_receive_frame: " << ret << std::endl;
                   av_frame_unref(fr);
                   // av_frame_free(&fr);
                   break;
               }

               std::cout << "frame: " << ctx->frame_number << std::endl;

               // eventually do something with the frame here...

               av_frame_unref(fr);
               // av_frame_free(&fr);
           }
       }
       else {
           //std::cout << "Not Video" << std::endl;
       }
       av_packet_unref(&packet);
    }

    Memory usage/leakage seems to scale with the resolution of the video I’m decoding. For example, for a 3840x2160 resolution video, the memory usage in Windows task manager consistently jumps up by about 8 MB (1 byte per pixel??) for each received frame. Do I need to do something besides call av_frame_unref to release the memory?

    (more) complete code below


    void AVFormatContextDeleter(AVFormatContext* ptr)
    {
       if (ptr) {
           avformat_close_input(&ptr);
       }
    }

    void AVCodecContextDeleter(AVCodecContext* ptr)
    {
       if (ptr) {
           avcodec_free_context(&ptr);
       }
    }

    typedef std::unique_ptr<AVFormatContext, void (*)(AVFormatContext*)> AVFormatContextPtr;
    typedef std::unique_ptr<AVCodecContext, void (*)(AVCodecContext*)> AVCodecContextPtr;

    AVCodecContextPtr createAvCodecContext(AVCodec *vcodec)
    {
       AVCodecContextPtr ctx(avcodec_alloc_context3(vcodec), AVCodecContextDeleter);
       return ctx;
    }

    AVFormatContextPtr createFormatContext(const std::string& filename)
    {
       AVFormatContext* inctxPtr = nullptr;
       int ret = avformat_open_input(&inctxPtr, filename.c_str(), nullptr, nullptr);
       //    int ret = avformat_open_input(&inctx, "D:/Videos/test.mp4", nullptr, nullptr);
       if (ret != 0) {
           inctxPtr = nullptr;
       }

       return AVFormatContextPtr(inctxPtr, AVFormatContextDeleter);
    }

    int testDecode()
    {
       // open input file context
       AVFormatContextPtr inctx = createFormatContext("D:/Videos/Matt Chapman Hi Greg.MOV");

       if (!inctx) {
           // std::cerr << "fail to avformat_open_input(\"" << infile << "\"): ret=" << ret;
           return 1;
       }

       // retrieve input stream information
       int ret = avformat_find_stream_info(inctx.get(), nullptr);
       if (ret < 0) {
           //std::cerr << "fail to avformat_find_stream_info: ret=" << ret;
           return 2;
       }

       // find primary video stream
       AVCodec* vcodec = nullptr;
       const int vstrm_idx = av_find_best_stream(inctx.get(), AVMEDIA_TYPE_VIDEO, -1, -1, &vcodec, 0);
       if (vstrm_idx < 0) {
           //std::cerr << "fail to av_find_best_stream: vstrm_idx=" << vstrm_idx;
           return 3;
       }

       AVCodecParameters* origin_par = inctx->streams[vstrm_idx]->codecpar;
       if (vcodec == nullptr) {  // is this even necessary?
           vcodec = avcodec_find_decoder(origin_par->codec_id);
           if (!vcodec) {
               // Can't find decoder
               return 4;
           }
       }

       AVCodecContextPtr ctx = createAvCodecContext(vcodec);
       if (!ctx) {
           return 5;
       }

       ret = avcodec_parameters_to_context(ctx.get(), origin_par);
       if (ret) {
           return 6;
       }

       ret = avcodec_open2(ctx.get(), vcodec, nullptr);
       if (ret < 0) {
           return 7;
       }

       //print input video stream information
       std::cout
               //<< "infile: " << infile << "\n"
               << "format: " << inctx->iformat->name << "\n"
               << "vcodec: " << vcodec->name << "\n"
               << "size:   " << origin_par->width << 'x' << origin_par->height << "\n"
               << "fps:    " << av_q2d(ctx->framerate) << " [fps]\n"
               << "length: " << av_rescale_q(inctx->duration, ctx->time_base, {1,1000}) / 1000. << " [sec]\n"
               << "pixfmt: " << av_get_pix_fmt_name(ctx->pix_fmt) << "\n"
               << "frame:  " << inctx->streams[vstrm_idx]->nb_frames << "\n"
               << std::flush;


       AVPacket packet;

       av_init_packet(&packet);
       packet.data = nullptr;
       packet.size = 0;

       AVFrame *fr = av_frame_alloc();
       if (!fr) {
           return 8;
       }

       int rfret = 0;
       while((rfret = av_read_frame(inctx.get(), &packet)) >= 0){
           if (packet.stream_index == vstrm_idx) {

               //std::cout << "Sending Packet" << std::endl;
               int ret = avcodec_send_packet(ctx.get(), &packet);
               if (ret < 0 || ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                   std::cout << "avcodec_send_packet: " << ret << std::endl;
                   break;
               }

               while (ret >= 0) {
                   //std::cout << "Receiving Frame" << std::endl;
                   ret = avcodec_receive_frame(ctx.get(), fr);
                   if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                       //std::cout << "avcodec_receive_frame: " << ret << std::endl;
                       av_frame_unref(fr);
                       // av_frame_free(&fr);
                       break;
                   }

                   std::cout << "frame: " << ctx->frame_number << std::endl;

                   // do something with the frame here...

                   av_frame_unref(fr);
                   // av_frame_free(&fr);
               }
           }
           else {
               //std::cout << "Not Video" << std::endl;
           }
           av_packet_unref(&packet);
       }

       std::cout << "RFRET = " << rfret << std::endl;

       return 0;
    }
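The posted code already wraps AVFormatContext and AVCodecContext in std::unique_ptr with custom deleters; extending the same pattern to the AVFrame would guarantee av_frame_free runs on every exit path (testDecode returns early in several places without ever freeing fr). A sketch of the pattern using stand-in types; FakeFrame, fake_frame_alloc, and fake_frame_free are placeholders for AVFrame, av_frame_alloc, and av_frame_free so the snippet builds without the ffmpeg headers:

```cpp
#include <memory>

// Stand-ins for AVFrame / av_frame_alloc / av_frame_free, so the pattern
// can be shown without the ffmpeg headers. Swap in the real types.
struct FakeFrame { int dummy = 0; };
static int g_live_frames = 0; // tracks outstanding frames for the test

static FakeFrame *fake_frame_alloc() { ++g_live_frames; return new FakeFrame(); }
static void fake_frame_free(FakeFrame **f) {
    if (f && *f) { delete *f; *f = nullptr; --g_live_frames; }
}

// Same shape as AVFormatContextDeleter in the question: a plain function
// whose address becomes the unique_ptr's deleter.
static void FrameDeleter(FakeFrame *ptr) {
    if (ptr) fake_frame_free(&ptr);
}

typedef std::unique_ptr<FakeFrame, void (*)(FakeFrame *)> FramePtr;

static FramePtr createFrame() {
    return FramePtr(fake_frame_alloc(), FrameDeleter);
}
```

With `FramePtr fr = createFrame();` the frame is released on every return path automatically. The per-iteration av_frame_unref calls are still needed: unref drops the frame's data buffers after each decode, while the deleter releases the AVFrame struct itself once at the end.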

    Update 1: (1/21/2019) Compiling on a different machine and running with different video files, I am not seeing the memory usage growing without bound. I’ll try to narrow down where the difference lies (compiler? ffmpeg version? or video encoding?)

    Update 2: (1/21/2019) Ok, it looks like there is some interaction occurring between ffmpeg and Qt’s QCamera. In my application, I’m using Qt to manage the webcam, but decided to use ffmpeg libraries to handle decoding/encoding since Qt doesn’t have as comprehensive support for different codecs. If I have the camera turned on (through Qt), ffmpeg decoding memory consumption grows without bound. If the camera is off, ffmpeg behaves fine. I’ve tried this both with a physical camera (Logitech C920) and with a virtual camera using OBS-Virtualcam, with the same result. So far I’m baffled as to how the two systems are interacting...