
Other articles (70)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

On other sites (7547)

  • ffmpeg FPS information mismatch with the video

    8 December 2017, by Adorn

    I have a bunch of videos, together with statistics describing what happens inside each video. One such piece of information is a timestamp, given in seconds of video time to one decimal place.

    To get the FPS of a video, I am using ffmpeg -i

    But when I manually compute a particular frame's time using the reported FPS, it does not match.

    For example, ffmpeg outputs FPS = 30.

    Looking at the video statistics, the frame at 156.8 s (i.e. 2:36.8) should be the 4704th frame (156.8 × 30 = 4704). I open the video using skvideo, read all the frames, and view the 4704th frame. It is some frame from around time 2:12. I checked multiple such instances in multiple videos, and this is a common behaviour.

    I do not understand why this happens, or how I can get around the problem.

    I am not bound to ffmpeg as such. skvideo is being used to read the videos. I tried OpenCV, but at the moment its VideoCapture does not work for me, and reinstalling it is costly time-wise. In any case, the choice of opencv/skvideo should not matter, since the frames can be counted manually as well.

    So, as a solution, I am looking for the following:

    1. Given timestamps inside a video, how can I find the frame at that particular time location? (see the sketch after this list)

    2. In case someone has already worked on this: the data is related to the THUMOS dataset. I am on Ubuntu 16.04.
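
    One hedged way to approach question 1, assuming OpenCV's VideoCapture (FFmpeg backend) is usable on the machine, is to seek by timestamp instead of by a frame index computed from the nominal FPS, which breaks whenever the file has a variable frame rate. A minimal sketch; frame_at_time() is a made-up helper name, and the exact seek accuracy depends on the backend:

       // Illustrative sketch, not the asker's code: seek by presentation time
       // with OpenCV's VideoCapture (FFmpeg backend).
       #include <opencv2/videoio.hpp>
       #include <string>

       cv::Mat frame_at_time(const std::string& path, double seconds)
       {
           cv::VideoCapture cap(path);
           if (!cap.isOpened())
               return cv::Mat();

           // Ask the backend to seek to the requested time (in milliseconds)
           // instead of computing an index as time * nominal_fps.
           cap.set(cv::CAP_PROP_POS_MSEC, seconds * 1000.0);

           cv::Mat frame;
           cap.read(frame);   // first decoded frame at or after the requested time
           return frame;
       }

    The same idea applies to ffmpeg itself (seeking with -ss rather than counting frames): the point is to let the demuxer resolve the timestamp instead of trusting index = time × FPS.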

    EDIT_1

    Actually, I can be more specific, as this is publicly available data. The time bounds mark an important activity, for example, when a basketball dunk occurs in a video. They are given in pairs: [start, end]. Some videos have multiple activities, some have only one.

    Here is a sample video, and the following are its activity times.

    [[  16.5, 20.8],
    [  26.6, 32.2],
    [  34.8, 42.1],
    [  47.8, 50.0],
    [  58.1, 62.9],
    [  65.6, 67.2],
    [  68.5, 74.0],
    [  76.4, 78.3],
    [  78.7, 79.8],
    [  80.8, 82.1],
    [  85.0, 87.3],
    [  90.1, 91.4],
    [  98.5, 100.3]]

    I also tried checking manually: 32.87 FPS "almost" works for a few videos but not for all, and "almost" means it is off by about 10 frames. This is a huge difference for my task, and I need the exact frame.

    Also, there has to be some way, because multiple video players visually confirm that the times in the dataset are correct.
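
    A hedged way to reconcile the annotation times with frame indices, assuming OpenCV's FFmpeg backend is usable and reports a timestamp for each grabbed frame, is to read the file once, record every frame's own timestamp, and then map an annotation time to the nearest recorded frame rather than multiplying by a nominal FPS:

       // Illustrative sketch only: build a frame-index -> timestamp table and use
       // it to resolve annotation times. Assumes CAP_PROP_POS_MSEC reports the
       // position of the frame just grabbed, which is backend-dependent.
       #include <opencv2/videoio.hpp>
       #include <cmath>
       #include <cstdio>
       #include <cstdlib>
       #include <vector>

       int main(int argc, char** argv)
       {
           if (argc < 3) return 1;                   // usage: prog <video> <time_sec>
           cv::VideoCapture cap(argv[1]);
           if (!cap.isOpened()) return 1;

           std::vector<double> t;                    // t[i] = time of frame i, in seconds
           while (cap.grab())
               t.push_back(cap.get(cv::CAP_PROP_POS_MSEC) / 1000.0);
           if (t.empty()) return 1;

           double target = std::atof(argv[2]);       // e.g. 16.5 from the annotation list
           std::size_t best = 0;
           for (std::size_t i = 1; i < t.size(); ++i)
               if (std::abs(t[i] - target) < std::abs(t[best] - target))
                   best = i;

           std::printf("%.1f s -> frame index %zu (frame time %.3f s)\n",
                       target, best, t[best]);
           return 0;
       }

    If the recorded times are not evenly spaced, the file has a variable frame rate, which would explain why no single FPS value (30 or 32.87) maps times to indices consistently.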

  • avcodec/mediacodecdec : refactor to take advantage of new decoding api

    16 February 2018, by Aman Gupta

    This refactor splits up the main mediacodec decode loop into two
    send/receive helpers, which are then used to rewrite the receive_frame
    callback and take full advantage of the new decoding api. Since we
    can now request packets on demand with ff_decode_get_packet(), the
    fifo buffer is no longer necessary and has been removed.

    This change was motivated by behavior observed on certain Android TV
    devices, featuring hardware mpeg2/h264 decoders which also deinterlace
    content (to produce multiple frames per field). Previously, this code
    caused buffering issues because queueInputBuffer() was always invoked
    before each dequeueOutputBuffer(), even though twice as many output
    buffers were being generated.

    With this patch, the decoder will always attempt to drain new frames
    first before sending more data into the underlying codec.

    Signed-off-by: Matthieu Bouron <matthieu.bouron@gmail.com>

    • [DH] libavcodec/mediacodecdec.c
    • [DH] libavcodec/mediacodecdec_common.c
    • [DH] libavcodec/mediacodecdec_common.h
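
    The "new decoding api" referred to above is FFmpeg's send/receive model. The patch itself works against the internal ff_decode_get_packet() hook, but the public analogue shows the same drain-before-feed shape the commit message describes: after sending a packet, pull every frame the decoder can produce before feeding more input. A hedged sketch, not code from the patch; decode_one_packet() is a made-up helper name:

       // Illustrative only: the public send/receive decode loop, minimal error handling.
       extern "C" {
       #include <libavcodec/avcodec.h>
       }

       int decode_one_packet(AVCodecContext *ctx, const AVPacket *pkt, AVFrame *frm)
       {
           int ret = avcodec_send_packet(ctx, pkt);   // pkt == nullptr flushes the decoder
           if (ret < 0)
               return ret;
           while (true) {
               ret = avcodec_receive_frame(ctx, frm); // drain all pending output first
               if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                   return 0;                          // needs more input / fully drained
               if (ret < 0)
                   return ret;
               // ... consume frm here; one packet may yield several frames,
               //     e.g. when a hardware decoder deinterlaces the content ...
               av_frame_unref(frm);
           }
       }
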
  • Does this code contain a potential memory leak?

    26 July 2017, by Johnnylin

    Here is the code:

    // Headers needed to build this snippet (OpenCV core + FFmpeg's libavutil/libswscale):
    #include <opencv2/core.hpp>
    extern "C" {
    #include <libavutil/frame.h>
    #include <libavutil/imgutils.h>
    #include <libswscale/swscale.h>
    }

    cv::Mat YV12ToBRG24_FFmpeg(unsigned char* pYUV, int width,int height)
    {
       if (width < 1 || height < 1 || pYUV == nullptr){
           return cv::Mat();
       }

       cv::Mat result(height,width,CV_8UC3, cv::Scalar::all(0));
       uchar* pBGR24 = result.data;

       // Wrap the caller's YUV buffer in an AVFrame (no pixel data is copied here).
       AVFrame* pFrameYUV = av_frame_alloc();
       pFrameYUV->width = width;
       pFrameYUV->height = height;
       pFrameYUV->format = AV_PIX_FMT_YUV420P;
       av_image_fill_arrays(pFrameYUV->data, pFrameYUV->linesize, pYUV, AV_PIX_FMT_YUV420P, width, height, 1);

       //U,V exchange (the source is YV12, which stores the V plane before the U plane)
       uint8_t * ptmp=pFrameYUV->data[1];
       pFrameYUV->data[1]=pFrameYUV->data[2];
       pFrameYUV->data[2]=ptmp;

       // Wrap the cv::Mat buffer as the BGR24 destination.
       AVFrame* pFrameBGR = av_frame_alloc();
       pFrameBGR->width = width;
       pFrameBGR->height = height;
       pFrameBGR->format = AV_PIX_FMT_BGR24;
       av_image_fill_arrays(pFrameBGR->data, pFrameBGR->linesize, pBGR24, AV_PIX_FMT_BGR24, width, height, 1);

       struct SwsContext* imgCtx = nullptr;
       imgCtx = sws_getContext(width,height,AV_PIX_FMT_YUV420P,width,height,AV_PIX_FMT_BGR24,SWS_BILINEAR,0,0,0);
       if (imgCtx != nullptr){
           sws_scale(imgCtx,pFrameYUV->data,pFrameYUV->linesize,0,height,pFrameBGR->data,pFrameBGR->linesize);
           sws_freeContext(imgCtx);
           imgCtx = nullptr;
           ptmp = nullptr;
           pBGR24 = nullptr;
           av_frame_free(&pFrameYUV);
           av_frame_free(&pFrameBGR);
           return result;
       }
       else{
           sws_freeContext(imgCtx);
           imgCtx = nullptr;
           ptmp = nullptr;
           pBGR24 = nullptr;
           av_frame_free(&pFrameYUV);
           av_frame_free(&pFrameBGR);
           return cv::Mat();
       }
    }

    This function is called every 40 ms (25 fps), and I saw a significant memory increase (around 12 GB) after several days. But if I run this code for only a few hours, the memory growth is not obvious enough to be observed.

    Can anyone help?
    Thanks.
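
    As far as the snippet itself goes, each av_frame_alloc() is paired with av_frame_free() and the SwsContext is freed on both return paths, so nothing in it obviously leaks. One hedged way to narrow the problem down is to drop the per-call allocations entirely: the sketch below (illustrative name; it assumes the conversion parameters never change between calls) reuses a cached context via sws_getCachedContext() and fills plain data/linesize arrays instead of temporary AVFrames. If memory still grows with this version, the cause is probably outside the conversion function.

       // Sketch only, not a confirmed fix: the same conversion with a cached
       // SwsContext and no per-call AVFrame allocations.
       extern "C" {
       #include <libavutil/imgutils.h>
       #include <libswscale/swscale.h>
       }
       #include <opencv2/core.hpp>
       #include <utility>

       cv::Mat YV12ToBGR24_Cached(unsigned char* pYUV, int width, int height)
       {
           if (width < 1 || height < 1 || pYUV == nullptr)
               return cv::Mat();

           static SwsContext* ctx = nullptr;               // reused across calls (not thread-safe)
           ctx = sws_getCachedContext(ctx, width, height, AV_PIX_FMT_YUV420P,
                                      width, height, AV_PIX_FMT_BGR24,
                                      SWS_BILINEAR, nullptr, nullptr, nullptr);
           if (ctx == nullptr)
               return cv::Mat();

           cv::Mat result(height, width, CV_8UC3);

           uint8_t* srcData[4];
           int      srcLinesize[4];
           av_image_fill_arrays(srcData, srcLinesize, pYUV,
                                AV_PIX_FMT_YUV420P, width, height, 1);
           std::swap(srcData[1], srcData[2]);              // U/V swap, as in the original

           uint8_t* dstData[4]     = { result.data, nullptr, nullptr, nullptr };
           int      dstLinesize[4] = { static_cast<int>(result.step), 0, 0, 0 };

           sws_scale(ctx, srcData, srcLinesize, 0, height, dstData, dstLinesize);
           return result;
       }

    If the growth persists, running a shorter session under valgrind (for example with --leak-check=full) would point at the actual allocation site.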