Media (1)

Keyword: - Tags -/lev manovitch

Other articles (7)

  • Videos

    21 April 2011

    Like "audio" documents, MediaSPIP displays videos wherever possible using the HTML5 <video> tag.
    One drawback of this tag is that some browsers (Internet Explorer, to name one) do not recognize it correctly, and each browser natively supports only certain video formats.
    Its main advantage is that video playback is handled natively by the browser, which makes it possible to do without Flash and (...)

  • The farm's recurring cron tasks

    1 December 2010

    Managing the farm involves running several repetitive tasks, known as cron tasks, at regular intervals.
    The super cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the cron of every instance in the shared-hosting farm on a regular basis. Combined with a system cron on the central site of the farm, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or later. If needed, contact your MediaSPIP administrator to find out.

On other sites (2981)

  • Use ffmpeg to match an image to source frames in video [closed]

    25 May 2024, by user22335954

    I'm trying to write an application that splits a single video into multiple pieces based on the appearance of a specific image (think title cards). I have video files that may contain more than one episode or piece of content in a single file, and I want them split wherever that title card or image appears.

    My application works by having the user provide a timestamp in the 00:00:00 format to identify the title card image, which is then extracted like this:

    ffmpeg -ss 00:00:00 -i FILE -qmin 1 -qscale:v 1 -vframes 1 -f image2 img.png

    Now I want to compare that image (img.png) to the source video file using the following example command I've found:

    ffmpeg -i FILE -loop 1 -i img.png -an -filter_complex "blend=difference:shortest=1,blackframe=90:20" -f null -

    I've had to experiment with the blackframe=90:20 values to get what I think are correct matches, but I don't understand what these values and/or the blackframe filter actually control. The blend documentation (https://ffmpeg.org/ffmpeg-filters.html#Examples-46) doesn't go into much detail about what is actually happening. I do understand that the difference blend means I'm essentially looking for the smallest difference, indicating a frame match to my img, but beyond that I'm mostly guessing.

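    My own reading of the blackframe documentation, for reference: the two values are amount:threshold, and a frame is flagged when at least amount percent of its pixels have a value below threshold. Since the difference blend turns matching frames nearly black, blackframe=90:20 should flag frames in which 90% of the pixels differ from img.png by less than 20. A rough restatement of that test in C++ (my interpretation, not ffmpeg's actual code):

    // Sketch of how blackframe=amount:threshold appears to decide that a
    // blended (difference) frame is "black", i.e. a match to the still image.
    #include <cstddef>
    #include <cstdint>

    bool isMatch(const uint8_t* diffLuma, std::size_t pixelCount,
                 int amount = 90, int threshold = 20)
    {
        std::size_t dark = 0;
        for (std::size_t i = 0; i < pixelCount; ++i)
            if (diffLuma[i] < threshold)   // pixel nearly identical in both inputs
                ++dark;
        // Flag the frame when at least `amount` percent of its pixels are dark.
        return dark * 100 >= pixelCount * static_cast<std::size_t>(amount);
    }
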
    Additionally, the output shows a bunch of lines like:

    [Parsed_blackframe_1 @ 0x5c1183081880] frame:195 pblack:99 pts:6506 t:6.506000 type:B last_keyframe:135

    From those frames I can parse out the non-sequential runs and work out how many segments to expect in the video, but when I go to split them, I don't know how to translate the frame number or the t value into a 00:00:00 timestamp. Even for matches I'm 100% sure of, the frame values don't line up with what I expect. For example, from watching the video I know that a perfect match occurs at exactly 00:01:45, but the blackframe data says the match occurs at frame 1471, or t:49.08 (the video has a frame rate of 29.97). 1471 / 29.97 is indeed 49.08, but that does not correspond to the actual time of 1:45 (105 seconds). How can I convert these values into timestamps (or just show the timestamps of the frames)?

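    A note on those numbers: in the log line above, pts:6506 alongside t:6.506000 implies t is pts scaled by the stream time base, i.e. t should already be the presentation time in seconds, so turning it into HH:MM:SS is plain arithmetic. A minimal sketch (hypothetical helper; it formats t but cannot by itself explain the 49.08 s vs 105 s mismatch):

    // Format a blackframe `t` value (seconds) as HH:MM:SS.mmm.
    #include <cstdio>

    void printTimestamp(double t)
    {
        int whole = static_cast<int>(t);
        int h = whole / 3600;
        int m = (whole % 3600) / 60;
        double s = t - 3600 * h - 60 * m;
        std::printf("%02d:%02d:%06.3f\n", h, m, s);   // 49.08 -> 00:00:49.080
    }
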
  • LIVE555 RTSP H.264 Raw Video File Stream - ffplay Errors

    22 October 2015, by Chris.

    I am streaming a raw .h264 video file via RTSP using LIVE555.

    To receive the stream I am using ffplay. However, when watching the video I notice poor video quality and a slew of errors in the ffplay console:

    Input #0, rtsp, from 'rtsp://xx.xx.xxx.x/stream':
      Metadata:
        title           : stream
        comment         : stream
      Duration: N/A, start: 0.099989, bitrate: N/A
        Stream #0:0: Video: h264 (High), yuv420p(tv, smpte170m/smpte170m/bt470m), 1680x1050 [SAR 1:1 DAR 8:5], 60 fps, 60 tbr, 90k tbn, 120 tbc
    [h264 @ 03f92100] RTP: missed 46 packets
    [h264 @ 03f92100] RTP: missed 74 packets
    [h264 @ 03f92100] RTP: missed 43 packets
    [h264 @ 03f92100] RTP: missed 35 packets
    [h264 @ 05710640] left block unavailable for requested intra4x4 mode -1 at 0 38
    [h264 @ 05710640] error while decoding MB 0 38, bytestream 48108
    [h264 @ 05710640] Cannot use next picture in error concealment
    [h264 @ 05710640] concealing 2989 DC, 2989 AC, 2989 MV errors in P frame
    [h264 @ 051043c0] left block unavailable for requested intra4x4 mode -1 at 0 26
    [h264 @ 051043c0] error while decoding MB 0 26, bytestream 5894
    [h264 @ 051043c0] concealing 4249 DC, 4249 AC, 4249 MV errors in I frame
    [h264 @ 03f92100] RTP: missed 68 packets
    [h264 @ 03f92100] RTP: missed 31 packets
    [h264 @ 052a0020] concealing 3292 DC, 3292 AC, 3292 MV errors in I frame
    [h264 @ 052a0020] Cannot use next picture in error concealment
    [h264 @ 052a0020] concealing 2190 DC, 2190 AC, 2190 MV errors in P frame
    [h264 @ 03f92100] RTP: missed 69 packets
    [h264 @ 052a0020] concealing 3732 DC, 3732 AC, 3732 MV errors in I frame
    [h264 @ 03f92100] RTP: missed 26 packets
    ...

    How can I determine what's wrong here, whether it's the stream or the file?

  • FFMPEG and DirectX Capture in C++

    13 December 2016, by tankyx

    I have a system that allows me to capture a window and save it as an mp4 using ffmpeg. I use gdigrab to capture the frames, but it is fairly slow (60 ms per av_read_frame call).

    I know I can capture a game using the DirectX API, but I don’t know how to convert the resulting BMP to an AVFrame.

    The following is the DirectX code I use to capture a frame:

    extern void* pBits;
    extern IDirect3DDevice9* g_pd3dDevice;
    IDirect3DSurface9* pSurface;

    // Copy the front buffer into a scratch surface in A8R8G8B8 (BGRA) format.
    g_pd3dDevice->CreateOffscreenPlainSurface(ScreenWidth, ScreenHeight,
                                              D3DFMT_A8R8G8B8, D3DPOOL_SCRATCH,
                                              &pSurface, NULL);
    g_pd3dDevice->GetFrontBufferData(0, pSurface);

    // Lock the surface and copy it row by row, since the surface pitch can
    // be wider than one visible row of pixels.
    D3DLOCKED_RECT lockedRect;
    pSurface->LockRect(&lockedRect, NULL,
                       D3DLOCK_NO_DIRTY_UPDATE |
                       D3DLOCK_NOSYSLOCK | D3DLOCK_READONLY);
    for (int i = 0; i < ScreenHeight; i++)
    {
        memcpy((BYTE*)pBits + i * ScreenWidth * BITSPERPIXEL / 8,
               (BYTE*)lockedRect.pBits + i * lockedRect.Pitch,
               ScreenWidth * BITSPERPIXEL / 8);
    }
    pSurface->UnlockRect();
    pSurface->Release();

    And here is my read loop:

    while (1) {
        if (av_read_frame(pFormatCtx, &packet) < 0 || exit)
            break;
        if (packet.stream_index == videoindex) {
            // Decode the video frame, rescaling packet timestamps from the
            // capture rate to the codec time base.
            av_packet_rescale_ts(&packet, { 1, std::stoi(pParser->GetVal("video-fps")) }, pCodecCtx->time_base);
            avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

            if (frameFinished) {
                pFrame->pts = i;
                i++;
                // Convert the decoded frame to RGB and hand it to the encoder.
                sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
                pFrameRGB->pts = pFrame->pts;
                enc.encodeFrame(pFrameRGB);
            }
        }
        // Free the packet that was allocated by av_read_frame
        av_free_packet(&packet);
    }

    How can I create an AVFrame from the BMP I have, without going through av_read_frame?
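
    One possible direction (a sketch under assumptions, not a tested solution): GetFrontBufferData already yields a packed D3DFMT_A8R8G8B8 buffer, which on little-endian machines is BGRA byte order, so the capture can bypass av_read_frame entirely: wrap the buffer and convert it with sws_scale into an AVFrame in the encoder's pixel format. The function frameFromCapture and its parameters are made-up names for illustration:

    extern "C" {
    #include <libavutil/frame.h>
    #include <libavutil/pixfmt.h>
    #include <libswscale/swscale.h>
    }
    #include <cstdint>

    // Wrap a captured BGRA buffer (width x height, tightly packed) in a
    // YUV420P AVFrame that can be handed straight to the encoder.
    AVFrame* frameFromCapture(const uint8_t* pBits, int width, int height, int64_t pts)
    {
        AVFrame* frame = av_frame_alloc();
        frame->format = AV_PIX_FMT_YUV420P;
        frame->width  = width;
        frame->height = height;
        av_frame_get_buffer(frame, 32);               // allocate the YUV planes

        // Describe the capture buffer: one packed BGRA plane, 4 bytes per pixel.
        const uint8_t* srcData[4]     = { pBits, NULL, NULL, NULL };
        int            srcLinesize[4] = { width * 4, 0, 0, 0 };

        SwsContext* sws = sws_getContext(width, height, AV_PIX_FMT_BGRA,
                                         width, height, AV_PIX_FMT_YUV420P,
                                         SWS_BILINEAR, NULL, NULL, NULL);
        sws_scale(sws, srcData, srcLinesize, 0, height,
                  frame->data, frame->linesize);
        sws_freeContext(sws);

        frame->pts = pts;                             // caller-supplied timestamp
        return frame;
    }

    Feeding each captured frame through something like this, with a monotonically increasing pts, would replace the gdigrab/av_read_frame path while reusing the existing encoder. In a real loop the SwsContext should be created once and reused rather than rebuilt per frame.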