
Media (91)

Other articles (82)

  • Customising by adding a logo, a banner or a background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
    The user can also edit their profile from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

On other sites (8955)

  • LibVLC : Retrieving current frame number

    2 April 2015, by Solidus

    I am doing a project that involves some video recording and editing, and I am struggling to find a good C++ library for it. I am using Qt as my framework, and its video player is not working properly for me (seeking sometimes crashes, for example). Also, I need to record video and audio from my camera, and QCamera does not work on Windows (for recording).

    In my program the user can draw on top of the video, and I need to store the start frame and the end frame of those drawings.

    Right now I have been testing LibVLC, which almost does what I want. From what I can see, there is no way to jump straight to a certain frame; seeking can only be done by time or position.

    The first solution I came up with was to capture the time-changed event and then calculate the frame from the FPS. The problem is that, as far as I can tell, the interval of this event is around 250 ms, which for a 15 fps video is almost 4 frames.

    So, the second solution was to use libvlc_video_set_callbacks to provide my own "lock, unlock and display" callbacks and count the frames there. This works for recording from the camera, since there is no seeking back and the frames simply run from 0 until the video stops. The problem is when playing a video: since there is no timestamp, as far as I can tell, there is no way for me to know which frame number I am on (the user can be seeking, for example). My hacky solution was to keep a "lastTime" and "numTimes" in the struct I pass into these callbacks, and this is what I do:

    lastTime holds the last new time received, and numTimes holds the number of times lastTime has been received.

    get_the_current_time
    calculate_frame_num_with_fps
    if current_time == lastTime:
        frameNum += numTimes
        numTimes++
    else:
        lastTime = current_time
        numTimes = 1

    This kind of works, but I hate the solution. I am also not sure whether seeking updates the time when the difference is less than 250 ms. That would be hard for a user to hit, but I would prefer not to rely on it.

    So my question is: is there another solution for this? If not, are there any libraries that could help me here? I know about FFmpeg, which seems like it would solve this problem, since it is lower level and I could implement this logic myself. The problem is that my deadline is approaching, and that would still take me some time (learning the library and doing all the work), so I was thinking of it as a last resort.

    Thank you for your time.

  • How to convert an ffmpeg video frame to YUV444?

    21 October 2019, by Edward Severinsen

    I have been following a tutorial on how to use FFmpeg and SDL to make a simple video player with no audio (yet). While working through the tutorial I realised it was out of date, and many of the functions it used, for both FFmpeg and SDL, were deprecated. So I searched for an up-to-date solution and found a Stack Overflow answer that completed what the tutorial was missing.

    However, it uses YUV420, which has reduced chroma quality. I want to implement YUV444, and after studying chroma subsampling for a bit and looking at the different YUV formats, I am confused about how to implement it. From what I understand, YUV420 carries a quarter of the chroma information that YUV444 does: in YUV444 every pixel has its own chroma sample, so it is more detailed, while in YUV420 pixels are grouped together and share a chroma sample, so it is less detailed.

    And from what I understand, the different YUV formats (420, 422, 444) differ in how they subsample and order Y, U and V. All of this is a bit overwhelming because I have not done much with codecs, conversions, etc. Any help would be much appreciated, and if additional info is needed please let me know before downvoting.

    Here is the code from the answer I mentioned, covering the conversion to YUV420:

       texture = SDL_CreateTexture(
           renderer,
           SDL_PIXELFORMAT_YV12,
           SDL_TEXTUREACCESS_STREAMING,
           pCodecCtx->width,
           pCodecCtx->height
           );
       if (!texture) {
           fprintf(stderr, "SDL: could not create texture - exiting\n");
           exit(1);
       }

       // initialize SWS context for software scaling
       sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
           pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
           AV_PIX_FMT_YUV420P,
           SWS_BILINEAR,
           NULL,
           NULL,
           NULL);

       // set up YV12 pixel array (12 bits per pixel)
       yPlaneSz = pCodecCtx->width * pCodecCtx->height;
       uvPlaneSz = pCodecCtx->width * pCodecCtx->height / 4;
       yPlane = (Uint8*)malloc(yPlaneSz);
       uPlane = (Uint8*)malloc(uvPlaneSz);
       vPlane = (Uint8*)malloc(uvPlaneSz);
       if (!yPlane || !uPlane || !vPlane) {
           fprintf(stderr, "Could not allocate pixel buffers - exiting\n");
           exit(1);
       }

       uvPitch = pCodecCtx->width / 2;
       while (av_read_frame(pFormatCtx, &packet) >= 0) {
           // Is this a packet from the video stream?
           if (packet.stream_index == videoStream) {
               // Decode video frame
               avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

               // Did we get a video frame?
               if (frameFinished) {
                   AVPicture pict;
                   pict.data[0] = yPlane;
                   pict.data[1] = uPlane;
                   pict.data[2] = vPlane;
                   pict.linesize[0] = pCodecCtx->width;
                   pict.linesize[1] = uvPitch;
                   pict.linesize[2] = uvPitch;

                   // Convert the image into YUV format that SDL uses
                   sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                       pFrame->linesize, 0, pCodecCtx->height, pict.data,
                       pict.linesize);

                   SDL_UpdateYUVTexture(
                       texture,
                       NULL,
                       yPlane,
                       pCodecCtx->width,
                       uPlane,
                       uvPitch,
                       vPlane,
                       uvPitch
                       );

                   SDL_RenderClear(renderer);
                   SDL_RenderCopy(renderer, texture, NULL, NULL);
                   SDL_RenderPresent(renderer);

               }
           }

           // Free the packet that was allocated by av_read_frame
           av_free_packet(&packet);
           SDL_PollEvent(&event);
           switch (event.type) {
               case SDL_QUIT:
                   SDL_DestroyTexture(texture);
                   SDL_DestroyRenderer(renderer);
                   SDL_DestroyWindow(screen);
                   SDL_Quit();
                   exit(0);
                   break;
               default:
                   break;
           }

       }

       // Free the YUV frame
       av_frame_free(&pFrame);
       free(yPlane);
       free(uPlane);
       free(vPlane);

       // Close the codec
       avcodec_close(pCodecCtx);
       avcodec_close(pCodecCtxOrig);

       // Close the video file
       avformat_close_input(&pFormatCtx);

    EDIT:

    After more research I learned that YUV420 is stored with all of the Y bytes first, then the U and V bytes one after another, as illustrated by this image:

    (source: wikimedia.org)

    However, I also learned that YUV444 is stored in the order U, Y, V, repeating, as this picture shows:

    I tried changing some things around in the code:

       // I changed SDL_PIXELFORMAT_YV12 to SDL_PIXELFORMAT_UYVY
       // as to reflect the order of YUV444
       texture = SDL_CreateTexture(
           renderer,
           SDL_PIXELFORMAT_UYVY,
           SDL_TEXTUREACCESS_STREAMING,
           pCodecCtx->width,
           pCodecCtx->height
           );
       if (!texture) {
           fprintf(stderr, "SDL: could not create texture - exiting\n");
           exit(1);
       }

       // Changed AV_PIX_FMT_YUV420P to AV_PIX_FMT_YUV444P
       // for rather obvious reasons
       sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
           pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
           AV_PIX_FMT_YUV444P,
           SWS_BILINEAR,
           NULL,
           NULL,
           NULL);

       // There are as many Y, U and V bytes as pixels I just
       // made yPlaneSz and uvPlaneSz equal to the number of pixels
       yPlaneSz = pCodecCtx->width * pCodecCtx->height;
       uvPlaneSz = pCodecCtx->width * pCodecCtx->height;
       yPlane = (Uint8*)malloc(yPlaneSz);
       uPlane = (Uint8*)malloc(uvPlaneSz);
       vPlane = (Uint8*)malloc(uvPlaneSz);
       if (!yPlane || !uPlane || !vPlane) {
           fprintf(stderr, "Could not allocate pixel buffers - exiting\n");
           exit(1);
       }

       uvPitch = pCodecCtx->width * 2;
       while (av_read_frame(pFormatCtx, &packet) >= 0) {
           // Is this a packet from the video stream?
           if (packet.stream_index == videoStream) {
               // Decode video frame
               avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

               // Rearranged the order of the planes to reflect UYV order
               // then set linesize to the number of Y, U and V bytes
               // per row
               if (frameFinished) {
                   AVPicture pict;
                   pict.data[0] = uPlane;
                   pict.data[1] = yPlane;
                   pict.data[2] = vPlane;
                   pict.linesize[0] = pCodecCtx->width;
                   pict.linesize[1] = pCodecCtx->width;
                   pict.linesize[2] = pCodecCtx->width;

                   // Convert the image into YUV format that SDL uses
                   sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                       pFrame->linesize, 0, pCodecCtx->height, pict.data,
                       pict.linesize);

                   SDL_UpdateYUVTexture(
                       texture,
                       NULL,
                       yPlane,
                       1,
                       uPlane,
                       uvPitch,
                       vPlane,
                       uvPitch
                       );
    //.................................................

    But now I get an access violation at the call to SDL_UpdateYUVTexture... I am honestly not sure what is wrong. I think it may have to do with setting pict's data and linesize members improperly, but I am not positive.

  • Capturing my webcam and encoding to MP4 with ffmpeg, but x264 performs badly

    17 September 2015, by zhangxm1991

    Here are my versions:
    ffmpeg version: 2.3.3
    libx264 version: 142.x

    I want to capture my video and use ffmpeg and x264 to produce an MP4 file. The resolution is 320x240.
    But I found that the video file is very big. Then I found that the P-frames and B-frames are very big, even nearly equal to the I-frames. I do not know why.

    Here is my code:

       c->codec_id = codec_id;
       c->bit_rate = 400000;
       c->width    = 320;
       c->height   = 240;
       c->time_base.den = 10;
       c->time_base.num = 1;
       c->pix_fmt       = AV_PIX_FMT_YUV420P;

    Here is my program output:

    [libx264 @ 0xb6200fc0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX

    [libx264 @ 0xb6200fc0] profile High, level 1.3

    [libx264 @ 0xb6200fc0] 264 - core 142 - H.264/MPEG-4 AVC codec - Copyleft 2003-2014 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=10 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=abr mbtree=1 bitrate=400 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00

    Output #0, mp4, to '/home/2014_09_12/0002b6429579/5/2014_09_12_18_39_12_600.mp4':

       Stream #0:0: Video: h264 (libx264), yuv420p, 320x240, q=-1--1, 400 kb/s, 10 tbc

       Stream #0:1: Audio: aac, 8000 Hz, mono, fltp, 32 kb/s

    [mp4 @ 0xb62005e0] Using AVStream.codec.time_base as a timebase hint to the muxer is deprecated. Set AVStream.time_base instead.

    [mp4 @ 0xb62005e0] Using AVStream.codec.time_base as a timebase hint to the muxer is deprecated. Set AVStream.time_base instead.

    [mp4 @ 0xb62005e0] Encoder did not produce proper pts, making some up.

    [libx264 @ 0xb6200fc0] frame I:3     Avg QP: 2.79  size: 72996

    [libx264 @ 0xb6200fc0] frame P:383   Avg QP: 0.14  size: 54075

    [libx264 @ 0xb6200fc0] frame B:216   Avg QP: 1.95  size: 64784

    [libx264 @ 0xb6200fc0] consecutive B-frames: 30.1% 63.2%  6.0%  0.7%

    [libx264 @ 0xb6200fc0] mb I  I16..4: 16.1% 12.7% 71.2%

    [libx264 @ 0xb6200fc0] mb P  I16..4:  3.5%  5.1% 15.9%  P16..4: 20.3% 22.8% 14.0%  0.0%  0.0%    skip:18.3%

    [libx264 @ 0xb6200fc0] mb B  I16..4:  0.4%  2.5%  8.2%  B16..8: 32.6% 19.8% 13.5%  
    direct:22.2%  skip: 0.8%  L0:25.5% L1:12.8% BI:61.6%

    [libx264 @ 0xb6200fc0] final ratefactor: -32.89

    [libx264 @ 0xb6200fc0] 8x8 transform intra:21.0% inter:29.6%

    [libx264 @ 0xb6200fc0] coded y,uvDC,uvAC intra: 100.0% 100.0% 100.0% inter: 83.3% 80.9% 80.9%

    [libx264 @ 0xb6200fc0] i16 v,h,dc,p:  5%  7% 42% 46%

    [libx264 @ 0xb6200fc0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 14% 12% 23%  8%  6%  6%  8%  9% 14%

    [libx264 @ 0xb6200fc0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 12% 11% 19% 10%  8%  8% 10% 10% 12%

    [libx264 @ 0xb6200fc0] i8c dc,h,v,p: 78%  3%  4% 15%

    [libx264 @ 0xb6200fc0] Weighted P-Frames: Y:0.5% UV:0.3%

    [libx264 @ 0xb6200fc0] ref P L0: 43.4% 14.6% 19.0% 22.9%  0.1%

    [libx264 @ 0xb6200fc0] ref B L0: 63.3% 35.7%  1.1%

    [libx264 @ 0xb6200fc0] ref B L1: 97.0%  3.0%

    [libx264 @ 0xb6200fc0] kb/s:4.53

    [aac @ 0xb6201ba0] 2 frames left in the queue on closing