Advanced search


Other articles (52)

  • General document management

    13 May 2011, by

    MediaSPIP never modifies the original document that is uploaded.
    For each uploaded document it performs two successive operations: it creates an additional version that can easily be viewed online, while leaving the original available for download in case it cannot be read in a web browser; and it retrieves the original document's metadata in order to describe the file textually.
    The tables below explain what MediaSPIP can do (...)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (6122)

  • Prevent ffmpeg from changing the intensity of colors while downscaling the resolution of the video

    29 August 2022, by dravit

    I have a use case where I need to downscale a 716x1280 mp4 video to 358x640 (half of the original). The command I used is:

    ffmpeg -i ./input.mp4 -vf "scale=640:640:force_original_aspect_ratio=decrease,pad=ceil(iw/2)*2:ceil(ih/2)*2" ./output.mp4
    Out of 10 sample videos, 2 of them showed an impact on colors. Below I have attached a comparison from the one that was impacted the most.

    [Image: Comparison of frames from the most impacted video]
    NOTE: The frame on the right is from the original video and the frame on the left is from the processed (downscaled) video. Notice the colors red and green in the image (even the skin color and hair color changed).
    What I am looking for:

    • Is there any way I can prevent changes like these from happening? Probably some flag on saturation, brightness, contrast or some other parameter.
    • I am assuming that ffmpeg uses some default settings while downscaling a video. What made ffmpeg change colors only for these two videos? If it made similar changes for the rest of the videos as well, how can this behaviour be predicted beforehand? (See the probe sketch after this list.)
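    On the second bullet: a hedged way to anticipate this (assuming, as the Mediainfo table below suggests, that the affected inputs are the wide-gamut ones) is to probe each file's colour metadata before transcoding and flag anything not tagged as plain BT.709:

    ffprobe -v error -select_streams v:0 \
        -show_entries stream=color_range,color_space,color_primaries,color_transfer \
        -of default=noprint_wrappers=1 ./input.mp4

    For the problematic input here this reports color_space=bt2020nc, color_primaries=bt2020 and color_transfer=arib-std-b67 (HLG), matching the stream line in the log further down.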


    EDIT:

    What I have already tried:

    • -crf with values 0 and 18.
    • -preset veryslow, as mentioned here.

    None of them helped.

    


    Mediainfo: input vs. output
    param                      input                   output
    color range                Limited                 NA (attribute not in description)
    color primaries            BT.2020                 NA (attribute not in description)
    transfer characteristics   HLG                     NA (attribute not in description)
    matrix coefficients        BT.2020 non-constant    NA (attribute not in description)
    bit depth                  8                       8
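    Reading this table, one plausible explanation (an assumption; the logs do not confirm it outright) is that the scaled pixels are left in their original colour space but the BT.2020/HLG tags are dropped, so players fall back to interpreting the output as BT.709 and the reds and greens shift. A minimal sketch of carrying the input's tags through to the output stream:

    ffmpeg -i ./input.mp4 \
        -vf "scale=640:640:force_original_aspect_ratio=decrease,pad=ceil(iw/2)*2:ceil(ih/2)*2" \
        -colorspace bt2020nc -color_primaries bt2020 -color_trc arib-std-b67 \
        ./output.mp4

    These three flags only label the encoded stream; they do not convert any pixels, which matches the situation here, since the scale filter was not asked to change colour spaces either.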

    Logs of the ffmpeg command:
    ffmpeg -i ./input.mp4 -vf "scale=640:640:force_original_aspect_ratio=decrease,pad=ceil(iw/2)*2:ceil(ih/2)*2" -movflags +faststart ./output.mp4
ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers
  built with Apple clang version 12.0.0 (clang-1200.0.32.28)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/4.3.1_9 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack
  libavutil      56. 51.100 / 56. 51.100
  libavcodec     58. 91.100 / 58. 91.100
  libavformat    58. 45.100 / 58. 45.100
  libavdevice    58. 10.100 / 58. 10.100
  libavfilter     7. 85.100 /  7. 85.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  7.100 /  5.  7.100
  libswresample   3.  7.100 /  3.  7.100
  libpostproc    55.  7.100 / 55.  7.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from './input.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.45.100
  Duration: 00:00:30.05, start: 0.000000, bitrate: 10366 kb/s
    Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt2020nc/bt2020/arib-std-b67), 716x1280, 10116 kb/s, 30 fps, 30 tbr, 19200 tbn, 38400 tbc (default)
    Metadata:
      handler_name    : Core Media Video
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 245 kb/s (default)
    Metadata:
      handler_name    : Core Media Audio
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
[libx264 @ 0x7faab4808800] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x7faab4808800] profile High, level 3.0, 4:2:0, 8-bit
[libx264 @ 0x7faab4808800] 264 - core 161 r3027 4121277 - H.264/MPEG-4 AVC codec - Copyleft 2003-2020 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to './output.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.45.100
    Stream #0:0(und): Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 358x640, q=-1--1, 30 fps, 15360 tbn, 30 tbc (default)
    Metadata:
      handler_name    : Core Media Video
      encoder         : Lavc58.91.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
    Metadata:
      handler_name    : Core Media Audio
      encoder         : Lavc58.91.100 aac
[mp4 @ 0x7faab5808800] Starting second pass: moving the moov atom to the beginning of the file
frame=  901 fps=210 q=-1.0 Lsize=    3438kB time=00:00:30.02 bitrate= 938.0kbits/s speed=7.01x
video:2933kB audio:472kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.974633%
[libx264 @ 0x7faab4808800] frame I:6     Avg QP:22.60  size: 20769
[libx264 @ 0x7faab4808800] frame P:228   Avg QP:24.84  size:  7657
[libx264 @ 0x7faab4808800] frame B:667   Avg QP:27.59  size:  1697
[libx264 @ 0x7faab4808800] consecutive B-frames:  0.9%  0.9%  1.0% 97.2%
[libx264 @ 0x7faab4808800] mb I  I16..4:  9.5% 64.6% 26.0%
[libx264 @ 0x7faab4808800] mb P  I16..4:  2.5% 12.2%  2.5%  P16..4: 37.2% 20.6% 11.2%  0.0%  0.0%    skip:13.7%
[libx264 @ 0x7faab4808800] mb B  I16..4:  0.4%  2.1%  0.2%  B16..8: 42.2%  7.1%  1.2%  direct: 1.8%  skip:44.9%  L0:39.4% L1:52.8% BI: 7.8%
[libx264 @ 0x7faab4808800] 8x8 transform intra:72.2% inter:74.2%
[libx264 @ 0x7faab4808800] coded y,uvDC,uvAC intra: 61.8% 67.2% 20.2% inter: 16.7% 13.9% 1.3%
[libx264 @ 0x7faab4808800] i16 v,h,dc,p: 24% 19%  7% 50%
[libx264 @ 0x7faab4808800] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 16% 15%  6%  9% 11%  7% 10%  6%
[libx264 @ 0x7faab4808800] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 25% 16% 13%  7%  9% 10%  7%  9%  4%
[libx264 @ 0x7faab4808800] i8c dc,h,v,p: 53% 16% 26%  5%
[libx264 @ 0x7faab4808800] Weighted P-Frames: Y:3.9% UV:1.8%
[libx264 @ 0x7faab4808800] ref P L0: 57.8% 19.5% 14.8%  7.8%  0.1%
[libx264 @ 0x7faab4808800] ref B L0: 90.7%  7.2%  2.1%
[libx264 @ 0x7faab4808800] ref B L1: 95.3%  4.7%
[libx264 @ 0x7faab4808800] kb/s:799.80
[aac @ 0x7faab2036a00] Qavg: 189.523

  • http: restructure http_connect error handling path

    21 March 2014, by wm4

    The authstr memory allocations make it annoying to error in the middle
    of the header setup code, so apply the usual C error handling idiom to
    make it easier to error at any point.

    Signed-off-by: Michael Niedermayer <michaelni@gmx.at>

    • [DH] libavformat/http.c
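    For readers unfamiliar with it, the "usual C error handling idiom" the commit refers to is the goto-cleanup pattern: every failure path jumps to a single label that frees whatever has been allocated so far. A minimal illustrative sketch (hypothetical code, not the actual http.c):

    #include <libavutil/avstring.h>   /* av_strdup, av_asprintf */
    #include <libavutil/error.h>      /* AVERROR */
    #include <libavutil/mem.h>        /* av_freep */

    static int setup_headers_example(const char *auth)
    {
        int err = 0;
        char *authstr = NULL;
        char *headers = NULL;

        authstr = av_strdup(auth);              /* allocation that previously made */
        if (!authstr) {                         /* mid-function errors awkward     */
            err = AVERROR(ENOMEM);
            goto done;
        }

        headers = av_asprintf("Authorization: Basic %s\r\n", authstr);
        if (!headers) {
            err = AVERROR(ENOMEM);              /* any later step can bail out     */
            goto done;
        }

        /* ... build the rest of the request here ... */

    done:
        av_freep(&headers);                     /* single cleanup path runs in     */
        av_freep(&authstr);                     /* every case, success or failure  */
        return err;
    }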
  • Implementing a chained overlay filter with the Libavfilter library in Android NDK

    14 March 2014, by gookman

    I am trying to use the overlay filter with multiple input sources, for an Android app. Basically, I want to overlay multiple video sources on top of a static image.
    I have looked at the sample that comes with ffmpeg and implemented my code based on that, but things don't seem to be working as expected.

    In the ffmpeg filtering sample there seems to be a single video input. I have to handle multiple video inputs and I am not sure that my solution is correct. I have tried to find other examples, but it looks like this is the only one.

    Here is my code:

    AVFilterContext **inputContexts;
    AVFilterContext *outputContext;
    AVFilterGraph *graph;

    int initFilters(AVFrame *bgFrame, int inputCount, AVCodecContext **codecContexts, char *filters)
    {
       int i;
       int returnCode;
       char args[512];
       char name[9];
       AVFilterInOut **graphInputs = NULL;
       AVFilterInOut *graphOutput = NULL;

       AVFilter *bufferSrc  = avfilter_get_by_name("buffer");
       AVFilter *bufferSink = avfilter_get_by_name("buffersink");

       graph = avfilter_graph_alloc();
       if(graph == NULL)
           return -1;

       //allocate inputs
       graphInputs = av_calloc(inputCount + 1, sizeof(AVFilterInOut *));
       for(i = 0; i <= inputCount; i++)
       {
           graphInputs[i] = avfilter_inout_alloc();
           if(graphInputs[i] == NULL)
               return -1;
       }

       //allocate input contexts
       inputContexts = av_calloc(inputCount + 1, sizeof(AVFilterContext *));
       //first is the background
       snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=1/1:pixel_aspect=0", bgFrame->width, bgFrame->height, bgFrame->format);
       returnCode = avfilter_graph_create_filter(&inputContexts[0], bufferSrc, "background", args, NULL, graph);
       if(returnCode < 0)
           return returnCode;
       graphInputs[0]->filter_ctx = inputContexts[0];
       graphInputs[0]->name = av_strdup("background");
       graphInputs[0]->next = graphInputs[1];

       //allocate the rest
       for(i = 1; i <= inputCount; i++)
       {
           AVCodecContext *codecCtx = codecContexts[i - 1];
           snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
                       codecCtx->width, codecCtx->height, codecCtx->pix_fmt,
                       codecCtx->time_base.num, codecCtx->time_base.den,
                       codecCtx->sample_aspect_ratio.num, codecCtx->sample_aspect_ratio.den);
           snprintf(name, sizeof(name), "video_%d", i);

           returnCode = avfilter_graph_create_filter(&inputContexts[i], bufferSrc, name, args, NULL, graph);
           if(returnCode < 0)
               return returnCode;

           graphInputs[i]->filter_ctx = inputContexts[i];
           graphInputs[i]->name = av_strdup(name);
           graphInputs[i]->pad_idx = 0;
           if(i < inputCount)
           {
               graphInputs[i]->next = graphInputs[i + 1];
           }
           else
           {
               graphInputs[i]->next = NULL;
           }
       }

       //allocate outputs
       graphOutput = avfilter_inout_alloc();
       returnCode = avfilter_graph_create_filter(&outputContext, bufferSink, "out", NULL, NULL, graph);
       if(returnCode < 0)
           return returnCode;
       graphOutput->filter_ctx = outputContext;
       graphOutput->name = av_strdup("out");
       graphOutput->next = NULL;
       graphOutput->pad_idx = 0;

       returnCode = avfilter_graph_parse_ptr(graph, filters, graphInputs, &graphOutput, NULL);
       if(returnCode < 0)
           return returnCode;

       returnCode = avfilter_graph_config(graph, NULL);
       if(returnCode < 0)
           return returnCode;

       return 0;
    }

    The filters argument of the function is passed on to avfilter_graph_parse_ptr and it can look like this: [background] scale=512x512 [base]; [video_1] scale=256x256 [tmp_1]; [base][tmp_1] overlay=0:0 [out]

    The call breaks after the call to avfilter_graph_config with the warning: Output pad "default" with type video of the filter instance "background" of buffer not connected to any destination, and the error: Invalid argument.

    What is it that I am not doing correctly?
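    One detail worth comparing against the stock doc/examples/filtering_video.c (a hedged observation rather than a confirmed fix): the inputs/outputs arguments of avfilter_graph_parse_ptr are named from the filter string's point of view. The AVFilterInOut entries wrapping the buffer sources therefore go in as outputs (their pads feed the string's open inputs such as [background] and [video_1]), while the buffersink entry goes in as inputs. A minimal sketch of that wiring, reusing the question's variables, for a single source:

    AVFilterInOut *srcs = avfilter_inout_alloc();  /* open inputs of the string  */
    AVFilterInOut *sink = avfilter_inout_alloc();  /* open output of the string  */

    srcs->name       = av_strdup("background");    /* label used in the string   */
    srcs->filter_ctx = inputContexts[0];           /* a buffer source            */
    srcs->pad_idx    = 0;
    srcs->next       = NULL;                       /* chain further sources here */

    sink->name       = av_strdup("out");
    sink->filter_ctx = outputContext;              /* the buffersink             */
    sink->pad_idx    = 0;
    sink->next       = NULL;

    /* note the order: the sink list goes in as "inputs", the source list as "outputs" */
    avfilter_graph_parse_ptr(graph, filters, &sink, &srcs, NULL);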