Advanced search

Media (91)

Other articles (2)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable MediaSPIP release.
    Its official release date is June 21, 2013, and it is announced here.
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

On other sites (1499)

  • Broken output from libavcodec/swscale, depending on resolution

    3 June 2014, by dtumaykin

    I am writing video conferencing software. I have an H.264 stream decoded with libavcodec into IYUV and then rendered into a window with VMR9 in windowless mode. I use a DirectShow graph to do so.

    To avoid an unnecessary conversion into RGB and back (see link), I convert the IYUV video into YUY2 with libswscale before passing it to VMR9.

    I noticed that at a resolution of 848x480 the output video is broken, so I investigated further and found that for some resolutions the video is always broken. To rule out libswscale, I added support for a padded-IYUV to plain-IYUV conversion (a manual copy that strips the padding), and it worked at all resolutions.

    Still, I wanted to avoid slow IYUV, so I implemented support for NV12 (with libswscale) and YV12 (manually, essentially the same as IYUV). After running tests on two different computers, I got strange results.

    resolution  YUY2    NV12    IYUV    YV12
    PC 1 (my laptop)                
    640x360     ok      broken  ok      broken
    848x480     broken  broken  ok      broken
    960x540     broken  broken  ok      broken
    1024x576    ok      ok      ok      ok
    1280x720    ok      ok      ok      broken
    1920x1080   ok      broken  ok      broken

    PC 2                
    640x360     ok      ok      ok      ok
    848x480     ok      broken  ok      broken
    960x540     ok      ok      ok      ok
    1024x576    ok      ok      ok      ok
    1280x720    ok      broken  ok      ok
    1920x1080   ok      ok      ok      ok

    To rule out a fault in VMR9, I replaced it with EVR, but got the same results.

    I know that padding is needed for memory alignment, and that the amount of padding depends on the CPU used (libavcodec doc), which may explain the difference between the two computers (the first has an Intel i7-3820QM, the second an Intel Core 2 Quad Q6600). I suspect the problem has something to do with padding, because the images are corrupted in a characteristic way.
    You can see my blue t-shirt in the lower part of the image.
    You can see my blue t-shirt in the lower part of the image, and my face in the upper one.
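    A quick way to confirm that decoder padding is involved at a given resolution is to compare each plane's linesize with its nominal row width; a minimal sketch, assuming the same decoded frame:

    #include <stdio.h>
    #include <libavutil/frame.h>

    // For 4:2:0 the chroma rows are half the luma width; any excess of
    // linesize over that width is alignment padding added by the decoder.
    static void dump_strides(const AVFrame *frame)
    {
        printf("Y: row width %d, linesize %d\n", frame->width,     frame->linesize[0]);
        printf("U: row width %d, linesize %d\n", frame->width / 2, frame->linesize[1]);
        printf("V: row width %d, linesize %d\n", frame->width / 2, frame->linesize[2]);
    }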

    The conversion code follows. The NV12 and YUY2 conversions are performed with libswscale, while the IYUV and YV12 ones are done manually.

    int pixels = _outputFrame->width * _outputFrame->height;
    if (_outputFormat == "YUY2") {
       int stride = _outputFrame->width * 2;
       sws_scale(_convertCtx, _outputFrame->data, _outputFrame->linesize, 0, _outputFrame->height, &out, &stride);
    }
    else if (_outputFormat == "NV12") {
       int stride[] = { _outputFrame->width, _outputFrame->width };
       uint8_t * dst[] = { out, out + pixels };
       sws_scale(_convertCtx, _outputFrame->data, _outputFrame->linesize, 0, _outputFrame->height, dst, stride);
    }
    else if (_outputFormat == "IYUV") { // clean ffmpeg padding
       for (int i = 0; i < _outputFrame->height; i++) // copy Y
           memcpy(out + i * _outputFrame->width, _outputFrame->data[0] + i * _outputFrame->linesize[0] , _outputFrame->width);
       for (int i = 0; i < _outputFrame->height / 2; i++) // copy U
           memcpy(out + pixels + i * _outputFrame->width / 2, _outputFrame->data[1] + i * _outputFrame->linesize[1] , _outputFrame->width / 2);            
       for (int i = 0; i < _outputFrame->height / 2; i++) // copy V
           memcpy(out + pixels + pixels/4 + i * _outputFrame->width / 2, _outputFrame->data[2] + i * _outputFrame->linesize[2] , _outputFrame->width / 2);
    }
    else if (_outputFormat == "YV12") { // like IYUV, but U is inverted with V plane
       for (int i = 0; i < _outputFrame->height; i++) // copy Y
           memcpy(out + i * _outputFrame->width, _outputFrame->data[0] + i * _outputFrame->linesize[0], _outputFrame->width);
       for (int i = 0; i < _outputFrame->height / 2; i++) // copy V
           memcpy(out + pixels + i * _outputFrame->width / 2, _outputFrame->data[2] + i * _outputFrame->linesize[2], _outputFrame->width / 2);
       for (int i = 0; i < _outputFrame->height / 2; i++) // copy U
           memcpy(out + pixels + pixels / 4 + i * _outputFrame->width / 2, _outputFrame->data[1] + i * _outputFrame->linesize[1], _outputFrame->width / 2);
    }
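
    For reference, the same padding-stripping copy can also be written with libavutil's av_image_copy_plane, which handles differing source and destination strides per plane; a minimal sketch, assuming the same _outputFrame and a tightly packed I420 destination (for YV12, swap the U and V destinations):

    #include <libavutil/imgutils.h>

    static void copy_i420_tight(uint8_t *out, const AVFrame *frame)
    {
        const int w = frame->width, h = frame->height;
        uint8_t *dst_y = out;
        uint8_t *dst_u = out + w * h;
        uint8_t *dst_v = out + w * h + (w / 2) * (h / 2);

        av_image_copy_plane(dst_y, w,     frame->data[0], frame->linesize[0], w,     h);     // Y plane
        av_image_copy_plane(dst_u, w / 2, frame->data[1], frame->linesize[1], w / 2, h / 2); // U plane
        av_image_copy_plane(dst_v, w / 2, frame->data[2], frame->linesize[2], w / 2, h / 2); // V plane
    }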

    out is the output buffer, _outputFrame is the AVFrame produced by libavcodec, and _convertCtx is initialized as follows.

    if (_outputFormat == "YUY2")
       _convertCtx = sws_getContext(_width, _height, AV_PIX_FMT_YUV420P,
                                    _width, _height, AV_PIX_FMT_YUYV422, SWS_FAST_BILINEAR, nullptr, nullptr, nullptr);
    else if (_outputFormat == "NV12")
       _convertCtx = sws_getContext(_width, _height, AV_PIX_FMT_YUV420P,
                                    _width, _height, AV_PIX_FMT_NV12, SWS_FAST_BILINEAR, nullptr, nullptr, nullptr);
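
    One thing worth double-checking in the libswscale paths above is the destination stride: the calls assume a tightly packed output buffer (width * 2 for YUY2, width per plane for NV12), whereas the buffer handed over to the renderer may have its own pitch. A sketch of the YUY2 call with the stride taken from the downstream buffer instead; out_stride here is a hypothetical value obtained from the output media sample:

    uint8_t *dst[] = { out };
    int dst_stride[] = { out_stride }; // pitch of the destination buffer, not necessarily width * 2
    sws_scale(_convertCtx, _outputFrame->data, _outputFrame->linesize,
              0, _outputFrame->height, dst, dst_stride);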

    Questions:

    1. Are the manual conversions correct?
    2. Are my assumptions correct?
    3. If the answers to the previous two questions are yes, where is the problem? And especially...
    4. Why does it appear only with some resolutions and not others?
    5. What additional info can I provide?

  • FFmpeg: how to produce MP4 CENC (Common Encryption) videos

    2 November 2022, by Roland Le Franc

    What is the correct syntax to do CENC encryption with ffmpeg?

    The ffmpeg 3.0 release notes include "Common Encryption (CENC) MP4 encoding and decoding support", and the files libavformat/movenccenc.h and libavformat/movenccenc.c seem to include everything needed to encrypt MP4 files according to the Common Encryption standard.

    However, I can't find any documentation on this topic in the ffmpeg manual pages.
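
    For what it's worth, the mov/mp4 muxer that goes with that code exposes encryption_scheme, encryption_key and encryption_kid options, so an invocation along these lines should produce a cenc-aes-ctr protected file (the key and KID below are placeholder hex values, not real keys):

    ffmpeg -i input.mp4 -c copy \
           -encryption_scheme cenc-aes-ctr \
           -encryption_key 00112233445566778899aabbccddeeff \
           -encryption_kid 112233445566778899aabbccddeeff00 \
           output_encrypted.mp4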

    Regards

  • DCT coefficients and MV extraction in ffmpeg Mpeg-4 encoding

    30 June 2017, by Giacomo Calvigioni

    I’m using ffmpeg and libx264 to encode a video, and I want to extract the DCT coefficients and motion vectors of each frame during the encoding process.

    What is the best way to do this?

    I read in the ffmpeg manual that it is possible to use the debug mode with some flags to extract these values. I tried ffmpeg -debug dct_coeff to output the DCT coefficients, but this option doesn't work for me; is it deprecated or tied to a specific ffmpeg version?

    Another option would be to modify and recompile the ffmpeg source code, but I don't know in which part of the code the DCT coefficients and motion vectors are calculated.

    Any help with the debug mode or code modification suggestions would be appreciated.
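
    It may be worth noting that, on the decoding side at least, motion vectors can be exported without touching the source: decoders accept a flags2 option +export_mvs and then attach AV_FRAME_DATA_MOTION_VECTORS side data to each frame (doc/examples/extract_mvs.c in the ffmpeg tree demonstrates this). A minimal sketch, with demuxing and the decode loop omitted; for the DCT coefficients I am not aware of a comparable export API, so patching the codec source still seems necessary:

    #include <libavcodec/avcodec.h>
    #include <libavutil/dict.h>
    #include <libavutil/motion_vector.h>
    #include <stdio.h>

    // Open the decoder with motion-vector export enabled.
    static int open_decoder_with_mvs(AVCodecContext *dec_ctx, const AVCodec *dec)
    {
        AVDictionary *opts = NULL;
        av_dict_set(&opts, "flags2", "+export_mvs", 0);
        int ret = avcodec_open2(dec_ctx, dec, &opts);
        av_dict_free(&opts);
        return ret;
    }

    // For each decoded frame, print the exported motion vectors, if any.
    static void dump_motion_vectors(const AVFrame *frame)
    {
        AVFrameSideData *sd = av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
        if (!sd)
            return; // intra-only frames carry no motion vectors
        const AVMotionVector *mv = (const AVMotionVector *)sd->data;
        for (size_t i = 0; i < sd->size / sizeof(*mv); i++)
            printf("%dx%d block (%d,%d) -> (%d,%d)\n",
                   mv[i].w, mv[i].h,
                   mv[i].src_x, mv[i].src_y, mv[i].dst_x, mv[i].dst_y);
    }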