
Other articles (18)

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images (png, gif, jpg, bmp and more); audio (MP3, Ogg, Wav and more); video (AVI, MP4, OGV, mpg, mov, wmv and more); text, code and other data (OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Other interesting software

    13 April 2011, by

    We don’t claim to be the only ones doing what we do, and we certainly don’t claim to be the best; we simply try to do it well and to keep getting better.
    The list below covers software that is more or less similar to MediaSPIP, or that MediaSPIP more or less tries to emulate.
    We don’t know these projects and haven’t tried them, but you can take a peek.
    Videopress
    Website : http://videopress.com/
    License : GNU/GPL v2
    Source code : (...)

  • Installation in farm mode

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP’s usual private area is no longer used.
    First of all, you must have installed the same files as the installation (...)

On other sites (3813)

  • Broken output from libavcodec/swscale, depending on resolution

    3 June 2014, by dtumaykin

    I am writing video-conferencing software. I have an H.264 stream decoded with libavcodec into IYUV and then rendered into a window with VMR9 in windowless mode. I use a DirectShow graph to do so.

    To avoid an unnecessary conversion into RGB and back (see link), I convert the IYUV video into YUY2 with libswscale before passing it to VMR9.

    I noticed that with a video resolution of 848x480 the output video is broken, so I investigated further and found that for some resolutions the video is always broken. To rule out libswscale, I added support for converting IYUV+padding to plain IYUV, and that worked at all resolutions.

    Still, I wanted to avoid the slow IYUV path, so I implemented support for NV12 (with libswscale) and YV12 (manually, essentially the same as IYUV). After running some tests on two different computers, I got strange results.

    resolution  YUY2    NV12    IYUV    YV12
    PC 1 (my laptop)                
    640x360     ok      broken  ok      broken
    848x480     broken  broken  ok      broken
    960x540     broken  broken  ok      broken
    1024x576    ok      ok      ok      ok
    1280x720    ok      ok      ok      broken
    1920x1080   ok      broken  ok      broken

    PC 2                
    640x360     ok      ok      ok      ok
    848x480     ok      broken  ok      broken
    960x540     ok      ok      ok      ok
    1024x576    ok      ok      ok      ok
    1280x720    ok      broken  ok      ok
    1920x1080   ok      ok      ok      ok

    To rule out a fault in VMR9, I substituted it with EVR, but got the same results.

    I know that padding is needed for memory alignment, and that the amount of padding depends on the CPU used (libavcodec documentation); that may explain the difference between the two computers (the first has an Intel i7-3820QM, the second an Intel Core 2 Quad Q6600). I suppose it has something to do with padding, because the images are corrupted in a particular way: in one screenshot you can see my blue t-shirt in the lower part of the image, and in another you can see the t-shirt in the lower part and my face in the upper part.
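
    A minimal sketch (my own addition, not from the original post) of what that padding looks like: allocating a YUV420P frame with av_frame_get_buffer and a 32-byte alignment typically yields linesize[] values larger than the visible width, which is why the copy code below has to honour linesize rather than width.

    #include <cstdio>
    extern "C" {
    #include <libavutil/frame.h>
    }

    int main() {
        AVFrame *frame = av_frame_alloc();
        frame->format = AV_PIX_FMT_YUV420P;
        frame->width  = 848;   // one of the resolutions reported as broken
        frame->height = 480;
        // 32 is an assumed alignment here; a decoder may choose another value.
        if (av_frame_get_buffer(frame, 32) == 0) {
            std::printf("width=%d  linesize Y=%d U=%d V=%d\n",
                        frame->width, frame->linesize[0],
                        frame->linesize[1], frame->linesize[2]);
        }
        av_frame_free(&frame);
        return 0;
    }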

    The conversion code follows. The NV12 and YUY2 conversions are performed with libswscale, while IYUV and YV12 are copied manually.

    int pixels = _outputFrame->width * _outputFrame->height;
    if (_outputFormat == "YUY2") {
       int stride = _outputFrame->width * 2;
       sws_scale(_convertCtx, _outputFrame->data, _outputFrame->linesize, 0, _outputFrame->height, &out, &stride);
    }
    else if (_outputFormat == "NV12") {
       int stride[] = { _outputFrame->width, _outputFrame->width };
       uint8_t * dst[] = { out, out + pixels };
       sws_scale(_convertCtx, _outputFrame->data, _outputFrame->linesize, 0, _outputFrame->height, dst, stride);
    }
    else if (_outputFormat == "IYUV") { // clean ffmpeg padding
       for (int i = 0; i < _outputFrame->height; i++) // copy Y
           memcpy(out + i * _outputFrame->width, _outputFrame->data[0] + i * _outputFrame->linesize[0] , _outputFrame->width);
       for (int i = 0; i < _outputFrame->height / 2; i++) // copy U
           memcpy(out + pixels + i * _outputFrame->width / 2, _outputFrame->data[1] + i * _outputFrame->linesize[1] , _outputFrame->width / 2);            
       for (int i = 0; i < _outputFrame->height / 2; i++) // copy V
           memcpy(out + pixels + pixels/4 + i * _outputFrame->width / 2, _outputFrame->data[2] + i * _outputFrame->linesize[2] , _outputFrame->width / 2);
    }
    else if (_outputFormat == "YV12") { // like IYUV, but U is inverted with V plane
       for (int i = 0; i < _outputFrame->height; i++) // copy Y
           memcpy(out + i * _outputFrame->width, _outputFrame->data[0] + i * _outputFrame->linesize[0], _outputFrame->width);
       for (int i = 0; i < _outputFrame->height / 2; i++) // copy V
           memcpy(out + pixels + i * _outputFrame->width / 2, _outputFrame->data[2] + i * _outputFrame->linesize[2], _outputFrame->width / 2);
       for (int i = 0; i < _outputFrame->height / 2; i++) // copy U
           memcpy(out + pixels + pixels / 4 + i * _outputFrame->width / 2, _outputFrame->data[1] + i * _outputFrame->linesize[1], _outputFrame->width / 2);
    }

    out is the output buffer, and _outputFrame is the AVFrame produced by libavcodec. _convertCtx is initialized as follows.

    if (_outputFormat == "YUY2")
       _convertCtx = sws_getContext(_width, _height, AV_PIX_FMT_YUV420P,
                                    _width, _height, AV_PIX_FMT_YUYV422, SWS_FAST_BILINEAR, nullptr, nullptr, nullptr);
    else if (_outputFormat == "NV12")
       _convertCtx = sws_getContext(_width, _height, AV_PIX_FMT_YUV420P,
                                    _width, _height, AV_PIX_FMT_NV12, SWS_FAST_BILINEAR, nullptr, nullptr, nullptr);
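
    As an aside (my own sketch, not the poster's code): the per-row memcpy loops above can be expressed with libavutil's av_image_copy_plane, which copies a given number of bytes per row while honouring both the source and destination strides; swapping the U and V calls would give YV12 instead of IYUV.

    extern "C" {
    #include <libavutil/frame.h>
    #include <libavutil/imgutils.h>
    }

    // Copy a (possibly padded) YUV420P AVFrame into a tightly packed IYUV buffer.
    static void copy_yuv420p_to_iyuv(uint8_t *out, const AVFrame *frame) {
        const int w = frame->width, h = frame->height;
        uint8_t *dstY = out;
        uint8_t *dstU = out + w * h;
        uint8_t *dstV = out + w * h + (w / 2) * (h / 2);
        av_image_copy_plane(dstY, w,     frame->data[0], frame->linesize[0], w,     h);
        av_image_copy_plane(dstU, w / 2, frame->data[1], frame->linesize[1], w / 2, h / 2);
        av_image_copy_plane(dstV, w / 2, frame->data[2], frame->linesize[2], w / 2, h / 2);
    }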

    Questions:

    1. Are the manual conversions correct?
    2. Are my assumptions correct?
    3. If the previous two answers are positive, where is the problem? And especially...
    4. Why does it show up only with some resolutions and not others?
    5. What additional info can I provide?

  • Adaptive bitrate: is it better to reduce the resolution of a video or just reduce its bitrate?

    29 January 2018, by loki

    I must provide my videos in adaptive bitrate (HLS). To do this, I need to provide several versions of each video at different bitrates. Using ffmpeg:

    1. I can make several variants of the video at the same resolution but at different bitrates.
    2. I can make several variants of the video at different resolutions, resulting in different bitrates.

    So what is the way to go? What settings do other services like YouTube/Instagram/Facebook use?

  • Revision 23894: Beginning of the fix for #3996. Stop displaying quota_cache, which is obsolete, and ...

    18 January 2018, by erational@erational.org