Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (30)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash fallback is used.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

On other sites (5266)

  • ffmpeg audio/video sync problems when uploading to WhatsApp

    11 January 2021, by Luke Collins

    (Note: I have looked at related questions such as this one, but the solutions there didn't work for me, since presumably their output files wouldn't play correctly in mpv, whereas mine do.)

    I want to send a scene from a film to my friend on WhatsApp. I run

    ffmpeg -i film.mp4 -ss 00:25:30 -t 00:03:00 -c copy out.mp4

    to extract the clip. This produces the file out.mp4, which plays fine when I play it with mpv.

    If instead I play the file in my browser (Firefox), the first frame is static for a couple of seconds while the audio plays, until the video starts and the two are in sync. (In other words, the file seems to have extra audio at the beginning, which plays before my desired clip starts when viewed in Firefox.)
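
    To confirm that the audio stream really does start before the video (a diagnostic sketch, not part of the original question), ffprobe can print each stream's start time:

    ffprobe -v error -show_entries stream=index,codec_type,start_time -of compact out.mp4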

    Diagrammatically, I suspect the video contains the following information:

    [diagram omitted]

    and Firefox plays it in this way:

    [diagram omitted]

    What's worse, when I upload it to WhatsApp, the audio is misaligned with the video, and it plays this way instead:

    [diagram omitted]

    How can I fix this so that the video has no additional video/audio information?

    I appreciate any help with this.
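
    For what it's worth, a plausible cause (an assumption, not something confirmed in the question) is that -c copy can only cut the video at a keyframe while the audio is cut exactly, so the clip keeps audio from before the first decodable video frame, and players differ in how they present the gap. A sketch of a fix is to re-encode so both streams are cut at the exact point:

    ffmpeg -ss 00:25:30 -i film.mp4 -t 00:03:00 -c:v libx264 -c:a aac out.mp4

    If re-encoding is too slow, keeping stream copy but normalizing timestamps with -avoid_negative_ts sometimes helps players that mishandle a negative start:

    ffmpeg -ss 00:25:30 -i film.mp4 -t 00:03:00 -c copy -avoid_negative_ts make_zero out.mp4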

  • Broken output from libavcodec/swscale, depending on resolution

    3 June 2014, by dtumaykin

    I am writing video conferencing software. I have an H.264 stream decoded with libavcodec into IYUV and then rendered into a window with VMR9 in windowless mode. I use a DirectShow graph to do so.

    To avoid an unnecessary conversion to RGB and back (see link), I convert the IYUV video to YUY2 with libswscale before passing it to VMR9.

    I noticed that at a resolution of 848x480 the output video is broken, so I investigated further and found that for some resolutions the video is always broken. To rule out libswscale, I added support for an IYUV+padding to IYUV conversion, and it worked at all resolutions.

    Still, I wanted to avoid the slow IYUV path, so I implemented support for NV12 (with libswscale) and YV12 (manually, essentially the same as IYUV). After testing on two different computers, I got strange results.

    resolution  YUY2    NV12    IYUV    YV12
    PC 1 (my laptop)                
    640x360     ok      broken  ok      broken
    848x480     broken  broken  ok      broken
    960x540     broken  broken  ok      broken
    1024x576    ok      ok      ok      ok
    1280x720    ok      ok      ok      broken
    1920x1080   ok      broken  ok      broken

    PC 2                
    640x360     ok      ok      ok      ok
    848x480     ok      broken  ok      broken
    960x540     ok      ok      ok      ok
    1024x576    ok      ok      ok      ok
    1280x720    ok      broken  ok      ok
    1920x1080   ok      ok      ok      ok

    To rule out a VMR9 fault, I substituted it with EVR, but got the same results.

    I know that padding is needed for memory alignment, and that the amount of padding depends on the CPU used (libavcodec doc); that may explain the difference between the two computers (the first has an Intel i7-3820QM, the second an Intel Core 2 Quad Q6600). I suppose it has something to do with padding, because the images are corrupted in a particular way:

    [screenshot omitted: my blue t-shirt is visible in the lower part of the image]
    [screenshot omitted: my blue t-shirt in the lower part of the image, and my face in the upper one]
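
    As a quick check of that padding hypothesis (a diagnostic sketch, not part of the original code), the per-plane padding can be printed and compared against the resolutions that break:

    // Diagnostic sketch (requires <cstdio>): for YUV420P, plane 0 rows hold
    // 'width' bytes and planes 1-2 hold 'width / 2'; anything beyond that in
    // linesize[] is alignment padding added by the decoder.
    for (int i = 0; i < 3; i++) {
        int packed = (i == 0) ? _outputFrame->width : _outputFrame->width / 2;
        printf("plane %d: linesize=%d packed=%d padding=%d\n",
               i, _outputFrame->linesize[i], packed,
               _outputFrame->linesize[i] - packed);
    }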

    The conversion code follows. The NV12 and YUY2 conversions are performed with libswscale, while IYUV and YV12 are done manually.

    int pixels = _outputFrame->width * _outputFrame->height;
    if (_outputFormat == "YUY2") {
       int stride = _outputFrame->width * 2;
       sws_scale(_convertCtx, _outputFrame->data, _outputFrame->linesize, 0, _outputFrame->height, &out, &stride);
    }
    else if (_outputFormat == "NV12") {
       int stride[] = { _outputFrame->width, _outputFrame->width };
       uint8_t * dst[] = { out, out + pixels };
       sws_scale(_convertCtx, _outputFrame->data, _outputFrame->linesize, 0, _outputFrame->height, dst, stride);
    }
    else if (_outputFormat == "IYUV") { // clean ffmpeg padding
       for (int i = 0; i < _outputFrame->height; i++) // copy Y
           memcpy(out + i * _outputFrame->width, _outputFrame->data[0] + i * _outputFrame->linesize[0] , _outputFrame->width);
       for (int i = 0; i < _outputFrame->height / 2; i++) // copy U
           memcpy(out + pixels + i * _outputFrame->width / 2, _outputFrame->data[1] + i * _outputFrame->linesize[1] , _outputFrame->width / 2);            
       for (int i = 0; i < _outputFrame->height / 2; i++) // copy V
           memcpy(out + pixels + pixels/4 + i * _outputFrame->width / 2, _outputFrame->data[2] + i * _outputFrame->linesize[2] , _outputFrame->width / 2);
    }
    else if (_outputFormat == "YV12") { // like IYUV, but U is inverted with V plane
       for (int i = 0; i < _outputFrame->height; i++) // copy Y
           memcpy(out + i * _outputFrame->width, _outputFrame->data[0] + i * _outputFrame->linesize[0], _outputFrame->width);
       for (int i = 0; i < _outputFrame->height / 2; i++) // copy V
           memcpy(out + pixels + i * _outputFrame->width / 2, _outputFrame->data[2] + i * _outputFrame->linesize[2], _outputFrame->width / 2);
       for (int i = 0; i < _outputFrame->height / 2; i++) // copy U
           memcpy(out + pixels + pixels / 4 + i * _outputFrame->width / 2, _outputFrame->data[1] + i * _outputFrame->linesize[1], _outputFrame->width / 2);
    }

    out is the output buffer, _outputFrame is the AVFrame produced by libavcodec, and _convertCtx is initialized as follows.

    if (_outputFormat == "YUY2")
       _convertCtx = sws_getContext(_width, _height, AV_PIX_FMT_YUV420P,
                                    _width, _height, AV_PIX_FMT_YUYV422, SWS_FAST_BILINEAR, nullptr, nullptr, nullptr);
    else if (_outputFormat == "NV12")
       _convertCtx = sws_getContext(_width, _height, AV_PIX_FMT_YUV420P,
                                    _width, _height, AV_PIX_FMT_NV12, SWS_FAST_BILINEAR, nullptr, nullptr, nullptr);
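
    As an aside, the manual planar copies above can be expressed with libavutil's av_image_copy_to_buffer, which performs the same strided-to-packed copy per plane (a sketch; outSize is a hypothetical variable holding the capacity of out, and YV12 would still need its U and V planes swapped afterwards):

    #include <libavutil/imgutils.h>

    // Pack the padded decoder frame into 'out' with rows tightly packed
    // (align = 1); returns the number of bytes written or a negative AVERROR.
    int written = av_image_copy_to_buffer(out, outSize,
                                          _outputFrame->data, _outputFrame->linesize,
                                          AV_PIX_FMT_YUV420P,
                                          _outputFrame->width, _outputFrame->height, 1);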

    Questions:

    1. Are the manual conversions correct?
    2. Are my assumptions correct?
    3. If the previous two answers are positive, where is the problem? And especially...
    4. Why does it appear only with some resolutions and not others?
    5. What additional info can I provide?

  • FFMPEG: Set Opacity of audio waveform color

    13 August 2018, by Software Development Consultan

    I was trying to add transparency to the generated waveform. There seems to be no direct option in the 'showwaves' filter, so I came across 'colorkey', which might help.

    I am trying the following:

    ffmpeg -y -loop 1 -threads 0 -i background.png -i input.mp3 -filter_complex "[1:a]aformat=channel_layouts=mono,showwaves=s=1280x100:rate=7:mode=cline:scale=sqrt:colors=0x0000ff,colorkey=color=0x0000ff:similarity=0.01:blend=0.1[v] ; [0:v][v] overlay=0:155 [v1]" -map "[v1]" -map 1:a -c:v libx264 -crf 35 -ss 0 -t 5 -c:a copy -shortest -pix_fmt yuv420p -threads 0 test_org.mp4

    So I wanted a blue waveform, and I wanted to set its opacity to anything between 1 and 0, say. But it seems this generates a black box, which is the actual 1280x100 background. I want to keep the waveform's background transparent and change the opacity of the waveform only.

    Result of my command: [screenshots omitted]

    Can you please let me know your suggestion?
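
    One possible direction (a sketch, not a tested answer): showwaves draws onto a transparent canvas, and ffmpeg color values accept an @alpha suffix, so the colorkey step may be unnecessary; the waveform's opacity can be set directly in colors and the overlay will blend it:

    ffmpeg -y -loop 1 -i background.png -i input.mp3 -filter_complex "[1:a]aformat=channel_layouts=mono,showwaves=s=1280x100:rate=7:mode=cline:scale=sqrt:colors=0x0000ff@0.5[v] ; [0:v][v] overlay=0:155 [v1]" -map "[v1]" -map 1:a -c:v libx264 -crf 35 -ss 0 -t 5 -c:a copy -shortest -pix_fmt yuv420p test_org.mp4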

    @Gyan, this is with reference to the following question, which you answered.

    Related last question

    Thanks, Hardik