
Other articles (8)

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether of type: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)

  • Custom menus

    14 November 2010

    MediaSPIP uses the Menus plugin to manage several configurable menus for navigation.
    This lets channel administrators fine-tune the configuration of these menus.
    Menus created when the site is initialized
    By default, three menus are created automatically when the site is initialized: the main menu; identifier: barrenav; this menu is generally inserted at the top of the page, after the header block, and its identifier makes it compatible with templates based on Zpip; (...)

On other sites (3891)

  • How to encode Planar 4:2:0 (fourcc P010)

    20 July 2021, by DennisFleurbaaij

    I'm trying to recode fourcc V210 (a packed YUV 4:2:2 format) into P010 (planar YUV 4:2:0). I think I've implemented it according to the spec, but the renderer is showing a wrong image, so something is off. There is a decent example of decoding V210 in ffmpeg (the defines below are adapted from that code), but I can't find a P010 encoder to see what I'm doing wrong.

    


    (Yes, I've tried ffmpeg and that works, but it's too slow for this; it takes 30 ms per frame on an Intel Gen11 i7.)

    


    Clarification (after @Frank's question): the frames being processed are 4K (3840 px wide), hence there is no code for handling the 128-byte line alignment.

    


    This is running on Intel, so little-endian conversions apply.
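
    For reference, a minimal sketch of the two layouts being translated between (assuming the standard v210 and P010 definitions):

// v210: 6 pixels packed into four 32-bit little-endian words, three 10-bit
// components per word (bits 0-9, 10-19, 20-29):
//   word 0: U0  Y0  V0
//   word 1: Y1  U2  Y2
//   word 2: V2  Y3  U4
//   word 3: Y4  V4  Y5
//
// P010: a full-resolution plane of 16-bit Y samples, followed by a
// half-resolution plane of interleaved U,V pairs (4:2:0). Each 10-bit
// value sits in the *upper* 10 bits of its 16-bit word:
inline uint16_t p010_sample(uint16_t v10) { return (uint16_t)(v10 << 6); }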

    


    Try 1 - all-green image:

    


    The following code:

    


    #define V210_READ_PACK_BLOCK(a, b, c) \
    do {                              \
        val  = *src++;                \
        a = val & 0x3FF;              \
        b = (val >> 10) & 0x3FF;      \
        c = (val >> 20) & 0x3FF;      \
    } while (0)

#define PIXELS_PER_PACK 6
#define BYTES_PER_PACK (4*4)

void MyClass::FormatVideoFrame(
    BYTE* inFrame,
    BYTE* outBuffer)
{
    const uint32_t pixels = m_height * m_width;

    const uint32_t* src = (const uint32_t *)inFrame;

    uint16_t* dstY = (uint16_t *)outBuffer;

    uint16_t* dstUVStart = (uint16_t*)(outBuffer + ((ptrdiff_t)pixels * sizeof(uint16_t)));
    uint16_t* dstUV = dstUVStart;

    const uint32_t packsPerLine = m_width / PIXELS_PER_PACK;

    for (uint32_t line = 0; line < m_height; line++)
    {
        for (uint32_t pack = 0; pack < packsPerLine; pack++)
        {
            uint32_t val;
            uint16_t u, y1, y2, v;

            if (pack % 2 == 0)
            {
                V210_READ_PACK_BLOCK(u, y1, v);
                *dstUV++ = u;
                *dstY++ = y1;
                *dstUV++ = v;

                V210_READ_PACK_BLOCK(y1, u, y2);
                *dstY++ = y1;
                *dstUV++ = u;
                *dstY++ = y2;

                V210_READ_PACK_BLOCK(v, y1, u);
                *dstUV++ = v;
                *dstY++ = y1;
                *dstUV++ = u;

                V210_READ_PACK_BLOCK(y1, v, y2);
                *dstY++ = y1;
                *dstUV++ = v;
                *dstY++ = y2;
            }
            else
            {
                V210_READ_PACK_BLOCK(u, y1, v);
                *dstY++ = y1;

                V210_READ_PACK_BLOCK(y1, u, y2);
                *dstY++ = y1;
                *dstY++ = y2;

                V210_READ_PACK_BLOCK(v, y1, u);
                *dstY++ = y1;

                V210_READ_PACK_BLOCK(y1, v, y2);
                *dstY++ = y1;
                *dstY++ = y2;
            }
        }
    }

#ifdef _DEBUG

    // Fully written Y space
    assert(dstY == dstUVStart);

    // Fully written UV space
    const BYTE* expectedCurrentUVPtr = outBuffer + (ptrdiff_t)GetOutFrameSize();
    assert(expectedCurrentUVPtr == (BYTE *)dstUV);

#endif
}

// This is called to determine outBuffer size
LONG MyClass::GetOutFrameSize() const
{
    const LONG pixels = m_height * m_width;

    return
        (pixels * sizeof(uint16_t)) +               // Y plane: one 16-bit sample per pixel
        (pixels / 2 / 2 * (2 * sizeof(uint16_t)));  // UV plane: one 16-bit U+V pair per 2x2 pixel block
}
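
    As a sanity check, GetOutFrameSize worked through for the 4K frames mentioned above (only the 3840 width is stated, so 3840x2160 is an assumption):

// Hypothetical 3840x2160 frame:
static_assert(3840 * 2160 * 2 == 16588800, "Y plane bytes");
static_assert((3840 / 2) * (2160 / 2) * 2 * 2 == 8294400, "interleaved UV plane bytes");
static_assert(16588800 + 8294400 == 3840 * 2160 * 3, "total is 1.5x the 16-bit Y size, as expected for 4:2:0");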


    


    This leads to an all-green image. It turned out to be a missing bit shift: each 10-bit value must be placed in the upper bits of its 16-bit word, as per the P010 spec (the P010_WRITE_VALUE macro in Try 2 below does exactly this).

    


    Try 2 - Y works, UV doubled?

    


    Updated the code to properly (or so I think) shift the YUV values into the correct position in their 16-bit space.

    


    #define V210_READ_PACK_BLOCK(a, b, c) \
    do {                              \
        val  = *src++;                \
        a = val & 0x3FF;              \
        b = (val >> 10) & 0x3FF;      \
        c = (val >> 20) & 0x3FF;      \
    } while (0)


#define P010_WRITE_VALUE(d, v) (*(d)++ = (uint16_t)((v) << 6))

#define PIXELS_PER_PACK 6
#define BYTES_PER_PACK (4 * sizeof(uint32_t))

// Snipped constructor here which guarantees that we're processing
// something which does not violate alignment.

void MyClass::FormatVideoFrame(
    const BYTE* inBuffer,
    BYTE* outBuffer)
{   
    const uint32_t pixels = m_height * m_width;
    const uint32_t aligned_width = ((m_width + 47) / 48) * 48;
    const uint32_t stride = aligned_width * 8 / 3;

    uint16_t* dstY = (uint16_t *)outBuffer;

    uint16_t* dstUVStart = (uint16_t*)(outBuffer + ((ptrdiff_t)pixels * sizeof(uint16_t)));
    uint16_t* dstUV = dstUVStart;

    const uint32_t packsPerLine = m_width / PIXELS_PER_PACK;

    for (uint32_t line = 0; line < m_height; line++)
    {
        // Lines start at 128 byte alignment
        const uint32_t* src = (const uint32_t*)(inBuffer + (ptrdiff_t)(line * stride));

        for (uint32_t pack = 0; pack < packsPerLine; pack++)
        {
            uint32_t val;
            uint16_t u, y1, y2, v;

            if (pack % 2 == 0)
            {
                V210_READ_PACK_BLOCK(u, y1, v);
                P010_WRITE_VALUE(dstUV, u);
                P010_WRITE_VALUE(dstY, y1);
                P010_WRITE_VALUE(dstUV, v);

                V210_READ_PACK_BLOCK(y1, u, y2);
                P010_WRITE_VALUE(dstY, y1);
                P010_WRITE_VALUE(dstUV, u);
                P010_WRITE_VALUE(dstY, y2);

                V210_READ_PACK_BLOCK(v, y1, u);
                P010_WRITE_VALUE(dstUV, v);
                P010_WRITE_VALUE(dstY, y1);
                P010_WRITE_VALUE(dstUV, u);

                V210_READ_PACK_BLOCK(y1, v, y2);
                P010_WRITE_VALUE(dstY, y1);
                P010_WRITE_VALUE(dstUV, v);
                P010_WRITE_VALUE(dstY, y2);
            }
            else
            {
                V210_READ_PACK_BLOCK(u, y1, v);
                P010_WRITE_VALUE(dstY, y1);

                V210_READ_PACK_BLOCK(y1, u, y2);
                P010_WRITE_VALUE(dstY, y1);
                P010_WRITE_VALUE(dstY, y2);

                V210_READ_PACK_BLOCK(v, y1, u);
                P010_WRITE_VALUE(dstY, y1);

                V210_READ_PACK_BLOCK(y1, v, y2);
                P010_WRITE_VALUE(dstY, y1);
                P010_WRITE_VALUE(dstY, y2);
            }
        }
    }

#ifdef _DEBUG

    // Fully written Y space
    assert(dstY == dstUVStart);

    // Fully written UV space
    const BYTE* expectedCurrentUVPtr = outBuffer + (ptrdiff_t)GetOutFrameSize();
    assert(expectedCurrentUVPtr == (BYTE *)dstUV);

#endif
}


    


    This leads to the Y plane being correct, and the number of U and V lines is right as well, but somehow U and V are not overlaid properly: there are two versions of the image, seemingly mirrored through the center vertical. Something similar, but less visible, happens when zeroing out V. So both of these are being rendered at half the width? Any tips appreciated :)

    


    Fix:
Found the bug: I was alternating the UV writes per pack instead of per line.

    


    if (pack % 2 == 0)


    


    Should be

    


    if (line % 2 == 0)
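
    With that change, the inner loop of Try 2 becomes (a sketch of the corrected structure, not verbatim final code):

for (uint32_t line = 0; line < m_height; line++)
{
    const uint32_t* src = (const uint32_t*)(inBuffer + (ptrdiff_t)(line * stride));
    const bool writeUV = (line % 2 == 0);  // chroma is vertically subsampled in 4:2:0

    for (uint32_t pack = 0; pack < packsPerLine; pack++)
    {
        uint32_t val;
        uint16_t u, y1, y2, v;

        // Same v210 component order as before; UV is written only on even lines
        V210_READ_PACK_BLOCK(u, y1, v);
        if (writeUV) P010_WRITE_VALUE(dstUV, u);
        P010_WRITE_VALUE(dstY, y1);
        if (writeUV) P010_WRITE_VALUE(dstUV, v);

        V210_READ_PACK_BLOCK(y1, u, y2);
        P010_WRITE_VALUE(dstY, y1);
        if (writeUV) P010_WRITE_VALUE(dstUV, u);
        P010_WRITE_VALUE(dstY, y2);

        V210_READ_PACK_BLOCK(v, y1, u);
        if (writeUV) P010_WRITE_VALUE(dstUV, v);
        P010_WRITE_VALUE(dstY, y1);
        if (writeUV) P010_WRITE_VALUE(dstUV, u);

        V210_READ_PACK_BLOCK(y1, v, y2);
        P010_WRITE_VALUE(dstY, y1);
        if (writeUV) P010_WRITE_VALUE(dstUV, v);
        P010_WRITE_VALUE(dstY, y2);
    }
}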


    


  • Using FFMPEG filters directly in Web Audio API AudioWorklet

    13 July 2021, by Cynic_Android

    I am trying to leverage the vast number of audio filters that FFmpeg has, and see if I can use them directly in a custom AudioWorklet so I don't have to reinvent the wheel for each and every filter. One option I came across is to compile the AVFilter library to WASM and write a wrapper class to call the library functions:
https://dev.to/alfg/ffmpeg-webassembly-2cbl

    


    But I am looking for a solution where the data can be piped into the filter and the output passed on immediately to the other AudioWorklet nodes, so the effect can be heard without delay.
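
    To make that concrete, here is a sketch of what the C++ side of such a wrapper could look like, compiled with Emscripten and called from an AudioWorkletProcessor's process() callback. The exported names (filter_init, filter_process) are made up, mono float samples are assumed, and the libavfilter calls follow FFmpeg's filtering_audio.c example (FFmpeg 4.x API), not any existing binding:

// Hypothetical Emscripten-exported wrapper around libavfilter (sketch only).
extern "C" {
#include <libavfilter/avfilter.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
#include <libavutil/channel_layout.h>
#include <libavutil/frame.h>
}
#include <cstdio>
#include <cstring>

static AVFilterGraph*   graph;
static AVFilterContext* src_ctx;   // "abuffer": the worklet pushes samples in here
static AVFilterContext* sink_ctx;  // "abuffersink": filtered samples come out here

// Build a graph such as "aecho=0.8:0.9:40:0.4" once, at worklet setup.
extern "C" int filter_init(const char* desc, int sample_rate) {
    graph = avfilter_graph_alloc();
    char args[128];
    snprintf(args, sizeof(args),
             "sample_rate=%d:sample_fmt=flt:channel_layout=mono:time_base=1/%d",
             sample_rate, sample_rate);
    if (avfilter_graph_create_filter(&src_ctx, avfilter_get_by_name("abuffer"),
                                     "in", args, nullptr, graph) < 0) return -1;
    if (avfilter_graph_create_filter(&sink_ctx, avfilter_get_by_name("abuffersink"),
                                     "out", nullptr, nullptr, graph) < 0) return -1;

    // Wire our source to the graph's input and the graph's output to our sink.
    AVFilterInOut* outputs = avfilter_inout_alloc();
    AVFilterInOut* inputs  = avfilter_inout_alloc();
    outputs->name = av_strdup("in");  outputs->filter_ctx = src_ctx;  outputs->pad_idx = 0; outputs->next = nullptr;
    inputs->name  = av_strdup("out"); inputs->filter_ctx  = sink_ctx; inputs->pad_idx  = 0; inputs->next  = nullptr;
    int ret = avfilter_graph_parse_ptr(graph, desc, &inputs, &outputs, nullptr);
    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);
    if (ret < 0) return -1;
    return avfilter_graph_config(graph, nullptr) < 0 ? -1 : 0;
}

// Called per 128-sample render quantum; returns the number of samples produced.
// Note: some filters buffer internally, so early calls may return fewer samples
// than they were given -- that buffering is exactly the latency to watch out for.
extern "C" int filter_process(const float* in, float* out, int nb_samples, int sample_rate) {
    AVFrame* frame = av_frame_alloc();
    frame->format         = AV_SAMPLE_FMT_FLT;
    frame->sample_rate    = sample_rate;
    frame->channel_layout = AV_CH_LAYOUT_MONO;
    frame->nb_samples     = nb_samples;
    if (av_frame_get_buffer(frame, 0) < 0) return -1;
    memcpy(frame->data[0], in, nb_samples * sizeof(float));

    int produced = 0;
    if (av_buffersrc_add_frame(src_ctx, frame) >= 0) {  // resets `frame` for reuse
        while (av_buffersink_get_frame(sink_ctx, frame) >= 0) {
            memcpy(out + produced, frame->data[0], frame->nb_samples * sizeof(float));
            produced += frame->nb_samples;
            av_frame_unref(frame);
        }
    }
    av_frame_free(&frame);
    return produced;
}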

    


    Any kind of help would be highly appreciated.

    


  • FFMPEG/DASH-LL creates audio and video chunks at different rates; player is confused (404 errors)

    26 May 2021, by Danny

    I'm trying to create a MPEG-DASH "live" stream from a static file to test various low latency modes. The DASH muxer in FFmpeg creates two AdaptationSets, one for video chunks and one for audio chunks.

    


    However, the audio and video chunk files are not created at the same rate (should they be?). Here stream0 holds the video chunks and stream1 the audio chunks. After a few seconds of running, the webroot directory contains:

    


    chunk-stream0-00001.m4s  chunk-stream1-00001.m4s  
chunk-stream0-00002.m4s  chunk-stream1-00002.m4s  
chunk-stream0-00003.m4s  chunk-stream1-00003.m4s  
chunk-stream0-00004.m4s  chunk-stream1-00004.m4s  
                         chunk-stream1-00005.m4s  
                         chunk-stream1-00006.m4s  
                         chunk-stream1-00007.m4s  
                         chunk-stream1-00008.m4s  
                         chunk-stream1-00009.m4s  
master.mpd  
init-stream0.m4s  
init-stream1.m4s  


    


    The stream doesn't load (or play) in either dash.js or shaka-player, and there are lots of 404 (Not Found) errors for the video chunks. The player requests chunks from both stream0 and stream1 in sequence, ie, stream0-001 + stream1-001, then stream0-002 + stream1-002, and so on.

    


    But since stream0 only goes from 001 to 004, there are lots of 404 errors as it tries to load stream0-005 through stream0-009.

    


    The gap gets wider after letting FFmpeg run for a while, eg, stream0 spans 62 to 75 while stream1 spans 174 to 187. Reloading the player page at this point fails with dash.all.debug.js:15615 [2055][FragmentController] No video bytes to push or stream is inactive. and shows 404 errors for stream0 chunk 188 (which doesn't exist yet!)

    


    [screenshot: browser console showing the 404 errors]

    


    The FFmpeg command was adapted from "DASH streaming from the top-down":

    


    ffmpeg -re -i /mnt/swdevel/TestStreams/H264/ThreeHourMovie.mp4 \
-c:v libx264 -x264-params keyint=120:scenecut=0 -b:v 1M -c:a copy \
-f dash -dash_segment_type mp4 \
 -seg_duration 2 \
 -target_latency 3 \
 -frag_type duration \
 -frag_duration 0.2 \
 -window_size 10 \
 -extra_window_size 3 \
 -streaming 1 \
 -ldash 1 \
 -use_template 1 \
 -use_timeline 0 \
 -write_prft 1 \
 -fflags +nobuffer+flush_packets \
 -format_options "movflags=+cmaf" \
 -utc_timing_url "/pelican/testPlayers/time.php" \
 master.mpd


    


    And the dash.js player code is very simple :

    


    const srcUrl = "../ottWebRoot/playerTest/master.mpd"; 

var player = dashjs.MediaPlayer().create();

let autoPlay = false;
player.initialize(document.querySelector("#videoTagId"), srcUrl, autoPlay);

player.updateSettings(
{
    streaming:
    {
        lowLatencyEnabled: true,
        liveDelay: 2,
        jumpGaps: true,
        jumpLargeGaps: true,
        smallGapLimit: 1.5,
    }
});


    


    To provide the UTCTiming element in the manifest, the small time.php script returns the UTC time from the web server:

    


    <?php
    print gmdate("Y-m-d\TH:i:s\Z");
?>


    


    (It also shows 404 errors for the latest stream1/audio chunk; that's likely a different problem.)

    


    I'm not sure what to try next. Any and all suggestions greatly appreciated.

    


    EDIT I

    


    The suggestion by @Anonymous Coward to change the key interval improved things a lot: with the GOP length matched to the 2-second segment duration, every segment can start on a keyframe, and the chunks for stream0 and stream1 now advance in lock-step with identical sequence numbers.

    


    However, there are still many 404 errors, both on initial page load (without pressing play) and during playback.

    


    I ran watch -n 1 "ls -lt" and compared the output side-by-side with the errors in the browser console. It's hard to compare, but it looks like the browser is trying to fetch files "on the play edge" which haven't yet been created by FFmpeg. See the pic below.


    How do I instruct the browser to wait just a bit more before fetching the edge chunks?


    [screenshot: directory listing next to the browser console 404 errors]


    EDIT II


    Using shaka-player instead of dash.js plays properly, without 404 errors. Configured as:


player.configure(
{
    streaming:
    {
        lowLatencyMode: true,
        inaccurateManifestTolerance: 0,
        rebufferingGoal: 0.1,
    }
});


    Client

    • MacOS 10.12
    • dash.js latest 3.2.2
    • Chrome 79, Safari 12, FireFox v?

    Server

    • Apache 2.4.37
    • PHP 7.2.4 (for time function only)
    • Centos 8

    (For reference, here is the MPD manifest generated by FFmpeg:)


<?xml version="1.0" encoding="utf-8"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" profiles="urn:mpeg:dash:profile:isoff-live:2011" type="dynamic" minimumUpdatePeriod="PT500S" availabilityStartTime="2021-05-24T14:50:00.263Z" publishTime="2021-05-24T15:22:45.335Z" timeShiftBufferDepth="PT50.0S" maxSegmentDuration="PT2.0S" minBufferTime="PT5.0S">
    <ProgramInformation>
    </ProgramInformation>
    <ServiceDescription>
        <Latency target="3000" referenceId="0"/>
    </ServiceDescription>
    <Period start="PT0.0S">
        <AdaptationSet contentType="video" startWithSAP="1" segmentAlignment="true" bitstreamSwitching="true" frameRate="24/1" maxWidth="1280" maxHeight="682" par="15:8" lang="und">
            <Resync dT="200000" type="0"/>
            <Representation mimeType="video/mp4" codecs="avc1.64081f" bandwidth="1000000" width="1280" height="682" sar="1023:1024">
                <ProducerReferenceTime inband="true" type="captured" wallClockTime="2021-05-24T14:50:00.263Z" presentationTime="0">
                    <UTCTiming schemeIdUri="urn:mpeg:dash:utc:http-xsdate:2014" value="/pelican/testPlayers/time.php"/>
                </ProducerReferenceTime>
                <Resync dT="5000000" type="1"/>
                <SegmentTemplate timescale="1000000" duration="2000000" availabilityTimeOffset="1.800" availabilityTimeComplete="false" initialization="init-stream$RepresentationID$.m4s" media="chunk-stream$RepresentationID$-$Number%05d$.m4s" startNumber="1">
                </SegmentTemplate>
            </Representation>
        </AdaptationSet>
        <AdaptationSet contentType="audio" startWithSAP="1" segmentAlignment="true" bitstreamSwitching="true" lang="und">
            <Resync dT="200000" type="0"/>
            <Representation mimeType="audio/mp4" codecs="mp4a.40.2" bandwidth="116317" audioSamplingRate="48000">
                <AudioChannelConfiguration schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011" value="2"/>
                <ProducerReferenceTime inband="true" type="captured" wallClockTime="2021-05-24T14:50:00.306Z" presentationTime="0">
                    <UTCTiming schemeIdUri="urn:mpeg:dash:utc:http-xsdate:2014" value="/pelican/testPlayers/time.php"/>
                </ProducerReferenceTime>
                <Resync dT="21333" type="1"/>
                <SegmentTemplate timescale="1000000" duration="2000000" availabilityTimeOffset="1.800" availabilityTimeComplete="false" initialization="init-stream$RepresentationID$.m4s" media="chunk-stream$RepresentationID$-$Number%05d$.m4s" startNumber="1">
                </SegmentTemplate>
            </Representation>
        </AdaptationSet>
    </Period>
    <UTCTiming schemeIdUri="urn:mpeg:dash:utc:http-xsdate:2014" value="/pelican/testPlayers/time.php"/>
</MPD>
