Media (91)

Other articles (103)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for a "farm mode" installation, you will also need to make other modifications (...)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. See the two images that follow for a comparison.
    Simply activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (9387)

  • FFplay requesting video via RTSP :// but receiving on multicast address

    28 May 2014, by DavidG

    First of all, I apologize for how long the supporting information will be in this post. This is my first post on this forum.

    My issue is I need to run the command line version of ffmpeg to capture a video stream. However, as a proof of concept I’m first attempting to capture and view the video using ffplay (BTW, I have not had any success using ffmpeg or ffprobe). I’m running the ffplay command to read video from a Coretec video encoder which has multicast enabled.

    Unicast address:   172.30.18.50
    Multicast address: 239.130.18.50:4002

    My question is: how can I request the unicast address but receive the video on the multicast address? (BTW, the ffplay operation does not work even if I replace the unicast address with the multicast address below.)

    NOTE: After looking at the Wireshark trace, I see the video data has GSMTAP in the protocol column. When I run "ffmpeg -protocols" I see there is a decoder "gsm" which decodes raw GSM. However, when I use ffplay -f gsm ... I get "Protocol not found".

    I am able to use VLC to view the video using the following command:

    vlc rtsp://172.30.18.50

    It appears from the Wireshark trace that the session is initiated on the Unicast address, but the video is streamed on the Multicast address. VLC is able to determine this and perform the appropriate operation. I don’t know what to add to ffplay to let it know that another stream will be carrying the video.

    I am UNABLE to perform the following ffplay commands (none of them work):

    ffplay -v debug rtsp://172.30.18.50
    ffplay -v debug -rtsp_transport udp rtsp://172.30.18.50
    ffplay -v debug -rtsp_transport udp_multicast rtsp://172.30.18.50

    NOTE: I am able to get ffplay to launch, but the video is badly garbled. Maybe this bit of information will ring a bell for someone? The command I used was:

    ffplay -v debug -i udp://239.130.18.50:4002?sources=172.30.18.50

    The version of ffplay I'm using is:

    ffplay version N-63439-g96470ca Copyright (c) 2003-2014 the FFmpeg developers
     built on May 25 2014 22:09:07 with gcc 4.8.2 (GCC)
     configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-av
    isynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enab
    le-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetyp
    e --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-
    libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libope
    njpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsox
    r --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab -
    -enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx
    --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-
    libxavs --enable-libxvid --enable-decklink --enable-zlib
     libavutil      52. 86.100 / 52. 86.100
     libavcodec     55. 65.100 / 55. 65.100
     libavformat    55. 41.100 / 55. 41.100
     libavdevice    55. 13.101 / 55. 13.101
     libavfilter     4.  5.100 /  4.  5.100
     libswscale      2.  6.100 /  2.  6.100
     libswresample   0. 19.100 /  0. 19.100
     libpostproc    52.  3.100 / 52.  3.100

    The debug output for ffplay -v debug rtsp://172.30.18.50 is:

    [rtsp @ 0000000002a8be80] SDP:=    0KB vq=    0KB sq=    0B f=0/0
    v=0
    o=- 1 1 IN IP4 50.18.30.172
    s=Test
    a=type:broadcast
    t=0 0
    c=IN IP4 239.130.18.50/63
    m=video 4002 RTP/AVP 96
    a=rtpmap:96 MP4V-ES/90000
    a=fmtp:96 profile-level-id=245;config=000001B0F5000001B509000001000000012000C8F8A058BA9860FA616087828307
    a=control:track1

    [rtsp @ 0000000002a8be80] video codec set to: mpeg4
    [udp @ 0000000002a8bac0] end receive buffer size reported is 65536
    [udp @ 0000000002aa1600] end receive buffer size reported is 65536
    [rtsp @ 0000000002a8be80] Nonmatching transport in server reply/0
    rtsp://172.30.18.50: Invalid data found when processing input

    And the Wireshark trace output is:

    OPTIONS rtsp://172.30.18.50:554 RTSP/1.0
    CSeq: 1
    User-Agent: Lavf55.41.100

    RTSP/1.0 200 OK
    CSeq: 1
    Public: DESCRIBE, SETUP, TEARDOWN, PLAY

    DESCRIBE rtsp://172.30.18.50:554 RTSP/1.0
    Accept: application/sdp
    CSeq: 2
    User-Agent: Lavf55.41.100

    RTSP/1.0 200 OK
    CSeq: 2
    Content-Type: application/sdp
    Content-Length: 270

    v=0
    o=- 1 1 IN IP4 50.18.30.172
    s=Test
    a=type:broadcast
    t=0 0
    c=IN IP4 239.130.18.50/63
    m=video 4002 RTP/AVP 96
    a=rtpmap:96 MP4V-ES/90000
    a=fmtp:96 profile-level-id=245;config=000001B0F5000001B509000001000000012000C8F8A058BA9860FA616087828307
    a=control:track1

    SETUP rtsp://172.30.18.50:554 RTSP/1.0
    Transport: RTP/AVP/UDP;unicast;client_port=9574-9575
    CSeq: 3
    User-Agent: Lavf55.41.100

    RTSP/1.0 200 OK
    CSeq: 3
    Session: test
    Transport: RTP/AVP;multicast;destination=;port=4002-4003;ttl=63

    The debug output for ffplay -v debug -rtsp_transport udp rtsp://172.30.18.50 is:

    [rtsp @ 0000000002c5c0a0] SDP:=    0KB vq=    0KB sq=    0B f=0/0
    v=0
    o=- 1 1 IN IP4 50.18.30.172
    s=Test
    a=type:broadcast
    t=0 0
    c=IN IP4 239.130.18.50/63
    m=video 4002 RTP/AVP 96
    a=rtpmap:96 MP4V-ES/90000
    a=fmtp:96 profile-level-id=245;config=000001B0F5000001B509000001000000012000C8F8A058BA9860FA616087828307
    a=control:track1


    [rtsp @ 0000000002c5c0a0] video codec set to: mpeg4
    [udp @ 0000000002c62420] end receive buffer size reported is 65536
    [udp @ 0000000002c726a0] end receive buffer size reported is 65536
    [rtsp @ 0000000002c5c0a0] Nonmatching transport in server reply/0
    rtsp://172.30.18.50: Invalid data found when processing input

    And the Wireshark trace output is:

    OPTIONS rtsp://172.30.18.50:554 RTSP/1.0
    CSeq: 1
    User-Agent: Lavf55.41.100

    RTSP/1.0 200 OK
    CSeq: 1
    Public: DESCRIBE, SETUP, TEARDOWN, PLAY

    DESCRIBE rtsp://172.30.18.50:554 RTSP/1.0
    Accept: application/sdp
    CSeq: 2
    User-Agent: Lavf55.41.100

    RTSP/1.0 200 OK
    CSeq: 2
    Content-Type: application/sdp
    Content-Length: 270

    v=0
    o=- 1 1 IN IP4 50.18.30.172
    s=Test
    a=type:broadcast
    t=0 0
    c=IN IP4 239.130.18.50/63
    m=video 4002 RTP/AVP 96
    a=rtpmap:96 MP4V-ES/90000
    a=fmtp:96 profile-level-id=245;config=000001B0F5000001B509000001000000012000C8F8A058BA9860FA616087828307
    a=control:track1

    SETUP rtsp://172.30.18.50:554 RTSP/1.0
    Transport: RTP/AVP/UDP;unicast;client_port=22332-22333
    CSeq: 3
    User-Agent: Lavf55.41.100

    RTSP/1.0 200 OK
    CSeq: 3
    Session: test
    Transport: RTP/AVP;multicast;destination=239.130.18.50;port=4002-4003;ttl=63

    The debug output for ffplay -v debug -rtsp_transport udp_multicast is:

    [rtsp @ 00000000002fc100] SDP:=    0KB vq=    0KB sq=    0B f=0/0
    v=0
    o=- 1 1 IN IP4 50.18.30.172
    s=Test
    a=type:broadcast
    t=0 0
    c=IN IP4 239.130.18.50/63
    m=video 4002 RTP/AVP 96
    a=rtpmap:96 MP4V-ES/90000
    a=fmtp:96 profile-level-id=245;config=000001B0F5000001B509000001000000012000C8F8A058BA9860FA616087828307
    a=control:track1

    [rtsp @ 00000000002fc100] video codec set to: mpeg4
       nan    :  0.000 fd=   0 aq=    0KB vq=    0KB sq=    0B f=0/0

    And the Wireshark trace output is:

    OPTIONS rtsp://172.30.18.50:554 RTSP/1.0
    CSeq: 1
    User-Agent: Lavf55.41.100

    RTSP/1.0 200 OK
    CSeq: 1
    Public: DESCRIBE, SETUP, TEARDOWN, PLAY

    DESCRIBE rtsp://172.30.18.50:554 RTSP/1.0
    Accept: application/sdp
    CSeq: 2
    User-Agent: Lavf55.41.100

    RTSP/1.0 200 OK
    CSeq: 2
    Content-Type: application/sdp
    Content-Length: 270

    v=0
    o=- 1 1 IN IP4 50.18.30.172
    s=Test
    a=type:broadcast
    t=0 0
    c=IN IP4 239.130.18.50/63
    m=video 4002 RTP/AVP 96
    a=rtpmap:96 MP4V-ES/90000
    a=fmtp:96 profile-level-id=245;config=000001B0F5000001B509000001000000012000C8F8A058BA9860FA616087828307a=control:track1

    SETUP rtsp://172.30.18.50:554 RTSP/1.0
    Transport: RTP/AVP/UDP;multicast
    CSeq: 3
    User-Agent: Lavf55.41.100

    Thank you in advance to whoever is willing to tackle this.
    - DavidG

  • Targeting a specific file size in vp8+vorbis encoding using ffmpeg

    4 August 2021, by Mohammad Zamanian

    I have a couple of videos that I want to encode to VP8 for video and Vorbis for audio. This is the FFmpeg command I'm currently using:

    ffmpeg -y -i input.mp4 -map 0:v:0 -s 640x360 -filter:v fps=20 -c:v libvpx -crf 10 -b:v 200k -map 0:a:0 -b:a 48k -c:a libvorbis output.webm

    I want to control the output file size and limit it to 3 MB without clipping the video, losing quality instead, so I can't use -fs 3MB.

    How can I determine the file size based on video and audio bitrates and duration?

    How can I limit the file size without clipping?

  • How to encode Planar 4:2:0 (fourcc P010)

    20 July 2021, by DennisFleurbaaij

    I'm trying to recode fourcc V210 (a packed YUV 4:2:2 format) into P010 (planar YUV 4:2:0). I think I've implemented it according to spec, but the renderer is giving a wrong image, so something is off. There is a decent example of decoding V210 in ffmpeg (the defines below are modified from their solution), but I can't find a P010 encoder to see what I'm doing wrong.

    (Yes, I've tried ffmpeg and that works, but it's too slow for this; it takes 30 ms per frame on an Intel Gen11 i7.)

    Clarification (after @Frank's question): The frames being processed are 4K (3840 px wide), and hence there is no code for handling the 128-byte alignment.

    This is running on Intel, so little-endian conversions apply.

    Try 1 - all-green image:

    The following code

    #define V210_READ_PACK_BLOCK(a, b, c) \
    do {                              \
        val  = *src++;                \
        a = val & 0x3FF;              \
        b = (val >> 10) & 0x3FF;      \
        c = (val >> 20) & 0x3FF;      \
    } while (0)

#define PIXELS_PER_PACK 6
#define BYTES_PER_PACK (4*4)

void MyClass::FormatVideoFrame(
    BYTE* inFrame,
    BYTE* outBuffer)
{
    const uint32_t pixels = m_height * m_width;

    const uint32_t* src = (const uint32_t *)inFrame;

    uint16_t* dstY = (uint16_t *)outBuffer;

    uint16_t* dstUVStart = (uint16_t*)(outBuffer + ((ptrdiff_t)pixels * sizeof(uint16_t)));
    uint16_t* dstUV = dstUVStart;

    const uint32_t packsPerLine = m_width / PIXELS_PER_PACK;

    for (uint32_t line = 0; line < m_height; line++)
    {
        for (uint32_t pack = 0; pack < packsPerLine; pack++)
        {
            uint32_t val;
            uint16_t u, y1, y2, v;

            if (pack % 2 == 0)
            {
                V210_READ_PACK_BLOCK(u, y1, v);
                *dstUV++ = u;
                *dstY++ = y1;
                *dstUV++ = v;

                V210_READ_PACK_BLOCK(y1, u, y2);
                *dstY++ = y1;
                *dstUV++ = u;
                *dstY++ = y2;

                V210_READ_PACK_BLOCK(v, y1, u);
                *dstUV++ = v;
                *dstY++ = y1;
                *dstUV++ = u;

                V210_READ_PACK_BLOCK(y1, v, y2);
                *dstY++ = y1;
                *dstUV++ = v;
                *dstY++ = y2;
            }
            else
            {
                V210_READ_PACK_BLOCK(u, y1, v);
                *dstY++ = y1;

                V210_READ_PACK_BLOCK(y1, u, y2);
                *dstY++ = y1;
                *dstY++ = y2;

                V210_READ_PACK_BLOCK(v, y1, u);
                *dstY++ = y1;

                V210_READ_PACK_BLOCK(y1, v, y2);
                *dstY++ = y1;
                *dstY++ = y2;
            }
        }
    }

#ifdef _DEBUG

    // Fully written Y space
    assert(dstY == dstUVStart);

    // Fully written UV space
    const BYTE* expectedCurrentUVPtr = outBuffer + (ptrdiff_t)GetOutFrameSize();
    assert(expectedCurrentUVPtr == (BYTE *)dstUV);

#endif
}

// This is called to determine outBuffer size
LONG MyClass::GetOutFrameSize() const
{
    const LONG pixels = m_height * m_width;

    return
        (pixels * sizeof(uint16_t)) +               // one 16-bit Y per pixel
        (pixels / 2 / 2 * (2 * sizeof(uint16_t)));  // one 16-bit U and one V per 2x2 block
}

    This leads to an all-green image. It turned out to be a missing bit shift to place the 10 bits in the upper bits of the 16-bit value, as per the P010 spec.

    Try 2 - Y works, UV doubled?

    I updated the code to properly (or so I think) shift the YUV values to the correct position in their 16-bit space.

    #define V210_READ_PACK_BLOCK(a, b, c) \
    do {                              \
        val  = *src++;                \
        a = val & 0x3FF;              \
        b = (val >> 10) & 0x3FF;      \
        c = (val >> 20) & 0x3FF;      \
    } while (0)


#define P010_WRITE_VALUE(d, v) (*d++ = (v << 6))

#define PIXELS_PER_PACK 6
#define BYTES_PER_PACK (4 * sizeof(uint32_t))

// Snipped constructor here which guarantees that we're processing
// something which does not violate alignment.

void MyClass::FormatVideoFrame(
    const BYTE* inBuffer,
    BYTE* outBuffer)
{   
    const uint32_t pixels = m_height * m_width;
    const uint32_t aligned_width = ((m_width + 47) / 48) * 48;
    const uint32_t stride = aligned_width * 8 / 3;

    uint16_t* dstY = (uint16_t *)outBuffer;

    uint16_t* dstUVStart = (uint16_t*)(outBuffer + ((ptrdiff_t)pixels * sizeof(uint16_t)));
    uint16_t* dstUV = dstUVStart;

    const uint32_t packsPerLine = m_width / PIXELS_PER_PACK;

    for (uint32_t line = 0; line < m_height; line++)
    {
        // Lines start at 128 byte alignment
        const uint32_t* src = (const uint32_t*)(inBuffer + (ptrdiff_t)(line * stride));

        for (uint32_t pack = 0; pack < packsPerLine; pack++)
        {
            uint32_t val;
            uint16_t u, y1, y2, v;

            if (pack % 2 == 0)
            {
                V210_READ_PACK_BLOCK(u, y1, v);
                P010_WRITE_VALUE(dstUV, u);
                P010_WRITE_VALUE(dstY, y1);
                P010_WRITE_VALUE(dstUV, v);

                V210_READ_PACK_BLOCK(y1, u, y2);
                P010_WRITE_VALUE(dstY, y1);
                P010_WRITE_VALUE(dstUV, u);
                P010_WRITE_VALUE(dstY, y2);

                V210_READ_PACK_BLOCK(v, y1, u);
                P010_WRITE_VALUE(dstUV, v);
                P010_WRITE_VALUE(dstY, y1);
                P010_WRITE_VALUE(dstUV, u);

                V210_READ_PACK_BLOCK(y1, v, y2);
                P010_WRITE_VALUE(dstY, y1);
                P010_WRITE_VALUE(dstUV, v);
                P010_WRITE_VALUE(dstY, y2);
            }
            else
            {
                V210_READ_PACK_BLOCK(u, y1, v);
                P010_WRITE_VALUE(dstY, y1);

                V210_READ_PACK_BLOCK(y1, u, y2);
                P010_WRITE_VALUE(dstY, y1);
                P010_WRITE_VALUE(dstY, y2);

                V210_READ_PACK_BLOCK(v, y1, u);
                P010_WRITE_VALUE(dstY, y1);

                V210_READ_PACK_BLOCK(y1, v, y2);
                P010_WRITE_VALUE(dstY, y1);
                P010_WRITE_VALUE(dstY, y2);
            }
        }
    }

#ifdef _DEBUG

    // Fully written Y space
    assert(dstY == dstUVStart);

    // Fully written UV space
    const BYTE* expectedCurrentUVPtr = outBuffer + (ptrdiff_t)GetOutFrameSize();
    assert(expectedCurrentUVPtr == (BYTE *)dstUV);

#endif
}

    This leads to correct Y, and the number of lines for U and V is right as well, but somehow U and V are not laid out properly. There appear to be two copies, mirrored around the vertical center. Something similar but less visible happens when zeroing out V. So both are being rendered at half the width? Any tips appreciated :)

    Fix:
Found the bug: I was alternating UV per pack instead of per line.

    if (pack % 2 == 0)


    Should be

    if (line % 2 == 0)