Other articles (106)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed to extract the data needed for indexing by search engines, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
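
    For illustration only (MediaSPIP's real pipeline is driven by its own plugins and configuration; the file names and encoder settings below are assumptions), here is a minimal C sketch that shells out to ffmpeg to produce the three video renditions the teaser describes, while leaving the original upload untouched:

        /* Hedged sketch, not MediaSPIP's actual code: produce the three
           web-friendly video formats named above from one uploaded file
           by shelling out to ffmpeg. */
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            const char *cmds[3] = {
                /* MP4: H.264 video + AAC audio (HTML5 and Flash) */
                "ffmpeg -y -i upload.mov -c:v libx264 -c:a aac out.mp4",
                /* Ogv: Theora video + Vorbis audio (HTML5) */
                "ffmpeg -y -i upload.mov -c:v libtheora -q:v 6 -c:a libvorbis out.ogv",
                /* WebM: VP8 video + Vorbis audio (HTML5) */
                "ffmpeg -y -i upload.mov -c:v libvpx -c:a libvorbis out.webm",
            };
            for (int i = 0; i < 3; i++) {
                printf("running: %s\n", cmds[i]);
                if (system(cmds[i]) != 0)
                    fprintf(stderr, "encoding step %d failed\n", i);
            }
            return 0;
        }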

  • Write a news item

    21 June 2013, by

    Present changes to your MediaSPIP site, or news from your projects, using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the form used to create a news item.
    News item creation form: for a document of the news type, the default fields are: Publication date (customize the publication date) (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

On other sites (10413)

  • avformat/matroskaenc: Don't implicitly mark WebVTT in WebM as English

    18 January 2020, by Andreas Rheinhardt
    avformat/matroskaenc: Don't implicitly mark WebVTT in WebM as English
    

    Writing the language of WebVTT tracks in WebM proceeded differently
    from that of all other tracks: when no language was given, nothing was
    written instead of "und" (for undefined). Because the default value of
    the Language element in WebM (which inherited it from Matroska) is
    "eng" (for English), any such track was actually flagged as English.

    Doing it this way goes back to commit 509642b4 (the commit adding
    support for WebVTT), and no reason for it was given in the commit
    message or in the discussion of the patch on the mailing list; the
    best explanation I can think of is this: the WebM wiki states "The
    srclang attribute is stored as the Language sub-element." Someone
    unfamiliar with default values in Matroska/WebM could interpret this
    as meaning that no Language element should be written if the language
    is unknown. That is wrong, and this commit changes it.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>

    • [DH] libavformat/matroskaenc.c
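
    As a sketch of what the fix means in practice (not the actual diff; the real change lives in libavformat's private muxing code), the language value written for a track now falls back to "und" instead of the element being omitted:

        /* Hedged illustration in plain C of the corrected behaviour:
           choose the Matroska Language value for a track, writing "und"
           when the stream carries no language tag, so that WebM's
           implicit "eng" default can no longer mislabel the track as
           English. */
        #include <stdio.h>

        static const char *track_language(const char *tagged_language)
        {
            /* Fall back to "und" (undefined) rather than writing nothing. */
            return (tagged_language && tagged_language[0]) ? tagged_language
                                                           : "und";
        }

        int main(void)
        {
            printf("%s\n", track_language("ger")); /* prints "ger" */
            printf("%s\n", track_language(NULL));  /* prints "und", not omitted */
            return 0;
        }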
  • How can I quantitatively measure gstreamer H264 latency between source and display?

    10 August 2018, by KevinM

    I have a project where we are using GStreamer, x264, etc., to multicast a video stream over a local network to multiple receivers (dedicated computers attached to monitors). We’re using GStreamer on both the video source (camera) systems and the display monitors.

    We’re using RTP, payload 96, and libx264 to encode the video stream (no audio).

    But now I need to quantify the latency between (as close as possible to) frame acquisition and display.

    Does anyone have suggestions that use the existing software?

    Ideally I’d like to be able to run the testing software for a few hours to generate enough statistics to quantify the system. That means I can’t do one-off tests, like pointing the source camera at the receiving display monitor while it shows a high-resolution clock and manually calculating the difference...

    I do realise that using a pure software-only solution, I will not be able to quantify the video acquisition delay (i.e. CCD to framebuffer).

    I can arrange that the system clocks on the source and display systems are synchronised to a high accuracy (using PTP), so I will be able to trust the system clocks (else I will use some software to track the difference between the system clocks and remove this from the test results).

    In case it helps, the project applications are written in C++, so I can use C event callbacks, if they’re available, to embed the system time in a custom header (e.g. frame xyz, encoded at time TTT) and use the same information on the receiver to calculate the difference.
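
    One software-only approach that fits the existing pipeline (a sketch under assumptions, not a drop-in answer: the element names, multicast address, and probe placement below are placeholders) is to attach a GStreamer pad probe on each side and log the wall-clock time for every buffer; with PTP-synchronised clocks, matching sender and receiver log lines by buffer PTS gives a per-frame latency distribution:

        /* Log a wall-clock timestamp for every buffer leaving the encoder.
           Run the mirror image on the receiver (probe after the decoder)
           and subtract matching lines offline.
           Build: gcc probe.c $(pkg-config --cflags --libs gstreamer-1.0) */
        #include <gst/gst.h>

        static GstPadProbeReturn
        log_buffer(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
        {
            (void)pad;
            GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER(info);
            /* g_get_real_time(): microseconds since the epoch, read from
               the PTP-disciplined system clock. */
            g_print("%s pts=%" GST_TIME_FORMAT " wallclock_us=%" G_GINT64_FORMAT "\n",
                    (const char *)user_data,
                    GST_TIME_ARGS(GST_BUFFER_PTS(buf)),
                    g_get_real_time());
            return GST_PAD_PROBE_OK;
        }

        int main(int argc, char *argv[])
        {
            gst_init(&argc, &argv);

            /* Placeholder sender pipeline; substitute the real camera source. */
            GstElement *pipeline = gst_parse_launch(
                "videotestsrc is-live=true ! x264enc name=enc tune=zerolatency "
                "! rtph264pay pt=96 ! udpsink host=239.0.0.1 port=5000", NULL);

            /* Probe the encoder's src pad: one log line per encoded frame. */
            GstElement *enc = gst_bin_get_by_name(GST_BIN(pipeline), "enc");
            GstPad *pad = gst_element_get_static_pad(enc, "src");
            gst_pad_add_probe(pad, GST_PAD_PROBE_TYPE_BUFFER,
                              log_buffer, (gpointer)"sender", NULL);
            gst_object_unref(pad);
            gst_object_unref(enc);

            gst_element_set_state(pipeline, GST_STATE_PLAYING);
            g_main_loop_run(g_main_loop_new(NULL, FALSE));
            return 0;
        }

    This cannot capture the sensor-to-framebuffer or display-scanout delays the question already sets aside, but it does bracket everything GStreamer, the encoder, and the network contribute, and it can run unattended for hours to build up statistics.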
