Media (0)


No media matching your criteria is available on this site.

Other articles (95)

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether of type: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.

  • Adding notes and captions to images

    7 February 2011

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

On other sites (10020)

  • lavc: Implement Dolby Vision RPU parsing

    3 January 2022, by Niklas Haas
    lavc: Implement Dolby Vision RPU parsing
    

    Based on a mixture of guesswork, partial documentation in patents, and
    reverse engineering of real-world samples. Confirmed working for all the
    samples I've thrown at it.

    Contains some annoying machinery to persist these values in between
    frames, which is needed in theory even though I've never actually seen a
    sample that relies on it in practice. May or may not work.

    Since the distinction matters greatly for parsing the color matrix
    values, this includes a small helper function to guess the right profile
    from the RPU itself in case the user has forgotten to forward the dovi
    configuration record to the decoder. (Which in practice, only ffmpeg.c
    and ffplay do.)

    Notable omissions / deviations:
    - CRC32 verification. This is based on the MPEG2 CRC32 type, which is
    similar to IEEE CRC32 but apparently different in subtle enough ways
    that I could not get it to pass verification no matter what parameters
    I fed to av_crc. It's possible the code needs some changes.
    - Linear interpolation support. Nothing documents this (beyond its
    existence) and no samples use it, so impossible to implement.
    - All of the extension metadata blocks, but these contain values that
    seem largely congruent with ST2094, HDR10, or other existing forms of
    side data, so I will defer parsing/attaching them to a future commit.
    - The patent describes a mechanism for predicting coefficients from
    previous RPUs, but the bit for the flag whether to use the
    prediction deltas or signal entirely new coefficients does not seem to
    be present in actual RPUs, so we ignore this subsystem entirely.
    - In the patent's spec, the NLQ subsystem also loops over
    num_nlq_pivots, but even in the patent the number is hard-coded to one
    iteration rather than signalled. So we only store one set of coefs.

    Heavily influenced by https://github.com/quietvoid/dovi_tool
    Documentation drawn from US Patent 10,701,399 B2 and ETSI GS CCM 001

    Signed-off-by: Niklas Haas <git@haasn.dev>
    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

    • [DH] configure
    • [DH] libavcodec/Makefile
    • [DH] libavcodec/dovi_rpu.c
    • [DH] libavcodec/dovi_rpu.h
  • Summary Video Accessibility Talk

    23 April 2013, by silvia

    I’ve just got off a call to the UK Digital TV Group, for which I gave a talk on HTML5 video accessibility (slides best viewed in Google Chrome).

    The slides provide a high-level summary of the accessibility features that we’ve developed in the W3C for HTML5 (a minimal markup sketch follows the list), including:

    • Subtitles & Captions with WebVTT and the track element
    • Video Descriptions with WebVTT, the track element and speech synthesis
    • Chapters with WebVTT for semantic navigation
    • Audio Descriptions through synchronising an audio track with a video
    • Sign Language video synchronized with a main video
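
    As a rough sketch of how the track-based items above appear in markup (the file names, languages, and labels here are invented for illustration, not taken from the talk):

    ```javascript
    // Hypothetical <track> children for a <video>, one per track-based feature
    // in the list above; src, srclang, and label values are illustrative only.
    document.querySelector('video').insertAdjacentHTML('beforeend', `
      <track kind="subtitles"    src="subs-en.vtt"     srclang="en" label="English subtitles">
      <track kind="captions"     src="captions-en.vtt" srclang="en" label="English captions">
      <track kind="descriptions" src="desc-en.vtt"     srclang="en" label="Video descriptions">
      <track kind="chapters"     src="chapters-en.vtt" srclang="en" label="Chapters">
    `);
    ```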

    I received some excellent questions.

    The obvious one was about why WebVTT and not TTML. While the advantages of WebVTT should be clear to anyone who has tried to implement TTML support, for some the browsers’ decision to go with WebVTT still seems bothersome. The advantages of CSS over XSL-FO in a browser context are obvious, but less so outside browsers. So the simplicity of WebVTT and its clean integration with HTML have to speak for themselves. Conversion between TTML and WebVTT was a feature being asked for.

    I received a question about how to support ducking (reduce the volume of the main audio track) when using video descriptions. My reply was to either use video descriptions with WebVTT and do ducking during the times that a cue is active, or when using audio descriptions (i.e. actual audio tracks) to add an additional WebVTT file of kind=metadata to mark the intervals in which to do ducking. In both cases some JavaScript will be necessary.
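
    A minimal sketch of the second approach, assuming a kind=metadata track whose cues mark the ducking intervals (the element id, file name, and volume levels are my own, not from the talk):

    ```javascript
    // Duck the main audio whenever a cue of the metadata track is active.
    const video = document.getElementById('v');  // hypothetical video element
    const track = video.querySelector('track[kind="metadata"]').track;
    track.mode = 'hidden';  // load cues and fire events without rendering them

    track.addEventListener('cuechange', () => {
      // One or more active cues means we are inside a ducking interval.
      video.volume = track.activeCues.length > 0 ? 0.25 : 1.0;
    });
    ```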

    I received another question about how to do clean audio, which I had almost forgotten was a requirement from our earlier media accessibility document. “Clean audio” consists of isolating the audio channel containing the spoken dialog and important non-speech information that can then be amplified or otherwise modified, while other channels containing music or ambient sounds are attenuated. I suggested using the mediagroup attribute to provide a main video element (without an audio track) and then the other channels as parallel audio tracks that can be turned on and off and attenuated individually. There is some JavaScript coding involved on top of the APIs that we have defined in HTML, but it can be implemented in browsers that support the mediagroup attribute.
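
    A sketch of that setup (element ids and gain values are invented, and mediagroup support was limited in browsers, so this is conceptual):

    ```javascript
    // Conceptual markup: three elements sharing mediagroup="movie", e.g.
    //   <video id="main" mediagroup="movie" src="movie.webm" muted></video>
    //   <audio id="dialog" mediagroup="movie" src="dialog.webm"></audio>
    //   <audio id="ambience" mediagroup="movie" src="ambience.webm"></audio>
    // The shared MediaController keeps them in sync; script only sets gains.
    const dialog = document.getElementById('dialog');
    const ambience = document.getElementById('ambience');

    dialog.volume = 1.0;    // amplify the clean dialog channel
    ambience.volume = 0.3;  // attenuate music and ambient sounds
    ```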

    Another question was about the possibilities to extend the list of @kind attribute values. I explained that right now we have a proposal for a new text track kind="forced" so as to provide forced subtitles for sections of video in a foreign language. These would be on when no other subtitle or caption tracks are activated. I also explained that if there is a need for application-specific text tracks, kind="metadata" would be the correct choice.
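
    For illustration, an application-specific metadata track can be created from script, with cue text carrying arbitrary data (the JSON payload here is made up for the example):

    ```javascript
    // Create a hidden metadata track and attach an application-defined cue.
    const video = document.querySelector('video');
    const meta = video.addTextTrack('metadata', 'app data');
    meta.mode = 'hidden';

    const cue = new VTTCue(0, 10, JSON.stringify({ scene: 'intro' }));
    cue.addEventListener('enter', () => {
      console.log('cue active:', JSON.parse(cue.text));
    });
    meta.addCue(cue);
    ```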

    I received some further questions, in particular about how to apply styling to captions (e.g. color changes to text) and about how closely browsers are able to keep synchronization across multiple media elements. The former was easily answered with the ::cue pseudo-element, but the latter is a quality-of-implementation feature, so I had to defer to individual browsers.
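
    For the former, a minimal example of cue styling (the selectors are standard; the colors are arbitrary):

    ```javascript
    // Style rendered cues via the ::cue pseudo-element, injected from script
    // so the example stays self-contained.
    const style = document.createElement('style');
    style.textContent = `
      video::cue    { color: yellow; background: rgba(0, 0, 0, 0.8); }
      video::cue(b) { color: red; }  /* targets <b> spans inside cue text */
    `;
    document.head.appendChild(style);
    ```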

    Overall it was a good exercise to summarize the current state of HTML5 video accessibility and I was excited to show off support in Chrome for all the features that we designed into the standard.
