Advanced search

Media (0)

Keyword: - Tags -/api

No media matching your criteria is available on this site.

Other articles (50)

  • Permissions overridden by plugins

    27 April 2010, by Mediaspip core

    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (7202)

  • Produce waveform video from audio using FFMPEG

    27 April 2017, by RhythmicDevil

    I am trying to create a waveform video from audio. My goal is to produce a video that looks something like this

    [image: example waveform visualization]

    For my test I have an mp3 that plays a short clipped sound. There are 4 bars of 1/4 notes and 4 bars of 1/8 notes played at 120 bpm. I am having some trouble coming up with the right combination of preprocessing and filtering to produce a video that looks like the image. The colors don't have to be exact; I am more concerned with the shape of the beats. I tried a couple of different approaches using showwaves and showspectrum. I can't quite wrap my head around why the beats go past so quickly when using showwaves, while showspectrum produces a video where I can see each individual beat.

    ShowWaves

    ffmpeg -i beat_test.mp3 -filter_complex "[0:a]showwaves=s=1280x100:mode=cline:rate=25:scale=sqrt,format=yuv420p[v]" -map "[v]" -map 0:a output_wav.mp4

    This link will download the output of that command.

    ShowSpectrum

    ffmpeg -i beat_test.mp3 -filter_complex "[0:a]showspectrum=s=1280x100:mode=combined:color=intensity:saturation=5:slide=1:scale=cbrt,format=yuv420p[v]" -map "[v]" -an -map 0:a output_spec.mp4

    This link will download the output of that command.

    I posted the simple examples because I didn’t want to confuse the issue by adding all the variations I have tried.

    In practice I suppose I can get away with the output from showspectrum but I’d like to understand where/how I am thinking about this incorrectly. Thanks for any advice.

    Here is a link to the source audio file.
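
    Not from the original post, but a minimal sketch of a quick sanity check: rendering the whole clip as one static waveform image with showwavespic (the input file name is taken from the question; the output name is a placeholder):

    ffmpeg -i beat_test.mp3 -filter_complex "showwavespic=s=1280x240" -frames:v 1 waveform.png

    Seeing the full clip in a single picture makes it easier to judge the shape of each beat, independently of playback pacing (showwaves only draws the samples that fall inside each output frame, whereas showspectrum with slide=1 scrolls, which is likely why the beats seem to flash past in the first video).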

  • avcodec/h2645_sei: loosen up min luminance requirements

    25 May 2024, by Niklas Haas
    avcodec/h2645_sei: loosen up min luminance requirements
    

    The H.265 specification is quite clear on this case:

    > When min_display_mastering_luminance is not in the range of 1 to
    > 50000, the nominal maximum display luminance of the mastering display
    > is unknown or unspecified or specified by other means not specified in
    > this Specification.

    And so the current code is correct in marking luminance data as invalid
    if min luminance is set to 0. However, this breaks playback of at least
    several real-world Blu-ray releases, for example La La Land, Planet of
    the Apes, and quite possibly a lot more. These come with ostensibly
    valid max_luminance tags (1000 nits), but min_luminance set to 0.

    Loosen up this requirement by guarding it behind FF_COMPLIANCE_STRICT.
    We still reject blatantly invalid metadata (wrong value range on
    luminance, max set to 0, max below min, min above 50 nits etc.), so this
    shouldn't cause any unintended regressions.

    Fixes: https://github.com/mpv-player/mpv/issues/14177

    • [DH] libavcodec/h2645_sei.c
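
    Not part of the commit itself, but a hedged sketch of how the mastering-display values it discusses (max_luminance, min_luminance) can be inspected in a file, assuming an FFmpeg build with ffprobe and using input.mkv as a placeholder name:

    ffprobe -v quiet -select_streams v:0 -show_frames -show_entries frame=side_data_list -read_intervals "%+#1" input.mkv

    When the stream carries mastering display metadata, the reported side data includes the min_luminance and max_luminance values whose validation this patch relaxes.
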
  • Trouble with frame accuracy applying subcaps using .ass files to 23.976fps video [closed]

    25 August 2023, by WhatsYourFunction

    Currently testing .ass subcaps in a VFX workflow.
    The goal is to drop specific text over specific shots, and the in/out points have to be frame accurate.
    We're working in a 23.976 project.

    Currently having no trouble using FFmpeg to generate frame-accurate subclips of individual shots from a full-show export by converting hh:mm:ss:ff to seconds and then handling the 24 to 23.976 offset, using the following algorithm:

    InPoint_Seconds = ConvertToSeconds(InPoint_Hmsf_FullShow) - ConvertToSeconds(Start_Hmsf_FullShow) // Convert from SMPTE timecode to seconds.
    InPoint_Seconds = InPoint_Seconds * (1001 / 1000) // Handle the 24 to 23.976 offset
    OutPoint_Seconds = [Same idea as above]
    Duration_Seconds = OutPoint_Seconds - InPoint_Seconds

    ffmpeg -ss InPoint_Seconds -t Duration_Seconds -i SourcePath -c copy DestPath
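
    (Worked example with illustrative values, not from the post: Start_Hmsf_FullShow = 01:00:00:00 and InPoint_Hmsf_FullShow = 01:00:02:12 at a nominal 24 fps give ConvertToSeconds values of 3600 and 3602.5, so InPoint_Seconds = 2.5 * (1001/1000) = 2.5025, which is the value passed to -ss.)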

    So generating frame-accurate copies of portions of a larger file works with perfect accuracy.

    BUT when applying the same logic to subcaps using .ass files, sometimes they land with frame accuracy and sometimes they don't (they'll be 1 frame late at most, and the error does not increase over the span of the source clip).

    Curious if anyone has any ideas.
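
    For reference, a minimal sketch of one way the .ass file could be burned in while making the same frame-accurate subclip (not from the original post; SubsPath.ass is a placeholder, this assumes an FFmpeg build with libass, and it assumes the subtitle timestamps are written relative to the subclip):

    ffmpeg -ss InPoint_Seconds -t Duration_Seconds -i SourcePath -vf "ass=SubsPath.ass" -c:a copy DestPath

    Because the ass filter draws onto re-encoded video frames, each cue lands on whichever output frame its timestamp maps to, so frame accuracy still comes down to how the .ass times were computed.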