Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (65)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    On installation, SPIPmotion creates a new table in the database, named spip_spipmotion_attentes.
    This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the MediaSPIP sources, in standalone form.
    As with the previous version, you must manually install all software dependencies on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources, in standalone form.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)

On other sites (12219)

  • What would cause video whose position is set and played at a certain millisecond to not align with a video trimmed to that same millisecond position?

    2 December 2020, by Cha Pa

    I have developed a Windows C# application that allows you to cut clips of video that is synched to a transcript.

    The application has a video tuner that lets you adjust the start and stop of the clip in 0.1, 1, or 3 second steps, and gives you a short 2 second preview of the start, or of the 2 seconds leading up to the stop. You can then cut the video into a small clip by pressing a Create Clip button.

    Because of the nature of the provider of the videos, these are typically MPEG-1 files over an hour long, varying both in width/height dimensions and in bitrate. Changing the format is not an option.

    My question is this: what would cause the video player to play earlier than where the video is cut using an ffmpeg wrapper?

    I have developed a test version in WinForms with a media player control, a version in WPF with its media player control, and another WinForms version with a VLC wrapper. All of them start early compared to the ffmpeg cut. All of them start at slightly different times from each other, even though they are all given the same millisecond timestamp. In fact, when testing directly by passing a start position on the command line to ffplay and VLC, the two have slightly different start times for the same position, and both differ from the command-line cut made with ffmpeg.

    The infuriating thing is that when I compare the ffmpeg-wrapper exports, they are dead on with another piece of software's playback preview and clip export. Every video I clip that has a different bitrate or resolution is off by a different number of milliseconds: one video may be 0.4 seconds off, another 1.1. I would assume it was a delay from loading the video that I could somehow async/await around, or a problem with the timer, except playback starts EARLY, not late. All three demos are some variation of the code below, which grabs the start and end times and starts a timer to stop the video after playing the number of milliseconds between start and end. Even more peculiar, the shorter the timer, the smaller the start offset appears to be.

    Any thoughts on whether this is an MPEG-1/MPEG-2 issue, or a bitrate issue I can control to be consistent across videos without re-encoding them all? Maybe even outside-the-box ways to put the clips into a memory stream or play the temp file quickly?
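One thing worth checking (an assumption on my part, not something stated in the question): players generally cannot start decoding mid-GOP, so when asked to seek to an arbitrary millisecond they snap back to the preceding keyframe, which would explain playback starting early by an amount that varies with each video's keyframe spacing and bitrate. The keyframe timestamps around a clip start can be inspected with ffprobe, which ships with ffmpeg (input.mpg is a placeholder):

```shell
# List I-frame (keyframe) timestamps in the 5 seconds following 1:00,
# to see which keyframe a player would snap back to when seeking there.
ffprobe -v error -select_streams v:0 \
    -show_entries frame=pts_time,pict_type \
    -of csv=p=0 -read_intervals "00:01:00%+5" input.mpg | grep ",I"
```

If the timestamps the players actually start at line up with those I-frames, the offset is a seeking artifact rather than a timer bug.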

    private void PreviewStartButton_Click(object sender, EventArgs e)
    {
        startPosition = Convert.ToSingle(startBox.Text);
        stopPosition = startPosition + 2; // gives you your 2 second preview of the start
        mediaPlayer.Ctlcontrols.currentPosition = startPosition;
        mediaPlayer.Ctlcontrols.play();
        StopWmpPlayerTimer();  // cancel any preview timer already running
        StartWmpPlayerTimer(); // stops playback once the preview window elapses
    }

  • How to use sexagesimal syntax in arguments of FFmpeg, or if impossible, how to convert it in the Windows CMD shell

    6 December 2020, by Link-akro

    Question updated 2020-12-06 to enlarge the scope without discarding the prior answer, which applies to both the prior and the larger cases.

    I had trouble providing a sexagesimal time (HH:MM:SS.mm) to a filter option that is not an expression, for instance in the trim filter. It turns out there is an escaping rule I did not yet know when I first asked; it was addressed in the first comment, by @Gyan.

    The problem is universal, but the solution may depend on the shell if we go the scripting route... and I am currently stuck with Windows's CMD.exe.

    For instance, the following skips one minute and four tenths of a second into all streams, as accurately as it can seek each, then invokes the trim filter to keep the segment between one and two minutes of the remaining duration, written with two different time syntaxes. This example happens to be compatible with both the CMD and BASH shells, so there is no escaping hell.

    ffmpeg -ss "1:00.4" -i INPUT -vf "trim=start='1\:00':end='120'" OUTPUT

    Then how do we achieve the same in the expressions within the filters? If we cannot avoid shell scripting, I am looking for a Windows CMD solution. I had posted an answer with a piece of script to convert a textual sexagesimal time into a textual decimal number of seconds; it was not useful for the original case, but may apply to more generic cases, and in particular to expressions.

    Example of a failed attempt with one expression in the select filter:

    ffmpeg -ss "1:00.4" -i INPUT -vf "select='between(t,1\:00',120)'" OUTPUT

    Sexagesimal notation does not seem to be supported in ffmpeg filter expressions; I found no reference to it in the documentation, nor on SO or the web.
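Since expressions only accept plain numbers, the failed attempt above would presumably have to spell the minute out in seconds, along these lines (a sketch, not a tested answer):

```shell
# Same intent as the failed attempt, with 1:00 hard-coded as 60 seconds,
# since between() only understands numeric operands.
ffmpeg -ss "1:00.4" -i INPUT -vf "select='between(t,60,120)'" OUTPUT
```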

    I browsed the list of functions defined in ffmpeg's expression library but did not find any way to parse sexagesimal input there, nor any way to use text in its semantics.

    However, I found some unrelated examples that hard-coded an arithmetic expression yielding the equivalent decimal number of seconds, such as 2*60+2 to mean 2:02.

    The arithmetic above could be built by preprocessing shell variables, whichever shell it is, but we first need to parse the components of HH:MM:SS.mm into those variables (bash $var or CMD %var%/%~1 style). Otherwise we could compute the whole value in the shell instead of in the expression, but that is a lot of trouble for little gain.
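This is not a CMD answer, but to make the parse-and-multiply step explicit, in a POSIX shell the whole HH:MM:SS.mm-to-seconds conversion collapses to one awk loop (the function name here is made up):

```shell
# Convert a sexagesimal time such as "1:00.4" or "01:02:03.5" into a
# decimal number of seconds: each colon-separated group multiplies the
# running total by 60 before being added.
to_seconds() {
    echo "$1" | awk -F: '{ t = 0; for (i = 1; i <= NF; i++) t = t * 60 + $i; print t }'
}

to_seconds "2:02"        # 122
to_seconds "01:02:03.5"  # 3723.5
```

The same loop is what a CMD batch equivalent would have to emulate with %var% substring splitting and set /a arithmetic.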

    So while CMD still exists like an undead that is slowly becoming really dead, and while I do not yet have the opportunity to replace it, I wish for an answer that either:

    • does not need the shell/script at all, OR
    • provides a solution in Windows CMD, while relying on it as little as possible.

    Reminder and clarification: the use case assumes we are given a textual sexagesimal time as input and intend to use it in an ffmpeg filter expression, with as little shell dependency as possible, or otherwise in a way that satisfies Windows CMD.

  • FFmpeg: stream audio playlist, normalize loudness and generate spectrogram and waveform

    23 February 2021, by watain

    I would like to use FFmpeg to stream a playlist containing multiple audio files (mainly FLAC and MP3). During playback, I would like FFmpeg to normalize the loudness of the audio signal and separately generate a spectrogram and a waveform of it. The spectrogram and waveform should serve as an audio stream monitor. The final audio stream, spectrogram and waveform outputs will be sent to a browser, which plays the audio stream and continuously renders the spectrogram and waveform "images". I would also like to be able to add and remove audio files from the playlist during playback.

    As a first step, I would like to achieve the desired result with the ffmpeg command, before trying to write code that does the same programmatically.

    (Sidenote: I've discovered libgroove, which basically does what I want, but I would like to understand the FFmpeg internals and write my own piece of software. The target language is Go, and either the goav or go-libav library might do the job. However, I might end up writing the code in C and then creating Go language bindings, instead of relying on one of the named libraries.)

    Here's a little overview:

    playlist (input) --> loudnorm --> split --> spectrogram --> separate output
                                        |
                                        +-----> waveform -----> separate output
                                        |
                                        +-----> encode -------> audio stream output

    For loudness normalization, I intend to use the loudnorm filter, which implements the EBU R128 algorithm.

    For the spectrogram, I intend to use the showspectrum or showspectrumpic filter. Since I want the spectrogram to be "streamable", I'm not really sure how to do this. Maybe there's a way to output segments step by step? Or to output some sort of representation (JSON or any other format) step by step?

    For the waveform, I intend to use the showwaves or showwavespic filter. The same as for the spectrogram applies here, since the output should be "streamable".

    I'm having a little trouble achieving what I want with the ffmpeg command. Here's what I have so far:

    ffmpeg \
    -re -i input.flac \
    -filter_complex "
      [0:a] loudnorm [ln]; \
      [ln] asplit [a][b]; \
      [a] showspectrumpic=size=640x518:mode=combined [ss]; \
      [b] showwavespic=size=1280x202 [sw]
    " \
    -map '[ln]' -map '[ss]' -map '[sw]' \
    -f tee \
    -acodec libmp3lame -ab 128k -ac 2 -ar 44100 \
    "
      [aselect='ln'] rtp://127.0.0.1:1234 | \
      [aselect='ss'] ss.png | \
      [aselect='sw'] sw.png
    "

    Currently, I get the following error:

    Output with label 'ln' does not exist in any defined filter graph, or was already used elsewhere.

    Also, I'm not sure whether aselect is the correct functionality to use. Any hints?
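For what it's worth, the immediate error arises because [ln] is consumed by asplit inside the graph, so it no longer exists for -map; each labelled pad in a filtergraph can be used only once. A sketch of one possible rewiring (untested, and replacing the tee muxer with three plain outputs, splitting into three so every pad is consumed exactly once):

```shell
ffmpeg \
    -re -i input.flac \
    -filter_complex "
      [0:a] loudnorm, asplit=3 [out][a][b]; \
      [a] showspectrumpic=size=640x518:mode=combined [ss]; \
      [b] showwavespic=size=1280x202 [sw]
    " \
    -map '[out]' -acodec libmp3lame -ab 128k -ac 2 -ar 44100 -f rtp rtp://127.0.0.1:1234 \
    -map '[ss]' ss.png \
    -map '[sw]' sw.png
```

Note that showspectrumpic and showwavespic each emit a single image for the whole input, so for a continuously updated monitor the streaming showspectrum/showwaves variants mentioned above would be needed instead.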