
Other articles (27)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer flash player is used as a fallback.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and audio both to conventional computers (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The journey of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions beyond the normal behaviour are executed: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (2048)

  • Tracking granular duration from an ffmpeg stream piped into Nodejs

    20 May 2020, by slifty

    The Situation

    I have a video file that I want to process in nodejs, but I want to run it through ffmpeg first in order to normalize the encoding. As I receive the data for a given data event, I would like to be able to know how "far" into the video the piped stream has progressed, with as close to frame-level granularity as possible.

    0.1 second granularity would be fine.

    The Code

    I am using Nodejs to invoke ffmpeg with the video file path and output the data to stdout:

    const { spawn } = require('child_process')

    const ffmpegSettings = [
        '-i', './path/to/video.mp4',       // Ingest a file
        '-f', 'mpegts',                    // Specify the wrapper
        '-'                                // Output to stdout
    ]
    const ffmpegProcess = spawn('ffmpeg', ffmpegSettings)
    const readableStream = ffmpegProcess.stdout
    readableStream.on('data', async (data) => {
        readableStream.pause() // Pause the stream to make sure we process in order.

        // Process the data.
        // This is where I want to know the "timestamp" for the data.

        readableStream.resume() // Restart the stream.
    })

    The Question

    How can I efficiently and accurately keep track of how far into the video a given 'data' event represents? Keep in mind that a given 'data' event in this context might not even contain enough data to represent a full frame.

    Temporary Edit

    (I will delete this section once a conclusive answer is identified)

    Since posting this question I've done some exploration using ffprobe.

    start = () => {
        this.ffmpegProcess = spawn('ffmpeg', this.getFfmpegSettings())
        this.ffprobeProcess = spawn('ffprobe', this.getFfprobeSettings())
        this.inputStream = this.getInputStream()

        this.inputStream.on('data', (rawData) => {
            this.inputStream.pause()
            if(rawData) {
                this.ffmpegProcess.stdin.write(rawData)
            }
            this.inputStream.resume()
        })

        this.ffmpegProcess.stdout.on('data', (mpegtsData) => {
            this.ffmpegProcess.stdout.pause()
            this.ffprobeProcess.stdin.write(mpegtsData)
            console.log(this.currentPts)
            this.ffmpegProcess.stdout.resume()
        })

        this.ffprobeProcess.stdout.on('data', (probeData) => {
            this.ffprobeProcess.stdout.pause()
            // Each CSV row from -show_packets looks like:
            //   packet,codec_type,stream_index,pts,pts_time,dts,...
            // so column 4 is pts_time in seconds.
            const lastLine = probeData.toString().trim().split('\n').slice(-1).pop()
            const lastPacket = lastLine.split(',')
            const pts = lastPacket[4]
            console.log(`FFPROBE: ${pts}`)
            this.currentPts = pts
            this.ffprobeProcess.stdout.resume()
        })

        logger.info(`Starting ingestion from ${this.constructor.name}...`)
    }

    /**
     * Returns an ffmpeg settings array for this ingestion engine.
     *
     * @return {String[]} A list of ffmpeg command line parameters.
     */
    getFfmpegSettings = () => [
        '-loglevel', 'info',
        '-i', '-',
        '-f', 'mpegts',
        // '-vf', 'fps=fps=30,signalstats,metadata=print:key=lavfi.signalstats.YDIF:file=\'pipe\\:3\'',
        '-',
    ]

    /**
     * Returns an ffprobe settings array for this ingestion engine.
     *
     * @return {String[]} A list of ffprobe command line parameters.
     */
    getFfprobeSettings = () => [
        '-f', 'mpegts',
        '-i', '-',
        '-print_format', 'csv',
        '-show_packets',
    ]

    This pipes the ffmpeg output into ffprobe and uses it to "estimate" where in the stream the processing has gotten. It starts off wildly inaccurate, but after about 5 seconds of processed video, ffprobe and ffmpeg produce data at a similar pace.

    This is a hack, but a step towards the granularity I want. It may be that I need an mpegts parser / chunker that can run on the ffmpeg output directly in NodeJS.

    The output of the above is something along the lines of:

    undefined (repeated around 100x as I assume ffprobe needs more data to start)
FFPROBE: 1.422456 // Note that this represents around 60 ffprobe messages sent at once.
FFPROBE: 1.933867
1.933867 // These lines represent mpegts data sent from ffmpeg, and the latest pts reported by ffprobe
FFPROBE: 2.388989
2.388989
FFPROBE: 2.728578
FFPROBE: 2.989811
FFPROBE: 3.146544
3.146544
FFPROBE: 3.433889
FFPROBE: 3.668989
FFPROBE: 3.802400
FFPROBE: 3.956333
FFPROBE: 4.069333
4.069333
FFPROBE: 4.426544
FFPROBE: 4.609400
FFPROBE: 4.870622
FFPROBE: 5.184089
FFPROBE: 5.337267
5.337267
FFPROBE: 5.915522
FFPROBE: 6.104700
FFPROBE: 6.333478
6.333478
FFPROBE: 6.571833
FFPROBE: 6.705300
6.705300
FFPROBE: 6.738667
6.738667
FFPROBE: 6.777567
FFPROBE: 6.772033
6.772033
FFPROBE: 6.805400
6.805400
FFPROBE: 6.829811
FFPROBE: 6.838767
6.838767
FFPROBE: 6.872133
6.872133
FFPROBE: 6.882056
FFPROBE: 6.905500
6.905500
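    The "mpegts parser running directly in NodeJS" idea above can be sketched as follows. This is not from the original post: it assumes each chunk is aligned to 188-byte TS packets (a real stream can split packets across chunks, so a production version would buffer the remainder) and it only reads PES headers that begin at the top of a packet payload:

```javascript
// Sketch: extract PTS values directly from MPEG-TS data, without ffprobe.
// Field layout follows the MPEG-TS / PES packet structure (ISO/IEC 13818-1).
const TS_PACKET_SIZE = 188

function extractPtsSeconds(chunk) {
    const timestamps = []
    for (let i = 0; i + TS_PACKET_SIZE <= chunk.length; i += TS_PACKET_SIZE) {
        if (chunk[i] !== 0x47) continue              // 0x47 sync byte
        const pusi = (chunk[i + 1] & 0x40) !== 0     // payload_unit_start_indicator
        const afc = (chunk[i + 3] >> 4) & 0x3        // adaptation_field_control
        if (!pusi || !(afc & 0x1)) continue          // need a payload starting a PES packet
        let p = i + 4
        if (afc & 0x2) p += 1 + chunk[i + 4]         // skip the adaptation field
        // PES packets start with the 0x000001 start-code prefix.
        if (chunk[p] !== 0x00 || chunk[p + 1] !== 0x00 || chunk[p + 2] !== 0x01) continue
        const ptsDtsFlags = (chunk[p + 7] >> 6) & 0x3
        if (!(ptsDtsFlags & 0x2)) continue           // no PTS in this PES header
        const b = chunk.subarray(p + 9, p + 14)      // the 5-byte PTS field
        // The PTS is 33 bits, so use arithmetic instead of 32-bit bitwise ops.
        const pts =
            ((b[0] >> 1) & 0x07) * 2 ** 30 +
            b[1] * 2 ** 22 +
            ((b[2] >> 1) & 0x7f) * 2 ** 15 +
            b[3] * 2 ** 7 +
            ((b[4] >> 1) & 0x7f)
        timestamps.push(pts / 90000)                 // 90 kHz clock -> seconds
    }
    return timestamps
}
```

    Calling `extractPtsSeconds(data)` from the 'data' handler and remembering the last value returned would track the stream position at whatever granularity the muxer emits PES headers, typically one PTS per frame.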

  • avutil/mem: Use max_alloc_size as-is

    21 May 2020, by Andreas Rheinhardt
    avutil/mem: Use max_alloc_size as-is
    

    The size of a single allocation performed by av_malloc() or av_realloc()
    is supposed to be bounded by max_alloc_size, which defaults to INT_MAX
    and can be set by the user; yet currently this is not completely
    honoured: The actual value used is max_alloc_size - 32. How this came
    to be can only be understood historically:

    a) 0ecca7a49f8e254c12a3a1de048d738bfbb614c6 disallowed allocations
    > INT_MAX. At that time the size parameter of av_malloc() was an
    unsigned and the commentary added ("lets disallow possible ambiguous
    cases") indicates that this was done as a precaution against calling the
    functions with negative int values. Genuinely limiting the size of
    allocations to INT_MAX doesn't seem to have been the intention given
    that at this time the memalign hack introduced in commit
    da9b170c6f06184a5114dc66afb8385cd0ffff83 (which when enabled increased
    the size of allocations slightly so that one can return a correctly
    aligned pointer that actually does not point to the beginning of the
    allocated buffer) was already present.
    b) Said memalign hack allocated 17 bytes more than actually desired, yet
    allocating 16 bytes more is actually enough and so this was changed in
    a9493601638b048c44751956d2360f215918800c ; this commit also replaced
    INT_MAX by INT_MAX - 16 (and made the limit therefore a limit on the size
    of the allocated buffer), but kept the comment, although there is nothing
    ambiguous about allocating (INT_MAX - 16)..INT_MAX.
    c) 13dfce3d44f99a2d7df71aba8ae003d58db726f7 then increased 16 to 32 for
    AVX, 6b4c0be5586acad3bbafd7d2dd02a8328a5ab632 replaced INT_MAX by
    MAX_MALLOC_SIZE (which was of course defined to be INT_MAX) and
    5a8e994287d8ef181c0a5eac537547d7059b4524 added max_alloc_size and made
    it user-selectable.
    d) 4fb311c804098d78e5ce5f527f9a9c37536d3a08 then dropped the memalign
    hack, yet it kept the -32 (probably because the comment about ambiguous
    cases was still present?), although it is no longer needed at all after
    this commit. Therefore this commit removes it and uses max_alloc_size
    directly.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>

    • [DH] libavutil/mem.c
  • avformat/matroskaenc: Improve mimetype search

    3 November 2019, by Andreas Rheinhardt
    avformat/matroskaenc: Improve mimetype search
    

    Use the mime_types of the corresponding AVCodecDescriptor instead of
    tables specific to Matroska. The former are generally more encompassing:
    They contain every item of the current lists except "text/plain" for
    AV_CODEC_ID_TEXT and "binary" for AV_CODEC_ID_BIN_DATA.

    The former has been preserved by special-casing it while the latter is
    a hack added in c9212abf so that the demuxer (which uses the same tables)
    sets the appropriate CodecID for broken files ("binary" is not a correct
    mime type at all); using it for the muxer was a mistake. The correct
    mime type for AV_CODEC_ID_BIN_DATA is "application/octet-stream" and
    this is what one gets from the AVCodecDescriptor.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>

    • [DH] libavformat/matroskaenc.c