Other articles (31)

  • Libraries and binaries specific to video and audio processing

    31 January 2010

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries: FFmpeg: the main encoder, used to transcode almost any type of video or audio file into formats playable on the web (see this tutorial for how to install it); Oggz-tools: inspection tools for Ogg files; MediaInfo: retrieves information from most video and audio formats;
    Additional, optional binaries: flvtool2: (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and audio both to conventional computers (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article has to be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are performed on top of the normal behaviour (roughly sketched below): retrieving the technical information of the file's audio and video streams; generating a thumbnail by extracting a (...)
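    SPIPmotion itself is a SPIP plugin, so the following is only an illustrative sketch of those two extra actions, written here with the fluent-ffmpeg Node module used elsewhere on this page; the file paths are hypothetical.

    const ffmpeg = require('fluent-ffmpeg');

    const source = '/path/to/source-video.mp4';   // hypothetical uploaded "source" document

    // 1. Retrieve the technical information of the file's audio and video streams.
    ffmpeg.ffprobe(source, (err, metadata) => {
        if (err) throw err;
        console.log(`container duration: ${metadata.format.duration}s`);
        metadata.streams.forEach(s => console.log(`${s.codec_type} stream: ${s.codec_name}`));
    });

    // 2. Generate a thumbnail by extracting a single frame from the video.
    ffmpeg(source)
        .screenshots({ count: 1, folder: '/path/to/thumbnails', filename: 'vignette.png' })
        .on('end', () => console.log('Thumbnail written.'));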

On other sites (5175)

  • Precise method of segmenting & transcoding video+audio (via ffmpeg) into an on-demand HLS stream?

    17 November 2019, by Felix

    Recently I've been messing around with FFmpeg and streams in Node.js. My ultimate goal is to serve a transcoded video stream - from any input file type - via HTTP, generated in segments, in real time, as it's needed.

    I’m currently attempting to handle this using HLS. I pre-generate a dummy m3u8 manifest using the known duration of the input video. It contains a bunch of URLs that point to individual constant-duration segments. Then, once the client player starts requesting the individual URLs, I use the requested path to determine which time range of video the client needs. Then I transcode the video and stream that segment back to them.

    Now for the problem: this approach mostly works, but has a small audio bug. Currently, with most test input files, my code produces a video that - while playable - seems to have a very small (< 0.25 second) audio skip at the start of each segment.

    I think this may be an issue with time-based splitting in ffmpeg, where the audio stream possibly cannot be sliced at exactly the same point as the video. So far, I've been unable to figure out a solution to this problem.
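    As a rough sanity check of that theory (purely illustrative arithmetic, not part of the server below): AAC packs audio into frames of typically 1024 samples, so at the 48 kHz sample rate used in my code an AAC frame lasts about 21 ms, and a cut at a multiple of 5 seconds almost never lands exactly on a frame boundary.

    const sampleRate = 48000;          // matches the '-ar 48000' output option below
    const samplesPerAacFrame = 1024;   // typical AAC frame size in samples
    const segmentDur = 5;              // seconds per segment, as below

    for (let i = 1; i <= 3; i++) {
        const cut = i * segmentDur;                                // ideal cut point, in seconds
        const frames = (cut * sampleRate) / samplesPerAacFrame;    // AAC frames that fit before the cut
        const boundary = Math.round(frames) * samplesPerAacFrame / sampleRate;
        console.log(`cut at ${cut}s -> nearest AAC frame boundary at ${boundary.toFixed(4)}s ` +
                    `(${(1000 * Math.abs(cut - boundary)).toFixed(1)} ms off)`);
    }

    On top of that, each segment is encoded independently, so every segment gets its own AAC encoder priming at the start, which seems consistent in size with the small skip I'm hearing.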

    If anybody has any direction they can steer me in - or even a pre-existing library/server that solves this use case - I'd appreciate the guidance. My knowledge of video encoding is fairly limited.

    I'll include the relevant parts of my current code below, so others can see where I'm stuck. You should be able to run this as a Node.js Express server, then point any HLS player at localhost:8080/master to load the manifest and begin playback. See the transcode.get('/segment/:seg.ts') route at the end for the relevant transcoding bit.

    'use strict';
    const express = require('express');
    const ffmpeg = require('fluent-ffmpeg');
    let PORT = 8080;
    let HOST = 'localhost';
    const transcode = express();


    /*
    * This file demonstrates an Express-based server, which transcodes & streams a video file.
    * All transcoding is handled in memory, in chunks, as needed by the player.
    *
    * It works by generating a fake manifest file for an HLS stream, at the endpoint "/m3u8".
    * This manifest contains links to each "segment" video clip, which browser-side HLS players will load as-needed.
    *
    * The "/segment/:seg.ts" endpoint is the request destination for each clip,
    * and uses FFMpeg to generate each segment on-the-fly, based off which segment is requested.
    */


    const pathToMovie = 'C:\\input-file.mp4';  // The input file to stream as HLS.
    const segmentDur = 5; //  Controls the duration (in seconds) that the file will be chopped into.


    const getMetadata = async(file) => {
       return new Promise( resolve => {
           ffmpeg.ffprobe(file, function(err, metadata) {
               console.log(metadata);
               resolve(metadata);
           });
       });
    };



    // Generate a "master" m3u8 file, which the player should point to:
    transcode.get('/master', async(req, res) => {
       res.set({"Content-Disposition":"attachment; filename=\"m3u8.m3u8\""});
       res.send(`#EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=150000
    /m3u8?num=1
    #EXT-X-STREAM-INF:BANDWIDTH=240000
    /m3u8?num=2`)
    });

    // Generate an m3u8 file to emulate a premade video manifest. Guesses segments based off duration.
    transcode.get('/m3u8', async(req, res) => {
       let met = await getMetadata(pathToMovie);
       let duration = met.format.duration;

       let out = '#EXTM3U\n' +
           '#EXT-X-VERSION:3\n' +
           `#EXT-X-TARGETDURATION:${segmentDur}\n` +
           '#EXT-X-MEDIA-SEQUENCE:0\n' +
           '#EXT-X-PLAYLIST-TYPE:VOD\n';

       // Number of segments needed to cover the full duration:
       let splits = Math.ceil(duration / segmentDur);
       for(let i = 0; i < splits; i++){
           out += `#EXTINF:${segmentDur},\n/segment/${i}.ts\n`;
       }
       out+='#EXT-X-ENDLIST\n';

       res.set({"Content-Disposition":"attachment; filename=\"m3u8.m3u8\""});
       res.send(out);
    });

    // Transcode the input video file into segments, using the given segment number as time offset:
    transcode.get('/segment/:seg.ts', async(req, res) => {
       const segment = req.params.seg;
       const time = segment * segmentDur;

       let proc = new ffmpeg({source: pathToMovie})
           .seekInput(time)
           .duration(segmentDur)
           .outputOptions('-preset faster')
           .outputOptions('-g 50')
           .outputOptions('-profile:v main')
           .withAudioCodec('aac')
           .outputOptions('-ar 48000')
           .withAudioBitrate('155k')
           .withVideoBitrate('1000k')
           .outputOptions('-c:v h264')
           .outputOptions(`-output_ts_offset ${time}`)
           .format('mpegts')
           .on('error', function(err, st, ste) {
               console.log('an error happened:', err, st, ste);
           }).on('progress', function(progress) {
               console.log(progress);
           })
           .pipe(res, {end: true});
    });

    transcode.listen(PORT, HOST);
    console.log(`Running on http://${HOST}:${PORT}`);

  • The First Problem

    19 January 2011, by Multimedia Mike — HTML5

    A few years ago, The Linux Hater made the following poignant observation regarding Linux driver support:

    Drivers are only just the beginning... But for some reason y'all like to focus on the drivers. You know why lusers do that? Because it just happens to be the problem that people notice first.

    And so it is with the HTML5 video codec debate, re-invigorated in the past week by Google’s announcement of dropping native H.264 support in their own HTML5 video tag implementation. As I read up on the fiery debate, I kept wondering why people are so obsessed with this issue. Then I remembered the Linux Hater’s post and realized that the video codec issue is simply the first problem that most people notice regarding HTML5 video.

    I appreciate that the video codec debate has prompted Niedermayer to post on his blog once more. Otherwise, I’m just munching popcorn on the sidelines, amused and mildly relieved that the various factions are vociferously attacking each other rather than that little project I help with at work.

    Getting back to the "first problem" aspect— there's so much emphasis on the video codec; I wonder why no one ever, ever mentions word one about an audio codec. AAC is typically the codec that pairs with H.264 in the MPEG stack. Dark Shikari once mentioned that "AAC's licensing terms are exponentially more onerous than H.264's. If Google didn't want to use H.264, they would sure as hell not want to use AAC." Most people are probably using "H.264" to refer to the entire MPEG/H.264/AAC stack, even if they probably don't understand what all of those pieces mean.

    Anyway, The Linux Hater’s driver piece continues :

    Once y’all have drivers, the fight will move to the next layer up. And like I said, it’s a lot harder at that layer.

    A few months ago, when I wanted to post the WebM output of my new VP8 encoder and thought it would be a nice touch to deliver it via a video tag, I ignored the video codec problem (just encoded a VP8/WebM file) only to immediately discover a problem at a different layer— specifically, embedding a file using a video tag triggers a full file download when the page is loaded, which is unacceptable from end user and web hosting perspectives. This is a known issue but doesn’t get as much attention, I guess because there are bigger problems to solve first (c.f. video codec issue).

    For other issues, check out the YouTube blog’s HTML5 post or Hulu’s post that also commented on HTML5. Issues such as video streaming flexibility, content protection, fullscreen video, webcam/microphone input, and numerous others are rarely mentioned in the debates. Only "video codec" is of paramount importance.

    But I’m lending too much weight to the cacophony of a largely uninformed internet debate. Realistically, I know there are many talented engineers down in the trenches working to solve at least some of these problems. To tie this in with the Linux driver example, I’m consistently stunned these days regarding how simple it is to get Linux working on a new computer— most commodity consumer hardware really does just work right out of the box. Maybe one day, we’ll wake up and find that HTML5 video has advanced to the point that it solves all of the relevant problems to make it the simple and obvious choice for delivering web video in nearly all situations.

    It won’t be this year.

  • How to solve an issue converting an ismv file with h264 video codec and aac audio codec?

    4 October 2013, by Priyal

    I want to convert (transmux) an ismv file to mp4 format. The ISMV file is encoded with the following ffmpeg details:

      Metadata:
      major_brand     : isml
      minor_version   : 1
      compatible_brands: piffiso2
      creation_time   : 2013-10-03 14:10:41
      Duration: 15:21:13.16, start: 0.000000, bitrate: 41 kb/s
      Stream #0:0(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 64 kb/s (default)
      Metadata:
      creation_time   : 2013-10-03 14:10:41
      handler_name    : Audio
      Stream #0:1(und): Video: h264 (avc1 / 0x31637661), 1280x720, 3217 kb/s, 29.97 tbr, 10000k tbn, 20000k tbc (default)
    Metadata:
      creation_time   : 2013-10-03 14:10:41
      handler_name    : Video
      Stream #0:2(und): Data: none (dfxp / 0x70786664), 32 kb/s (default)
      Metadata:
      creation_time   : 2013-10-03 14:10:41
      handler_name    : Text

    I used the following ffmpeg command to transmux:

    ffmpeg -y -ss 00:00:10 -i Encoder1.ismv -vcodec libx264 -ar 44100 -t 40 -preset slow -qp 0 Encoder.mp4

    I got the following output:

    ffmpeg version N-56010-g54d628a Copyright (c) 2000-2013 the FFmpeg developers
    built on Sep  4 2013 00:44:27 with gcc 4.7.3 (GCC)
     configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
        libavutil      52. 43.100 / 52. 43.100
        libavcodec     55. 31.100 / 55. 31.100
        libavformat    55. 16.100 / 55. 16.100
        libavdevice    55.  3.100 / 55.  3.100
        libavfilter     3. 83.102 /  3. 83.102
        libswscale      2.  5.100 /  2.  5.100
        libswresample   0. 17.103 /  0. 17.103
        libpostproc    52.  3.100 / 52.  3.100
    [aac @ 040f1ba0] TYPE_FIL: Input buffer exhausted before END element found
    [h264 @ 00353120] AVC: nal size -729776398
    [h264 @ 00353120] no frame!
    [h264 @ 00353120] AVC: nal size -570515285
    [h264 @ 00353120] no frame!
    [h264 @ 00353120] AVC: nal size -1477874754
    [h264 @ 00353120] no frame!
    [h264 @ 00353120] AVC: nal size -712314563
    [h264 @ 00353120] no frame!
    [h264 @ 00353120] AVC: nal size -23151524
    [h264 @ 00353120] no frame!
    [h264 @ 00353120] AVC: nal size -592499201
    [h264 @ 00353120] no frame!
    [h264 @ 00353120] AVC: nal size 225768173
    [h264 @ 00353120] no frame!
    [h264 @ 00353120] AVC: nal size 698187359
    [h264 @ 00353120] AVC: nal size 635127544
    [h264 @ 00353120] no frame!
    [h264 @ 00353120] AVC: nal size -1242688339
    [h264 @ 00353120] AVC: nal size 269543071
    [h264 @ 00353120] no frame!
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0035e840] decoding for stream 1 failed
     [mov,mp4,m4a,3gp,3g2,mj2 @ 0035e840] Could not find codec parameters for stream 1 (Video: h264 (avc1 / 0x31637661), 1280x720, 3217 kb/s): unspecified pixel format
     Consider increasing the value for the 'analyzeduration' and 'probesize' options
     Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\inetpub\media\archives\DEFAULT WEB SITE\PushToPUblishPoint\Eagan_12034_Soccer_3-10-2013_14_10_19-isml\2013-10-03-14-10-35-436\Segment001\Encoder1.ismv':
      Metadata:
      major_brand     : isml
      minor_version   : 1
      compatible_brands: piffiso2
      creation_time   : 2013-10-03 14:10:41
      Duration: 15:21:13.16, start: 0.000000, bitrate: 41 kb/s
      Stream #0:0(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 64 kb/s (default)
       Metadata:
      creation_time   : 2013-10-03 14:10:41
      handler_name    : Audio
      Stream #0:1(und): Video: h264 (avc1 / 0x31637661), 1280x720, 3217 kb/s, 29.97 tbr, 10000k tbn, 20000k tbc (default)
      Metadata:
      creation_time   : 2013-10-03 14:10:41
      handler_name    : Video
      Stream #0:2(und): Data: none (dfxp / 0x70786664), 32 kb/s (default)
      Metadata:
      creation_time   : 2013-10-03 14:10:41
      handler_name    : Text

    [buffer @ 04679480] Unable to parse option value "-1" as pixel format
                  Last message repeated 1 times
    [buffer @ 04679480] Error setting option pix_fmt to value -1.
    [graph 0 input from stream 0:1 @ 040f7380] Error applying options to the filter.

    Error opening filters!

    Please suggest a solution for Windows OS.