
Media (1)

Keyword: net art

Other articles (104)

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance of the shared hosting on a regular basis. Coupled with a system Cron on the central site of the farm, this makes it easy to generate regular visits to the various sites and to prevent the tasks of rarely visited sites from being too (...)
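
    For illustration only, the "system Cron on the central site" mentioned above might be a crontab entry along these lines (the URL is a hypothetical placeholder, not taken from the article):

      # Hypothetical system crontab entry on the central site, run every minute
      * * * * * wget -q -O /dev/null http://central-site.example.org/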

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Customising by adding a logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

On other sites (12591)

  • How to seek mp4 aac audio using Media Source Extensions

    29 August 2018, by Chris

    Please can someone offer me a few pointers for seeking within streamed AAC audio in mp4 containers? I'm trying to develop a music download service that sips data via ranged requests rather than simply linking to an mp4 file as an <audio> src (which would instead buffer the whole file as quickly as possible, and so be rather wasteful and expensive).

    So far I've managed to successfully append sequential audio range buffers to the SourceBuffer object using partial/ranged requests, attached to my suitably mime-typed MediaSource object. But as soon as I try to seek, the wheels come off and I receive a 'CHUNK_DEMUXER_ERROR_APPEND_FAILED' error, with the specific issue: 'stream parsing failed'.
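
    For reference, the setup described above (a mime-typed MediaSource with a single SourceBuffer fed by ranged requests) looks roughly like the following sketch; the file name, byte range and codecs string are illustrative assumptions, not taken from the question:

      // Rough sketch of the described setup (illustrative assumptions only).
      const audio = document.querySelector('audio');
      const mediaSource = new MediaSource();
      audio.src = URL.createObjectURL(mediaSource);

      mediaSource.addEventListener('sourceopen', async () => {
        const sourceBuffer = mediaSource.addSourceBuffer('audio/mp4; codecs="mp4a.40.2"');
        // Fetch the first chunk (init segment + first fragments) with a ranged request.
        const response = await fetch('audio.mp4', { headers: { Range: 'bytes=0-262143' } });
        sourceBuffer.appendBuffer(await response.arrayBuffer());
      });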

    I've prepared my mp4 files by encoding them with ffmpeg (via the fluent-ffmpeg module), relocating the movie header (moov) box to the start of the file (via the -movflags faststart setting) so that the duration can be parsed. I then fragment the file with mp4fragment (part of the Bento4 tools) using the default settings, and check that the structure of the file matches the ISO BMFF format, with pairs of movie-fragment and data boxes (moof/mdat) describing the audio stream. Given that the source buffer has no problem playing from the beginning with contiguous subsequent ranges, the format of the mp4 file appears to be acceptable.
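
    For illustration, the encoding step described above might look roughly like this with fluent-ffmpeg (file names and bitrate are assumptions; the mp4fragment step is run separately, as described):

      // Rough sketch of the preparation step with fluent-ffmpeg (paths are hypothetical).
      const ffmpeg = require('fluent-ffmpeg');

      ffmpeg('input.wav')
        .noVideo()
        .audioCodec('aac')
        .audioBitrate('128k')
        .outputOptions('-movflags faststart')   // relocate the moov box to the front of the file
        .on('end', () => console.log('encoded; now run mp4fragment on audio.mp4'))
        .on('error', (err) => console.error(err))
        .save('audio.mp4');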

    As an aside, I've tried fragmenting the file entirely in ffmpeg/fluent-ffmpeg (using the '-movflags empty_moov+default_base_moof' options), but while this works, it also removes the duration from the moov as you'd expect, so the media grows during playback as more fragments are fetched and appended. If I set the file duration manually, I still can't seek to unbuffered audio, so I only seem to be making life harder by trying to fragment the file solely in ffmpeg.

    So how should I go about seeking within the stream? I gather that seeking effectively 'needle-drops' randomly, so the source buffer might struggle to parse the data out of context, but I imagined it would skip to the next available fragment within the range I fetch. (That range is calculated from the seek-bar position as a percentage, which sets player.currentTime; the time is then converted to a byte range using the 128 kbps CBR figure to map seconds to bytes, and sent as a 206 partial range request.)
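
    For concreteness, the seconds-to-bytes conversion described above might look something like this for a 128 kbps CBR stream; the function and variable names, and the chunk size, are made up for the example:

      // Illustrative seconds-to-bytes conversion for a 128 kbps CBR stream. Note that a
      // byte offset computed this way will usually not land exactly on a moof boundary.
      const BYTES_PER_SECOND = (128 * 1000) / 8; // 16000 bytes per second of audio

      function rangeForSeek(seekFraction, durationSeconds, chunkBytes = 256 * 1024) {
        const targetTime = seekFraction * durationSeconds;          // becomes player.currentTime
        const startByte = Math.floor(targetTime * BYTES_PER_SECOND);
        return 'bytes=' + startByte + '-' + (startByte + chunkBytes - 1); // Range header value
      }

      // e.g. seeking to 50% of a 350 s file:
      // rangeForSeek(0.5, 350) -> "bytes=2800000-3062143"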

    I've seen mention of buffer offsets, but I don't understand how these apply. Most of the dev examples I've seen focus on whole files or segmented videos, rather than seeking within a single fragmented audio file. Do I need to somehow retain a portion of the data from the moov box when seeking, so that the source buffer can parse the fragments? In the trun box I have a data offset that alternates between two values throughout the file, 444 and 448, depending on whether the sample count is 86 or 87; I'm not sure why it isn't consistent.
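
    For what it's worth, one pattern that comes up in MSE examples around this question is to keep the initialisation segment (ftyp + moov) and re-append it before appending a fragment fetched from an arbitrary position; sketched below under that assumption, as an illustration rather than a verified fix:

      // Sketch of the 'retain the moov' idea raised above. This is an assumption about
      // one possible approach, not a confirmed solution.
      let initSegment; // ArrayBuffer holding ftyp + moov, fetched once up front

      function prepareForSeek(sourceBuffer) {
        if (sourceBuffer.updating) {
          sourceBuffer.abort();               // drop any partially parsed append
        }
        sourceBuffer.timestampOffset = 0;     // fragments carry their own tfdt decode times
        sourceBuffer.appendBuffer(initSegment); // re-prime the demuxer with the moov
        // ...then, on 'updateend', fetch and append a ranged chunk that starts on a moof boundary.
      }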

    Here's what the moov looks like from my audio file:

    [ftyp] size=8+24
     major_brand = isom
     minor_version = 200
     compatible_brand = isom
     compatible_brand = iso2
     compatible_brand = mp41
     compatible_brand = iso5
    [moov] size=8+620
     [mvhd] size=12+96
       timescale = 1000
       duration = 350047
       duration(ms) = 350047
     [trak] size=8+448
       [tkhd] size=12+80, flags=7
         enabled = 1
         id = 1
         duration = 350047
         width = 0.000000
         height = 0.000000
       [edts] size=8+28
         [elst] size=12+16
           entry count = 1
           entry/segment duration = 350000
           entry/media time = 2048
           entry/media rate = 1
       [mdia] size=8+312
         [mdhd] size=12+20
           timescale = 44100
           duration = 0
           duration(ms) = 0
           language = und
         [hdlr] size=12+41
           handler_type = soun
           handler_name = Bento4 Sound Handler
         [minf] size=8+219
           [smhd] size=12+4
             balance = 0
           [dinf] size=8+28
             [dref] size=12+16
               [url ] size=12+0, flags=1
                 location = [local to file]
           [stbl] size=8+159
             [stsd] size=12+79
               entry-count = 1
               [mp4a] size=8+67
                 data_reference_index = 1
                 channel_count = 2
                 sample_size = 16
                 sample_rate = 44100
                 [esds] size=12+27
                   [ESDescriptor] size=2+25
                     es_id = 0
                     stream_priority = 0
                     [DecoderConfig] size=2+17
                       stream_type = 5
                       object_type = 64
                       up_stream = 0
                       buffer_size = 0
                       max_bitrate = 128006
                       avg_bitrate = 128006
                       DecoderSpecificInfo = 12 10
                     [Descriptor:06] size=2+1
             [stts] size=12+4
               entry_count = 0
             [stsc] size=12+4
               entry_count = 0
             [stsz] size=12+8
               sample_size = 0
               sample_count = 0
             [stco] size=12+4
               entry_count = 0
     [mvex] size=8+48
       [mehd] size=12+4
         duration = 350047
       [trex] size=12+20
         track id = 1
         default sample description index = 1
         default sample duration = 0
         default sample size = 0
         default sample flags = 0

    And here's a typical fragment:

    [moof] size=8+428
     [mfhd] size=12+4
       sequence number = 1
     [traf] size=8+404
       [tfhd] size=12+8, flags=20008
         track ID = 1
         default sample duration = 1024
       [tfdt] size=12+8, version=1
         base media decode time = 0
       [trun] size=12+352, flags=201
         sample count = 86
         data offset = 444
    [mdat] size=8+32653

    Does that all look good? Any pointers for seeking within such a file would be hugely appreciated. Thanks!

  • Live streaming : node-media-server + Dash.js configured for real-time low latency

    7 July 2021, by Maoration

    We're working on an app that enables live monitoring of your backyard. Each client has a camera connected to the internet, streaming to our public node.js server.


    I'm trying to use node-media-server to publish an MPEG-DASH (or HLS) stream to be available for our app clients, on different networks, bandwidths and resolutions around the world.


    Our goal is to get as close as possible to live "real-time" so you can monitor what happens in your backyard instantly.


    The technical flow already in place is:


    1. An ffmpeg process on our server handles the incoming camera stream (a separate child process per camera) and publishes the stream via RTMP on the local machine for node-media-server to use as an 'input' (we are also saving segmented files, generating thumbnails, etc.). The relevant part of the ffmpeg command is:

       -c:v libx264 -preset ultrafast -tune zerolatency -b:v 900k -f flv rtmp://127.0.0.1:1935/live/office

    2. node-media-server is running with what I found to be the default configuration for live streaming:

       private NMS_CONFIG = {
         server: {
           secret: 'thisisnotmyrealsecret',
         },
         rtmp_server: {
           rtmp: {
             port: 1935,
             chunk_size: 60000,
             gop_cache: false,
             ping: 60,
             ping_timeout: 30,
           },
           http: {
             port: 8888,
             mediaroot: './server/media',
             allow_origin: '*',
           },
           trans: {
             ffmpeg: '/usr/bin/ffmpeg',
             tasks: [
               {
                 app: 'live',
                 hls: true,
                 hlsFlags: '[hls_time=2:hls_list_size=3:hls_flags=delete_segments]',
                 dash: true,
                 dashFlags: '[f=dash:window_size=3:extra_window_size=5]',
               },
             ],
           },
         },
       };

    3. As I understand it, out of the box NMS (node-media-server) publishes the input stream it gets in multiple output formats: flv, mpeg-dash and hls. With the various online players for these formats I'm able to access and play the stream using the localhost URL; with mpeg-dash and hls I'm getting anywhere between 10 and 15 seconds of delay, and more. (A minimal sketch of how the configuration above is typically handed to node-media-server follows below.)
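
    For context, a minimal launch sketch; it assumes the standard NodeMediaServer constructor and that NMS_CONFIG.rtmp_server (shown above) is the object actually passed in, which is an assumption since only the configuration object appears in the question:

      const NodeMediaServer = require('node-media-server');

      const nms = new NodeMediaServer(NMS_CONFIG.rtmp_server);
      nms.run(); // starts RTMP ingest on :1935 and the HTTP (DASH/HLS) output on :8888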


    My goal now is to implement a local client-side MPEG-DASH player using dash.js, and to configure it to be as close to live as possible.


    My code for that is:


    <div>
        <video id="videoPlayer" autoplay controls></video>
    </div>

    <script src="https://cdnjs.cloudflare.com/ajax/libs/dashjs/3.0.2/dash.all.min.js"></script>

    <script>
        (function(){
            // var url = "https://dash.akamaized.net/envivio/EnvivioDash3/manifest.mpd";
            var url = "http://localhost:8888/live/office/index.mpd";
            var player = dashjs.MediaPlayer().create();

            // config
            targetLatency = 2.0;        // Lowering this value will lower latency but may decrease the player's ability to build a stable buffer.
            minDrift = 0.05;            // Minimum latency deviation allowed before activating catch-up mechanism.
            catchupPlaybackRate = 0.5;  // Maximum catch-up rate, as a percentage, for low latency live streams.
            stableBuffer = 2;           // The time that the internal buffer target will be set to post startup/seeks (NOT top quality).
            bufferAtTopQuality = 2;     // The time that the internal buffer target will be set to once playing the top quality.

            player.updateSettings({
                'streaming': {
                    'liveDelay': 2,
                    'liveCatchUpMinDrift': 0.05,
                    'liveCatchUpPlaybackRate': 0.5,
                    'stableBufferTime': 2,
                    'bufferTimeAtTopQuality': 2,
                    'bufferTimeAtTopQualityLongForm': 2,
                    'bufferToKeep': 2,
                    'bufferAheadToKeep': 2,
                    'lowLatencyEnabled': true,
                    'fastSwitchEnabled': true,
                    'abr': {
                        'limitBitrateByPortal': true
                    },
                }
            });

            console.log(player.getSettings());

            setInterval(() => {
                console.log('Live latency= ', player.getCurrentLiveLatency());
                console.log('Buffer length= ', player.getBufferLength('video'));
            }, 3000);

            player.initialize(document.querySelector("#videoPlayer"), url, true);
        })();
    </script>

    With the online test video (https://dash.akamaized.net/envivio/EnvivioDash3/manifest.mpd) I see that the live latency value is close to 2 seconds (but I have no way to actually confirm it, since it's a streamed video file; in my office I have a camera, so there I can actually compare latency between real life and the stream I get). However, when working locally with my NMS, this value does not seem to want to go below 20-25 seconds.


    Am I doing something wrong? Is there any configuration on the player (client-side HTML) that I'm forgetting, or is there a missing configuration I should add on the server side (NMS)?


  • avcodec: add a native SMPTE VC-2 HQ encoder

    10 February 2016, by Rostislav Pehlivanov
    avcodec: add a native SMPTE VC-2 HQ encoder
    

    This commit adds a new encoder capable of creating BBC/SMPTE Dirac/VC-2 HQ
    profile files.

    Dirac is a wavelet-based codec created by the BBC a little more than 10
    years ago. Since then, wavelets have mostly gone out of style, as they
    did not provide adequate encoding gains at lower bitrates. Dirac was a
    fully featured video codec with perceptual masking, support for most
    popular pixel formats, interlacing, overlapped-block motion
    compensation, and other features. It found new life after being stripped
    of various features and standardized by SMPTE as the VC-2 codec, with an
    extra profile added: the HQ profile that this encoder supports.

    The HQ profile is based on the Low-Delay profile that previously existed
    in Dirac. The profile forbids DC prediction and arithmetic coding in
    order to focus on high performance and low delay at higher bitrates.
    The standard bitrates for this profile vary, but roughly 1:4 compression
    is expected (around 525 Mbps vs the 2200 Mbps of uncompressed 1080p50).
    The codec only supports I-frames, hence the high bitrates.

    The structure of this encoder is simple: perform a DWT on the entire
    image, split it into multiple slices (of a size specified by the user)
    and encode them in parallel. All of the slices are the same size, which
    makes rate control and threading trivial. Although written only in C,
    this encoder is capable of 30 frames per second on a 4-core, 8-thread
    Ivy Bridge. A lookup table is used to encode most of the coefficients.

    No code was used from the GSoC encoder from 2007 except for the 2
    transform functions in diracenc_transforms.c. All other code was written
    from scratch.

    This encoder outperforms the other existing encoders in quality,
    usability and features. Other implementations do not support 4-level
    transforms or 64x64 blocks (slices), which greatly increase compression.

    As previously said, the codec is meant for broadcasting, so support for
    non-broadcast image widths, heights, bit depths, aspect ratios, etc. is
    limited by the "level". Although this codec supports a few chroma
    subsamplings (420, 422, 444), signalling those is generally outside the
    specification of the level used (3), and the reference decoder will
    outright refuse to read any image with such a flag signalled (it only
    supports 1920x1080 yuv422p10). However, most implementations will
    happily read files with alternate dimensions, framerates and formats
    signalled.

    Therefore, in order to encode files other than 1080p50 yuv422p10le, you
    need to provide an "-strict -2" argument to the command line. The FFmpeg
    decoder will happily read any files made with non-standard parameters,
    dimensions and subsamplings, and so will other implementations. IMO this
    should be "-strict -1", but I’ll leave that up for discussion.

    There is still plenty to implement; for instance, 5 more wavelet
    transforms are in the specs and supported by the decoder.

    The encoder can be lossless, given a high enough bitrate.

    Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>

    • [DH] libavcodec/Makefile
    • [DH] libavcodec/allcodecs.c
    • [DH] libavcodec/vc2enc.c
    • [DH] libavcodec/vc2enc_dwt.c
    • [DH] libavcodec/vc2enc_dwt.h
    • [DH] libavcodec/version.h