Advanced search

Media (1)

Word: - Tags -/iphone

Other articles (98)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    After it is activated, a preconfiguration is automatically put in place by MediaSPIP init, allowing the new feature to work out of the box. It is therefore not necessary to go through a configuration step for this.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (8002)

  • Live streaming: node-media-server + Dash.js configured for real-time low latency

    7 July 2021, by Maoration

    We're working on an app that enables live monitoring of your backyard.
    Each client has a camera connected to the internet, streaming to our public node.js server.

    I'm trying to use node-media-server to publish an MPEG-DASH (or HLS) stream, to be available to our app clients on different networks, bandwidths and resolutions around the world.

    Our goal is to get as close as possible to live "real-time" so you can monitor what happens in your backyard instantly.

    The technical flow accomplished so far is:

    1. An ffmpeg process on our server processes the incoming camera stream (a separate child process for each camera) and publishes the stream via RTMP on the local machine for node-media-server to use as an input (we are also saving segmented files, generating thumbnails, etc.). The ffmpeg command responsible for that is (a keyframe-aligned variant is sketched just after this list):

      -c:v libx264 -preset ultrafast -tune zerolatency -b:v 900k -f flv rtmp://127.0.0.1:1935/live/office

    2. node-media-server is running with what I found to be the default configuration for 'live streaming':

      private NMS_CONFIG = {
        server: {
          secret: 'thisisnotmyrealsecret',
        },
        rtmp_server: {
          rtmp: {
            port: 1935,
            chunk_size: 60000,
            gop_cache: false,
            ping: 60,
            ping_timeout: 30,
          },
          http: {
            port: 8888,
            mediaroot: './server/media',
            allow_origin: '*',
          },
          trans: {
            ffmpeg: '/usr/bin/ffmpeg',
            tasks: [
              {
                app: 'live',
                hls: true,
                hlsFlags: '[hls_time=2:hls_list_size=3:hls_flags=delete_segments]',
                dash: true,
                dashFlags: '[f=dash:window_size=3:extra_window_size=5]',
              },
            ],
          },
        },
      };

    3. As I understand it, out of the box NMS (node-media-server) publishes the input stream it gets in multiple output formats: FLV, MPEG-DASH and HLS. With all sorts of online players for these formats, I'm able to access the stream using the localhost URL. With MPEG-DASH and HLS I'm getting anything between 10-15 seconds of delay, and more.

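    One possible direction for the encode side (a sketch, not from the original setup; <camera-input> and the 25 fps assumption are placeholders): for the 2-second segments requested in the hlsFlags/dashFlags above to actually be cut that short, the keyframe interval of the published stream must not exceed the segment duration, so the publishing command could pin the GOP explicitly:

      # assuming a 25 fps feed: force a keyframe every 50 frames (2 s) and
      # disable scene-cut keyframes so segment boundaries stay regular
      ffmpeg -i <camera-input> \
        -c:v libx264 -preset ultrafast -tune zerolatency \
        -g 50 -keyint_min 50 -sc_threshold 0 \
        -b:v 900k -f flv rtmp://127.0.0.1:1935/live/office

    On the NMS side, the dashFlags string is handed to ffmpeg's dash muxer, so a build that supports it (ffmpeg >= 4.1) should also accept seg_duration, e.g. [f=dash:seg_duration=2:window_size=3:extra_window_size=5] — an assumption to verify against the ffmpeg version in use. Even then, with 2 s segments and a 3-segment window, a latency floor of several seconds is expected unless chunked low-latency output is used, so treat this as a direction to test rather than a guaranteed fix.
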
    My goal now is to implement a client-side MPEG-DASH player using dash.js, configured to be as close to live as possible.

    My code for that is:

    <html>
    <body>
        <div>
            <video id="videoPlayer" autoplay controls></video>
        </div>
        <script src="https://cdnjs.cloudflare.com/ajax/libs/dashjs/3.0.2/dash.all.min.js"></script>

        <script>
            (function(){
                // var url = "https://dash.akamaized.net/envivio/EnvivioDash3/manifest.mpd";
                var url = "http://localhost:8888/live/office/index.mpd";
                var player = dashjs.MediaPlayer().create();

                // config
                targetLatency = 2.0;        // Lowering this value will lower latency but may decrease the player's ability to build a stable buffer.
                minDrift = 0.05;            // Minimum latency deviation allowed before activating catch-up mechanism.
                catchupPlaybackRate = 0.5;  // Maximum catch-up rate, as a percentage, for low latency live streams.
                stableBuffer = 2;           // The time that the internal buffer target will be set to post startup/seeks (NOT top quality).
                bufferAtTopQuality = 2;     // The time that the internal buffer target will be set to once playing the top quality.

                player.updateSettings({
                    'streaming': {
                        'liveDelay': 2,
                        'liveCatchUpMinDrift': 0.05,
                        'liveCatchUpPlaybackRate': 0.5,
                        'stableBufferTime': 2,
                        'bufferTimeAtTopQuality': 2,
                        'bufferTimeAtTopQualityLongForm': 2,
                        'bufferToKeep': 2,
                        'bufferAheadToKeep': 2,
                        'lowLatencyEnabled': true,
                        'fastSwitchEnabled': true,
                        'abr': {
                            'limitBitrateByPortal': true
                        },
                    }
                });

                console.log(player.getSettings());

                setInterval(() => {
                  console.log('Live latency= ', player.getCurrentLiveLatency());
                  console.log('Buffer length= ', player.getBufferLength('video'));
                }, 3000);

                player.initialize(document.querySelector("#videoPlayer"), url, true);
            })();
        </script>
    </body>
    </html>

    With the online test video (https://dash.akamaized.net/envivio/EnvivioDash3/manifest.mpd) I see that the live latency value is close to 2 seconds (but I have no way to actually confirm it; it's a video file being streamed. In my office I have a camera, so I can actually compare latency between real life and the stream I get). However, when working locally with my NMS, this value does not want to go below 20-25 seconds.

    Am I doing something wrong? Is there any configuration on the player (client-side HTML) I'm forgetting? Or is there a missing configuration I should add on the server side (NMS)?

  • Missing packets when transcoding using ffmpeg

    23 February 2021, by Adam Szmyd

    I have a very weird issue with transcoding Opus to any other format using ffmpeg. I'll demonstrate it with transcoding to FLAC, as this is what I'm currently using.

    So I have this one example Opus file that, after processing via ffmpeg, gets shortened, as if ffmpeg dropped some packets/data out of it.

    I wouldn't mind if ffmpeg were cleaning up some redundant stuff, but in practice this makes my multi-stream output file go out of sync, so at some point the audio tracks overlap each other.

    So I have this input.opus file whose length is 00:00:05.78, and when I pass it through ffmpeg like this:

    $ ffmpeg -i input.opus -c flac output.flac
    ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers
      built with gcc 10 (GCC)
      configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --docdir=/usr/share/doc/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --optflags='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection' --extra-ldflags='-Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld ' --extra-cflags=' ' --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libvo-amrwbenc --enable-version3 --enable-bzlib --disable-crystalhd --enable-fontconfig --enable-frei0r --enable-gcrypt --enable-gnutls --enable-ladspa --enable-libaom --enable-libdav1d --enable-libass --enable-libbluray --enable-libcdio --enable-libdrm --enable-libjack --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-liblensfun --enable-libmp3lame --enable-libmysofa --enable-nvenc --enable-openal --enable-opencl --enable-opengl --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librav1e --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libvorbis --enable-libv4l2 --enable-libvidstab --enable-libvmaf --enable-version3 --enable-vapoursynth --enable-libvpx --enable-vulkan --enable-libglslang --enable-libx264 --enable-libx265 --enable-libxvid --enable-libzimg --enable-libzvbi --enable-avfilter --enable-avresample --enable-libmodplug --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-lto --enable-libmfx --enable-runtime-cpudetect
      libavutil      56. 51.100 / 56. 51.100
      libavcodec     58. 91.100 / 58. 91.100
      libavformat    58. 45.100 / 58. 45.100
      libavdevice    58. 10.100 / 58. 10.100
      libavfilter     7. 85.100 /  7. 85.100
      libavresample   4.  0.  0 /  4.  0.  0
      libswscale      5.  7.100 /  5.  7.100
      libswresample   3.  7.100 /  3.  7.100
      libpostproc    55.  7.100 / 55.  7.100
    Input #0, ogg, from 'input.opus':
      Duration: 00:00:05.78, start: -0.017500, bitrate: 48 kb/s
        Stream #0:0: Audio: opus, 48000 Hz, mono, fltp
        Metadata:
          DURATION        : 00:00:05.836000000
          encoder         : Lavc58.91.100 opus
    Stream mapping:
      Stream #0:0 -> #0:0 (opus (native) -> flac (native))
    Press [q] to stop, [?] for help
    [flac @ 0x55f94c9c2f40] encoding as 24 bits-per-sample
    Output #0, flac, to 'output.flac':
      Metadata:
        encoder         : Lavf58.45.100
        Stream #0:0: Audio: flac, 48000 Hz, mono, s32 (24 bit), 128 kb/s
        Metadata:
          DURATION        : 00:00:05.836000000
          encoder         : Lavc58.91.100 flac
    size=     393kB time=00:00:05.79 bitrate= 555.7kbits/s speed= 373x
    video:0kB audio:385kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 2.102753%

    Interestingly, ffmpeg above shows that the output should have a duration of 00:00:05.79, but when checking it with ffprobe, it's 50 ms shorter:

    $ ffprobe -hide_banner output.flac
    Input #0, flac, from 'output.flac':
      Metadata:
        encoder         : Lavf58.45.100
      Duration: 00:00:05.73, start: 0.000000, bitrate: 561 kb/s
        Stream #0:0: Audio: flac, 48000 Hz, mono, s32 (24 bit)

    It may seem like a silly small difference, but I intentionally cut the file to be 5 s long to keep troubleshooting easier. The source file is 30 minutes, and I lose 1 minute of it during that process, so the problem is real.

    ffmpeg version 4.3.1

    How can I troubleshoot this? When I tried with -loglevel trace, I noticed these few logs that look "suspicious":

    [opus @ 0x555a5abe6c40] skip 0 / discard 192 samples due to side data
    [opus @ 0x555a5abe6c40] discard 192/960 samples

    But I haven't found a way to stop it from discarding these samples (and I'm not even sure that's what is causing this...).
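
    One way to narrow this down (a sketch, assuming 48 kHz mono as in the logs above) is to decode both files to raw PCM and compare byte counts; at 48 kHz mono s16, 50 ms corresponds to 4800 bytes:

      # decode each file to raw 48 kHz mono 16-bit PCM and count bytes
      ffmpeg -v error -i input.opus  -f s16le -ar 48000 -ac 1 - | wc -c
      ffmpeg -v error -i output.flac -f s16le -ar 48000 -ac 1 - | wc -c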

    I would appreciate any help or a pointer in the right troubleshooting direction.

  • lavu/x86: add FFT assembly

    10 April 2021, by Lynne
    lavu/x86: add FFT assembly

    This commit adds a pure x86 assembly SIMD version of the FFT in libavutil/tx.
    The design of this pure assembly FFT is pretty unconventional.

    On the lowest level, instead of splitting the complex numbers into
    real and imaginary parts, we keep complex numbers together but split
    them in terms of parity. This saves a number of shuffles in each transform,
    but more importantly, it splits each transform into two independent
    paths, which we process using separate registers in parallel.
    This allows us to keep all units saturated and lets us use all available
    registers to avoid dependencies.
    Moreover, it allows us to double the granularity of our per-load permutation,
    skipping many expensive lookups and allowing us to use just 4 loads per register,
    rather than 8, or, in the case of FMA3 (and by extension, AVX2), to use the vgatherdpd
    instruction, which is at least as fast as 4 separate loads on old hardware,
    and quite a bit faster on modern CPUs.

    Higher up, we go for a bottom-up construction of large transforms, forgoing
    the traditional per-transform call-return recursion chains. Instead, we always
    start at the bottom-most basis transform (in this case, a 32-point transform)
    and continue constructing larger and larger transforms until we return to the
    top-most transform.
    This way, we only touch the stack 3 times per complete target transform:
    once for the 1/2-length transform and twice for the 1/4-length transform.

    The combination algorithm we use is a standard split-radix algorithm,
    as used in our C code. Although a version with fewer operations exists
    (Steven G. Johnson and Matteo Frigo's "A modified split-radix FFT with fewer
    arithmetic operations", IEEE Trans. Signal Process. 55 (1), 111–119 (2007),
    which is the one FFTW uses), it has only 2% fewer operations and requires at least 4x
    the binary code (since it needs 4 different code paths to do a single transform).
    That version also has other issues which prevent it from being implemented
    with SIMD code as efficiently, which makes it lose the marginal gains it offered,
    and cannot be performed bottom-up, requiring many recursive call-return chains,
    whose overhead adds up.
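
    For reference, the textbook split-radix recurrence splits an N-point DFT into
    one N/2-point transform U (over the even samples x_{2n}) and two N/4-point
    transforms Z, Z' (over x_{4n+1} and x_{4n+3}):

        X_k = U_{k \bmod N/2} + \omega_N^{k}\, Z_{k \bmod N/4} + \omega_N^{3k}\, Z'_{k \bmod N/4},
        \qquad \omega_N = e^{-2\pi i / N}

    Applied recursively down to the 32-point basis transform, this is the recurrence
    the bottom-up construction above walks through.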

    We go through a lot of effort to minimize load/stores by keeping as much as possible
    in registers in between constructing transforms. This saves us around 32 cycles,
    on paper, but in reality a lot more due to load/store aliasing (a load from a
    memory location cannot be issued while there's a store pending, and there are
    only so many (2 for Zen 3) load/store units in a CPU).
    Also, we interleave coefficients during the last stage to save on a store+load
    per register.

    Each of the smallest, basis transforms (4, 8 and 16-point in our case)
    has been extremely optimized. Our 8-point transform is barely 20 instructions
    in total, beating our old implementation's 8-point transform by 1 instruction.
    Our 2x8-point transform is 23 instructions, beating our old implementation by
    6 instructions and needing 50% fewer cycles. Our 16-point transform's combination
    code takes slightly more instructions than our old implementation, but makes up
    for it by requiring far fewer arithmetic operations.

    Overall, the transform was optimized for the timings of Zen 3, which at the
    time of writing has the highest IPC of all documented CPUs. Shuffles were
    preferred over arithmetic operations due to their 1/0.5 latency/throughput.

    On average, this code is 30% faster than our old libavcodec implementation.
    It's able to trade blows with the previously-untouchable FFTW on small transforms,
    and due to its tiny size and better prediction, outdoes FFTW on larger transforms
    by 11% on the largest currently supported size.

    • [DH] libavutil/tx.c
    • [DH] libavutil/tx_priv.h
    • [DH] libavutil/x86/Makefile
    • [DH] libavutil/x86/tx_float.asm
    • [DH] libavutil/x86/tx_float_init.c