
Other articles (60)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with MediaSPIP's automated installation script.

    Distribution name   Version name           Version number
    Debian              Squeeze                6.x.x
    Debian              Wheezy                 7.x.x
    Debian              Jessie                 8.x.x
    Ubuntu              The Precise Pangolin   12.04 LTS
    Ubuntu              The Trusty Tahr        14.04

    If you want to help us improve this list, you can give us access to a machine running a distribution not listed above, or send us the fixes needed to add (...)

  • MediaSPIP Core: Configuration

    9 November 2010

    By default, MediaSPIP Core provides three configuration pages (these pages rely on the CFG configuration plugin): a page for the general configuration of the templates; a page for the configuration of the site's home page; a page for the configuration of the sections.
    It also provides an additional page, which only appears when certain plugins are enabled, for controlling their display and their specific features (...)

  • Managing creation and editing rights for objects

    8 February 2011

    By default, many features are restricted to administrators, but each remains independently configurable so that the minimum status required to use it can be changed, notably: writing content on the site, adjustable in the form template management; adding notes to articles; adding captions and annotations to images;

On other sites (9777)

  • lavc: Implement Dolby Vision RPU parsing

    3 January 2022, by Niklas Haas

    Based on a mixture of guesswork, partial documentation in patents, and
    reverse engineering of real-world samples. Confirmed working for all the
    samples I've thrown at it.

    Contains some annoying machinery to persist these values in between
    frames, which is needed in theory even though I've never actually seen a
    sample that relies on it in practice. May or may not work.

    Since the distinction matters greatly for parsing the color matrix
    values, this includes a small helper function to guess the right profile
    from the RPU itself in case the user has forgotten to forward the dovi
    configuration record to the decoder. (Which, in practice, only ffmpeg.c
    and ffplay do.)

    Notable omissions / deviations:
    - CRC32 verification. This is based on the MPEG2 CRC32 type, which is
    similar to IEEE CRC32 but apparently different in subtle enough ways
    that I could not get it to pass verification no matter what parameters
    I fed to av_crc (see the sketch after this list). It's possible the
    code needs some changes.
    - Linear interpolation support. Nothing documents this (beyond its
    existence) and no samples use it, so impossible to implement.
    - All of the extension metadata blocks, but these contain values that
    seem largely congruent with ST2094, HDR10, or other existing forms of
    side data, so I will defer parsing/attaching them to a future commit.
    - The patent describes a mechanism for predicting coefficients from
    previous RPUs, but the bit for the flag whether to use the
    prediction deltas or signal entirely new coefficients does not seem to
    be present in actual RPUs, so we ignore this subsystem entirely.
    - In the patent's spec, the NLQ subsystem also loops over
    num_nlq_pivots, but even in the patent the number is hard-coded to one
    iteration rather than signalled. So we only store one set of coefs.
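
    For reference, a minimal sketch of how the MPEG-2 CRC32 variant is
    normally computed with lavu's av_crc (poly 0x04C11DB7, MSB-first,
    init 0xFFFFFFFF, no final XOR); rpu_crc is a hypothetical helper,
    and buf/size are placeholders for whatever span of the RPU the
    checksum is assumed to cover:

    #include <stddef.h>
    #include <stdint.h>
    #include <libavutil/crc.h>

    /* CRC-32/MPEG-2: the same parameterization FFmpeg uses elsewhere,
     * e.g. for MPEG-TS PSI section CRCs. */
    static uint32_t rpu_crc(const uint8_t *buf, size_t size)
    {
        return av_crc(av_crc_get_table(AV_CRC_32_IEEE), 0xFFFFFFFF, buf, size);
    }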

    Heavily influenced by https://github.com/quietvoid/dovi_tool
    Documentation drawn from US Patent 10,701,399 B2 and ETSI GS CCM 001

    Signed-off-by: Niklas Haas <git@haasn.dev>
    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

    • [DH] configure
    • [DH] libavcodec/Makefile
    • [DH] libavcodec/dovi_rpu.c
    • [DH] libavcodec/dovi_rpu.h
  • How to use ffmpeg in JavaScript to decode H.264 frames into RGB frames

    17 June 2020, by noel

    I'm trying to compile ffmpeg into javascript so that I can decode H.264 video streams using node. The streams are H.264 frames packed into RTP NALUs, so any solution has to be able to accept H.264 frames rather than a whole file name. These frames can't be in a container like MP4 or AVI, because then the demuxer needs the timestamp of every frame before demuxing can occur, but I'm dealing with a real-time stream, no containers.


    Streaming H.264 over RTP


    Below is the basic code I'm using to listen on a UDP socket. Inside the 'message' callback the packet is an RTP datagram, and the data portion of the datagram is an H.264 frame (P-frames and I-frames).


    var PORT = 33333;
    var HOST = '127.0.0.1';

    var dgram = require('dgram');
    var server = dgram.createSocket('udp4');

    server.on('listening', function () {
        var address = server.address();
        console.log('UDP Server listening on ' + address.address + ':' + address.port);
    });

    server.on('message', function (message, remote) {
        console.log(remote.address + ':' + remote.port + ' - ' + message);
        var frame = parse_rtp(message);

        var rgb_frame = some_library.decode_h264(frame); // This is what I need.
    });

    server.bind(PORT, HOST);
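
    For what it's worth, here is a minimal sketch, in C, of the framing a
    parse_rtp step has to undo per RFC 3550; rtp_payload is a hypothetical
    helper name, and the H.264 payload it returns is itself packetized per
    RFC 6184 (single NAL units, FU-A fragments, etc.):

    #include <stddef.h>
    #include <stdint.h>

    /* Returns a pointer to the RTP payload and sets *payload_len,
     * or NULL if the packet is malformed. */
    static const uint8_t *rtp_payload(const uint8_t *pkt, size_t len,
                                      size_t *payload_len)
    {
        if (len < 12 || (pkt[0] >> 6) != 2)      /* fixed header, version 2 */
            return NULL;
        size_t off = 12 + 4 * (pkt[0] & 0x0F);   /* skip CSRC list */
        if (pkt[0] & 0x10) {                     /* header extension present */
            if (len < off + 4)
                return NULL;
            off += 4 + 4 * (size_t)((pkt[off + 2] << 8) | pkt[off + 3]);
        }
        if (off > len)
            return NULL;
        if (pkt[0] & 0x20) {                     /* padding: last byte = count */
            uint8_t pad = pkt[len - 1];
            if (pad == 0 || pad > len - off)
                return NULL;
            len -= pad;
        }
        *payload_len = len - off;
        return pkt + off;
    }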


    I found the Broadway.js library, but I couldn't get it working, and it doesn't handle P-frames, which I need. I also found ffmpeg.js, but I couldn't get that to work either, and it needs a whole file, not a stream. Likewise, fluent-ffmpeg doesn't appear to support file streams; all of the examples show a filename being passed to the constructor. So I decided to write my own API.


    My current solution attempt


    I have been able to compile ffmpeg into one big js file, but I can't use it like that. I want to write an API around ffmpeg and then expose those functions to JS. So it seems to me like I need to do the following:


    1. Compile the ffmpeg components (avcodec, avutil, etc.) into llvm bitcode.
    2. Write a C wrapper that exposes the decoding functionality and uses EMSCRIPTEN_KEEPALIVE.
    3. Use emcc to compile the wrapper and link it to the bitcode created in step 1.
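
    To make step 2 concrete, here is a minimal sketch of such a wrapper,
    under stated assumptions: decoder_init and decode_frame are illustrative
    names rather than an existing API, the input is assumed to arrive as
    complete Annex-B access units, and a production version would also run
    an AVCodecParser to assemble packets from partial data:

    #include <stdint.h>
    #include <emscripten.h>
    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>

    static AVCodecContext *ctx;
    static AVFrame *frame;
    static AVPacket *pkt;
    static struct SwsContext *sws;

    EMSCRIPTEN_KEEPALIVE
    int decoder_init(void)
    {
        const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
        if (!codec)
            return -1;
        ctx   = avcodec_alloc_context3(codec);
        frame = av_frame_alloc();
        pkt   = av_packet_alloc();
        return avcodec_open2(ctx, codec, NULL);
    }

    /* Feed one Annex-B access unit; writes packed RGB24 into out when a
     * frame is produced. Returns 1 if out was filled, 0 if the decoder
     * needs more input, <0 on error. */
    EMSCRIPTEN_KEEPALIVE
    int decode_frame(uint8_t *data, int size, uint8_t *out)
    {
        pkt->data = data;
        pkt->size = size;
        if (avcodec_send_packet(ctx, pkt) < 0)
            return -1;
        if (avcodec_receive_frame(ctx, frame) < 0)
            return 0;
        sws = sws_getCachedContext(sws, frame->width, frame->height,
                                   frame->format, frame->width, frame->height,
                                   AV_PIX_FMT_RGB24, SWS_BILINEAR,
                                   NULL, NULL, NULL);
        uint8_t *dst[4] = { out, NULL, NULL, NULL };
        int dst_linesize[4] = { 3 * frame->width, 0, 0, 0 };
        sws_scale(sws, (const uint8_t * const *)frame->data, frame->linesize,
                  0, frame->height, dst, dst_linesize);
        return 1;
    }

    On the JS side you would then allocate the input/output buffers with
    Module._malloc and call these exports via Module.cwrap.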


    I found WASM+ffmpeg, but it's in Chinese and some of the steps aren't clear. In particular there is this step:


    emcc web.c process.c ../lib/libavformat.bc ../lib/libavcodec.bc ../lib/libswscale.bc ../lib/libswresample.bc ../lib/libavutil.bc \


    Where I think I'm stuck :(


    I don't understand how all the ffmpeg components get compiled into separate *.bc files. I followed the emmake commands in that article and I end up with one big .bc file.


    2 questions


    1. Does anyone know the steps to compile ffmpeg using emscripten so that I can expose some API to javascript?
    2. Is there a better way (with decent documentation/examples) to decode h264 video streams using node?


  • Why is one ffmpeg webm dash stream much larger than the others?

    5 January 2017, by ranvel

    Over the summer, I worked on putting together a script which took an x264 video/mp3 stream and broke it up into separate streams so that it would work via MSE-DASH. (Based heavily on the instructions on the webmproject.org website.) Those same scripts have since ceased to work, turning a 6 GB video into several 25 GB videos. I kept up with updates of ffmpeg, so I don't know when it stopped working, but I am guessing it was due to the way their DASH WebM implementation was updated.

    I found a new method which works better, but it still has a major problem with one stream. I was hoping someone could explain how this encoding works so that I could understand the underlying cause.

    #!/bin/bash
    COMMON_OPTS="-map 0:0 -an -threads 11 -cpu-used 4 -cmp chroma"
    WEBM_OPTS="-f webm -c:v vp9 -keyint_min 50 -g 50 -dash 1"

    ffmpeg -i $1 -vn -acodec libvorbis -ab 128k audio.webm &
    ffmpeg -i $1 $COMMON_OPTS $WEBM_OPTS -b:v 500k -vf scale=1280:720 -y vid-500k.webm &
    ffmpeg -i $1 $COMMON_OPTS $WEBM_OPTS -b:v 700k -vf scale=1280:720 -y vid-700k.webm &
    ffmpeg -i $1 $COMMON_OPTS $WEBM_OPTS -b:v 1000k -vf scale=1280:720 -y vid-1000k.webm &
    ffmpeg -i $1 $COMMON_OPTS $WEBM_OPTS -b:v 1500k -vf scale=1280:720 -y vid-1500k.webm

    The transcode is not yet complete, but you can see where this is headed:

    -rw-r--r--  1 user  staff    87M Jan  4 23:27 audio.webm
    -rw-r--r--  1 user  staff    27M Jan  4 23:42 vid-1000k.webm
    -rw-r--r--  1 user  staff   285M Jan  4 23:42 vid-1500k.webm
    -rw-r--r--  1 user  staff    15M Jan  4 23:42 vid-500k.webm
    -rw-r--r--  1 user  staff    20M Jan  4 23:42 vid-700k.webm

    The 1500k variant is disproportionately larger than the other streams.

    The other problem is that when I use a shorter video, let's say eight or nine minutes, the above configuration runs as expected and everything is perfect. I don't know where the limit is, since each test costs a lot of processing power and time, but if the video is less than ten minutes it works, and if it's longer than an hour it produces massive files.