Other articles (56)

  • List of compatible distributions

    26 April 2011, by

    The table below is the list of Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution name    Version name            Version number
    Debian               Squeeze                 6.x.x
    Debian               Wheezy                  7.x.x
    Debian               Jessie                  8.x.x
    Ubuntu               The Precise Pangolin    12.04 LTS
    Ubuntu               The Trusty Tahr         14.04

    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • MediaSPIP Core: Configuration

    9 November 2010, by

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin to work): a page for the general configuration of the skeleton; a page for the configuration of the site's home page; and a page for the configuration of the sections.
    It also provides an additional page, which only appears when certain plugins are enabled, for controlling their specific display and features (...)

  • Managing creation and editing rights for objects

    8 February 2011, by

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, which can be adjusted in the form template management; adding notes to articles; and adding captions and annotations to images.

On other sites (8772)

  • PyAV : force new framerate while remuxing stream ?

    7 June 2019, by ToxicFrog

    I have a Python program that receives a sequence of H264 video frames over the network, which I want to display and, optionally, record. The camera records at 30 FPS and sends frames as fast as it can, which isn’t consistently 30 FPS due to changing network conditions; sometimes it falls behind and then catches up, and rarely it drops frames entirely.

    The "display" part is easy ; I don’t need to care about timing or stream metadata, just display the frames as fast as they arrive :

    import av

    input = av.open(get_video_stream())
    for packet in input.demux(video=0):
     for frame in packet.decode():
       # A bunch of numpy and pygame code here to convert the frame to RGB
       # row-major and blit it to the screen

    The "record" part looks like it should be easy :

    input = av.open(get_video_stream())
    output = av.open(filename, 'w')
    output.add_stream(template=input.streams[0])
    for packet in input.demux(video=0):
     for frame in packet.decode():
       # ...display code...
     packet.stream = output.streams[0]
     output.mux_one(packet)
    output.close()

    And indeed this produces a valid MP4 file containing all the frames, and if I play it back with mplayer -fps 30 it works fine. But that -fps 30 is absolutely required:

    $ ffprobe output.mp4
    Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 960x720,
                     1277664 kb/s, 12800 fps, 12800 tbr, 12800 tbn, 25600 tbc (default)

    Note the 12,800 frames per second. It should look something like this (produced by calling mencoder -fps 30 and piping the frames into it):

    $ ffprobe mencoder_test.mp4
    Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 960x720,
                     2998 kb/s, 30 fps, 30 tbr, 90k tbn, 180k tbc (default)

    Inspecting the packets and frames I get from the input stream, I see:

    stream: time_base=1/1200000
    codec: framerate=25 time_base=1/50
    packet: dts=None pts=None duration=48000 time_base=1/1200000
    frame: dts=None pts=None time=None time_base=1/1200000

    So the packets and frames don’t have timestamps at all; they have a time_base which matches neither the timebase that ends up in the final file nor the actual framerate of the camera; and the codec has a framerate and timebase that match neither the final file, the camera framerate, nor the other video stream metadata!

    The PyAV documentation is all but entirely absent when it comes to issues of timing and framerate, but I have tried manually setting various combinations of stream, packet, and frame time_base, dts, and pts with no success. I can always remux the recorded videos again to get the correct framerate, but I’d rather write video files that are correct in the first place.

    So, how do I get PyAV to remux the video in a way that produces an output that is correctly marked as 30 fps?
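
    For concreteness, this is the kind of manual timestamping I have been experimenting with, so far without success. It is only a sketch: the tick step assumes a constant 30 fps, get_video_stream() is my own helper from above, and I am not sure whether stamping the packets alone is enough or whether the output stream's time_base also needs to be set.

    import av

    FPS = 30

    input = av.open(get_video_stream())
    in_stream = input.streams.video[0]
    output = av.open('output.mp4', 'w')
    out_stream = output.add_stream(template=in_stream)

    # One frame every 1/30 s, expressed in the input stream's time_base
    # (1/1200000 here), i.e. 40000 ticks per frame.
    ticks_per_frame = int(1 / (FPS * in_stream.time_base))

    frame_index = 0
    for packet in input.demux(video=0):
        if not packet.size:
            continue  # skip the empty flush packet at the end of the stream
        packet.stream = out_stream
        packet.pts = frame_index * ticks_per_frame
        packet.dts = packet.pts
        output.mux_one(packet)
        frame_index += 1

    output.close()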

  • lavc: Implement Dolby Vision RPU parsing

    3 January 2022, by Niklas Haas
    lavc: Implement Dolby Vision RPU parsing
    

    Based on a mixture of guesswork, partial documentation in patents, and
    reverse engineering of real-world samples. Confirmed working for all the
    samples I've thrown at it.

    Contains some annoying machinery to persist these values in between
    frames, which is needed in theory even though I've never actually seen a
    sample that relies on it in practice. May or may not work.

    Since the distinction matters greatly for parsing the color matrix
    values, this includes a small helper function to guess the right profile
    from the RPU itself in case the user has forgotten to forward the dovi
    configuration record to the decoder. (Which in practice, only ffmpeg.c
    and ffplay do..)

    Notable omissions / deviations:
    - CRC32 verification. This is based on the MPEG2 CRC32 type, which is
    similar to IEEE CRC32 but apparently different in subtle enough ways
    that I could not get it to pass verification no matter what parameters
    I fed to av_crc. It's possible the code needs some changes.
    - Linear interpolation support. Nothing documents this (beyond its
    existence) and no samples use it, so impossible to implement.
    - All of the extension metadata blocks, but these contain values that
    seem largely congruent with ST2094, HDR10, or other existing forms of
    side data, so I will defer parsing/attaching them to a future commit.
    - The patent describes a mechanism for predicting coefficients from
    previous RPUs, but the bit for the flag whether to use the
    prediction deltas or signal entirely new coefficients does not seem to
    be present in actual RPUs, so we ignore this subsystem entirely.
    - In the patent's spec, the NLQ subsystem also loops over
    num_nlq_pivots, but even in the patent the number is hard-coded to one
    iteration rather than signalled. So we only store one set of coefs.

    Heavily influenced by https://github.com/quietvoid/dovi_tool
    Documentation drawn from US Patent 10,701,399 B2 and ETSI GS CCM 001

    Signed-off-by: Niklas Haas <git@haasn.dev>
    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

    • [DH] configure
    • [DH] libavcodec/Makefile
    • [DH] libavcodec/dovi_rpu.c
    • [DH] libavcodec/dovi_rpu.h
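
    Regarding the CRC32 note in the commit message above: the av_crc mentioned there is libavutil's generic CRC routine. For context only (per the commit, no parameters fed to av_crc matched the checksums found in real RPUs), a minimal sketch of its two usual invocations:

    #include <libavutil/crc.h>

    /* MPEG-2 style CRC-32 (MSB-first, init 0xFFFFFFFF, no final XOR),
     * the form used for MPEG-TS PSI sections. */
    static uint32_t crc32_mpeg2(const uint8_t *buf, size_t len)
    {
        return av_crc(av_crc_get_table(AV_CRC_32_IEEE), 0xFFFFFFFF, buf, len);
    }

    /* zlib-compatible IEEE CRC-32 (LSB-first, init and final XOR 0xFFFFFFFF). */
    static uint32_t crc32_zlib(const uint8_t *buf, size_t len)
    {
        return av_crc(av_crc_get_table(AV_CRC_32_IEEE_LE), 0xFFFFFFFF, buf, len)
               ^ 0xFFFFFFFF;
    }
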
  • How to use ffmpeg in JavaScript to decode H.264 frames into RGB frames

    17 June 2020, by noel

    I'm trying to compile ffmpeg into JavaScript so that I can decode H.264 video streams using node. The streams are H.264 frames packed into RTP NALUs, so any solution has to be able to accept H.264 frames rather than a whole file name. These frames can't be in a container like MP4 or AVI, because then the demuxer needs the timestamp of every frame before demuxing can occur, but I'm dealing with a real-time stream, not a container.

    Streaming H.264 over RTP

    Below is the basic code I'm using to listen on a UDP socket. Inside the 'message' callback the data packet is an RTP datagram. The data portion of the datagram is an H.264 frame (P-frames and I-frames).

    var PORT = 33333;
    var HOST = '127.0.0.1';

    var dgram = require('dgram');
    var server = dgram.createSocket('udp4');

    server.on('listening', function () {
        var address = server.address();
        console.log('UDP Server listening on ' + address.address + ":" + address.port);
    });

    server.on('message', function (message, remote) {
        console.log(remote.address + ':' + remote.port + ' - ' + message);
        frame = parse_rtp(message);

        rgb_frame = some_library.decode_h264(frame); // This is what I need.

    });

    server.bind(PORT, HOST);

    I found the Broadway.js library, but I couldn't get it working and it doesn't handle P-frames, which I need. I also found ffmpeg.js, but couldn't get that to work either, and it needs a whole file, not a stream. Likewise, fluent-ffmpeg doesn't appear to support file streams; all of the examples show a filename being passed to the constructor. So I decided to write my own API.

    My current solution attempt

    I have been able to compile ffmpeg into one big js file, but I can't use it like that. I want to write an API around ffmpeg and then expose those functions to JS. So it seems to me like I need to do the following:

    1. Compile ffmpeg components (avcodec, avutil, etc.) into LLVM bitcode.
    2. Write a C wrapper that exposes the decoding functionality and uses EMSCRIPTEN_KEEPALIVE (a rough sketch of what I have in mind follows this list).
    3. Use emcc to compile the wrapper and link it to the bitcode created in step 1.
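
    To make step 2 concrete, here is a rough sketch of the kind of wrapper I have in mind (the function names, the single global decoder, and the packed RGB24 output convention are my own choices, and I haven't verified yet that this builds under emscripten):

    // decoder_wrapper.c -- hypothetical wrapper around libavcodec/libswscale.
    #include <emscripten/emscripten.h>
    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>

    static AVCodecContext *ctx;
    static AVFrame *frame;
    static AVPacket *pkt;
    static struct SwsContext *sws;

    EMSCRIPTEN_KEEPALIVE
    int decoder_init(void) {
        const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
        if (!codec) return -1;
        ctx   = avcodec_alloc_context3(codec);
        frame = av_frame_alloc();
        pkt   = av_packet_alloc();
        if (!ctx || !frame || !pkt) return -1;
        return avcodec_open2(ctx, codec, NULL);
    }

    // Feed one Annex-B access unit; on success, packed RGB24 is written to `out`.
    // Returns 1 if a frame was produced, 0 if the decoder needs more data,
    // and a negative AVERROR value on failure.
    EMSCRIPTEN_KEEPALIVE
    int decode_frame(uint8_t *data, int size, uint8_t *out, int out_size) {
        pkt->data = data;
        pkt->size = size;
        int ret = avcodec_send_packet(ctx, pkt);
        if (ret < 0) return ret;
        ret = avcodec_receive_frame(ctx, frame);
        if (ret == AVERROR(EAGAIN)) return 0;
        if (ret < 0) return ret;

        if (out_size < 3 * frame->width * frame->height) return -1;
        sws = sws_getCachedContext(sws, frame->width, frame->height, frame->format,
                                   frame->width, frame->height, AV_PIX_FMT_RGB24,
                                   SWS_BILINEAR, NULL, NULL, NULL);
        uint8_t *dst[4]      = { out };
        int      dst_line[4] = { 3 * frame->width };
        sws_scale(sws, (const uint8_t * const *)frame->data, frame->linesize,
                  0, frame->height, dst, dst_line);
        return 1;
    }

    On the JavaScript side I would then expect to call these through Module.cwrap/ccall and pass pointers into Module.HEAPU8, but I haven't gotten that far yet.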

    I found WASM+ffmpeg, but it's in Chinese and some of the steps aren't clear. In particular there is this step:

    emcc web.c process.c ../lib/libavformat.bc ../lib/libavcodec.bc ../lib/libswscale.bc ../lib/libswresample.bc ../lib/libavutil.bc \

     :( Where I think I'm stuck

    I don't understand how all the ffmpeg components get compiled into separate *.bc files. I followed the emmake commands in that article and I end up with one big .bc file.

    2 questions

    1. Does anyone know the steps to compile ffmpeg using emscripten so that I can expose some API to JavaScript?
    2. Is there a better way (with decent documentation/examples) to decode H.264 video streams using node?
