
Media (1)

Keyword: - Tags -/musée

Other articles (98)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once enabled, a preconfiguration is automatically put in place by MediaSPIP init, allowing the new feature to be operational immediately. It is therefore not necessary to go through a configuration step for this.

  • Customizing by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

On other sites (10866)

  • Encoding of two full hd streams in Linux + GPU with Intel HD4000 / VA API / FFMPEG / OpenGL

    27 June 2017, by qknight

    I want to encode/stream two full HD streams in real time from my laptop to a remote location, using Linux/Xorg on the host.

    VA API

    For this I've been playing with the VA API, but the performance is pretty bad at 5.59 fps (see the paste below).

    FFMPEG

    Using ffmpeg with CPU encoding I get about 200 fps, but then all cores of my Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz are busy and the fan turns on.
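
    For comparison, a minimal sketch of a VA-API-accelerated ffmpeg invocation (assuming an ffmpeg build with VAAPI support; the render node path, input file and bitrate here are illustrative, not taken from the setup above):

    # hedged sketch: offload H.264 encoding to the GPU via VA-API
    ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 \
           -vf 'format=nv12,hwupload' -c:v h264_vaapi -b:v 15M output.mp4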

    future plans

    I want GPU support for encoding, and later to integrate this into a program which streams a virtual Xorg 'screen'; see https://lastlog.de/wiki/index.php/Raspberry_PI_virtual_screen for more details on my plans.

    Maybe H.264 isn't even what I want? If someone advises a different implementation, I'd welcome that.

    Besides VA API there seems to be QuickSync, but I haven't experimented with that yet as it is not packaged on NixOS just yet.

    Note: I need a library, to have smooth integration into the code.

    h264encode -w 1920 -h 1080 --profile MP
    Source frame is 1920x1080 and will code clip to 1920x1088 with crop

    INPUT:Try to encode H264...
    INPUT : Resolution : 1920x1080, 60 frames
    INPUT : FrameRate : 30
    INPUT : Bitrate : 14929920
    INPUT : Slieces : 1
    INPUT : IntraPeriod : 30
    INPUT : IDRPeriod : 60
    INPUT : IpPeriod : 1
    INPUT : Initial QP : 26
    INPUT : Min QP : 0
    INPUT : Source YUV : AUTO generated
    INPUT : Coded Clip : /tmp/test.264
    INPUT : Rec Clip : Not save reconstructed frame

    libva info : VA-API version 0.38.1
    libva info : va_getDriverName() returns 0
    libva info : Trying to open /run/opengl-driver/lib/dri/i965_drv_video.so
    libva info : Found init function __vaDriverInit_0_38
    libva info : va_openDriver() returns 0
    Use profile VAProfileH264Main
    Support rate control mode (0x12):CBR CQP
    RateControl mode : CQP
    Support VAConfigAttribEncPackedHeaders
    Support packed sequence headers
    Support packed picture headers
    Support packed slice headers
    Support packed misc headers
    Support 1 RefPicList0 and 1 RefPicList1
    Loading data into surface 15.....Complete surface loading
    \00000059(054456 bytes coded)

    PERFORMANCE : Frame Rate : 5.59 fps (60 frames, 10730 ms (178.83 ms per frame))
    PERFORMANCE : Compression ratio : 51:1
    PERFORMANCE : UploadPicture : 10467 ms (174.45, 97.55% percent)
    PERFORMANCE : vaBeginPicture : 0 ms (0.00, 0.00% percent)
    PERFORMANCE : vaRenderHeader : 1 ms (0.02, 0.01% percent)
    PERFORMANCE : vaEndPicture : 42 ms (0.70, 0.39% percent)
    PERFORMANCE : vaSyncSurface : 244 ms (4.07, 2.27% percent)
    PERFORMANCE : SavePicture : 7 ms (0.12, 0.07% percent)
    PERFORMANCE : Others : -31 ms (71582787.75, 40027653.91% percent)
    (Multithread enabled, the timing is only for reference)

    I've seen https://www.reddit.com/r/linux/comments/1qk1yu/is_there_currently_opensource_software_to_encode/ though, but I'm not sure what to do with it.

  • How to use ffmpeg in JavaScript to decode H.264 frames into RGB frames

    17 June 2020, by noel

    I'm trying to compile ffmpeg into JavaScript so that I can decode H.264 video streams using Node. The streams are H.264 frames packed into RTP NALUs, so any solution has to be able to accept H.264 frames rather than a whole file name. These frames can't be in a container like MP4 or AVI, because then the demuxer needs the timestamp of every frame before demuxing can occur, but I'm dealing with a real-time stream: no containers.

    Streaming H.264 over RTP
    Below is the basic code I'm using to listen on a UDP socket. Inside the 'message' callback the data packet is an RTP datagram. The payload portion of the datagram is an H.264 frame (P-frames and I-frames).

var PORT = 33333;
var HOST = '127.0.0.1';

var dgram = require('dgram');
var server = dgram.createSocket('udp4');

server.on('listening', function () {
    var address = server.address();
    console.log('UDP Server listening on ' + address.address + ':' + address.port);
});

server.on('message', function (message, remote) {
    console.log(remote.address + ':' + remote.port + ' - ' + message);
    var frame = parse_rtp(message); // strip the RTP header; see the hedged sketch below

    var rgb_frame = some_library.decode_h264(frame); // This is what I need.
});

server.bind(PORT, HOST);
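
    A minimal sketch of what parse_rtp could do (a hypothetical helper, not an existing library function: it strips the fixed 12-byte RTP header from RFC 3550 plus any CSRC entries, and ignores header extensions, padding and FU-A fragmentation for brevity):

function parse_rtp(msg) {
    var csrcCount = msg[0] & 0x0f;      // CC field: number of CSRC identifiers
    var headerLen = 12 + 4 * csrcCount; // fixed header + CSRC list
    return msg.slice(headerLen);        // payload: the H.264 NALU (or a fragment)
}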


    



    I found the Broadway.js library, but I couldn't get it working, and it doesn't handle P-frames, which I need. I also found ffmpeg.js, but couldn't get that to work either, and it needs a whole file, not a stream. Likewise, fluent-ffmpeg doesn't appear to support file streams; all of the examples show a filename being passed to the constructor. So I decided to write my own API.

    My current solution attempt

    I have been able to compile ffmpeg into one big JS file, but I can't use it like that. I want to write an API around ffmpeg and then expose those functions to JS. So it seems to me like I need to do the following (a sketch of step 2 appears after the list):

    1. Compile the ffmpeg components (avcodec, avutil, etc.) into LLVM bitcode.
    2. Write a C wrapper that exposes the decoding functionality and uses EMSCRIPTEN_KEEPALIVE.
    3. Use emcc to compile the wrapper and link it to the bitcode created in step 1.
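
    A sketch of what the step-2 wrapper could look like (the names decoder_init and decode_frame are hypothetical; it assumes each call receives a complete Annex-B access unit, and error handling is minimal):

// decoder.c - hedged sketch of a wrapper, not an existing API
#include <emscripten.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
#include <libavutil/mem.h>

static AVCodecContext *ctx;
static AVFrame *frame;
static AVPacket *pkt;
static struct SwsContext *sws;
static uint8_t *rgb_buf;

EMSCRIPTEN_KEEPALIVE
int decoder_init(void) {
    const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (!codec) return -1;
    ctx = avcodec_alloc_context3(codec);
    if (avcodec_open2(ctx, codec, NULL) < 0) return -1;
    frame = av_frame_alloc();
    pkt = av_packet_alloc();
    return 0;
}

// Feed one Annex-B access unit; returns a pointer to packed RGB24 pixels,
// or 0 if the decoder needs more input before it can emit a frame.
EMSCRIPTEN_KEEPALIVE
uint8_t *decode_frame(uint8_t *data, int size) {
    pkt->data = data;
    pkt->size = size;
    if (avcodec_send_packet(ctx, pkt) < 0) return 0;
    if (avcodec_receive_frame(ctx, frame) < 0) return 0; // e.g. EAGAIN: need more input

    if (!sws) { // lazily create the YUV -> RGB24 converter once dimensions are known
        sws = sws_getContext(frame->width, frame->height, frame->format,
                             frame->width, frame->height, AV_PIX_FMT_RGB24,
                             SWS_BILINEAR, NULL, NULL, NULL);
        rgb_buf = av_malloc(frame->width * frame->height * 3);
    }
    uint8_t *dst[1] = { rgb_buf };
    int dst_stride[1] = { frame->width * 3 };
    sws_scale(sws, (const uint8_t * const *)frame->data, frame->linesize,
              0, frame->height, dst, dst_stride);
    return rgb_buf;
}

    Once linked, such a function would be reachable from JS through emscripten's Module.cwrap (or Module._decode_frame directly).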


    



    I found WASM+ffmpeg, but it's in Chinese and some of the steps aren't clear. In particular there is this step:

    emcc web.c process.c ../lib/libavformat.bc ../lib/libavcodec.bc ../lib/libswscale.bc ../lib/libswresample.bc ../lib/libavutil.bc \

    :( Where I think I'm stuck

    I don't understand how all the ffmpeg components get compiled into separate *.bc files. I followed the emmake commands in that article and I end up with one big .bc file.
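
    In case it helps, a hedged sketch of how per-library archives can be produced (configure flags vary across ffmpeg and emscripten releases; with the old fastcomp backend the resulting .a archives are LLVM bitcode, which that article apparently renames to .bc):

emconfigure ./configure --cc=emcc --ar=emar --ranlib=emranlib \
    --disable-x86asm --disable-inline-asm --disable-doc --disable-programs
emmake make -j4
# each library now sits in its own archive, e.g. libavcodec/libavcodec.a,
# libavutil/libavutil.a, ...; copy/rename them to ../lib/libavcodec.bc etc.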

    



    2 questions

    1. Does anyone know the steps to compile ffmpeg using emscripten so that I can expose some API to JavaScript?

    2. Is there a better way (with decent documentation/examples) to decode H.264 video streams using Node?
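
    On question 2, one commonly suggested alternative that sidesteps emscripten entirely (a sketch assuming a native ffmpeg binary is installed on the host) is to pipe the Annex-B stream through an ffmpeg child process and read raw RGB24 frames back:

var spawn = require('child_process').spawn;

// -f h264 tells ffmpeg to expect a raw Annex-B H.264 elementary stream on stdin;
// rawvideo/rgb24 on stdout yields exactly width*height*3 bytes per frame
var ff = spawn('ffmpeg', ['-f', 'h264', '-i', 'pipe:0',
                          '-f', 'rawvideo', '-pix_fmt', 'rgb24', 'pipe:1']);

ff.stdout.on('data', function (chunk) {
    // accumulate chunks until width*height*3 bytes form one complete RGB frame
});

// in the UDP 'message' callback above: ff.stdin.write(frame);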