Other articles (76)

  • The farm's recurring Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of all the instances of the mutualisation on a regular basis. Combined with a system Cron on the central site of the mutualisation, this makes it possible to generate regular visits to the various sites and to keep the tasks of rarely visited sites from being too (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer flash player is used.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (12479)

  • What is the best way to capture a screenshot from a UDP stream?

    8 November 2017, by yash17

    I'm trying to capture a screenshot from a UDP stream using ffmpeg on an Ubuntu 14.04 system.
    Following is the command:

    ffmpeg -y -i udp_ip -vframes 1 -q:v 1 test.png

    But the captured image has very poor resolution, and I observed a lag while taking the screenshot.

    Please suggest the best tool or approach for taking a screenshot as quickly as possible and at the best possible image resolution.

    Edit: log files

    ffmpeg version 3.3.2 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
     configuration: --extra-libs=-ldl --prefix=/opt/ffmpeg --mandir=/usr/share/man --enable-avresample --disable-debug --enable-nonfree --enable-gpl --enable-version3 --enable-libopencore-amrnb --enable-libopencore-amrwb --disable-decoder=amrnb --disable-decoder=amrwb --enable-libpulse --enable-libfreetype --enable-gnutls --enable-libx264 --enable-libx265 --enable-libfdk-aac --enable-libvorbis --enable-libtheora --enable-libmp3lame --enable-libopus --enable-libvpx --enable-libspeex --enable-libass --enable-avisynth --enable-libsoxr --enable-libxvid --enable-libvidstab --enable-libwavpack --enable-nvenc
     libavutil      55. 58.100 / 55. 58.100
     libavcodec     57. 89.100 / 57. 89.100
     libavformat    57. 71.100 / 57. 71.100
     libavdevice    57.  6.100 / 57.  6.100
     libavfilter     6. 82.100 /  6. 82.100
     libavresample   3.  5.  0 /  3.  5.  0
     libswscale      4.  6.100 /  4.  6.100
     libswresample   2.  7.100 /  2.  7.100
     libpostproc    54.  5.100 / 54.  5.100
    [mpeg2video @ 0x390cc60] Invalid frame dimensions 0x0.
       Last message repeated 7 times
    Input #0, mpegts, from 'udp://@xxx.xx.xx.xx:xxxx':
     Duration: N/A, start: 144.130744, bitrate: 4128 kb/s
     Program 1
       Metadata:
         service_name    : Program-1  
         service_provider: Encoder
       Stream #0:0[0x42]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p(tv, top first), 720x576 [SAR 16:15 DAR 4:3], 4000 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc
       Stream #0:1[0x43]: Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, s16p, 128 kb/s
    Stream mapping:
     Stream #0:0 -> #0:0 (mpeg2video (native) -> png (native))
    Press [q] to stop, [?] for help
    Output #0, image2, to 'player.png':
     Metadata:
       encoder         : Lavf57.71.100
       Stream #0:0: Video: png, rgb24, 720x576 [SAR 16:15 DAR 4:3], q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc
       Metadata:
         encoder         : Lavc57.89.100 png
    frame=    1 fps=0.0 q=-0.0 Lsize=N/A time=00:00:00.04 bitrate=N/A dup=1 drop=1 speed=0.729x    
    video:777kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
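
    For reference, a hedged variant of the command: the screenshot can never be sharper than the input stream's 720x576, and PNG is lossless, so -q:v should have little effect here; the main lever for the startup lag is how much of the stream ffmpeg probes before decoding. The probesize/analyzeduration values below are only assumptions to tune, and the UDP address is the placeholder from the log above.

    # sketch only: probe values are guesses to tune; the address is the placeholder from the question
    ffmpeg -probesize 500000 -analyzeduration 500000 -i udp://@xxx.xx.xx.xx:xxxx -frames:v 1 -y test.png
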
  • Live audio using ffmpeg, JavaScript and Node.js

    8 November 2017, by klaus

    I am new to this thing. Please don't hang me for the poor grammar. I am trying to create a proof-of-concept application which I will later extend. It does the following: we have an HTML page which asks for permission to use the microphone. We capture the microphone input and send it via WebSocket to a Node.js app.

    JS (Client):

    var bufferSize = 4096;
    var socket = new WebSocket(URL);
    var myPCMProcessingNode = context.createScriptProcessor(bufferSize, 1, 1);
    myPCMProcessingNode.onaudioprocess = function(e) {
     var input = e.inputBuffer.getChannelData(0);
     socket.send(convertFloat32ToInt16(input));
    }

    function convertFloat32ToInt16(buffer) {
     l = buffer.length;
     buf = new Int16Array(l);
     while (l--) {
       buf[l] = Math.min(1, buffer[l])*0x7FFF;
     }
     return buf.buffer;
    }

    navigator.mediaDevices.getUserMedia({audio:true, video:false})
                                   .then(function(stream){
                                     var microphone = context.createMediaStreamSource(stream);
                                     microphone.connect(myPCMProcessingNode);
                                     myPCMProcessingNode.connect(context.destination);
                                   })
                                   .catch(function(e){});

    On the server we take each incoming buffer, run it through ffmpeg, and send whatever comes out of stdout to another device using a Node.js 'http' POST. The device has a speaker. We are basically trying to create a one-way audio link from the browser to the device.

    Node.js (Server):

    var WebSocketServer = require('websocket').server;
    var http = require('http');
    var children = require('child_process');

    wsServer.on('request', function(request) {
     var connection = request.accept(null, request.origin);
     connection.on('message', function(message) {
       if (message.type === 'utf8') { /*NOP*/ }
       else if (message.type === 'binary') {
         ffm.stdin.write(message.binaryData);
       }
     });
     connection.on('close', function(reasonCode, description) {});
     connection.on('error', function(error) {});
    });

    var ffm = children.spawn(
       './ffmpeg.exe'
      ,'-stdin -f s16le -ar 48k -ac 2 -i pipe:0 -acodec pcm_u8 -ar 48000 -f aiff pipe:1'.split(' ')
    );

    ffm.on('exit',function(code,signal){});

    ffm.stdout.on('data', (data) => {
     req.write(data);
    });

    var options = {
     host: 'xxx.xxx.xxx.xxx',
     port: xxxx,
     path: '/path/to/service/on/device',
     method: 'POST',
     headers: {
      'Content-Type': 'application/octet-stream',
      'Content-Length': 0,
      'Authorization' : 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
      'Transfer-Encoding' : 'chunked',
      'Connection': 'keep-alive'
     }
    };

    var req = http.request(options, function(res) {});

    The device supports only continuous POST and only a couple of formats (ulaw, aiff, wav).

    This solution doesn't seem to work. From the device speaker we only hear something like white noise.

    Also, I think I may have a problem with the buffer I am sending to ffmpeg's stdin: I tried dumping whatever comes out of the WebSocket to a .wav file and playing it with VLC, and it plays the whole recording very fast; 10 seconds of recording played back in about 1 second.

    I am new to audio processing and have been searching for about 3 days now for ways to improve this, and have found nothing.

    I would ask the community for two things:

    1. Is something wrong with my approach? What more can I do to make this work? I will post more details if required.

    2. If what I am doing is reinventing the wheel, then I would like to know what other software or third-party service (like Amazon or whatever) can accomplish the same thing.

    Thank you.
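
    For what it's worth, the declared input format has to match what the browser actually sends: the ScriptProcessor delivers mono samples at the AudioContext sample rate (often 44100 Hz rather than 48000 Hz), which the client then converts to 16-bit integers, so telling ffmpeg "-ar 48k -ac 2" for that data is one plausible cause of the sped-up, noisy result. Below is a hedged sketch of the argument string the child process could use; the 44100 Hz / mono input values are assumptions to check against context.sampleRate in the page, and raw mu-law output is chosen only because, unlike AIFF or WAV, it needs no seekable header when written to a pipe.

    # sketch: the input side must match what the page really sends (assumed here: mono s16le at 44100 Hz)
    # the output rate/format are guesses; adjust to whatever the device actually expects
    ffmpeg -f s16le -ar 44100 -ac 1 -i pipe:0 -acodec pcm_mulaw -ar 8000 -ac 1 -f mulaw pipe:1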

  • FFmpeg: Multiple x265-params are not recognized

    13 September 2022, by zinon

    I'm using x265 in ffmpeg and I want to use multiple x265-params in one encoding.

    When I use more than one parameter, ffmpeg does not recognize them.

    My script is:

    ffmpeg -s:v 1440x1080 -r 25 -i incident_10d_1440x1080_25.yuv -c:v rawvideo \
-pix_fmt yuv420p -c:v libx265 -x265-params "--qp=16:--preset=medium:--psnr" \
out_1440x1080_qp16.mp4

    I set the quantization parameter to 16.

    But my output in the terminal contains the following:

    x265 [info]: Main profile, Level-4 (Main tier)
x265 [info]: Thread pool created using 4 threads
x265 [info]: Slices                              : 1
x265 [info]: frame threads / pool features       : 2 / wpp(17 rows)
x265 [info]: Coding QT: max CU size, min CU size : 64 / 8
x265 [info]: Residual QT: max TU size, max depth : 32 / 1 inter / 1 intra
x265 [info]: ME / range / subpel / merge         : hex / 57 / 2 / 2
x265 [info]: Keyframe min / max / scenecut / bias: 25 / 250 / 40 / 5.00
x265 [info]: Lookahead / bframes / badapt        : 20 / 4 / 2
x265 [info]: b-pyramid / weightp / weightb       : 1 / 1 / 0
x265 [info]: References / ref-limit  cu / depth  : 3 / on / on
x265 [info]: AQ: mode / str / qg-size / cu-tree  : 1 / 1.0 / 32 / 1
x265 [info]: Rate Control / qCompress            : CRF-28.0 / 0.60

    As can be seen, I get Rate Control / qCompress : CRF-28.0 / 0.60.

    The correct one should be x265 [info]: Rate Control : CQP-16.

    When I have only this one parameter in x265-params, like -x265-params "--qp=16", it works properly.
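
    As far as I know, whatever is inside -x265-params is handed to x265's own option parser as colon-separated key=value pairs, so the leading dashes belong only on the standalone x265 CLI, and the preset is normally set through ffmpeg's own -preset option rather than inside the list. A hedged rewrite of the command along those lines (the redundant -c:v rawvideo is dropped, and psnr=1 is assumed to be the key=value form of --psnr):

    # sketch: x265-params entries without leading dashes; preset passed via ffmpeg's own -preset
    ffmpeg -s:v 1440x1080 -r 25 -i incident_10d_1440x1080_25.yuv \
    -pix_fmt yuv420p -c:v libx265 -preset medium -x265-params "qp=16:psnr=1" \
    out_1440x1080_qp16.mp4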