
Other articles (14)

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Permissions overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Managing the farm

    2 March 2010, by

    The farm as a whole is managed by "super admins".
    Certain settings can be adjusted to regulate the needs of the different channels.
    Initially it uses the "Gestion de mutualisation" plugin.

On other sites (3651)

  • Live audio using ffmpeg, javascript and nodejs

    8 November 2017, by klaus

    I am new to this thing. Please don't hang me for the poor grammar. I am trying to create a proof-of-concept application which I will later extend. It does the following: we have an HTML page which asks for permission to use the microphone. We capture the microphone input and send it via websocket to a Node.js app.

    JS (Client):

    var bufferSize = 4096;
    var socket = new WebSocket(URL);
    // Assumed (not shown in the original post): the AudioContext the nodes hang off of.
    var context = new (window.AudioContext || window.webkitAudioContext)();
    var myPCMProcessingNode = context.createScriptProcessor(bufferSize, 1, 1);
    myPCMProcessingNode.onaudioprocess = function(e) {
      // Mono Float32 samples at context.sampleRate, sent as 16-bit PCM.
      var input = e.inputBuffer.getChannelData(0);
      socket.send(convertFloat32ToInt16(input));
    }

    function convertFloat32ToInt16(buffer) {
      var l = buffer.length;
      var buf = new Int16Array(l);
      while (l--) {
        buf[l] = Math.min(1, buffer[l]) * 0x7FFF;
      }
      return buf.buffer;
    }

    navigator.mediaDevices.getUserMedia({audio:true, video:false})
                                   .then(function(stream){
                                     var microphone = context.createMediaStreamSource(stream);
                                     microphone.connect(myPCMProcessingNode);
                                     myPCMProcessingNode.connect(context.destination);
                                   })
                                   .catch(function(e){});

    On the server we take each incoming buffer, run it through ffmpeg, and send whatever comes out on stdout to another device using a Node.js 'http' POST. The device has a speaker. We are basically trying to create a one-way audio link from the browser to the device.

    Node.js (Server):

    var WebSocketServer = require('websocket').server;
    var http = require('http');
    var children = require('child_process');
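    // Assumed setup (not shown in the original post): an HTTP server for the
    // WebSocket server to attach to; the port is a placeholder.
    var httpServer = http.createServer(function(request, response) { response.end(); });
    httpServer.listen(3000);
    var wsServer = new WebSocketServer({ httpServer: httpServer });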

    wsServer.on('request', function(request) {
     var connection = request.accept(null, request.origin);
     connection.on('message', function(message) {
       if (message.type === 'utf8') { /*NOP*/ }
       else if (message.type === 'binary') {
         ffm.stdin.write(message.binaryData);
       }
     });
     connection.on('close', function(reasonCode, description) {});
     connection.on('error', function(error) {});
    });

    // As written, this asks ffmpeg to read raw signed 16-bit little-endian PCM
    // (48 kHz, 2 channels) from stdin and to write 8-bit PCM in an AIFF container
    // to stdout.
    var ffm = children.spawn(
       './ffmpeg.exe'
      ,'-stdin -f s16le -ar 48k -ac 2 -i pipe:0 -acodec pcm_u8 -ar 48000 -f aiff pipe:1'.split(' ')
    );

    ffm.on('exit',function(code,signal){});

    ffm.stdout.on('data', (data) => {
     req.write(data);
    });

    var options = {
     host: 'xxx.xxx.xxx.xxx',
     port: xxxx,
     path: '/path/to/service/on/device',
     method: 'POST',
     headers: {
      'Content-Type': 'application/octet-stream',
      'Content-Length': 0,
      'Authorization' : 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
      'Transfer-Encoding' : 'chunked',
      'Connection': 'keep-alive'
     }
    };

    var req = http.request(options, function(res) {});

    The device supports only continuous POST and only a couple of formats (ulaw, aiff, wav).

    This solution doesn't seem to work: from the device's speaker we only hear something like white noise.

    Also, I think I may have a problem with the buffer I am sending to ffmpeg's stdin. I tried dumping whatever comes out of the websocket to a .wav file and playing it with VLC, and it plays everything in the recording very fast: 10 seconds of recording played back in about 1 second.
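    As an aside on that dump test: playback speed is driven entirely by the header the player reads. A .wav file declares its sample rate, channel count and bit depth, and if those fields are absent or don't match what the browser actually captured (mono Float32 at context.sampleRate, converted here to 16-bit), the recording plays too fast or too slow. Below is a minimal sketch, not taken from the post, that wraps raw 16-bit PCM in a canonical 44-byte WAV header; the file names and the 44100 Hz / mono parameters are assumptions to be replaced by the real capture settings.

    #include <stdio.h>
    #include <stdint.h>

    static void write_le32(FILE *f, uint32_t v) { for (int i = 0; i < 4; i++) fputc((v >> (8 * i)) & 0xff, f); }
    static void write_le16(FILE *f, uint16_t v) { for (int i = 0; i < 2; i++) fputc((v >> (8 * i)) & 0xff, f); }

    /* Canonical 44-byte RIFF/WAVE header for uncompressed PCM. */
    static void write_wav_header(FILE *f, uint32_t rate, uint16_t channels,
                                 uint16_t bits, uint32_t data_bytes)
    {
        fwrite("RIFF", 1, 4, f); write_le32(f, 36 + data_bytes); fwrite("WAVE", 1, 4, f);
        fwrite("fmt ", 1, 4, f); write_le32(f, 16);          /* fmt chunk size   */
        write_le16(f, 1);                                    /* format 1 = PCM   */
        write_le16(f, channels);
        write_le32(f, rate);
        write_le32(f, rate * channels * bits / 8);           /* byte rate        */
        write_le16(f, channels * bits / 8);                  /* block align      */
        write_le16(f, bits);
        fwrite("data", 1, 4, f); write_le32(f, data_bytes);
    }

    int main(void)
    {
        /* Hypothetical file names; capture.raw holds the bytes dumped from the websocket. */
        FILE *in  = fopen("capture.raw", "rb");
        FILE *out = fopen("capture.wav", "wb");
        if (!in || !out) return 1;

        fseek(in, 0, SEEK_END);
        uint32_t data_bytes = (uint32_t)ftell(in);
        fseek(in, 0, SEEK_SET);

        write_wav_header(out, 44100 /* assumed */, 1 /* mono */, 16, data_bytes);

        char buf[4096];
        size_t n;
        while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
            fwrite(buf, 1, n, out);

        fclose(in); fclose(out);
        return 0;
    }

    If the wrapped file then plays at the right speed, the capture itself is fine and the mismatch lies in the parameters declared further down the pipeline.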

    I am new to audio processing and have searched for about 3 days now for solutions on how to improve this and found nothing.

    I would ask the community for 2 things:

    1. Is something wrong with my approach? What more can I do to make this work? I will post more details if required.

    2. If what I am doing is reinventing the wheel, then I would like to know what other software / 3rd-party service (like Amazon or whatever) can accomplish the same thing.

    Thank you.

  • FFmpeg library: modified muxing sample for FLTP input and FLTP audio, loses audio

    21 January 2014, by taansari

    Based on the muxing sample that comes with the FFmpeg docs, I have modified it from S16 input format to FLTP (planar stereo), outputting to webm format (stereo).

    Since the input is now FLTP, I am filling two arrays, then encoding again to FLTP. There are no obvious errors on screen, but the resulting webm video does not play any audio (just the video content). This is just a proof of concept to understand things; here is an added (crude) function to fill up the input FLTP stereo buffer:

    static void get_audio_frame_for_planar_stereo(int16_t **samples, int frame_size, int nb_channels)
    {
       int j, i, v[2];
       int16_t *q1 = (int16_t *) samples[0];
       int16_t *q2 = (int16_t *) samples[1];

       for (j = 0; j < frame_size; j++)
       {
           v[0] = (int)(sin(t) * 10000);
           v[1] = (int)(tan(t) * 10000);
           *q1++ = v[0];
           *q2++ = v[1];
           t     += tincr;
           tincr += tincr2;
       }
    }

    I am calling it from inside the write_audio_frame() function.

    Note also that wherever the code referred to AV_SAMPLE_FMT_S16 as input, I have changed it to AV_SAMPLE_FMT_FLTP.
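    For reference, the two sample formats lay the frame data out quite differently: AV_SAMPLE_FMT_S16 keeps both channels interleaved as int16_t in data[0] with full scale around +/-32767, while AV_SAMPLE_FMT_FLTP gives each channel its own float plane (data[0], data[1]) with full scale at +/-1.0. Here is a minimal sketch of the two layouts, not taken from the linked source; the AVFrame is assumed to be allocated with av_frame_get_buffer() for the matching format, stereo layout and nb_samples, and t/tincr mirror the muxing sample's phase variables.

    #include <math.h>
    #include <stdint.h>
    #include <libavutil/frame.h>

    /* S16: one interleaved buffer, integer samples. */
    static void fill_s16_interleaved(AVFrame *frame, double *t, double tincr)
    {
        int16_t *q = (int16_t *)frame->data[0];          /* L R L R ...            */
        for (int i = 0; i < frame->nb_samples; i++) {
            int16_t v = (int16_t)(sin(*t) * 10000);
            *q++ = v;                                    /* left                   */
            *q++ = v;                                    /* right                  */
            *t += tincr;
        }
    }

    /* FLTP: one separate plane per channel, float samples in [-1.0, 1.0]. */
    static void fill_fltp_planar(AVFrame *frame, double *t, double tincr)
    {
        float *left  = (float *)frame->data[0];
        float *right = (float *)frame->data[1];
        for (int i = 0; i < frame->nb_samples; i++) {
            float v = (float)sin(*t);
            left[i]  = v;
            right[i] = v;
            *t += tincr;
        }
    }

    Storing int16-scaled integers in planes the encoder later reads as floats is effectively what the original get_audio_frame_for_planar_stereo() did, which lines up with the silent audio and with the float-based version shown in the edit below.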

    The whole workable source is here:

    https://gist.github.com/anonymous/05d1d7662e9feafc45a6

    When run through ffprobe.exe with this command:

    ffprobe -show_packets output.webm >output.txt

    I see nothing out of the ordinary; all pts/dts values appear to be in place:

    https://gist.github.com/anonymous/3ed0d6308700ab991704

    Could someone highlight the cause of this misinterpretation?

    Thanks for your time...

    P.S. I am using Zeranoe FFmpeg Windows builds (32-bit), built on Jan 9 2014 22:04:35 with gcc 4.8.2 (GCC).

    Edit: Based on your guidance elsewhere, I tried the following:

       /* set options */
       //av_opt_set_int       (swr_ctx, "in_channel_count",   c->channels,       0);
       //av_opt_set_int       (swr_ctx, "in_sample_rate",     c->sample_rate,    0);
       //av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt",      AV_SAMPLE_FMT_FLTP, 0);
       //av_opt_set_int       (swr_ctx, "out_channel_count",  c->channels,       0);
       //av_opt_set_int       (swr_ctx, "out_sample_rate",    c->sample_rate,    0);
       //av_opt_set_sample_fmt(swr_ctx, "out_sample_fmt",     c->sample_fmt,     0);

       av_opt_set_int(swr_ctx, "in_channel_layout",    AV_CH_LAYOUT_STEREO, 0);
       av_opt_set_int(swr_ctx, "in_sample_rate",       c->sample_rate, 0);
       av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_FLTP, 0);

       av_opt_set_int(swr_ctx, "out_channel_layout",    AV_CH_LAYOUT_STEREO, 0);
       av_opt_set_int(swr_ctx, "out_sample_rate",       c->sample_rate, 0);
       av_opt_set_sample_fmt(swr_ctx, "out_sample_fmt", AV_SAMPLE_FMT_FLTP, 0);

    And the revised function:

    static void get_audio_frame_for_planar_stereo(uint8_t **samples, int frame_size, int nb_channels)
    {
       int j, i;
       float v[2];
       float *q1 = (float *) samples[0];
       float *q2 = (float *) samples[1];

       for (j = 0; j < frame_size; j++)
       {
           v[0] = (tan(t) * 1);
           v[1] = (sin(t) * 1);
           *q1++ = v[0];
           *q2++ = v[1];
           t     += tincr;
           tincr += tincr2;
       }
    }

    Now it appears to be working properly. I tried changing the function parameters from uint8_t** to float**, as well as src_samples_data from uint8_t** to float**, but as far as I can see it did not make any difference.

    Updated code: https://gist.github.com/anonymous/35371b2c106961029c3d

    Thanks for highlighting the place(s) that result in this behavior!

  • output of ffmpeg comes out like yamborghini high music video

    19 January, by chip

    I do this procedure when I edit a long video:

    • segment it into 3-second videos, so I come up with a lot of short videos
    • I randomly pick videos and put them in a list
    • then I join these short videos together using concat
    • now I get a long video again; the next thing I do is segment the video into 4-minute videos
    After processing, the videos look messed up. I don't know how to describe it, but it looks like the music video for "Yamborghini High".

    For some reason, this only happens to videos I capture at night. I do the same process for daytime footage with no problem.

    Is there a problem with slicing, merging and then slicing again?

    Or is it an issue that I run multiple ffmpeg scripts at the same time?

    Here's the script:

    for FILE in *.mp4; do ffmpeg -i ${FILE} -vcodec copy -f segment -segment_time 00:10 -reset_timestamps 1 "part_$( date '+%F%H%M%S' )_%02d.mp4"; rm -rf $FILE; done; echo 'slicing completed.' && \
    for f in part_*[13579].mp4; do echo "file '$f'" >> mylist.txt; done
    ffmpeg -f concat -safe 0 -i mylist.txt -c copy output.mp4 && echo 'done merging.' && \
    ffmpeg -i output.mp4 -threads 7 -vcodec copy -f segment -segment_time 04:00 -reset_timestamps 1 "Video_Title_$( date '+%F%H%M%S' ).mp4" && echo 'individual videos created'