
Other articles (59)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution name | Version name         | Version number
    Debian            | Squeeze              | 6.x.x
    Debian            | Wheezy               | 7.x.x
    Debian            | Jessie               | 8.x.x
    Ubuntu            | The Precise Pangolin | 12.04 LTS
    Ubuntu            | The Trusty Tahr      | 14.04
    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation from users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; and translations of existing documentation into other languages.
    To contribute, register to the project users’ mailing (...)

  • Use, discuss, criticize

    13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

On other sites (4546)

  • avcodec/pthread_slice : use slice threading from avutil

    11 July 2017, by Muhammad Faiz
    avcodec/pthread_slice : use slice threading from avutil
    

    Also remove pthread_cond_broadcast(progress_cond) on uninit.
    Broadcasting it is not required because workers are always
    parked when they are not in thread_execute, so it is impossible
    that a worker is waiting on progress_cond at uninit time.

    Benchmark:
    ./ffmpeg -threads $threads -thread_type slice -i 10slices.mp4 -f null null
    threads=2:
    old: 70.212s 70.525s 70.877s
    new: 65.219s 65.377s 65.484s
    threads=3:
    old: 65.086s 66.306s 66.409s
    new: 63.229s 65.026s 65.116s
    threads=4:
    old: 60.993s 61.482s 62.123s
    new: 59.224s 59.441s 59.667s
    threads=5:
    old: 57.576s 57.860s 58.832s
    new: 53.032s 53.948s 54.086s

    Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>

    • [DH] libavcodec/pthread_slice.c
  • Live audio using ffmpeg, javascript and nodejs

    8 November 2017, by klaus

    I am new to this, so please forgive the poor grammar. I am trying to create a proof-of-concept application which I will later extend. It does the following: an HTML page asks for permission to use the microphone; we capture the microphone input and send it via WebSocket to a Node.js app.

    JS (Client):

    var bufferSize = 4096;
    var context = new AudioContext();  // used below but never declared in the original
    var socket = new WebSocket(URL);
    var myPCMProcessingNode = context.createScriptProcessor(bufferSize, 1, 1);
    myPCMProcessingNode.onaudioprocess = function(e) {
      var input = e.inputBuffer.getChannelData(0);
      socket.send(convertFloat32ToInt16(input));
    };

    function convertFloat32ToInt16(buffer) {
      var l = buffer.length;
      var buf = new Int16Array(l);
      while (l--) {
        // clamp to [-1, 1] before scaling; the original only capped the top,
        // so negative peaks could overflow
        buf[l] = Math.max(-1, Math.min(1, buffer[l])) * 0x7FFF;
      }
      return buf.buffer;
    }

    navigator.mediaDevices.getUserMedia({audio: true, video: false})
      .then(function(stream) {
        var microphone = context.createMediaStreamSource(stream);
        microphone.connect(myPCMProcessingNode);
        myPCMProcessingNode.connect(context.destination);
      })
      .catch(function(e) {});

    On the server we take each incoming buffer, run it through ffmpeg, and send whatever comes out of stdout to another device using a Node.js 'http' POST. The device has a speaker. We are basically trying to create a one-way audio link from the browser to the device.

    Node.js (Server):

    var WebSocketServer = require('websocket').server;
    var http = require('http');
    var children = require('child_process');

    // the HTTP server backing the WebSocket server was not shown in the
    // original; reconstructed here so the snippet is self-contained
    var httpServer = http.createServer(function(req, res) { res.end(); });
    httpServer.listen(8080);
    var wsServer = new WebSocketServer({ httpServer: httpServer });

    wsServer.on('request', function(request) {
      var connection = request.accept(null, request.origin);
      connection.on('message', function(message) {
        if (message.type === 'utf8') { /* NOP */ }
        else if (message.type === 'binary') {
          ffm.stdin.write(message.binaryData);
        }
      });
      connection.on('close', function(reasonCode, description) {});
      connection.on('error', function(error) {});
    });

    var ffm = children.spawn(
      './ffmpeg.exe',
      '-stdin -f s16le -ar 48k -ac 2 -i pipe:0 -acodec pcm_u8 -ar 48000 -f aiff pipe:1'.split(' ')
    );

    ffm.on('exit', function(code, signal) {});

    ffm.stdout.on('data', (data) => {
      req.write(data);
    });

    var options = {
      host: 'xxx.xxx.xxx.xxx',
      port: xxxx,
      path: '/path/to/service/on/device',
      method: 'POST',
      headers: {
        'Content-Type': 'application/octet-stream',
        'Content-Length': 0,
        'Authorization': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
        'Transfer-Encoding': 'chunked',
        'Connection': 'keep-alive'
      }
    };

    var req = http.request(options, function(res) {});

    The device supports only a continuous POST, and only a couple of formats (ulaw, aiff, wav).

    This solution doesn’t seem to work: from the device speaker we only hear something like white noise.

    I also think I may have a problem with the buffer I am sending to ffmpeg’s stdin. When I dump whatever comes out of the websocket to a .wav file and play it with VLC, the recording plays back far too fast: about 10 seconds of audio in roughly 1 second.
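    The fast playback is consistent with a capture-format mismatch: the client sends one channel at context.sampleRate (commonly 44100 Hz), while ffmpeg is told -f s16le -ar 48k -ac 2, so mono samples are consumed as stereo at a higher rate and the duration shrinks. A minimal sketch of deriving the input arguments from the actual capture parameters (buildFfmpegInputArgs is a hypothetical helper, not part of the original code):

```javascript
// Build ffmpeg's input arguments from what the browser actually sends.
// sampleRate should come from context.sampleRate on the client, sent once
// over the websocket before any audio data (an assumption: the original
// protocol sends raw PCM only).
function buildFfmpegInputArgs(sampleRate, channels) {
  return [
    '-f', 's16le',              // matches convertFloat32ToInt16's output
    '-ar', String(sampleRate),  // e.g. 44100, rather than a hard-coded 48k
    '-ac', String(channels),    // the client captures 1 channel, not 2
    '-i', 'pipe:0'
  ];
}

// e.g. children.spawn('./ffmpeg.exe',
//   buildFfmpegInputArgs(44100, 1)
//     .concat(['-acodec', 'pcm_u8', '-ar', '48000', '-f', 'aiff', 'pipe:1']));
```

    With matching input arguments, any remaining resampling (here to 48000 Hz pcm_u8 aiff) is then done explicitly on the output side.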

    I am new to audio processing and have been searching for about 3 days for ways to improve this, without finding anything.

    I would ask the community for two things:

    1. Is something wrong with my approach? What more can I do to make this work? I will post more details if required.

    2. If what I am doing is reinventing the wheel, then I would like to know what other software or third-party service (Amazon or otherwise) can accomplish the same thing.

    Thank you.

  • Java FFmpeg decoding AVPacket result of avcodec_decode_video2 negative

    19 October 2017, by nh_

    I’m new to ffmpeg. I’m using FFmpegFrameGrabber and JavaCPP (version 1.3.3) to grab packets from an mp4 video file. I would like to save each packet’s byte stream in a database, so that the packets can be decoded when the data is requested and the images processed.

    public static void saveVideoToDB(String pathToVideo) throws Exception {
        FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(pathToVideo);
        grabber.start();
        AVPacket pkt;
        while ((pkt = grabber.grabPacket()) != null) {
            BytePointer data = pkt.data();
            data.capacity(pkt.size());
            // copy the raw bytes; getStringBytes() stops at the first 0 byte,
            // which truncates packets that begin with a length prefix like 0,0,0,62
            byte[] arr = new byte[pkt.size()];
            data.get(arr);
            // arr = [0, 0, 0, 62, 39, 100, 0, 40, -83, -124.....]
            // caution: this compares the stream index with the constant
            // AVMEDIA_TYPE_VIDEO (0); it only works while the video stream
            // happens to be stream 0
            if (pkt.stream_index() == AVMEDIA_TYPE_VIDEO) {
                // ToDo: save arr to database
                testDecode(arr);
            }
        }
        grabber.close();
        logger.info("Import video finished.");
    }

    In my code I first tried to decode a packet’s data as a proof of concept, but I’m not sure it works like this:

    public static void testDecode(byte[] data) {
        AVCodec avCodec = avcodec_find_decoder(AV_CODEC_ID_H264);
        // note: no extradata (SPS/PPS from the mp4 container) is set on this
        // context, which matters for the errors below
        AVCodecContext avCodecContext = avcodec_alloc_context3(avCodec);
        AVDictionary opts = new AVDictionary();
        avcodec_open2(avCodecContext, avCodec, opts);
        av_dict_free(opts);
        AVFrame avFrame = av_frame_alloc();
        AVPacket avPacket = new AVPacket();
        av_init_packet(avPacket);
        avPacket.pts(AV_NOPTS_VALUE);
        avPacket.dts(AV_NOPTS_VALUE);
        BytePointer bp = new BytePointer(data);
        bp.capacity(data.length);
        avPacket.data(bp);
        avPacket.size(data.length);
        avPacket.pos(-1);

        IntBuffer gotPicture = IntBuffer.allocate(1);
        boolean keyFrames = false;
        int len = avcodec_decode_video2(avCodecContext, avFrame, gotPicture, avPacket);
        if (len >= 0 && gotPicture.get(0) != 0
                && (!keyFrames || avFrame.pict_type() == AV_PICTURE_TYPE_I)) {
            // ToDo: process image
            logger.info("decode success");
        } else {
            logger.info("decode failed");
        }
    }

    The result of avcodec_decode_video2 is always negative (-1094995529, "Invalid data found when processing input") and I receive the following errors:

    [h264 @ 00000000214d11a0] No start code is found.

    [h264 @ 00000000214d11a0] Error splitting the input into NAL units.
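    These two errors are the classic symptom of feeding MP4-style (AVCC) packets to a decoder that expects Annex B: in mp4, each NAL unit is prefixed with its 4-byte big-endian length (visible in the dump above, where arr begins 0, 0, 0, 62), there are no 00 00 00 01 start codes, and the SPS/PPS live in the container extradata rather than in the packets. ffmpeg's h264_mp4toannexb bitstream filter performs the usual conversion; the per-packet byte rewrite it does can be sketched as follows (shown in plain JavaScript with Node Buffers to illustrate the byte layout, since the question's JavaCPP setup is not reproduced here):

```javascript
// Convert one AVCC packet (4-byte length-prefixed NAL units) into Annex B
// form (00 00 00 01 start codes). Prepending the SPS/PPS from the codec
// extradata before the first keyframe is omitted here.
function avccToAnnexB(avcc) {
  const startCode = Buffer.from([0, 0, 0, 1]);
  const out = [];
  let pos = 0;
  while (pos + 4 <= avcc.length) {
    const nalLength = avcc.readUInt32BE(pos);  // 4-byte big-endian NAL size
    pos += 4;
    out.push(startCode, avcc.slice(pos, pos + nalLength));
    pos += nalLength;
  }
  return Buffer.concat(out);
}
```

    Alternatively, copying the stream's extradata into the AVCodecContext before avcodec_open2 lets the h264 decoder consume the length-prefixed packets directly, with no rewriting.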

    Here are some metadata from the input :

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\Users\user01\Documents\fullstream.mp4':
     Metadata:
       major_brand     : mp42
       minor_version   : 0
       compatible_brands: mp42mp41isomiso2
       creation_time   : 2017-07-27T11:17:19.000000Z
     Duration: 00:55:55.48, start: 0.000000, bitrate: 5126 kb/s
       Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080, 4996 kb/s, 25 fps, 25 tbr, 2500 tbn, 5k tbc (default)
       Metadata:
         creation_time   : 2017-07-27T11:17:19.000000Z
         handler_name    : VideoHandler
       Stream #0:1(eng): Audio: mp3 (mp4a / 0x6134706D), 24000 Hz, mono, s16p, 125 kb/s (default)
       Metadata:
         creation_time   : 2017-07-27T11:17:19.000000Z
         handler_name    : SoundHandler