Other articles (105)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MediaSPIP installation is at version 0.2 or later. If in doubt, contact your MediaSPIP administrator to find out.

  • Use, discuss, criticize

    13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files into internet-compatible formats.
    Video files are encoded as OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded as Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text documents are analyzed to extract the data search engines need, and are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
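
    For illustration, conversions of this kind are typically driven by ffmpeg; the commands below are a generic sketch with placeholder file names, not MediaSPIP's actual pipeline or presets:

    # Generic sketch only; real settings (bitrates, presets, sizes) will differ.
    ffmpeg -i upload.mov -c:v libtheora -c:a libvorbis upload.ogv   # OGV for HTML5
    ffmpeg -i upload.mov -c:v libvpx -c:a libvorbis upload.webm     # WebM for HTML5
    ffmpeg -i upload.mov -c:v libx264 -c:a aac upload.mp4           # MP4 for Flash
    ffmpeg -i upload.wav -c:a libvorbis upload.ogg                  # Ogg audio for HTML5
    ffmpeg -i upload.wav -c:a libmp3lame upload.mp3                 # MP3 audio for Flash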

On other sites (8595)

  • ffmpeg legitimate decoding errors

    20 July 2017, by Gideon Oduro

    My issue is as follows: I'm sending H.264-encoded video, captured via WebRTC, over a WebSocket. The idea is to perform server-side analysis and object tracking.

    navigator.mediaDevices.getUserMedia(constraint).then((stream) => {
        isVideoElement(target, stream)
        mediaRecorder = recorder(stream, { mimeType: 'video/webm; codecs=H264' })
        mediaRecorder.ondataavailable = (blob) => socket.send(blob.data)
        mediaRecorder.start(2000) // timeslice in milliseconds, not a string
    })
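
    As an aside (this guard is not in the original code): browser support for recording H.264 inside WebM varies, so it is worth checking the MIME type before wiring up the recorder:

    // Ask the browser whether it can actually record this container/codec combination.
    const mime = 'video/webm; codecs=H264'
    if (!MediaRecorder.isTypeSupported(mime)) {
        console.error('this browser cannot record', mime)
    }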

    On the server side, the data is received as a ByteBuffer:

    override fun handleBinaryMessage(session: WebSocketSession, msg: BinaryMessage) {
        analysis(msg.payload)
    }

    I'm using the following resources (resource_1, resource_2) to try to convert my ByteBuffer into an OpenCV frame:

    fun startPreview(data: ByteBuffer) {
        avcodec_register_all()

        val pack = avcodec.AVPacket()
        pack.data(BytePointer(data))
        avcodec.av_init_packet(pack)

        val videoData = BytePointer(data)

        val codec = avcodec.avcodec_find_decoder(avcodec.AV_CODEC_ID_H264)
        val videoCodecContext = avcodec.avcodec_alloc_context3(codec)

        videoCodecContext.width(1280)
        videoCodecContext.height(720)
        videoCodecContext.pix_fmt(avutil.AV_PIX_FMT_YUV420P)
        videoCodecContext.codec_type(avutil.AVMEDIA_TYPE_VIDEO)
        videoCodecContext.extradata(videoData)
        videoCodecContext.extradata_size(data.capacity())
        videoCodecContext.flags2(videoCodecContext.flags2() or avcodec.CODEC_FLAG2_CHUNKS)

        avcodec.avcodec_open2(videoCodecContext, codec, null as PointerPointer<*>?)

        val decodedFrameLength = avcodec.avcodec_receive_frame(videoCodecContext, avutil.AVFrame())

        println(decodedFrameLength)
    }

    I then receive a decodedFrameLength of -35, indicating a decoding error, and can't figure out how to proceed from here.
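
    For reference: in FFmpeg's send/receive decoding API, avcodec_receive_frame returns AVERROR(EAGAIN) whenever the decoder is still waiting for input, and EAGAIN is 35 on macOS/BSD builds, so -35 usually means "no packet submitted yet" rather than corrupt data. The code above never calls avcodec_send_packet, never sets the packet's size, and fills extradata with the whole buffer although extradata normally carries only the SPS/PPS headers. A minimal sketch of the expected call order, reusing pack, data and videoCodecContext from above:

    pack.size(data.capacity()) // size was never set above, so the decoder saw an empty packet

    val frame = avutil.AVFrame()
    // Compressed data must be sent to the decoder before any frame can be received.
    if (avcodec.avcodec_send_packet(videoCodecContext, pack) >= 0) {
        var ret = avcodec.avcodec_receive_frame(videoCodecContext, frame)
        while (ret >= 0) {
            // frame now holds a decoded picture; convert it to an OpenCV Mat here
            ret = avcodec.avcodec_receive_frame(videoCodecContext, frame)
        }
        // ret == AVERROR(EAGAIN) (-35 on macOS, -11 on Linux) just means "send more input"
    }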

  • No audio output using FFmpeg

    26 March 2022, by John Mergene Arellano

    I am having a problem with my live stream output: I am streaming from a mobile app to a Node.js server and on to RTMP. The video output of the live stream works, but there is no audio.

    On the client side, I send the stream using the Socket.IO library; the video and audio are captured with the getUserMedia API:

    navigator.mediaDevices.getUserMedia(constraints).then((stream) => {
        window.videoStream = video.srcObject = stream;
        let mediaRecorder = new MediaRecorder(stream, {
            videoBitsPerSecond: 3 * 1024 * 1024
        });
        mediaRecorder.addEventListener('dataavailable', (e) => {
            let data = e.data;
            socket.emit('live', data);
        });
        mediaRecorder.start(1000); // emit a chunk every second
    });
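
    A quick sanity check worth adding (not in the original code): confirm that getUserMedia actually delivered an audio track before suspecting the encoder side:

    // If this logs 0, no audio was ever captured and ffmpeg has nothing to encode.
    console.log('audio tracks:', stream.getAudioTracks().length);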

    My server then receives the stream and writes it to FFmpeg's stdin:

    client.on('live', (stream) => {
        if (ffmpeg)
            ffmpeg.stdin.write(stream);
    });

    I tried watching the live stream in VLC media player: there is a five-second delay and no audio output.

    Please see below for the FFmpeg options I used:

    ffmpeg = this.CHILD_PROCESS.spawn("ffmpeg", [
        '-f', 'lavfi',
        '-i', 'anullsrc',
        '-i', '-',
        '-c:v', 'libx264', '-preset', 'veryfast', '-tune', 'zerolatency',
        '-c:a', 'aac', '-ar', '44100', '-b:a', '64k',
        '-y',                                // force overwrite
        '-use_wallclock_as_timestamps', '1', // used for audio sync
        '-async', '1',                       // used for audio sync
        '-bufsize', '1000',
        '-f', 'flv',
        `rtmp://127.0.0.1:1935/live/stream`
    ]);

    What is wrong with my setup? I need to fix the command so that the live stream outputs both video and audio.

    I also tried streaming to YouTube's RTMP endpoint, but there is still no audio. I expect the output to contain both the video and the audio captured via the getUserMedia API.
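
    One plausible culprit, though the post does not confirm it: with two inputs and no -map options, ffmpeg picks a single "best" audio stream for the output, and the silent anullsrc input can win over the audio inside the piped WebM. A sketch of the spawn call with anullsrc dropped and both tracks pinned to the piped input:

    // Sketch only: '-map 0:v' / '-map 0:a' force both output tracks to come
    // from stdin, so no silent lavfi source can shadow the real audio.
    ffmpeg = this.CHILD_PROCESS.spawn("ffmpeg", [
        '-use_wallclock_as_timestamps', '1', // input option: wall-clock timestamps for the piped stream
        '-i', '-',                           // WebM chunks piped from the socket
        '-map', '0:v', '-map', '0:a',
        '-c:v', 'libx264', '-preset', 'veryfast', '-tune', 'zerolatency',
        '-c:a', 'aac', '-ar', '44100', '-b:a', '64k',
        '-f', 'flv',
        `rtmp://127.0.0.1:1935/live/stream`
    ]);
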
  • Error in converting audio file format from ogg to wav [on hold]

    9 June 2014, by Sumit Bisht

    I am trying to use ffmpeg to convert an Ogg file, created with WebRTC (HTML5 getUserMedia content generated in Firefox) and then transferred to and decoded on the server, into a WAV file, but I get this error on the command line while converting:

    $ ffmpeg -i 2014-6-5_16-17-54.ogg res1.wav
    ffmpeg version 2.0.1 Copyright (c) 2000-2013 the FFmpeg developers
     built on May  1 2014 13:12:12 with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-4)
     configuration: --enable-gpl --enable-version3 --enable-shared --enable-nonfree --enable-postproc --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid
     libavutil      52. 38.100 / 52. 38.100
     libavcodec     55. 18.102 / 55. 18.102
     libavformat    55. 12.100 / 55. 12.100
     libavdevice    55.  3.100 / 55.  3.100
     libavfilter     3. 79.101 /  3. 79.101
     libswscale      2.  3.100 /  2.  3.100
     libswresample   0. 17.102 /  0. 17.102
     libpostproc    52.  3.100 / 52.  3.100
    Guessed Channel Layout for  Input Stream #0.0 : mono
    Input #0, ogg, from '2014-6-5_16-17-54.ogg':
     Duration: 00:00:01.84, start: 0.000000, bitrate: 18 kb/s
       Stream #0:0: Audio: opus, 48000 Hz, mono
       Metadata:
         ENCODER         : Mozilla29.0.1
    [graph 0 input from stream 0:0 @ 0x18dca20] Invalid sample format (null)
    Error opening filters!

    However, I am able to play the file on the server, and with the same command I can convert .ogg files generated elsewhere. What might I be missing?
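
    Judging only from the log above (a reading, not a confirmed diagnosis): the stream is Opus ("Stream #0:0: Audio: opus"), yet the configuration line of this ffmpeg 2.0.1 build shows no --enable-libopus, and builds of that era had no native Opus decoder, which matches the "(null)" sample format error. A sketch of the fix, assuming a source build:

    # Rebuild with Opus support (or simply upgrade to a newer FFmpeg release),
    # then re-run the original command:
    ./configure --enable-gpl --enable-version3 --enable-libopus
    make && sudo make install
    ffmpeg -i 2014-6-5_16-17-54.ogg res1.wav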

    Edit: here is the source code used to write the file:

    1) During startup, capture the audio using the getUserMedia API:

    navigator.getUserMedia({
        audio: true,
        video: false
    }, function(stream) {
        audioStream = RecordRTC(stream, {
            bufferSize: 16384
        });
        audioStream.startRecording();
    }, function(err) {
        console.error(err); // the legacy getUserMedia API requires an error callback
    });

    2) When the recording stops, extract the recorded data:

    function(audioDataURL) {
        var audioFile = {
            contents: audioDataURL
        };
    }

    On the server end, the following code creates a file from this data:

    dataURL = dataURL.split(',').pop();         // dataURL is the audioDataURL above; strip the "data:...;base64," prefix
    fileBuffer = new Buffer(dataURL, 'base64'); // decode the base64 payload (Buffer.from in modern Node)
    fs.writeFileSync(filePath, fileBuffer);
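
    Since the file is reassembled from a base64 data URL, it is also worth ruling out corruption in transit by probing the written file directly:

    # If ffprobe cannot identify an Ogg/Opus stream here, the bytes were mangled
    # before ffmpeg ever saw them and the decoder is not the problem.
    ffprobe 2014-6-5_16-17-54.ogg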