Advanced search

Media (33)

Keyword: - Tags -/creative commons

Other articles (77)

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. By default, a menu item is automatically created in the top-of-page menu when MediaSPIP is initialized; it is visible only when the visitor is logged in to the site.
    Users can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To enable support for new languages, go to the "Administrer" section of the site.
    From there, the navigation menu gives access to a "Gestion des langues" section where support for additional languages can be activated.
    Any newly added language can still be deactivated as long as no object has been created in that language. Once one has, the language is greyed out in the configuration and (...)

  • XMP PHP

    13 May 2011, by

    According to Wikipedia, XMP stands for:
    Extensible Metadata Platform (XMP), an XML-based metadata format used in PDF, photography, and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being XML-based, it handles a set of dynamic tags for use in the Semantic Web.
    XMP records information about a file as an XML document: title, author, history (...)

On other sites (10967)

  • RTP packets detected as UDP

    28 February 2017, by user3172852

    Here is what I am trying to do:

    WebRTC endpoint > RTP Endpoint > ffmpeg > RTMP server.

    This is what my SDP file looks like:

    var cm_offer = "v=0\n" +
                 "o=- 3641290734 3641290734 IN IP4 127.0.0.1\n" +
                 "s=nginx\n" +
                 "c=IN IP4 127.0.0.1\n" +
                 "t=0 0\n" +
                 "m=audio 60820 RTP/AVP 0\n" +
                 "a=rtpmap:0 PCMU/8000\n" +
                 "a=recvonly\n" +
                 "m=video 59618 RTP/AVP 101\n" +
                 "a=rtpmap:101 H264/90000\n" +
                 "a=recvonly\n";

    Wireshark detects the incoming packets on port 59618, but classifies them as plain UDP rather than RTP. I am trying to capture the stream with ffmpeg using the following command:

    ubuntu@ip-132-31-40-100:~$ ffmpeg -i udp://127.0.0.1:59618 -vcodec copy stream.mp4
    ffmpeg version git-2017-01-22-f1214ad Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
     configuration: --extra-libs=-ldl --prefix=/opt/ffmpeg --mandir=/usr/share/man --enable-avresample --disable-debug --enable-nonfree --enable-gpl --enable-version3 --enable-libopencore-amrnb --enable-libopencore-amrwb --disable-decoder=amrnb --disable-decoder=amrwb --enable-libpulse --enable-libfreetype --enable-gnutls --enable-libx264 --enable-libx265 --enable-libfdk-aac --enable-libvorbis --enable-libmp3lame --enable-libopus --enable-libvpx --enable-libspeex --enable-libass --enable-avisynth --enable-libsoxr --enable-libxvid --enable-libvidstab --enable-libwavpack --enable-nvenc
     libavutil      55. 44.100 / 55. 44.100
     libavcodec     57. 75.100 / 57. 75.100
     libavformat    57. 63.100 / 57. 63.100
     libavdevice    57.  2.100 / 57.  2.100
     libavfilter     6. 69.100 /  6. 69.100
     libavresample   3.  2.  0 /  3.  2.  0
     libswscale      4.  3.101 /  4.  3.101
     libswresample   2.  4.100 /  2.  4.100
     libpostproc    54.  2.100 / 54.  2.100

    All I get is a blinking cursor, and stream.mp4 is not written to disk after I exit (Ctrl+C).

    So can you help me figure out:

    1. Why Wireshark does not decode the packets as RTP (I suspect it has something to do with the SDP), and
    2. how to handle the SDP answer when the RTP endpoint is pushing to ffmpeg, which never sends an answer back.
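A sketch of a possible capture setup, under the assumption that the missing SDP really is the problem: ffmpeg reading a bare udp:// URL has no way to know the payload is RTP, and neither does Wireshark without signaling (its "Decode As... > RTP" option can force the classification). Describing the session in a local SDP file and using that file as the input tells ffmpeg to depacketize RTP; the port, payload type, and codec below are copied from the offer in the question:

```shell
# Hypothetical stream.sdp describing the incoming RTP video; the values are
# assumptions taken from the SDP offer in the question.
cat > stream.sdp <<'EOF'
v=0
o=- 0 0 IN IP4 127.0.0.1
s=RTP from Kurento
c=IN IP4 127.0.0.1
t=0 0
m=video 59618 RTP/AVP 101
a=rtpmap:101 H264/90000
EOF

# -protocol_whitelist is required on recent ffmpeg builds when an SDP file
# references udp/rtp inputs.
ffmpeg -protocol_whitelist file,udp,rtp -i stream.sdp -vcodec copy stream.mp4
```

This is a sketch, not tested against this setup; the SDP is a static description, so the ports must match what the RtpEndpoint actually sends to.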

    Here is the entire code (the hello-world tutorial, modified):

    /*
        * (C) Copyright 2014-2015 Kurento (http://kurento.org/)
        *
        * Licensed under the Apache License, Version 2.0 (the "License");
        * you may not use this file except in compliance with the License.
        * You may obtain a copy of the License at
        *
        *   http://www.apache.org/licenses/LICENSE-2.0
        *
        * Unless required by applicable law or agreed to in writing, software
        * distributed under the License is distributed on an "AS IS" BASIS,
        * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
        * See the License for the specific language governing permissions and
        * limitations under the License.
        */

       function getopts(args, opts)
       {
         var result = opts.default || {};
         args.replace(
             new RegExp("([^?=&]+)(=([^&]*))?", "g"),
             function($0, $1, $2, $3) { result[$1] = decodeURI($3); });

         return result;
       };

       var args = getopts(location.search,
       {
         default:
         {
           ws_uri: 'wss://' + location.hostname + ':8433/kurento',
           ice_servers: undefined
         }
       });

       function setIceCandidateCallbacks(webRtcPeer, webRtcEp, onerror)
       {
         webRtcPeer.on('icecandidate', function(candidate) {
           console.log("Local candidate:",candidate);

           candidate = kurentoClient.getComplexType('IceCandidate')(candidate);

           webRtcEp.addIceCandidate(candidate, onerror)
         });

         webRtcEp.on('OnIceCandidate', function(event) {
           var candidate = event.candidate;

           console.log("Remote candidate:",candidate);

           webRtcPeer.addIceCandidate(candidate, onerror);
         });
       }


       function setIceCandidateCallbacks2(webRtcPeer, rtpEp, onerror)
       {
         webRtcPeer.on('icecandidate', function(candidate) {
           console.log("Localr candidate:",candidate);

           candidate = kurentoClient.getComplexType('IceCandidate')(candidate);

           rtpEp.addIceCandidate(candidate, onerror)
         });
       }


       window.addEventListener('load', function()
       {
         console = new Console();

         var webRtcPeer;
         var pipeline;
         var webRtcEpt;

         var videoInput = document.getElementById('videoInput');
         var videoOutput = document.getElementById('videoOutput');

         var startButton = document.getElementById("start");
         var stopButton = document.getElementById("stop");

         startButton.addEventListener("click", function()
         {
           showSpinner(videoInput, videoOutput);

           var options = {
             localVideo: videoInput,
             remoteVideo: videoOutput
           };


           if (args.ice_servers) {
            console.log("Use ICE servers: " + args.ice_servers);
            options.configuration = {
              iceServers : JSON.parse(args.ice_servers)
            };
           } else {
            console.log("Use freeice")
           }

           webRtcPeer = kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options, function(error)
           {
             if(error) return onError(error)

             this.generateOffer(onOffer)
           });

           function onOffer(error, sdpOffer)
           {
             if(error) return onError(error)

             kurentoClient(args.ws_uri, function(error, client)
             {
               if(error) return onError(error);

               client.create("MediaPipeline", function(error, _pipeline)
               {
                 if(error) return onError(error);

                 pipeline = _pipeline;

                 pipeline.create("WebRtcEndpoint", function(error, webRtc){
                   if(error) return onError(error);

                   webRtcEpt = webRtc;

                   setIceCandidateCallbacks(webRtcPeer, webRtc, onError)

                   webRtc.processOffer(sdpOffer, function(error, sdpAnswer){
                     if(error) return onError(error);

                     webRtcPeer.processAnswer(sdpAnswer, onError);
                   });
                   webRtc.gatherCandidates(onError);

                   webRtc.connect(webRtc, function(error){
                     if(error) return onError(error);

                     console.log("Loopback established");
                   });
                 });



               pipeline.create("RtpEndpoint", function(error, rtp){
                   if(error) return onError(error);

                   //setIceCandidateCallbacks2(webRtcPeer, rtp, onError)


                   var cm_offer = "v=0\n" +
                         "o=- 3641290734 3641290734 IN IP4 127.0.0.1\n" +
                         "s=nginx\n" +
                         "c=IN IP4 127.0.0.1\n" +
                         "t=0 0\n" +
                         "m=audio 60820 RTP/AVP 0\n" +
                         "a=rtpmap:0 PCMU/8000\n" +
                         "a=recvonly\n" +
                         "m=video 59618 RTP/AVP 101\n" +
                         "a=rtpmap:101 H264/90000\n" +
                         "a=recvonly\n";



                   rtp.processOffer(cm_offer, function(error, cm_sdpAnswer){
                     if(error) return onError(error);

                     //webRtcPeer.processAnswer(cm_sdpAnswer, onError);
                   });
                   //rtp.gatherCandidates(onError);

                   webRtcEpt.connect(rtp, function(error){
                     if(error) return onError(error);

                     console.log("RTP endpoint connected to webRTC");
                   });
                 });
               });
             });
           }
         });
         stopButton.addEventListener("click", stop);


         function stop() {
           if (webRtcPeer) {
             webRtcPeer.dispose();
             webRtcPeer = null;
           }

           if(pipeline){
             pipeline.release();
             pipeline = null;
           }

           hideSpinner(videoInput, videoOutput);
         }

         function onError(error) {
           if(error)
           {
             console.error(error);
             stop();
           }
         }
       })


       function showSpinner() {
         for (var i = 0; i < arguments.length; i++) {
           arguments[i].poster = 'img/transparent-1px.png';
           arguments[i].style.background = "center transparent url('img/spinner.gif') no-repeat";
         }
       }

       function hideSpinner() {
         for (var i = 0; i < arguments.length; i++) {
           arguments[i].src = '';
           arguments[i].poster = 'img/webrtc.png';
           arguments[i].style.background = '';
         }
       }

       /**
        * Lightbox utility (to display media pipeline image in a modal dialog)
        */
       $(document).delegate('*[data-toggle="lightbox"]', 'click', function(event) {
         event.preventDefault();
         $(this).ekkoLightbox();
       });

  • No output file when converting audio using FFmpeg in Android

    18 March 2017, by Sha

    I’m trying to convert an M4A audio file to WAV using FFmpeg. The code executes without errors, but I don’t see any output file in the directory.

    This is what I am executing:

    String[] cmd = {"-y", "-i", "/storage/emulated/0/jd.m4a", "-f","wav" ,"/storage/emulated/0/DCIM/Camera/output.wav"};

    And this is what gets printed:

    03-17 15:40:51.539 10111-10111/io.whispero.soundmerger E/onProgress: ffmpeg version n3.0.1 Copyright (c) 2000-2016 the FFmpeg developers
    03-17 15:40:51.542 10111-10111/io.whispero.soundmerger E/onProgress:   built with gcc 4.8 (GCC)
    03-17 15:40:51.545 10111-10111/io.whispero.soundmerger E/onProgress:   configuration: --target-os=linux --cross-prefix=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi- --arch=arm --cpu=cortex-a8 --enable-runtime-cpudetect --sysroot=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/sysroot --enable-pic --enable-libx264 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-fontconfig --enable-pthreads --disable-debug --disable-ffserver --enable-version3 --enable-hardcoded-tables --disable-ffplay --disable-ffprobe --enable-gpl --enable-yasm --disable-doc --disable-shared --enable-static --pkg-config=/home/vagrant/SourceCode/ffmpeg-android/ffmpeg-pkg-config --prefix=/home/vagrant/SourceCode/ffmpeg-android/build/armeabi-v7a --extra-cflags='-I/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all' --extra-ldflags='-L/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie' --extra-libs='-lpng -lexpat -lm' --extra-cxxflags=
    03-17 15:40:51.547 10111-10111/io.whispero.soundmerger E/onProgress:   libavutil      55. 17.103 / 55. 17.103
    03-17 15:40:51.552 10111-10111/io.whispero.soundmerger E/onProgress:   libavcodec     57. 24.102 / 57. 24.102
    03-17 15:40:51.554 10111-10111/io.whispero.soundmerger E/onProgress:   libavformat    57. 25.100 / 57. 25.100
    03-17 15:40:51.556 10111-10111/io.whispero.soundmerger E/onProgress:   libavdevice    57.  0.101 / 57.  0.101
    03-17 15:40:51.559 10111-10111/io.whispero.soundmerger E/onProgress:   libavfilter     6. 31.100 /  6. 31.100
    03-17 15:40:51.561 10111-10111/io.whispero.soundmerger E/onProgress:   libswscale      4.  0.100 /  4.  0.100
    03-17 15:40:51.562 10111-10111/io.whispero.soundmerger E/onProgress:   libswresample   2.  0.101 /  2.  0.101
    03-17 15:40:51.564 10111-10111/io.whispero.soundmerger E/onProgress:   libpostproc    54.  0.100 / 54.  0.100
    03-17 15:40:51.581 10111-10111/io.whispero.soundmerger E/onProgress: Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/storage/emulated/0/jd.m4a':
    03-17 15:40:51.585 10111-10111/io.whispero.soundmerger E/onProgress:   Metadata:
    03-17 15:40:51.587 10111-10111/io.whispero.soundmerger E/onProgress:     major_brand     : M4A
    03-17 15:40:51.589 10111-10111/io.whispero.soundmerger E/onProgress:     minor_version   : 0
    03-17 15:40:51.593 10111-10111/io.whispero.soundmerger E/onProgress:     compatible_brands: M4A mp42isom
    03-17 15:40:51.595 10111-10111/io.whispero.soundmerger E/onProgress:     creation_time   : 2017-02-16 10:36:39
    03-17 15:40:51.597 10111-10111/io.whispero.soundmerger E/onProgress:   Duration: 00:00:03.39, start: 0.000000, bitrate: 82 kb/s
    03-17 15:40:51.602 10111-10111/io.whispero.soundmerger E/onProgress:     Stream #0:0(eng): Audio: aac (LC) (mp4a / 0x6134706D), 16000 Hz, mono, fltp, 24 kb/s (default)
    03-17 15:40:51.608 10111-10111/io.whispero.soundmerger E/onProgress:     Metadata:
    03-17 15:40:51.612 10111-10111/io.whispero.soundmerger E/onProgress:       creation_time   : 2017-02-16 10:36:39
    03-17 15:40:51.614 10111-10111/io.whispero.soundmerger E/onProgress: Output #0, wav, to '/storage/emulated/0/DCIM/Camera/hyder.wav':
    03-17 15:40:51.617 10111-10111/io.whispero.soundmerger E/onProgress:   Metadata:
    03-17 15:40:51.619 10111-10111/io.whispero.soundmerger E/onProgress:     major_brand     : M4A
    03-17 15:40:51.621 10111-10111/io.whispero.soundmerger E/onProgress:     minor_version   : 0
    03-17 15:40:51.623 10111-10111/io.whispero.soundmerger E/onProgress:     compatible_brands: M4A mp42isom
    03-17 15:40:51.625 10111-10111/io.whispero.soundmerger E/onProgress:     ISFT            : Lavf57.25.100
    03-17 15:40:51.627 10111-10111/io.whispero.soundmerger E/onProgress:     Stream #0:0(eng): Audio: pcm_s16le ([1][0][0][0] / 0x0001), 16000 Hz, mono, s16, 256 kb/s (default)
    03-17 15:40:51.629 10111-10111/io.whispero.soundmerger E/onProgress:     Metadata:
    03-17 15:40:51.631 10111-10111/io.whispero.soundmerger E/onProgress:       creation_time   : 2017-02-16 10:36:39
    03-17 15:40:51.633 10111-10111/io.whispero.soundmerger E/onProgress:       encoder         : Lavc57.24.102 pcm_s16le
    03-17 15:40:51.636 10111-10111/io.whispero.soundmerger E/onProgress: Stream mapping:
    03-17 15:40:51.639 10111-10111/io.whispero.soundmerger E/onProgress:   Stream #0:0 -> #0:0 (aac (native) -> pcm_s16le (native))
    03-17 15:40:51.642 10111-10111/io.whispero.soundmerger E/onProgress: Press [q] to stop, [?] for help
    03-17 15:40:51.645 10111-10111/io.whispero.soundmerger E/onProgress: size=     106kB time=00:00:03.39 bitrate= 256.2kbits/s speed= 181x    
    03-17 15:40:51.647 10111-10111/io.whispero.soundmerger E/onProgress: video:0kB audio:106kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.071860%
    03-17 15:40:51.650 10111-10111/io.whispero.soundmerger E/SUCCESS: ffmpeg version n3.0.1 Copyright (c) 2000-2016 the FFmpeg developers
                                                                       built with gcc 4.8 (GCC)
                                                                       configuration: --target-os=linux --cross-prefix=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi- --arch=arm --cpu=cortex-a8 --enable-runtime-cpudetect --sysroot=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/sysroot --enable-pic --enable-libx264 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-fontconfig --enable-pthreads --disable-debug --disable-ffserver --enable-version3 --enable-hardcoded-tables --disable-ffplay --disable-ffprobe --enable-gpl --enable-yasm --disable-doc --disable-shared --enable-static --pkg-config=/home/vagrant/SourceCode/ffmpeg-android/ffmpeg-pkg-config --prefix=/home/vagrant/SourceCode/ffmpeg-android/build/armeabi-v7a --extra-cflags='-I/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all' --extra-ldflags='-L/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie' --extra-libs='-lpng -lexpat -lm' --extra-cxxflags=
                                                                       libavutil      55. 17.103 / 55. 17.103
                                                                       libavcodec     57. 24.102 / 57. 24.102
                                                                       libavformat    57. 25.100 / 57. 25.100
                                                                       libavdevice    57.  0.101 / 57.  0.101
                                                                       libavfilter     6. 31.100 /  6. 31.100
                                                                       libswscale      4.  0.100 /  4.  0.100
                                                                       libswresample   2.  0.101 /  2.  0.101
                                                                       libpostproc    54.  0.100 / 54.  0.100
                                                                     Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/storage/emulated/0/jd.m4a':
                                                                       Metadata:
                                                                         major_brand     : M4A
                                                                         minor_version   : 0
                                                                         compatible_brands: M4A mp42isom
                                                                         creation_time   : 2017-02-16 10:36:39
                                                                       Duration: 00:00:03.39, start: 0.000000, bitrate: 82 kb/s
                                                                         Stream #0:0(eng): Audio: aac (LC) (mp4a / 0x6134706D), 16000 Hz, mono, fltp, 24 kb/s (default)
                                                                         Metadata:
                                                                           creation_time   : 2017-02-16 10:36:39
                                                                     Output #0, wav, to '/storage/emulated/0/DCIM/Camera/hyder.wav':
                                                                       Metadata:
                                                                         major_brand     : M4A
                                                                         minor_version   : 0
                                                                         compatible_brands: M4A mp42isom
                                                                         ISFT            : Lavf57.25.100
                                                                         Stream #0:0(eng): Audio: pcm_s16le ([1][0][0][0] / 0x0001), 16000 Hz, mono, s16, 256 kb/s (default)
                                                                         Metadata:
                                                                           creation_time   : 2017-02-16 10:36:39
                                                                           encoder         : Lavc57.24.102 pcm_s16le
                                                                     Stream mapping:
                                                                       Stream #0:0 -> #0:0 (aac (native) -> pcm_s16le (native))
                                                                     Press [q] to stop, [?] for help
                                                                     size=     106kB time=00:00:03.39 bitrate= 256.2kbits/s speed= 181x    
                                                                     video:0kB audio:106kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.071860%
    03-17 15:40:51.653 10111-10111/io.whispero.soundmerger E/onFinish: onFinish

    Please help me figure out why no output audio file appears :(

    Thanks.
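One observation from the log above, offered as a pointer rather than a confirmed fix: the command array targets output.wav, yet the log shows the muxer writing to '/storage/emulated/0/DCIM/Camera/hyder.wav' and reports about 106 kB of audio muxed, so the conversion itself appears to complete. Listing the target directory would show whether the file exists under the other name; if it does, the gallery may simply not display it until the media scanner re-indexes the folder:

```shell
# Paths are the ones from the question; run on the device, e.g. via `adb shell`.
ls -l /storage/emulated/0/DCIM/Camera/
```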

  • FFMPEG to create an MPEG-DASH stream with VP8

    24 April 2017, by Kenneth Worden

    I’m trying to use FFmpeg to stream a live video feed from my webcam, /dev/video0. Following scattered tutorials and scarce documentation (is this a known problem in the encoding community?), I arrived at the following bash script:

    #!/bin/bash

    ffmpeg \
       -y \
       -f v4l2 \
           -i /dev/video0 \
           -s 640x480 \
           -input_format mjpeg \
           -r 24 \
       -map 0:0 \
       -pix_fmt yuv420p \
       -codec:v libvpx \
           -s 640x480 \
           -threads 4 \
           -b:v 50k \
           -tile-columns 4 \
           -frame-parallel 1 \
           -keyint_min 24 -g 24 \
       -f webm_chunk \
           -header "stream.hdr" \
           -chunk_start_index 1 \
       stream_%d.chk &

    sleep 2

    ffmpeg \
       -f webm_dash_manifest -live 1 \
       -i stream.hdr \
       -c copy \
       -map 0 \
       -f webm_dash_manifest -live 1 \
           -adaptation_sets "id=0,streams=0" \
           -chunk_start_index 1 \
           -chunk_duration_ms 1000 \
           -time_shift_buffer_depth 30000 \
           -minimum_update_period 60000 \
       stream_manifest.mpd

    When I run this script, my webcam light turns on, the stream.hdr and stream_manifest.mpd files are written, and chunks start to appear (stream_1.chk, stream_2.chk, and so on). However, ffmpeg throws the following error:

    Could not write header for output file #0 (incorrect codec parameters?): Invalid data found when processing input

    I will explain what I think I am doing with this script, and hopefully this will expose any errors in my thinking.

    First, ffmpeg uses Video4Linux2 (v4l2) to read from my webcam (/dev/video0) at a resolution of 640x480. The input format is MJPEG at a framerate of 24 fps.

    I then tell ffmpeg to "map" (copy) the video stream produced by v4l2 into a file. I specify the pixel format (YUV420P) and encode with libvpx (VP8). I set the size to 640x480, use 4 threads, set the bitrate to 50 kbps, set the tile-columns and frame-parallel options, and place I-frames 24 frames apart.

    This first invocation also writes a stream.hdr file, with the chunk index starting at 1. It keeps running until killed, grabbing new video from the webcam and writing it out as chunks.

    I then sleep for 2 seconds to give the previous command time to generate a header file.

    And that’s really it: the second invocation of ffmpeg simply creates the MPEG-DASH manifest from the header generated in the previous step.

    So what’s going on? Why can’t I view the video in a web browser (I’m using Dash.js)? The manifest, header, and chunks are served from a Node.js server, so that is not the problem.
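One detail that may be worth checking, offered as an assumption rather than a verified fix: in the first invocation, -input_format, -s, and -r are placed after -i /dev/video0, where ffmpeg treats them as output options. That would explain the log line about the driver changing the time per frame from 1/24 to 1/30, and possibly the unused tile-columns/frame-parallel warnings. A reordered sketch with the v4l2 capture options ahead of -i:

```shell
# Hypothetical reordering: v4l2 capture options placed before -i so they
# apply to the input device rather than the output file.
ffmpeg -y \
    -f v4l2 -input_format mjpeg -framerate 24 -video_size 640x480 \
    -i /dev/video0 \
    -map 0:0 \
    -pix_fmt yuv420p \
    -codec:v libvpx -b:v 50k -threads 4 \
    -tile-columns 4 -frame-parallel 1 \
    -keyint_min 24 -g 24 \
    -f webm_chunk -header "stream.hdr" -chunk_start_index 1 \
    stream_%d.chk
```

This keeps the rest of the pipeline unchanged; only the placement of the input options differs from the script above.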


    Edit: Here is my full console output.

    ffmpeg version 3.0.7-0ubuntu0.16.10.1 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 6.2.0 (Ubuntu 6.2.0-5ubuntu12) 20161005
     configuration: --prefix=/usr --extra-version=0ubuntu0.16.10.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librubberband --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-chromaprint --enable-libx264
     libavutil      55. 17.103 / 55. 17.103
     libavcodec     57. 24.102 / 57. 24.102
     libavformat    57. 25.100 / 57. 25.100
     libavdevice    57.  0.101 / 57.  0.101
     libavfilter     6. 31.100 /  6. 31.100
     libavresample   3.  0.  0 /  3.  0.  0
     libswscale      4.  0.100 /  4.  0.100
     libswresample   2.  0.101 /  2.  0.101
     libpostproc    54.  0.100 / 54.  0.100
    [video4linux2,v4l2 @ 0x55847e244ea0] The driver changed the time per frame from 1/24 to 1/30
    [mjpeg @ 0x55847e245c00] Changing bps to 8
    Input #0, video4linux2,v4l2, from '/dev/video0':
     Duration: N/A, start: 64305.102081, bitrate: N/A
       Stream #0:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 640x480, -5 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
    Codec AVOption frame-parallel (Enable frame parallel decodability features) specified for output file #0 (stream_%d.chk) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
    Codec AVOption tile-columns (Number of tile columns to use, log2) specified for output file #0 (stream_%d.chk) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
    [swscaler @ 0x55847e24b720] deprecated pixel format used, make sure you did set range correctly
    [libvpx @ 0x55847e248a20] v1.5.0
    Output #0, webm_chunk, to 'stream_%d.chk':
     Metadata:
       encoder         : Lavf57.25.100
       Stream #0:0: Video: vp8 (libvpx), yuv420p, 640x480, q=-1--1, 50 kb/s, 30 fps, 30 tbn, 30 tbc
       Metadata:
         encoder         : Lavc57.24.102 libvpx
       Side data:
         unknown side data type 10 (24 bytes)
    Stream mapping:
     Stream #0:0 -> #0:0 (mjpeg (native) -> vp8 (libvpx))
    Press [q] to stop, [?] for help
    frame=   21 fps=0.0 q=0.0 size=N/A time=00:00:00.70 bitrate=N/A dup=5 drop=frame=   36 fps= 35 q=0.0 size=N/A time=00:00:01.20 bitrate=N/A dup=5 drop=frame=   51 fps= 33 q=0.0 size=N/A time=00:00:01.70 bitrate=N/A dup=5 drop=ffmpeg version 3.0.7-0ubuntu0.16.10.1 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 6.2.0 (Ubuntu 6.2.0-5ubuntu12) 20161005
     configuration: --prefix=/usr --extra-version=0ubuntu0.16.10.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librubberband --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-chromaprint --enable-libx264
     libavutil      55. 17.103 / 55. 17.103
     libavcodec     57. 24.102 / 57. 24.102
     libavformat    57. 25.100 / 57. 25.100
     libavdevice    57.  0.101 / 57.  0.101
     libavfilter     6. 31.100 /  6. 31.100
     libavresample   3.  0.  0 /  3.  0.  0
     libswscale      4.  0.100 /  4.  0.100
     libswresample   2.  0.101 /  2.  0.101
     libpostproc    54.  0.100 / 54.  0.100
    Input #0, webm_dash_manifest, from 'stream.hdr':
     Metadata:
       encoder         : Lavf57.25.100
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: vp8, yuv420p, 640x480, SAR 1:1 DAR 4:3, 30 fps, 30 tbr, 1k tbn, 1k tbc (default)
       Metadata:
         webm_dash_manifest_file_name: stream.hdr
         webm_dash_manifest_track_number: 1
    Output #0, webm_dash_manifest, to 'stream_manifest.mpd':
     Metadata:
       encoder         : Lavf57.25.100
       Stream #0:0: Video: vp8, yuv420p, 640x480 [SAR 1:1 DAR 4:3], q=2-31, 30 fps, 30 tbr, 1k tbn, 1k tbc (default)
       Metadata:
         webm_dash_manifest_file_name: stream.hdr
         webm_dash_manifest_track_number: 1
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
    Could not write header for output file #0 (incorrect codec parameters ?): Invalid data found when processing input
    frame=   67 fps= 33 q=0.0 size
    frame=   82 fps= 32 q=0.0 size=N/A time=00:00:02.73 bitrate=N/A dup=5 drop=
    frame=   97 fps= 32 q=0.0 size=N/A time=00:00:03.23 bitrate=N/A dup=5 drop=
    frame=  112 fps= 32 q=0.0 size=N/A time=00:00:03.73 bitrate=N/A dup=5 ...