
Media (1)
- DJ Dolores - Oslodum 2004 (includes (cc) sample of “Oslodum” by Gilberto Gil)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (73)
- Improving the base version
13 September 2013
Nicer multiple selection
The Chosen plugin improves the usability of multiple-selection fields. See the following two images for a comparison.
To enable it, activate the Chosen plugin (Site general configuration > Plugin management), then configure the plugin (Templates > Chosen) by turning on the use of Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
- User profiles
12 April 2011
Each user has a profile page on which they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)
- Configuring language support
15 November 2010
Accessing the configuration and adding supported languages
To configure support for new languages, go to the "Administrer" (administer) section of the site.
From there, in the navigation menu, you can access a "Gestion des langues" (language management) section where support for new languages can be enabled.
Each newly added language can still be disabled as long as no object has been created in that language. Otherwise, it becomes greyed out in the configuration and (...)
On other sites (8152)
- Launching ffmpeg from inside my C code doesn't convert video stream from fifo
13 May 2017, by abraxas1
(BeagleBone Black, Ubuntu 16.04, Linux 4.4.62-ti-r99)
I have a raw H.264 feed coming into my C program (via a callback from a custom API). I then named-piped the raw H.264 to a file, and this CLI command converts that file successfully:
ffmpeg -report -re -framerate 30 -y -f h264 -i orbi_0148.cam1.h264 -c:v copy -an -video_size 1920x1080 -f mp4 orbi_0148.cam1-1.mp4
I then try to launch this command, with the same options, from my program using execv():
first I mkfifo the FIFOs,
then I launch ffmpeg (shown below), start the streaming from the cameras into my callback function, and write out to the FIFOs.
I close the FIFOs at the end of the video recording, but no file gets created and I get the output below, which I don't understand at all.
(The successful command-line output follows below the output generated from execv().)
Thanks, any help much appreciated. The hardware has come in now and the spotlight is on me.
(execv output with options -report -y -i /tmp/fifocam3.h264 -c:v copy -an -video_size 1920x1080 -f mp4 /media/sd-card/orbi_0140.cam3.mp4)
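For context, here is a minimal sketch of the FIFO-plus-execv approach I am describing; it is not my actual code. The FIFO path, the output path, and the on_h264_data() callback name are placeholders, and the ffmpeg argument list simply mirrors the working CLI command above (the execv run in the log below used a shorter option set):

/* Minimal sketch (not the actual code): create the FIFO, fork, exec ffmpeg
 * on the reading side, and write the raw H.264 from the callback into the
 * FIFO on the writing side.  Paths and the on_h264_data() callback name
 * are placeholders; the ffmpeg arguments mirror the working CLI command. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static int fifo_fd = -1;

/* Placeholder for the camera API callback that delivers raw H.264. */
void on_h264_data(const unsigned char *buf, size_t len)
{
    if (fifo_fd >= 0)
        write(fifo_fd, buf, len);            /* error handling omitted */
}

int main(void)
{
    const char *fifo = "/tmp/fifocam3.h264";
    const char *out  = "/media/sd-card/orbi_0140.cam3.mp4";

    mkfifo(fifo, 0666);                      /* ignore EEXIST for brevity */

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: run ffmpeg with the same options as the working CLI run. */
        char *argv[] = {
            "ffmpeg", "-report", "-re", "-framerate", "30", "-y",
            "-f", "h264", "-i", (char *)fifo,
            "-c:v", "copy", "-an", "-video_size", "1920x1080",
            "-f", "mp4", (char *)out, NULL
        };
        execv("/usr/bin/ffmpeg", argv);
        perror("execv");                     /* only reached on failure */
        _exit(1);
    }

    /* Parent: open the FIFO for writing (blocks until ffmpeg opens it for
     * reading), then let the camera callback feed it raw H.264. */
    fifo_fd = open(fifo, O_WRONLY);

    /* ... start camera streaming; on_h264_data() is called per buffer ... */

    close(fifo_fd);                          /* EOF lets ffmpeg finish */
    waitpid(pid, NULL, 0);
    return 0;
}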
root@arm:/home/orbi# cat ffmpeg-20170511-001941.log
ffmpeg started on 2017-05-11 at 00:19:41
Report written to "ffmpeg-20170511-001941.log"
Command line:
/usr/bin/ffmpeg -report -y -i /tmp/fifocam3.h264 -c:v copy -an -video_size 1920x1080 -f mp4 /media/sd-card/orbi_0140.cam3.mp4
ffmpeg version 2.8.11-0ubuntu0.16.04.1 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 5.4.0 (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.4) 20160609
configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/arm-linux-gnueabihf --incdir=/usr/include/arm-linux-gnueabihf --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab (...)
libavutil 54. 31.100 / 54. 31.100
libavcodec 56. 60.100 / 56. 60.100
libavformat 56. 40.101 / 56. 40.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 40.101 / 5. 40.101
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.101 / 1. 2.101
libpostproc 53. 3.100 / 53. 3.100
Splitting the commandline.
Reading option '-report' ... matched as option 'report' (generate a report) with argument '1'.
Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
Reading option '-i' ... matched as input url with argument '/tmp/fifocam3.h264'.
Reading option '-c:v' ... matched as option 'c' (codec name) with argument 'copy'.
Reading option '-an' ... matched as option 'an' (disable audio) with argument '1'.
Reading option '-video_size' ... matched as AVOption 'video_size' with argument '1920x1080'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'mp4'.
Reading option '/media/sd-card/orbi_0140.cam3.mp4' ... matched as output url.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option report (generate a report) with argument 1.
Applying option y (overwrite output files) with argument 1.
…
Successfully parsed a group of options.
Parsing a group of options: input url /tmp/fifocam3.h264.
…
Successfully parsed a group of options.
Opening an input file: /tmp/fifocam3.h264.
[h264 @ 0xf249a0] Format h264 probed with size=2048 and score=51
[h264 @ 0xf249a0] Before avformat_find_stream_info() pos: 0 bytes read:32768 seeks:0
[h264 @ 0xf25460] Current profile doesn't provide more RBSP data in PPS, skipping
[h264 @ 0xf25460] bytestream overread -4
[h264 @ 0xf25460] bytestream overread -4
[h264 @ 0xf25460] bytestream overread -4
[h264 @ 0xf25460] bytestream overread -4
[h264 @ 0xf25460] bytestream overread -10
[h264 @ 0xf25460] error while decoding MB 112 28, bytestream -10
[h264 @ 0xf25460] concealing 4737 DC, 4737 AC, 4737 MV errors in P frame
[h264 @ 0xf25460] Frame num gap 7 5
/////// successful command line run \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
root@arm:/home/orbi# ffmpeg -report -re -framerate 30 -y -f h264 -i orbi_0148.cam1.h264 -c:v copy -an -video_size 1920x1080 -f mp4 orbi_0148.cam1-1.mp4
root@arm:/home/orbi# cat ffmpeg-20170511-002629.log
ffmpeg started on 2017-05-11 at 00:26:29
Report written to "ffmpeg-20170511-002629.log"
Command line:
ffmpeg -report -re -framerate 30 -y -f h264 -i orbi_0148.cam1.h264 -c:v copy -an -video_size 1920x1080 -f mp4 orbi_0148.cam1-1.mp4
ffmpeg version 2.8.11-0ubuntu0.16.04.1 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 5.4.0 (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.4) 20160609
configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/arm-linux-gnueabihf --incdir=/usr/include/arm-linux-gnueabihf --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab (...)
libavutil 54. 31.100 / 54. 31.100
libavcodec 56. 60.100 / 56. 60.100
libavformat 56. 40.101 / 56. 40.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 40.101 / 5. 40.101
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.101 / 1. 2.101
libpostproc 53. 3.100 / 53. 3.100
Splitting the commandline.
Reading option '-report' ... matched as option 'report' (generate a report) with argument '1'.
Reading option '-re' ... matched as option 're' (read input at native frame rate) with argument '1'.
Reading option '-framerate' ... matched as AVOption 'framerate' with argument '30'.
Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'h264'.
Reading option '-i' ... matched as input url with argument 'orbi_0148.cam1.h264'.
Reading option '-c:v' ... matched as option 'c' (codec name) with argument 'copy'.
Reading option '-an' ... matched as option 'an' (disable audio) with argument '1'.
Reading option '-video_size' ... matched as AVOption 'video_size' with argument '1920x1080'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'mp4'.
Reading option 'orbi_0148.cam1-1.mp4' ... matched as output url.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option report (generate a report) with argument 1.
Applying option y (overwrite output files) with argument 1.
…
Successfully parsed a group of options.
Parsing a group of options: input url orbi_0148.cam1.h264.
Applying option re (read input at native frame rate) with argument 1.
Applying option f (force format) with argument h264.
…
Successfully parsed a group of options.
Opening an input file: orbi_0148.cam1.h264.
[h264 @ 0x848a20] Before avformat_find_stream_info() pos: 0 bytes read:32768 seeks:0
[h264 @ 0x8515b0] Current profile doesn't provide more RBSP data in PPS, skipping
[h264 @ 0x8515b0] Current profile doesn't provide more RBSP data in PPS, skipping
[h264 @ 0x848a20] max_analyze_duration 5000000 reached at 5005000 microseconds st:0
[h264 @ 0x848a20] After avformat_find_stream_info() pos: 3773440 bytes read:3801088 seeks:0 frames:152
Input #0, h264, from 'orbi_0148.cam1.h264':
Duration: N/A, bitrate: N/A
Stream #0:0, 152, 1/1200000: Video: h264 (Main), yuv420p(tv), 1920x1080, 29.97 fps, 29.97 tbr, 1200k tbn, 59.94 tbc
Successfully opened the file.
Parsing a group of options: output url orbi_0148.cam1-1.mp4.
…
Applying option c:v (codec name) with argument copy.
Applying option an (disable audio) with argument 1.
Applying option f (force format) with argument mp4.
Successfully parsed a group of options.
Opening an output file: orbi_0148.cam1-1.mp4.
Successfully opened the file.
[mp4 @ 0x8fb230] Codec for stream 0 does not use global headers but container format requires global headers
Output #0, mp4, to 'orbi_0148.cam1-1.mp4':
Metadata:
encoder : Lavf56.40.101
Stream #0:0, 0, 1/1200000: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 1920x1080, q=2-31, 29.97 fps, 29.97 tbr, 1200k tbn, 1200k tbc
…
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[NULL @ 0x8515b0] Current profile doesn't provide more RBSP data in PPS, skipping
No more output streams to write to, finishing.
frame= 270 fps= 30 q=-1.0 Lsize= 6443kB time=00:00:08.97 bitrate=5880.6kbits/s
video:6440kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.056839%
Input file #0 (orbi_0148.cam1.h264):
Input stream #0:0 (video): 270 packets read (6594052 bytes);
Total: 270 packets (6594052 bytes) demuxed
Output file #0 (orbi_0148.cam1-1.mp4):
Output stream #0:0 (video): 270 packets muxed (6594052 bytes);
Total: 270 packets (6594052 bytes) muxed
0 frames successfully decoded, 0 decoding errors
[AVIOContext @ 0x931650] Statistics: 34 seeks, 308 writeouts
[AVIOContext @ 0x851190] Statistics: 6594052 bytes read, 0 seeks
- Progress with rtc.io
12 August 2014, by silvia
At the end of July, I gave a presentation about WebRTC and rtc.io at the WDCNZ Web Dev Conference in beautiful Wellington, NZ.
Putting that talk together reminded me of how far we have come in the last year, both with the progress of WebRTC, its standards and browser implementations, and with our own small team at NICTA and our rtc.io WebRTC toolbox.
One of the most exciting opportunities is still under-exploited: the data channel. When I talked about the above slide and pointed out Bananabread, PeerCDN, Copay, PubNub and, later, WebTorrent, that's where I really started to get Web developers excited about WebRTC. They can totally see the paradigm shift towards peer-to-peer applications, away from the server-based architecture of the current Web.
Many were also excited to learn more about rtc.io, our own npm-modules-based approach to a JavaScript API for WebRTC.
We believe that the world of JavaScript has reached a critical stage where we can no longer code by copy-and-pasting JavaScript snippets from all over the Web. We need a more structured approach to module reuse in JavaScript. Node, with JavaScript on the back end, really got this development moving, but we have needed it for a long time on the front end, too. One big library (jQuery, anyone?) that does everything anyone could ever need on the front end isn't going to work any longer with the amount of functionality we now expect Web applications to support. Just look at the insane growth of npm compared to other module collections:
Packages per day across popular platforms (shamelessly copied from http://blog.nodejitsu.com/npm-innovation-through-modularity/)
For those who, like myself, found it difficult to understand how to tap into the sheer power of npm modules as a front-end developer: simply use browserify. npm modules are prepared following the CommonJS module definition spec. Browserify works natively with that and "compiles" all the dependencies of an npm module into a single bundle.js file that you can use on the front end through a script tag, just as you would in plain HTML. You can learn more about browserify and module definitions and how to use browserify.
For those of you not quite ready to dive in with browserify, we have prepared the rtc module, which exposes the most commonly used packages of rtc.io through an "RTC" object in a browserified JavaScript file. You can also download the JavaScript file directly from GitHub.
Using the rtc.io rtc JS library
So, I hope you enjoy rtc.io, and I hope you enjoy my slides and the large collection of interesting links inside the deck, and of course: enjoy WebRTC! Thanks to Damon, Jeff, Cathy, Pete and Nathan, you're an awesome team!
On a side note, I was really excited to meet the author of browserify, James Halliday (@substack), at WDCNZ; his talk on "building your own tools" seemed to take me back to the times when everything was done on the command line. I think James is using Node and the Web in a way that would appeal to a Linux kernel developer. Fascinating!
The post Progress with rtc.io first appeared on ginger's thoughts.
- FFMPEG to create an MPEG-DASH stream with VP8
24 April 2017, by Kenneth Worden
I'm trying to use FFmpeg to stream a live video feed from my webcam /dev/video0. Following scattered tutorials and scarce documentation (is this a known problem for the encoding community?), I arrived at the following bash script:
#!/bin/bash
ffmpeg \
-y \
-f v4l2 \
-i /dev/video0 \
-s 640x480 \
-input_format mjpeg \
-r 24 \
-map 0:0 \
-pix_fmt yuv420p \
-codec:v libvpx \
-s 640x480 \
-threads 4 \
-b:v 50k \
-tile-columns 4 \
-frame-parallel 1 \
-keyint_min 24 -g 24 \
-f webm_chunk \
-header "stream.hdr" \
-chunk_start_index 1 \
stream_%d.chk &
sleep 2
ffmpeg \
-f webm_dash_manifest -live 1 \
-i stream.hdr \
-c copy \
-map 0 \
-f webm_dash_manifest -live 1 \
-adaptation_sets "id=0,streams=0" \
-chunk_start_index 1 \
-chunk_duration_ms 1000 \
-time_shift_buffer_depth 30000 \
-minimum_update_period 60000 \
stream_manifest.mpd
When I run this script, my webcam light turns on, the stream.hdr and stream_manifest.mpd files are written, and chunks start to be created (i.e. stream_1.chk, stream_2.chk, etc.). However, FFmpeg throws the following error:
Could not write header for output file #0 (incorrect codec parameters ?): Invalid data found when processing input
I will explain what I think I am doing with this script, and hopefully this will expose any errors in my thinking.
First, we invoke FFmpeg to use Video for Linux 2 (v4l2) to read from my webcam (/dev/video0) at a resolution of 640x480. The input format is mjpeg with a framerate of 24 fps.
I then declare that FFmpeg should "map" (copy) the video stream output by v4l2 to a file. I specify the pixel format (YUV420P) and use libvpx (VP8 encoding) to encode the video stream. I set the size to 640x480, use 4 threads, set the bitrate to 50 kbps, do some magic with the tile-columns and frame-parallel options, and set the I-frames to be 24 frames apart.
I then create a stream.hdr file. The starting index is 1. This command continues to run indefinitely until I kill it, grabbing new video from my webcam and outputting it into chunks.
And that's really it. The next invocation of FFmpeg simply creates the MPEG-DASH manifest file from the header generated in the previous step.
So what's going on? Why can I not view the video in a web browser (I'm using Dash.js)? I serve the manifest, header, and chunks from a Node.js server, so that trivial issue is not the problem.
Edit: Here is my full console output.
ffmpeg version 3.0.7-0ubuntu0.16.10.1 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 6.2.0 (Ubuntu 6.2.0-5ubuntu12) 20161005
configuration: --prefix=/usr --extra-version=0ubuntu0.16.10.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librubberband --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-chromaprint --enable-libx264
libavutil 55. 17.103 / 55. 17.103
libavcodec 57. 24.102 / 57. 24.102
libavformat 57. 25.100 / 57. 25.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 31.100 / 6. 31.100
libavresample 3. 0. 0 / 3. 0. 0
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
[video4linux2,v4l2 @ 0x55847e244ea0] The driver changed the time per frame from 1/24 to 1/30
[mjpeg @ 0x55847e245c00] Changing bps to 8
Input #0, video4linux2,v4l2, from '/dev/video0':
Duration: N/A, start: 64305.102081, bitrate: N/A
Stream #0:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 640x480, -5 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
Codec AVOption frame-parallel (Enable frame parallel decodability features) specified for output file #0 (stream_%d.chk) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
Codec AVOption tile-columns (Number of tile columns to use, log2) specified for output file #0 (stream_%d.chk) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
[swscaler @ 0x55847e24b720] deprecated pixel format used, make sure you did set range correctly
[libvpx @ 0x55847e248a20] v1.5.0
Output #0, webm_chunk, to 'stream_%d.chk':
Metadata:
encoder : Lavf57.25.100
Stream #0:0: Video: vp8 (libvpx), yuv420p, 640x480, q=-1--1, 50 kb/s, 30 fps, 30 tbn, 30 tbc
Metadata:
encoder : Lavc57.24.102 libvpx
Side data:
unknown side data type 10 (24 bytes)
Stream mapping:
Stream #0:0 -> #0:0 (mjpeg (native) -> vp8 (libvpx))
Press [q] to stop, [?] for help
frame= 21 fps=0.0 q=0.0 size=N/A time=00:00:00.70 bitrate=N/A dup=5 drop=
frame= 36 fps= 35 q=0.0 size=N/A time=00:00:01.20 bitrate=N/A dup=5 drop=
frame= 51 fps= 33 q=0.0 size=N/A time=00:00:01.70 bitrate=N/A dup=5 drop=
ffmpeg version 3.0.7-0ubuntu0.16.10.1 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 6.2.0 (Ubuntu 6.2.0-5ubuntu12) 20161005
configuration: --prefix=/usr --extra-version=0ubuntu0.16.10.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librubberband --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-chromaprint --enable-libx264
libavutil 55. 17.103 / 55. 17.103
libavcodec 57. 24.102 / 57. 24.102
libavformat 57. 25.100 / 57. 25.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 31.100 / 6. 31.100
libavresample 3. 0. 0 / 3. 0. 0
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Input #0, webm_dash_manifest, from 'stream.hdr':
Metadata:
encoder : Lavf57.25.100
Duration: N/A, bitrate: N/A
Stream #0:0: Video: vp8, yuv420p, 640x480, SAR 1:1 DAR 4:3, 30 fps, 30 tbr, 1k tbn, 1k tbc (default)
Metadata:
webm_dash_manifest_file_name: stream.hdr
webm_dash_manifest_track_number: 1
Output #0, webm_dash_manifest, to 'stream_manifest.mpd':
Metadata:
encoder : Lavf57.25.100
Stream #0:0: Video: vp8, yuv420p, 640x480 [SAR 1:1 DAR 4:3], q=2-31, 30 fps, 30 tbr, 1k tbn, 1k tbc (default)
Metadata:
webm_dash_manifest_file_name: stream.hdr
webm_dash_manifest_track_number: 1
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Could not write header for output file #0 (incorrect codec parameters ?): Invalid data found when processing input
frame= 67 fps= 33 q=0.0 size
frame= 82 fps= 32 q=0.0 size=N/A time=00:00:02.73 bitrate=N/A dup=5 drop=
frame= 97 fps= 32 q=0.0 size=N/A time=00:00:03.23 bitrate=N/A dup=5 drop=
frame= 112 fps= 32 q=0.0 size=N/A time=00:00:03.73 bitrate=N/A dup=5 ...