Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
ffmpeg doesn't correctly work with mkv
26 April 2017, by Alex
I'm trying to trim an MKV file with this ffmpeg command:
ffmpeg -y -i 1.mkv -filter_complex "[0:0]trim=start=201:duration=28,setpts=PTS-STARTPTS" a.mp4
After processing, a.mp4 is created with the expected length (28 s), but there are problems:
- The video is choppy/glitchy while playing.
- The audio keeps playing past 28 s, after the video has ended.
Could you tell me what I can do?
While ffmpeg is processing, it shows errors like this:
D:\Work\ffmpeg\files> ffmpeg -y -i 1.mkv -filter_complex "[0:0]trim=start=201:duration=28,setpts=PTS-STARTPTS" a.mp4
ffmpeg version N-77883-gd7c75a5 Copyright (c) 2000-2016 the FFmpeg developers
built with gcc 5.2.0 (GCC)
configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
libavutil      55. 13.100 / 55. 13.100
libavcodec     57. 22.100 / 57. 22.100
libavformat    57. 21.101 / 57. 21.101
libavdevice    57.  0.100 / 57.  0.100
libavfilter     6. 25.100 /  6. 25.100
libswscale      4.  0.100 /  4.  0.100
libswresample   2.  0.101 /  2.  0.101
libpostproc    54.  0.100 / 54.  0.100
Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, matroska,webm, from '1.mkv':
  Metadata:
    ENCODER : Lavf56.1.0
  Duration: 00:35:40.08, start: 0.000000, bitrate: 348 kb/s
    Stream #0:0: Video: vp8, yuv420p, 640x480, SAR 1:1 DAR 4:3, 29.97 fps, 29.97 tbr, 1k tbn, 1k tbc
    Stream #0:1: Audio: pcm_mulaw ([7][0][0][0] / 0x0007), 8000 Hz, 1 channels, s16, 64 kb/s
[libx264 @ 000002859baf3300] using SAR=1/1
[libx264 @ 000002859baf3300] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 @ 000002859baf3300] profile High, level 3.0
[libx264 @ 000002859baf3300] 264 - core 148 r2638 7599210 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'a.mp4':
  Metadata:
    encoder : Lavf57.21.101
    Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 640x480 [SAR 1:1 DAR 4:3], q=-1--1, 29.97 fps, 30k tbn, 29.97 tbc
    Metadata:
      encoder : Lavc57.22.100 libx264
    Side data:
      unknown side data type 10 (24 bytes)
    Stream #0:1: Audio: aac (LC) ([64][0][0][0] / 0x0040), 8000 Hz, mono, fltp, 48 kb/s
    Metadata:
      encoder : Lavc57.22.100 aac
Stream mapping:
  Stream #0:0 (vp8) -> trim (graph 0)
  setpts (graph 0) -> Stream #0:0 (libx264)
  Stream #0:1 -> #0:1 (pcm_mulaw (native) -> aac (native))
Press [q] to stop, [?] for help
[vp8 @ 000002859d9ae460] Discarding interframe without a prior keyframe!
[vp8 @ 000002859d9aeb00] Discarding interframe without a prior keyframe!
[vp8 @ 000002859e8eee40] Discarding interframe without a prior keyframe!
[vp8 @ 000002859e8f5460] Discarding interframe without a prior keyframe!
Error while decoding stream #0:0: Invalid data found when processing input
    Last message repeated 3 times
frame= 0 fps=0.0 q=0.0 size= 62kB time=00:00:22.27 bitrate= 22.6kbits/s
frame= 0 fps=0.0 q=0.0 size= 154kB time=00:00:41.08 bitrate= 30.6kbits/s
frame= 0 fps=0.0 q=0.0 size= 255kB time=00:01:02.20 bitrate= 33.6kbits/s
frame= 0 fps=0.0 q=0.0 size= 340kB time=00:01:19.34 bitrate= 35.1kbits/s
frame= 0 fps=0.0 q=0.0 size= 423kB time=00:01:39.31 bitrate= 34.9kbits/s
frame= 88 fps= 29 q=29.0 size= 518kB time=00:01:43.79 bitrate= 40.9kbits/
frame= 140 fps= 40 q=29.0 size= 613kB time=00:01:45.46 bitrate= 47.6kbits/
frame= 194 fps= 48 q=29.0 size= 719kB time=00:01:47.38 bitrate= 54.8kbits/
...
frame= 839 fps= 14 q=29.0 size= 10881kB time=00:34:51.52 bitrate= 42.6kbits/
frame= 839 fps= 14 q=29.0 size= 10962kB time=00:35:10.46 bitrate= 42.5kbits/
frame= 839 fps= 14 q=29.0 size= 11031kB time=00:35:25.44 bitrate= 42.5kbits/
frame= 839 fps= 14 q=-1.0 Lsize= 11266kB time=00:35:40.16 bitrate= 43.1kbits/s dup=284 drop=328 speed=34.6x
video:1083kB audio:10104kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.706855%
[libx264 @ 000002859baf3300] frame I:4 Avg QP:15.24 size: 22651
[libx264 @ 000002859baf3300] frame P:305 Avg QP:19.95 size: 2626
[libx264 @ 000002859baf3300] frame B:530 Avg QP:25.41 size: 408
[libx264 @ 000002859baf3300] consecutive B-frames: 15.7% 0.0% 0.4% 83.9%
[libx264 @ 000002859baf3300] mb I I16..4: 15.9% 63.7% 20.4%
[libx264 @ 000002859baf3300] mb P I16..4: 0.3% 1.9% 0.3% P16..4: 22.3% 6.2% 2.4% 0.0% 0.0% skip:66.5%
[libx264 @ 000002859baf3300] mb B I16..4: 0.0% 0.1% 0.0% B16..8: 16.1% 0.6% 0.1% direct: 0.2% skip:82.9% L0:48.2% L1:48.5% BI: 3.4%
[libx264 @ 000002859baf3300] 8x8 transform intra:71.1% inter:71.4%
[libx264 @ 000002859baf3300] coded y,uvDC,uvAC intra: 72.2% 78.3% 40.7% inter: 5.9% 4.0% 0.1%
[libx264 @ 000002859baf3300] i16 v,h,dc,p: 29% 30% 21% 20%
[libx264 @ 000002859baf3300] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 18% 14% 5% 6% 9% 6% 9% 6%
[libx264 @ 000002859baf3300] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 38% 20% 11% 5% 5% 7% 5% 6% 4%
[libx264 @ 000002859baf3300] i8c dc,h,v,p: 46% 16% 27% 11%
[libx264 @ 000002859baf3300] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 000002859baf3300] ref P L0: 63.4% 21.5% 11.5% 3.6%
[libx264 @ 000002859baf3300] ref B L0: 92.6% 6.4% 1.0%
[libx264 @ 000002859baf3300] ref B L1: 96.4% 3.6%
[libx264 @ 000002859baf3300] kb/s:316.59
[aac @ 000002859baf4400] Qavg: 65377.508

D:\Work\ffmpeg\files>
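A hedged aside on the symptoms above: the filter graph only touches the video stream, while audio stream #0:1 is mapped through untouched, which matches the audio running past 28 s; and starting the trim on a non-keyframe matches the "Discarding interframe without a prior keyframe!" messages. A minimal sketch that trims both streams (trim/atrim and setpts/asetpts are standard FFmpeg filters; same input and output names as above):

# Trim video and audio to the same 28 s window and reset both sets of timestamps.
ffmpeg -y -i 1.mkv -filter_complex "[0:v]trim=start=201:duration=28,setpts=PTS-STARTPTS[v];[0:a]atrim=start=201:duration=28,asetpts=PTS-STARTPTS[a]" -map "[v]" -map "[a]" a.mp4

If re-encoding the whole file just to cut 28 seconds is too slow, putting -ss 201 before -i and -t 28 before the output file is the usual lighter-weight alternative.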
-
ffmpeg library m4a moov atom not found when using custom IOContext
26 April 2017, by trigger_death
I'm currently trying to implement FFmpeg into SFML so that I have a wider range of audio files to read from, but I get the error
[mov,mp4,m4a,3gp,3g2,mj2 @ #] moov atom not found
when opening an M4A file. This only happens when I use a custom IOContext to read the file instead of opening it from a URL. This page here says I'm not supposed to use streams to open M4A files, but is an IOContext considered a stream? I have no way to open it as a URL, because streams are how SFML works.

// Explanation of the InputStream class
class InputStream {
    int64_t getSize();
    int64_t read(void* data, int64_t size);
    int64_t seek(int64_t position);
    int64_t tell(); // Gets the stream position
};

// Used for IOContext
int read(void* opaque, uint8_t* buf, int buf_size) {
    sf::InputStream* stream = (sf::InputStream*)opaque;
    return (int)stream->read(buf, buf_size);
}

// Used for IOContext
int64_t seek(void* opaque, int64_t offset, int whence) {
    sf::InputStream* stream = (sf::InputStream*)opaque;
    switch (whence) {
    case SEEK_SET:
        break;
    case SEEK_CUR:
        offset += stream->tell();
        break;
    case SEEK_END:
        offset = stream->getSize() - offset;
    }
    return (int64_t)stream->seek(offset);
}

bool open(sf::InputStream& stream) {
    AVFormatContext* m_formatContext = NULL;
    AVIOContext* m_ioContext = NULL;
    uint8_t* m_ioContextBuffer = NULL;
    size_t m_ioContextBufferSize = 0;

    av_register_all();
    avformat_network_init();

    m_formatContext = avformat_alloc_context();
    m_ioContextBuffer = (uint8_t*)av_malloc(m_ioContextBufferSize);
    if (!m_ioContextBuffer) {
        close();
        return false;
    }
    m_ioContext = avio_alloc_context(
        m_ioContextBuffer, m_ioContextBufferSize, 0,
        &stream, &::read, NULL, &::seek
    );
    if (!m_ioContext) {
        close();
        return false;
    }
    m_formatContext = avformat_alloc_context();
    m_formatContext->pb = m_ioContext;

    if (avformat_open_input(&m_formatContext, NULL, NULL, NULL) != 0) { // FAILS HERE
        close();
        return false;
    }
    //...
    return true;
}
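An aside, offered tentatively: "moov atom not found" is the classic symptom of an MP4/M4A whose moov box sits at the end of the file while the demuxer cannot seek back to find it, so the seek callback is worth scrutinizing. One cheap experiment is to remux a copy of the file with the moov moved to the front and see whether that copy opens through the same IOContext; the file names here are placeholders:

# Remux without re-encoding so the moov atom is written before the media data.
# "input.m4a" / "faststart.m4a" are placeholder names.
ffmpeg -i input.m4a -c copy -movflags +faststart faststart.m4a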
-
FFmpeg infile path, my domain
26 April 2017, by user892134
I have a file located at
html://www.example.com/wp-content/music.mp3
I've tested and confirmed ffmpeg is installed and have run
exec("ffmpeg -help",$output);
I successfully get output. Now I want to start converting, but I cannot locate the file above. I've tried
exec("ffmpeg -i html://www.example.com/wp-content/music.mp3",$output); exec("ffmpeg -i home/mywebsite/public_html/wp-content/music.mp3",$output);
I get no output for either. ffmpeg is located in
/usr/bin/ffmpeg
How do I solve this?
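A hedged aside: two things commonly bite in this situation. First, html:// is not a protocol ffmpeg understands; a local absolute path or an http:// URL is needed. Second, PHP's exec() only captures stdout, while ffmpeg writes its entire log to stderr, so even a failing run looks like "no output". Testing from a shell first, with stderr folded into stdout, shows what ffmpeg is actually complaining about; the absolute path below is a guess based on the paths in the question:

# Absolute input path; 2>&1 folds ffmpeg's log (stderr) into stdout so it is visible.
/usr/bin/ffmpeg -i /home/mywebsite/public_html/wp-content/music.mp3 -y /tmp/music.wav 2>&1

The same command string, with the 2>&1 kept, can then be passed to exec() so that $output finally contains ffmpeg's messages.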
-
How can I improve the up-time of my coffee pot live stream ?
26 April 2017, by tww0003
Some background on the project:
Like most software developers I depend on coffee to keep me running, and so do my coworkers. I had an old iPhone sitting around, so I decided to pay homage to the first webcam and live stream my office coffee pot.
The stream has become popular within my company, so I want to make sure it will stay online with as little effort as possible on my part. As of right now, it will occasionally go down, and I have to manually get it up and running again.
My Setup:
I have nginx set up on a digital ocean server (my nginx.conf is shown below), and downloaded an rtmp streaming app for my iPhone.
The phone is set to stream to
example.com/live/stream
and then I use an ffmpeg command to take that stream, strip the audio (the live stream is public and I don't want coworkers to feel like they have to be careful about what they say), and then make it accessible at rtmp://example.com/live/coffee
and example.com/hls/coffee.m3u8.
Since I'm not too familiar with ffmpeg, I had to google around to find an appropriate command to strip the audio from the coffee stream, and I found this:
ffmpeg -i rtmp://localhost/live/stream -vcodec libx264 -vprofile baseline -acodec aac -strict -2 -f flv -an rtmp://localhost/live/coffee
Essentially, all I know about this command is that the input stream comes from localhost/live/stream, it strips the audio with -an, and it outputs to rtmp://localhost/live/coffee.
.I would assume that
ffmpeg -i rtmp://localhost/live/stream -an rtmp://localhost/live/coffee
would have the same effect, but the page I found the command on was dealing with ffmpeg and nginx, so I figured the extra parameters were useful.
What I've noticed with this command is that it will eventually error out, taking the live stream down. I wrote a small bash script to rerun the command when it stops, but I don't think this is the best solution.
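A hedged aside: since the output drops audio anyway (-an), the -acodec aac flag does nothing here, and if the phone app is already publishing H.264 video the re-encode through libx264 can be skipped entirely, which also removes the most CPU-hungry (and crash-prone) part of the pipeline. A minimal sketch under that assumption; keep libx264 if the incoming codec turns out to be something else:

# Copy the incoming video untouched, drop the audio, and repackage as FLV for the RTMP output.
# Assumes the phone app publishes H.264; otherwise re-encoding with libx264 is still needed.
ffmpeg -i rtmp://localhost/live/stream -c:v copy -an -f flv rtmp://localhost/live/coffee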
Here is the bash script:
while true; do
    ffmpeg -i rtmp://localhost/live/stream -vcodec libx264 -vprofile baseline -acodec aac -strict -2 -f flv -an rtmp://localhost/live/coffee
    echo 'Something went wrong. Retrying...'
    sleep 1
done
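Another hedged aside: a bare retry loop restarts ffmpeg but throws away the reason it died. A small variation that timestamps each restart and keeps ffmpeg's own log (ffmpeg writes to stderr, hence the 2>&1), so the crashes can be diagnosed instead of just papered over; the log path is arbitrary:

#!/bin/bash
# Restart ffmpeg whenever it exits and record when and why it stopped.
LOG=/var/log/coffee-stream.log   # arbitrary location
while true; do
    echo "$(date '+%F %T') starting ffmpeg" >> "$LOG"
    ffmpeg -i rtmp://localhost/live/stream -c:v copy -an -f flv \
        rtmp://localhost/live/coffee >> "$LOG" 2>&1
    status=$?
    echo "$(date '+%F %T') ffmpeg exited (status $status); retrying in 1s" >> "$LOG"
    sleep 1
done

Running the same loop under a process supervisor (for example, systemd with Restart=always) would also bring the stream back after a server reboot, which a hand-started script does not.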
I'm curious about 2 things:
- What is the best way to strip audio from an rtmp stream?
- What is the proper configuration for nginx to ensure that my rtmp stream will stay up for as long as possible?
Since I have close to zero experience with nginx, ffmpeg, and RTMP streaming, any help or tips would be appreciated.
Here is my nginx.conf file:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;

        location / {
            root html;
            index index.html index.htm;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }

        location /stat {
            rtmp_stat all;
            rtmp_stat_stylesheet stat.xsl;
            allow 127.0.0.1;
        }

        location /stat.xsl {
            root html;
        }

        location /hls {
            root /tmp;
            add_header Cache-Control no-cache;
        }

        location /dash {
            root /tmp;
            add_header Cache-Control no-cache;
            add_header Access-Control-Allow-Origin *;
        }
    }
}

rtmp {
    server {
        listen 1935;
        chunk_size 4000;

        application live {
            live on;
            hls on;
            hls_path /tmp/hls;
            dash on;
            dash_path /tmp/dash;
        }
    }
}
edit:
I'm also running into this same issue: https://trac.ffmpeg.org/ticket/4401
-
MPEG-DASH - Multiplexed Representations Issue
26 April 2017, by Mike
I'm trying to learn ffmpeg, MP4Box, and MPEG-DASH, but I'm running into an issue with the .mp4 I'm using. I'm using ffmpeg to demux the mp4 with this command:
ffmpeg -i test.mp4 -c:v copy -g 72 -an video.mp4 -c:a copy audio.mp4
Once the two files are created, I use MP4Box to segment the files for the dash player using this command:
MP4Box -dash 4000 -frag 1000 -rap -segment-name segment_ output.mp4
This does create all the files I think I need. Then I point the player at output_dash.mpd, and nothing happens except a ton of messages in the console:
[8] EME detected on this user agent! (ProtectionModel_21Jan2015)
[11] Playback Initialized
[21] [dash.js 2.3.0] MediaPlayer has been initialized
[64] Parsing complete: ( xml2json: 3.42ms, objectiron: 2.61ms, total: 0.00603s)
[65] Manifest has been refreshed at Wed Apr 12 2017 12:16:52 GMT-0600 (MDT)[1492021012.196]
[72] MediaSource attached to element. Waiting on open...
[77] MediaSource is open!
[77] Duration successfully set to: 148.34
[78] Added 0 inline events
[78] No video data.
[79] No audio data.
[79] No text data.
[79] No fragmentedText data.
[79] No embeddedText data.
[80] Multiplexed representations are intentionally not supported, as they are not compliant with the DASH-AVC/264 guidelines
[81] No streams to play.
Here is the MP4Box -info on the video I'm using:
* Movie Info *
    Timescale 1000 - Duration 00:02:28.336
    Fragmented File no - 2 track(s)
    File suitable for progressive download (moov before mdat)
    File Brand mp42 - version 512
    Created: GMT Wed Feb 6 06:28:16 2036
    File has root IOD (9 bytes)
    Scene PL 0xff - Graphics PL 0xff - OD PL 0xff
    Visual PL: Not part of MPEG-4 Visual profiles (0xfe)
    Audio PL: Not part of MPEG-4 audio profiles (0xfe)
    No streams included in root OD

iTunes Info:
    Name: Rogue One - A Star Wars Story
    Artist: Lucasfilm
    Genre: Trailer
    Created: 2016
    Encoder Software: HandBrake 0.10.2 2015060900
    Cover Art: JPEG File

Track # 1 Info - TrackID 1 - TimeScale 90000 - Duration 00:02:28.335
Media Info: Language "Undetermined" - Type "vide:avc1" - 3552 samples
Visual Track layout: x=0 y=0 width=1920 height=816
MPEG-4 Config: Visual Stream - ObjectTypeIndication 0x21
AVC/H264 Video - Visual Size 1920 x 816
AVC Info: 1 SPS - 1 PPS - Profile High @ Level 4.1
NAL Unit length bits: 32
Pixel Aspect Ratio 1:1 - Indicated track size 1920 x 816
Self-synchronized

Track # 2 Info - TrackID 2 - TimeScale 44100 - Duration 00:02:28.305
Media Info: Language "English" - Type "soun:mp4a" - 6387 samples
MPEG-4 Config: Audio Stream - ObjectTypeIndication 0x40 MPEG-4 Audio
MPEG-4 Audio AAC LC - 2 Channel(s) - SampleRate 44100
Synchronized on stream 1
Alternate Group ID 1
I know I need to separate the video and audio and I think that's where my issue is. The command I'm using probably isn't doing the right thing.
Is there a better command to demux my mp4? Is the MP4Box command I'm using best for segmenting the files? If I use different files, will they always need to be demuxed?
One thing to mention, if I use the following commands everything works fine, but there is no audio because of the
-an
which means it's video only:
ffmpeg -i test.mp4 -c:v copy -g 72 -an output.mp4
MP4Box -dash 4000 -frag 1000 -rap -segment-name segment_ output.mp4
UPDATE
I noticed that the video file had no audio stream, but the audio file still had the video stream, which is why I got the mux error. I thought that might be the issue, so I ran this command to keep the unwanted streams out of the outputs:
ffmpeg -i test.mp4 -c:v copy -g 72 -an video.mp4 -c:a copy -vn audio.mp4
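A hedged aside: an equivalent way to keep each output file to a single stream is to select the streams explicitly with -map, which makes the intent of each output a little more obvious (same file names as above; -g is omitted because a GOP setting has no effect while stream-copying):

# Explicit stream selection: one video-only output, one audio-only output.
ffmpeg -i test.mp4 -map 0:v -c:v copy video.mp4 -map 0:a -c:a copy audio.mp4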
then I run:
MP4Box -dash 4000 -frag 1000 -rap -segment-name segment_ video.mp4 audio.mp4
Now I no longer get the "Multiplexed representations are intentionally not supported..." message, but instead I get:
[122] Video Element Error: MEDIA_ERR_SRC_NOT_SUPPORTED
[123] [object MediaError]
[125] Schedule controller stopping for audio
[126] Caught pending play exception - continuing (NotSupportedError: Failed to load because no supported source was found.)
I tried playing the video and audio independently through Chrome and they both work, just not through the dash player. Ugh, this is painful to learn, but I feel like I'm making progress.
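A final hedged aside: when the browser reports MEDIA_ERR_SRC_NOT_SUPPORTED, one low-effort check is to inspect the codec information in the MP4Box-generated initialization segments and compare it against the codecs/mimeType attributes in the generated .mpd. The file names below are placeholders; substitute whatever MP4Box actually wrote next to the manifest:

# Placeholder names -- use the real init/segment files that sit next to the .mpd.
ffprobe -v error -show_entries stream=codec_type,codec_name,profile video_dashinit.mp4
ffprobe -v error -show_entries stream=codec_type,codec_name,profile audio_dashinit.mp4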