Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
Forward HLS Stream - Solution needed
29 April 2017, by Chris
I'm receiving an HLS/AppleHTTP stream with FFmpeg. The source stream looks like:
Input #0, hls,applehttp, from 'http://example.com/hls/index.m3u8':
  Duration: 00:00:00.09, start: 42870.540944, bitrate: 91 kb/s
  Program 0
    Metadata:
      variant_bitrate : 0
    Stream #0:0: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 1024x576 [SAR 1:1 DAR 16:9], 12.50 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:1: Audio: aac ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 114 kb/s
    Stream #0:2: Unknown: none ([21][0][0][0] / 0x0015)
I need to forward this kind of stream with FFmpeg to a streaming server. My problem is how to do it with FFmpeg without re-encoding, which causes high CPU usage (I think the relevant options are -c:v copy -c:a copy). Second, which streaming server software is best (lowest CPU usage) to receive the stream and send it to users as an HLS stream?
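In case a concrete starting point helps, a stream-copy forward might look like the sketch below. The RTMP destination and the FLV output muxer are assumptions; substitute whatever ingest URL and container your stream server actually accepts.

```shell
# Copy-only forward: -c copy avoids re-encoding, so CPU use stays low.
# The two -map options pick the h264 video and aac audio and drop the
# "Unknown" stream #0:2, which could otherwise make muxing fail.
SRC="http://example.com/hls/index.m3u8"
DST="rtmp://your-server/live/stream"   # placeholder ingest URL
CMD="ffmpeg -re -i $SRC -map 0:v:0 -map 0:a:0 -c copy -f flv $DST"
# Printed rather than executed so the sketch runs without a live source:
echo "$CMD"
```

On the server side, anything that can repackage an RTMP or MPEG-TS ingest into HLS without transcoding (pure remuxing) will keep CPU usage near zero.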
-
Transcoding fMP4 to HLS while writing on iOS using FFmpeg
29 April 2017, by bclymer
TL;DR
I want to convert fMP4 fragments to TS segments (for HLS) as the fragments are being written using FFmpeg on an iOS device.
Why?
I'm trying to achieve live uploading on iOS while maintaining a seamless, HD copy locally.
What I've tried
Attempt #1: Rolling AVAssetWriters, where each writes for 8 seconds, then concatenating the MP4s together via FFmpeg.
What went wrong - There are blips in the audio and video at times. I've identified 3 reasons for this.
1) Priming frames for audio written by the AAC encoder creating gaps.
2) Since video frames are 33.33 ms long and audio frames about 22 ms long, it's possible for them not to line up at the end of a file.
3) The lack of frame-accurate encoding, which is present on macOS but not available on iOS (details here).
Attempt #2: FFmpeg muxing a large video-only MP4 file with raw audio into TS segments. The work was based on the Kickflip SDK.
What went wrong - Every once in a while an audio-only file would get uploaded, with no video whatsoever. We were never able to reproduce it in-house, but it was pretty upsetting to our users when they didn't record what they thought they did. There were also issues with accurate seeking on the final segments, almost as if the TS segments were incorrectly timestamped.
What I'm thinking now
Apple was pushing fMP4 at WWDC this year (2016) and I hadn't looked into it much at all before that. Since an fMP4 file can be read, and played while it's being written, I thought that it would be possible for FFmpeg to transcode the file as it's being written as well, as long as we hold off sending the bytes to FFmpeg until each fragment within the file is finished.
However, I'm not familiar enough with the FFmpeg C API; I only used it briefly within attempt #2.
What I need from you
- Is this a feasible solution? Is anybody familiar enough with fMP4 to know if I can actually accomplish this?
- How will I know that AVFoundation has finished writing a fragment within the file so that I can pipe it into FFmpeg?
- How can I take data from a file on disk, a chunk at a time, pass it into FFmpeg, and have it spit out TS segments?
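Before committing to the C API, the remux itself (fragments in, TS segments out, no re-encode) can be sketched at the command line. Everything here is an assumption: the file names, the 6-second segment length, and the idea that the init segment must be concatenated in front of the media fragments before ffmpeg can parse them.

```shell
# Hypothetical pipeline: the init segment plus each finished fMP4 fragment
# are fed to ffmpeg's stdin, which remuxes (-c copy, no transcode) into
# numbered MPEG-TS segments. File names are illustrative only.
CMD="cat init.mp4 fragment_*.m4s | ffmpeg -i pipe:0 -c copy -f segment -segment_time 6 -segment_format mpegts seg_%03d.ts"
# Echoed rather than run, since no real fragments exist in this sketch:
echo "$CMD"
```

If a CLI pipeline like this works on recorded fragments, the C-API version is essentially the same remux loop driven by your "fragment finished" callback.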
-
FFmpeg send stream on a web server
29 April 2017, by Luzwitz
To stream my screen, I used:
ffmpeg -s 1920x1080 -f X11grab -i :0.0+0,0 -codec:v libvpx -b:v 4M -b:a libvorbis -crf 20 capture.webm
This command saves the stream to a file: capture.webm.
But now I want to send the stream to a UDP server, so I tried this command:
ffmpeg -s 1920x1080 -f X11grab -i :0.0+0,0 -codec:v libvpx -b:v 4M -b:a libvorbis -crf 20 -f webm udp://192.168.232.2:8080
But it doesn't run.
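One likely problem: -b:a expects a bitrate, so "-b:a libvorbis" is almost certainly an error (-c:a selects the audio codec), and the device name is the lowercase x11grab. A commonly used alternative for UDP streaming, sketched below as an assumption since it swaps VP8/WebM for H.264/MPEG-TS, is:

```shell
# Screen capture streamed over UDP as MPEG-TS; note -c:a (codec) rather than
# -b:a (bitrate), and the lowercase x11grab device name.
CMD="ffmpeg -video_size 1920x1080 -f x11grab -i :0.0+0,0 -c:v libx264 -preset ultrafast -tune zerolatency -f mpegts udp://192.168.232.2:8080"
# Printed, not run, so the sketch works without an X display:
echo "$CMD"
```

MPEG-TS is designed for lossy transports like UDP; the WebM muxer generally is not, which is a separate reason the original command form would struggle even once the option typos are fixed.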
-
Running matplotlib animation on Mac using Spyder : says to install ffmpeg
28 April 2017, by Addem
I installed Anaconda on a new Mac and made a simple animation with matplotlib like:
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import animation

fig = plt.figure()
ax = plt.axes(xlim=(0, 2), ylim=(-2, 2))
line, = ax.plot([], [], lw=2)

def init():
    line.set_data([], [])
    return line,

def animate(i):
    x = np.linspace(0, 2, 1000)
    y = np.sin(2 * np.pi * (x - 0.01 * i))
    line.set_data(x, y)
    return line,

anim = animation.FuncAnimation(fig, animate, init_func=init, frames=200, interval=20, blit=True)
anim.save('basic_animation.mp4', fps=30, extra_args=['-vcodec', 'libx264'])
plt.show()
When I run it, it tells me to install ffmpeg. I tried following these instructions: http://stephenjungels.com/jungels.net/articles/ffmpeg-howto.html but they were unclear about what I was supposed to download, especially when it got to the part about a "patch". This also just feels insanely complicated for something that seems like it should be much simpler.

I also tried following some instructions for installing ffmpeg using Homebrew, but those were again poorly written, so that some of the buttons they said should be there weren't. I tried to figure it out by guessing what I should do, and it seemed to work, but with a lot of warning messages. By the end of the process, when I type into a terminal
which ffmpeg
it returns
/usr/local/bin/ffmpeg
However, even after restarting Spyder and re-running the code, it still tells me to install ffmpeg. I also navigated to /usr/local/bin and it doesn't have a folder ffmpeg. So my guess is that ffmpeg didn't install.

I read in the matplotlib documentation that Anaconda doesn't provide an appropriate build of Python, something about a "framework" build (http://matplotlib.org/faq/osx_framework.html). But it says that with Anaconda you can fix this easily by running
conda install python.app
which I did, and it worked. It then says to use pythonw rather than python. I'm not really sure what this means, because in Spyder I don't run scripts from the terminal. I tried navigating to the file anyway and running it with

pythonw anim.py

and it mysteriously gave me an I/O error.
Do I really need to install ffmpeg or is there some simpler fix?
If I do need to install ffmpeg, where can I get up-to-date instructions that make the process clear?
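For reference, the two usual one-line installs are below. The conda-forge route is an assumption that it suits this setup, but it puts ffmpeg inside the Anaconda environment Spyder uses, which sidesteps PATH problems entirely.

```shell
# Either install ffmpeg into the active Anaconda environment...
CMD_CONDA="conda install -c conda-forge ffmpeg"
# ...or system-wide with Homebrew:
CMD_BREW="brew install ffmpeg"
# Printed rather than executed in this sketch:
echo "$CMD_CONDA"
echo "$CMD_BREW"
```

Since `which ffmpeg` already reports /usr/local/bin/ffmpeg (a file, not a folder, which is why no ffmpeg folder appears in /usr/local/bin), the binary may already be installed and matplotlib may simply not be finding it; matplotlib's `animation.ffmpeg_path` rcParam can be set to that path explicitly.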
-
MPEG-DASH Livestreaming using ffmpeg, avconv, MP4Box and Dashjs
28 April 2017, by Sushant Mongia
I'm working on delivering live streaming with DASH capabilities. Long story short, it's a very crude testbed setup, so I might be off the mark in some respects. I'm also posting this as a simple setup for the community and for people out there struggling to find a live-streaming-with-DASH tutorial.
Setup:
OS: Ubuntu 16.04
Encoding Tools:
ffmpeg : To record a livestream using a desktop webcam in mpeg2 format
avconv : To convert mpeg2 to mpeg4 file format
MP4Box : To DASH it, i.e. produce the .mpd, some conf files, seg_init and the segments
Dashjs : Reference Client 2.4.1
Server: Apache

Process:
I've written 3 bash scripts, each with basically an infinite while loop containing one of the ffmpeg, avconv, and MP4Box commands. I first run the ffmpeg script, which records a video using the desktop webcam, and then I run the avconv script, which kills the ffmpeg command and converts the file from mpeg2 to mpeg4 format. Since the ffmpeg command is in an infinite while loop, it restarts. Then I run the MP4Box command that DASH-es the avconv command's output. Everything is then sent to the DASHjs client, and pretty much the whole setup repeats every 5 seconds.

Commands:
ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video0 -f mpegts -codec:v mpeg1video -s 640x480 -b:v 1000k -bf 0 livestream
avconv -i livestream out.mp4
The MP4Box command, run in a loop:
MP4Box -dash-live 4000 -fps 24 -frag 6000 -profile dashavc264:live -dynamic -mpd-refresh 5000 -dash-ctx dashtest.txt -time-shift -1 -inter 0 -segment-name output-seg -bs-switching no out.mp4
Problem :
ffmpeg sends a 5-second chunk (that's due to the sleep command in my avconv bash script), and MP4Box reads that 5-second chunk and loops it. So when the next chunk comes in, newer segments are produced, but the player is still playing the older segments, typically just the very first few, in a loop.

Questions:
1) Am I missing out on some core concept here? Are the commands and their respective attributes with the right parameters and the right values?
2) I believe there should be a way to pipeline these processes in a better manner; should I be looking into writing a Python script, maybe?

Happy to provide more info! Cheers
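On question 2, one possible simplification (an assumption, since it depends on an ffmpeg build that includes the dash muxer) is to let a single ffmpeg process capture, encode, and write the MPD plus segments directly, replacing the three looping scripts, avconv, and MP4Box entirely, and avoiding the restart-every-5-seconds discontinuity:

```shell
# Webcam -> H.264 -> live DASH (out.mpd plus segments) in one process.
# -window_size keeps only the most recent 5 segments listed in the MPD;
# exact option names can vary between ffmpeg versions.
CMD="ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video0 -c:v libx264 -preset veryfast -tune zerolatency -f dash -window_size 5 -streaming 1 out.mpd"
# Echoed so the sketch runs without a webcam attached:
echo "$CMD"
```

Because one long-lived process owns the whole timeline, segment timestamps stay continuous, which is exactly what the looping MP4Box setup loses each time it restarts on a fresh 5-second file.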