Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
python/video - removing frames without re-encoding
18 February 2017, by nxet
I understand this might be a trivial question, but so far I've had no luck with the various solutions I tried, and I'm sure there must be a convenient way to achieve this.
How would I go about removing frames/milliseconds from a video file without slicing, merging and re-encoding?
All the solutions I found involved exporting several times to various formats, and I'm hopeful there is no need to do so. With ffmpeg/avconv it's necessary to convert the temporary streams to .ts, then concatenate, and finally re-encode in the original format. The Python library MoviePy seemed to do almost exactly what I needed, but:
- The cutout function returns a clip which cannot be exported, as the write_videofile function tries and fails to fetch the removed frames.
- If I instead slice the original file into several clips and then merge them with concatenate_videoclips, the export doesn't fail but takes twice the length of the video. The resulting video then has a faster frame rate, with only the cue points of the concatenated clips placed correctly, and audio playing at normal speed. It's also worth noting that the output file, despite being 5-7% shorter in duration, is about 15% bigger.
Is there any other library I'm not aware of that I might look into?
What I'm trying to imagine is an abstraction layer providing easy access to the most common video formats, giving me the ability to pop the unwanted frames and update the file headers without delving into the specifics of each and every format.
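A sketch of one possible approach, not a definitive answer: ffmpeg can stream-copy (`-c copy`) the spans you want to keep and join them with the concat demuxer, so no re-encoding happens — but without re-encoding, cut points snap to keyframes, so the removal is only keyframe-accurate. File names like `part0.ts` and `segments.txt` below are placeholders.

```python
# Sketch: build the ffmpeg commands that cut a file without re-encoding.
# Each kept range is stream-copied to an MPEG-TS part, then the parts are
# joined with the concat demuxer. Command construction is kept separate
# from execution so the plan can be inspected first.
import subprocess

def build_cut_commands(src, dst, keep_ranges):
    """keep_ranges: list of (start, end) times in seconds to retain."""
    part_files, cmds = [], []
    for i, (start, end) in enumerate(keep_ranges):
        part = "part%d.ts" % i
        part_files.append(part)
        # -c copy avoids re-encoding; the cut lands on the nearest keyframe
        cmds.append(["ffmpeg", "-y", "-ss", str(start), "-to", str(end),
                     "-i", src, "-c", "copy", part])
    # concat demuxer list file content: one "file '...'" line per part
    listing = "".join("file '%s'\n" % p for p in part_files)
    cmds.append(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                 "-i", "segments.txt", "-c", "copy", dst])
    return cmds, listing

def run_cut(src, dst, keep_ranges):
    cmds, listing = build_cut_commands(src, dst, keep_ranges)
    with open("segments.txt", "w") as f:
        f.write(listing)
    for cmd in cmds:
        subprocess.check_call(cmd)

# Usage (requires ffmpeg on PATH):
# run_cut("in.mp4", "out.mp4", [(0, 5), (8.5, 20)])  # drops 5s-8.5s
```

This avoids the MoviePy round trip entirely, at the cost of frame accuracy at the cut points.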
-
ffmpeg live stream multiple outputs to same pipe (stdout Python)
18 February 2017, by WobbaFetttttt
So I am writing a stream ingest in Python with ffmpeg and want my Python script to receive the video and audio feeds separately, without making a temporary file.
I have tried these subprocess PIPE commands, inspired by THIS BLOG and THIS FFMPEG TUTORIAL on Python integration and multiple ffmpeg outputs:
outputs:from subprocess import Popen, PIPE stream = Popen([ 'ffmpeg', '-i', URL, '\\', '-f', 'image2pipe', '-pix_fmt', 'rgb24', '-vcodec', 'rawvideo', '-', '\\', '-f', 's16le', '-ac', '2', '-ar', '44100', '-vcodec', 'pcm_s16le', '-' ], stdout=PIPE, stderr=PIPE, bufsize=10**8) print 'ingesting stream' for i in range(300): video = stream.stdout.read(640*360*3) video = numpy.fromstring(video, dtype='uint8') video = video.reshape((360,640,3)) stream.stdout.flush() audio = stream.stdout.read(2*44100*2/10) audio = numpy.fromstring(audio, dtype='int16') audio = audio.reshape((len(audio)/2, 2)) stream.stdout.flush() stream.terminate()
but this fails — the first pipe read seems to return no data:

Traceback (most recent call last):
  File "test.py", line 63, in
    video = video.reshape((360,640,3))
ValueError: cannot reshape array of size 0 into shape (360,640,3)

I tried running each pipe individually and they work just fine.
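A sketch of one common workaround, under the question's assumptions (640×360 RGB24 video, 44.1 kHz stereo s16le audio): two `-` outputs share a single stdout, so fixed-size reads can't reliably demultiplex them — and the literal `'\\'` strings in the argument list are shell line continuations that subprocess passes to ffmpeg as arguments, which likely contributes to the failure. Spawning one ffmpeg process per feed gives each its own pipe:

```python
# Sketch: one ffmpeg process per output, so video and audio arrive on
# separate pipes instead of being interleaved on a shared stdout.
import subprocess
import numpy as np

def open_streams(url):
    """Spawn one ffmpeg per feed; each gets its own stdout pipe."""
    video = subprocess.Popen(
        ['ffmpeg', '-i', url, '-f', 'rawvideo', '-pix_fmt', 'rgb24', '-'],
        stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
    audio = subprocess.Popen(
        ['ffmpeg', '-i', url, '-f', 's16le', '-ac', '2', '-ar', '44100',
         '-acodec', 'pcm_s16le', '-'],
        stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
    return video, audio

def read_frame(video_proc, width=640, height=360):
    """Read exactly one RGB24 frame; returns None when the stream ends."""
    n = width * height * 3
    raw = video_proc.stdout.read(n)
    if len(raw) < n:
        return None
    return np.frombuffer(raw, dtype=np.uint8).reshape((height, width, 3))

# Usage (requires ffmpeg on PATH):
# video_proc, audio_proc = open_streams(URL)
# frame = read_frame(video_proc)       # shape (360, 640, 3) or None
```

Reading video and audio in lockstep from two pipes can deadlock if one pipe's buffer fills; draining each pipe from its own thread is a common follow-up refinement.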
-
Converting Video to mp3 node.js
18 February 2017, by Adhit Lokesh
I'm trying the code below. Can anyone tell me what's wrong with it?
var ffmpeg = require('fluent-ffmpeg');
var proc = new ffmpeg({
    source: 'C:/Users/Public/Videos/Sample Videos/Wildlife.wmv',
    nolog: true
});
proc.setFfmpegPath("C:\\nodejs\\ffmpeg")
    .toFormat('mp3')
    .on('end', function() {
        console.log('file has been converted successfully');
    })
    .on('error', function(err) {
        console.log('an error happened: ' + err.message);
    })
    // save to file <-- the new file I want -->
    .saveToFile('C:/Users/Public/Videos/Sample Videos/Wildlife.mp3');
C:\nodejs>node test.js
an error happened: spawn C:\nodejs\ffmpeg ENOENT
-
Has anyone used ffmpeg with an iOS Swift app? [on hold]
18 February 2017, by Varun Raj
I'm trying to build an app that can convert recorded videos from MP4 to WebM for better size and quality. I wanted to use FFmpeg and VP9 for it, but I'm not sure whether it's technically possible.
I'm using Swift 3 and Xcode 8, with AVCapture for recording the video; I want the recorded video converted to WebM and then uploaded to an S3 bucket.
For now I'm just stuck on the conversion part.
-
Compile FFmpeg with libfdk_aac
18 February 2017, by Toydor
I've been reading about how to convert MP3 to M4A, and found that I must compile FFmpeg if I want to use the AAC encoder, libfdk_aac.
But the FFmpeg guide on how to compile FFmpeg with libfdk_aac makes no sense to a beginner like me.
To use libfdk_aac the encoding guide says:
Requires ffmpeg to be configured with --enable-libfdk_aac --enable-nonfree.
Where do I put those flags?
Do I put them here somewhere?:
cd ~/ffmpeg_sources
git clone --depth 1 git://github.com/mstorsjo/fdk-aac.git
cd fdk-aac
autoreconf -fiv
./configure --prefix="$HOME/ffmpeg_build" --disable-shared
make
make install
make distclean
Or maybe here somewhere?
cd ~/ffmpeg_sources
git clone --depth 1 git://source.ffmpeg.org/ffmpeg
cd ffmpeg
PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig"
export PKG_CONFIG_PATH
./configure --prefix="$HOME/ffmpeg_build" \
  --extra-cflags="-I$HOME/ffmpeg_build/include" --extra-ldflags="-L$HOME/ffmpeg_build/lib" \
  --bindir="$HOME/bin" --extra-libs="-ldl" --enable-gpl --enable-libass --enable-libfdk-aac \
  --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx \
  --enable-libx264 --enable-nonfree --enable-x11grab
make
make install
make distclean
hash -r
If I'm reading the compile guide right, I guess these two chunks of code are what I need to compile FFmpeg.
I'm using Ubuntu Server 12.04.
UPDATE
After upgrading my system to Ubuntu 16.04 I had to install ffmpeg again. I still needed libfdk-aac. Fortunately there's a good step-by-step guide at http://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu on how to compile ffmpeg.
I thought I'd share how to compile it if you're only interested in building ffmpeg with libfdk-aac and libmp3lame.
If you don't already have a bin directory in your home directory:
mkdir ~/bin
Install the dependencies. I didn't need the non-server packages:
sudo apt-get update
sudo apt-get -y install autoconf automake build-essential libass-dev libfreetype6-dev \
  libtheora-dev libtool libvorbis-dev pkg-config texinfo zlib1g-dev
Then install the encoders. I had to install yasm as well; otherwise I got errors when compiling.
sudo apt-get install libfdk-aac-dev
sudo apt-get install libmp3lame-dev
sudo apt-get install yasm
Then compile ffmpeg with the needed flags:
cd ~/ffmpeg_sources
wget http://ffmpeg.org/releases/ffmpeg-snapshot.tar.bz2
tar xjvf ffmpeg-snapshot.tar.bz2
cd ffmpeg
PATH="$HOME/bin:$PATH" PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure \
  --prefix="$HOME/ffmpeg_build" \
  --pkg-config-flags="--static" \
  --extra-cflags="-I$HOME/ffmpeg_build/include" \
  --extra-ldflags="-L$HOME/ffmpeg_build/lib" \
  --bindir="$HOME/bin" \
  --enable-libass \
  --enable-libfdk-aac \
  --enable-libfreetype \
  --enable-libtheora \
  --enable-libvorbis \
  --enable-libmp3lame \
  --enable-nonfree \
  --enable-gpl
PATH="$HOME/bin:$PATH" make
make install
make distclean
hash -r