Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
Cannot execute ffmpeg outside its directory
5 December 2013, by aswin
I am trying to combine two video (mp4) files into one file to view them side by side. To achieve this, I use QProcess to execute a static ffmpeg.exe, like the following line of code:
QProcess::execute("ffmpeg.exe -i D:\\1.mp4 -vf \"[in] scale=iw/2:ih/2, pad=2*iw:ih [left]; movie=D\\\\:/2.mp4, scale=iw/3:ih/3 [right]; [left][right] overlay=main_w/2:0 [out]\" output.mp4");
The code above works successfully, but I want to put the ffmpeg file into a subdirectory of my application. So I create a new folder, name it FFmpeg, and change the code as follows:
QProcess::execute("FFmpeg\\ffmpeg.exe -i D:\\1.mp4 -vf \"[in] scale=iw/2:ih/2, pad=2*iw:ih [left]; movie=D\\\\:/2.mp4, scale=iw/3:ih/3 [right]; [left][right] overlay=main_w/2:0 [out]\" output.mp4");
The latter line of code doesn't seem to work; the console output says "Unable to load libfaac.dll".
I don't know what's wrong, because there was no libfaac.dll in my first setup either, where I put the ffmpeg file in the same folder as my Qt application. I've tried downloading libfaac.dll and putting it inside the FFmpeg folder, but the DLL doesn't seem to be compatible with my ffmpeg.
How can I solve this problem?
-
Linking problems using CMake
5 December 2013, by Zoellick
I'm writing an application containing two internal libraries and depending on two external libraries (ffmpeg and opencv). I'm also using CMake to produce UNIX makefiles. The problem is that when I try to build the sources, everything compiles but doesn't link with ffmpeg at all, and the linker gives the following output:
../../Decoder/libDecoder.a(ConverterAVFrameToRGB.cpp.o): In function `FaceVideo::ConverterAVFrameToRGB::to_rgb_conversion(std::vector >&, int, int, int)':
ConverterAVFrameToRGB.cpp:(.text+0x990): undefined reference to `av_frame_free'
../../Decoder/libDecoder.a(FfmpegDecoder.cpp.o): In function `FaceVideo::FfmpegDecoder::destroy()':
FfmpegDecoder.cpp:(.text+0xa30): undefined reference to `av_frame_free'
../../Decoder/libDecoder.a(FfmpegDecoder.cpp.o): In function `FaceVideo::FfmpegDecoder::decode_next_chunk(int)':
FfmpegDecoder.cpp:(.text+0xb6b): undefined reference to `av_frame_clone'
FfmpegDecoder.cpp:(.text+0xc13): undefined reference to `av_frame_free'
../../Decoder/libDecoder.a(FfmpegEncoder.cpp.o): In function `FaceVideo::FfmpegEncoder::destroy()':
FfmpegEncoder.cpp:(.text+0x132): undefined reference to `avcodec_free_frame'
../../Decoder/libDecoder.a(FfmpegEncoder.cpp.o): In function `FaceVideo::FfmpegEncoder::encode()':
FfmpegEncoder.cpp:(.text+0x4c4): undefined reference to `avcodec_encode_video2'
FfmpegEncoder.cpp:(.text+0x592): undefined reference to `avcodec_encode_video2'
../../Decoder/libDecoder.a(FrameSaver.cpp.o): In function `FaceVideo::FrameSaver::saver(std::vector >&, int, int, int)':
FrameSaver.cpp:(.text+0x869): undefined reference to `av_frame_free'
collect2: ld returned 1 exit status
That's exactly what I don't want to see.
There are three CMake files: two for the internal libraries, which use
add_library(Decoder ${SOURCES_DECODER}) and add_library(Detector ${SOURCES_DETECTOR})
and one for the main executable, which uses
add_executable(Tool ${SOURCES_TOOL}) and target_link_libraries(Tool Decoder avutil avcodec swscale avformat Detector ${OpenCV_LIBS})
As far as I understand from the CMake manuals and examples, this should make the linker link these libraries together, but it has no effect.
I've tried a lot of things, such as:
1) Adding link_directories() with the path to the libraries (/usr/lib/x86_64-linux-gnu/ for me) wherever possible; nothing changed.
2) Linking every library separately; that is, I tried something like this in my internal libraries' CMake files: target_link_libraries(Decoder avutil avcodec swscale avformat), and then linking the libraries together in my Tool CMake file: target_link_libraries(Tool Decoder Detector).
3) Editing the output makefiles.
4) Compiling a simple one-file application just to test whether I can do it or not. I can: g++ -lavcodec -o out mysource.cpp works perfectly.
5) Compiling ffmpeg manually and installing it.
The fact is I really don't know what I should do. I have no idea, and I would very much appreciate every answer.
Output when CMAKE_VERBOSE_MAKEFILE is set
! /usr/bin/c++ -march=x86-64 -Wall -fPIC -pthread -std=c++0x -D__STDC_CONSTANT_MACROS -march=x86-64 -fPIC CMakeFiles/FaceDetectorTool.dir/home/anton/Programming/facevideo/branches/Stream_Prototype/src/tools/FaceDetectorTool/facedetector.cpp.o -o FaceDetectorTool -rdynamic ../../Detector/libDetector.a ../../Decoder/libDecoder.a -lavutil -lavcodec -lswscale -lavformat /usr/local/lib/libopencv_videostab.so.2.4.7 /usr/local/lib/libopencv_video.so.2.4.7 /usr/local/lib/libopencv_ts.a /usr/local/lib/libopencv_superres.so.2.4.7 /usr/local/lib/libopencv_stitching.so.2.4.7 /usr/local/lib/libopencv_photo.so.2.4.7 /usr/local/lib/libopencv_ocl.so.2.4.7 /usr/local/lib/libopencv_objdetect.so.2.4.7 /usr/local/lib/libopencv_nonfree.so.2.4.7 /usr/local/lib/libopencv_ml.so.2.4.7 /usr/local/lib/libopencv_legacy.so.2.4.7 /usr/local/lib/libopencv_imgproc.so.2.4.7 /usr/local/lib/libopencv_highgui.so.2.4.7 /usr/local/lib/libopencv_gpu.so.2.4.7 /usr/local/lib/libopencv_flann.so.2.4.7 /usr/local/lib/libopencv_features2d.so.2.4.7 /usr/local/lib/libopencv_core.so.2.4.7 /usr/local/lib/libopencv_contrib.so.2.4.7 /usr/local/lib/libopencv_calib3d.so.2.4.7 -ldl -lm -lpthread -lrt /usr/local/lib/libopencv_nonfree.so.2.4.7 /usr/local/lib/libopencv_ocl.so.2.4.7 /usr/local/lib/libopencv_gpu.so.2.4.7 /usr/local/lib/libopencv_photo.so.2.4.7 /usr/local/lib/libopencv_objdetect.so.2.4.7 /usr/local/lib/libopencv_legacy.so.2.4.7 /usr/local/lib/libopencv_video.so.2.4.7 /usr/local/lib/libopencv_ml.so.2.4.7 /usr/local/lib/libopencv_calib3d.so.2.4.7 /usr/local/lib/libopencv_features2d.so.2.4.7 /usr/local/lib/libopencv_highgui.so.2.4.7 /usr/local/lib/libopencv_imgproc.so.2.4.7 /usr/local/lib/libopencv_flann.so.2.4.7 /usr/local/lib/libopencv_core.so.2.4.7 -Wl,-rpath,/usr/local/lib
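Since undefined symbols such as av_frame_free and av_frame_clone only exist in newer FFmpeg releases, a link line like the one above may be resolving -lavcodec against an older system copy (e.g. a distribution's libav) rather than the build that provided the headers. One way to pin the link to a specific FFmpeg installation is pkg-config. A sketch only (target names mirror the question; paths and module names are assumptions to adapt):

```cmake
# Resolve the ffmpeg libraries through pkg-config so the link step uses
# the same ffmpeg build that provided the headers. If ffmpeg was installed
# manually, PKG_CONFIG_PATH may need to include /usr/local/lib/pkgconfig.
find_package(PkgConfig REQUIRED)
pkg_check_modules(FFMPEG REQUIRED libavformat libavcodec libswscale libavutil)

include_directories(${FFMPEG_INCLUDE_DIRS})
link_directories(${FFMPEG_LIBRARY_DIRS})

add_executable(Tool ${SOURCES_TOOL})
# With static archives, libraries must follow the objects that use them:
# Decoder/Detector first, then the ffmpeg and OpenCV libraries.
target_link_libraries(Tool Decoder Detector ${FFMPEG_LIBRARIES} ${OpenCV_LIBS})
```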
-
avcodec_decode_audio4 decoded data different on device and simulator
5 December 2013, by Jona
I'm having issues where the decoded audio sounds punchy/garbled; I simply can't tell what is being heard when it is decoded on an iPhone/iPad device. However, when decoding the audio on the simulator, the audio is perfect.
I do realize the simulator runs on a different CPU architecture (i386) while the actual device is ARMv7, so the issue could be something around that.
I have printed the compressed data on the device and the simulator before it is decoded, and it matches. Then I printed the decoded data for the device and the simulator, and they are different. That leads me to think something is happening in the decoder. I can see that it is not a byte-swap issue, since the data is just completely different.
Any ideas on what I could try or look into? I don't have this issue on Android...
First decoded audio frame.
DEVICE:
0x00, 0x00, 0xff, 0xff, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x70, 0x08, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0xda, 0x0a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
EMULATOR:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
UPDATE 1
I was able to resolve the issue by disabling asm with --disable-asm. Something must not be working correctly with the ARM asm files.
-
How do I make this render callback only provide a specific channel ?
4 December 2013, by awfulcode
I'm using the wonderful kxmovie as the base for an app I'm writing. Instead of using only the remote I/O unit, I added a mixer. The idea is to have each audio channel from a video file connected to its own bus on the mixer.
I have two questions for you.
Is there a way of calling the render callback only once and yet feeding each bus only one channel?
If I need separate callbacks for different buses, how can I change the original code so it renders only a specific channel? Maybe by passing the inOutputBusNumber value to the render callback?
Here's the code for the original render callback.
As always, thank you so much.
P.S.: Does anyone have any idea why it's using _outData+iChannel in the FFT operation?
- (BOOL) renderFrames: (UInt32) numFrames
               ioData: (AudioBufferList *) ioData
{
    for (int iBuffer = 0; iBuffer < ioData->mNumberBuffers; ++iBuffer) {
        memset(ioData->mBuffers[iBuffer].mData, 0, ioData->mBuffers[iBuffer].mDataByteSize);
    }

    if (_playing && _outputBlock) {
        // Collect data to render from the callbacks
        _outputBlock(_outData, numFrames, _numOutputChannels);

        // Put the rendered data into the output buffer
        if (_numBytesPerSample == 4) // then we've already got floats
        {
            float zero = 0.0;
            for (int iBuffer = 0; iBuffer < ioData->mNumberBuffers; ++iBuffer) {
                int thisNumChannels = ioData->mBuffers[iBuffer].mNumberChannels;
                for (int iChannel = 0; iChannel < thisNumChannels; ++iChannel) {
                    vDSP_vsadd(_outData + iChannel, _numOutputChannels, &zero,
                               (float *)ioData->mBuffers[iBuffer].mData,
                               thisNumChannels, numFrames);
                }
            }
        }
        else if (_numBytesPerSample == 2) // then we need to convert SInt16 -> Float (and also scale)
        {
            float scale = (float)INT16_MAX;
            vDSP_vsmul(_outData, 1, &scale, _outData, 1, numFrames * _numOutputChannels);

            for (int iBuffer = 0; iBuffer < ioData->mNumberBuffers; ++iBuffer) {
                int thisNumChannels = ioData->mBuffers[iBuffer].mNumberChannels;
                for (int iChannel = 0; iChannel < thisNumChannels; ++iChannel) {
                    vDSP_vfix16(_outData + iChannel, _numOutputChannels,
                                (SInt16 *)ioData->mBuffers[iBuffer].mData + iChannel,
                                thisNumChannels, numFrames);
                }
            }
        }
    }

    return noErr;
}
-
Extracting all frames of a video using ffmpeg
4 December 2013, by Gonzalo Solera
I'm trying to extract all the frames of a video using ffmpeg compiled statically for Android. I want to extract all the frames at a lower quality (in jpg format) and then select the specific frames I want at a higher resolution and extract them in png format. But the problem is that I need to know exactly at what time the selected frame is placed, in order to be able to extract the same frame at a higher resolution. When I calculate how much time there is between frames (duration_of_the_video / total_frames_extracted) and multiply it by the number of the frame I want to extract again, the resulting time isn't the exact time of the frame.
How could I extract a specific number of frames from a video using ffmpeg? I'm trying to extract all the frames, but sometimes not all of them are extracted. For example, I have a 1600 ms long video, but when I use this command:
ffmpeg -i file.mp4 -y %d.jpg
it doesn't extract all the frames: it extracts only 45 frames, even though the frame rate is 30 fps (1600 / 45 = 35.5555 while 1000 / 30 = 33.3333).
So, in order to be able to calculate at what time the frame I want is placed, I would need to extract ALL the frames of the video, or extract a fixed number of frames (it doesn't matter if some frames are repeated, as long as I can get their times).
This is the output when I try to extract all the frames (there should be 48, but there are 45 instead, so I can't calculate the exact time...)
I'm not sure if I have explained it correctly, but I will appreciate any help. Thanks!