Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
Reading images from AXIS ip camera using OpenCV
14 June 2016, by batuman
I am trying to access an AXIS IP camera from my program using OpenCV. My OpenCV version is 3.1. I followed this tutorial link.
I have all the libraries installed. The following program loads an mp4 video successfully, which means FFmpeg and the necessary libraries are working fine.
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace std;

int main() {
    cv::VideoCapture vcap("test.mp4");
    cv::Mat image;
    // const string address = "rtsp://root:pass@192.168.0.90/axis-media/media.amp?camera=1";
    // if (!vcap.open(address)) {
    //     std::cout << "Error opening video stream or file" << std::endl;
    //     return -1;
    // }
    for (;;) {
        if (!vcap.read(image)) {
            std::cout << "No frame" << std::endl;
            cv::waitKey(0);
        }
        cv::imshow("Display", image);
        cv::waitKey(1);
    }
    return 0;
}

When I tried to access the IP camera as follows:
cv::VideoCapture vcap("rtsp://root:pass@192.168.0.90/axis-media/media.amp?camera=1");
I get the following error:
GStreamer Plugin: Embedded video playback halted; module source reported: Could not open resource for reading and writing.
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline) in cvCaptureFromCAM_GStreamer, file /home/Softwares/opencv/opencv/modules/videoio/src/cap_gstreamer.cpp, line 818
terminate called after throwing an instance of 'cv::Exception'
  what(): /home/Softwares/opencv/opencv/modules/videoio/src/cap_gstreamer.cpp:818: error: (-2) GStreamer: unable to start pipeline in function cvCaptureFromCAM_GStreamer
The user (root), the password (pass), and the IP address (192.168.0.90) are all defaults.
My ifconfig gave me:
eth0      Link encap:Ethernet  HWaddr b8:2a:72:c6:b8:13
          inet6 addr: fe80::ba2a:72ff:fec6:b813/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:723959 errors:0 dropped:0 overruns:0 frame:0
          TX packets:116637 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:199422245 (199.4 MB)  TX bytes:13701699 (13.7 MB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:24829 errors:0 dropped:0 overruns:0 frame:0
          TX packets:24829 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2502903 (2.5 MB)  TX bytes:2502903 (2.5 MB)

wlan0     Link encap:Ethernet  HWaddr a0:a8:cd:99:92:60
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
What could be the problem with accessing the camera?
Thanks
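One workaround worth checking (a sketch, not a confirmed fix): the error shows OpenCV handing the RTSP URL to its GStreamer backend. If the OpenCV build also has FFmpeg support, requesting the FFmpeg backend explicitly may bypass the failing GStreamer pipeline; the apiPreference argument used below is only available in later OpenCV 3.x releases, so this is an assumption about the installed version.

#include <iostream>
#include <opencv2/opencv.hpp>

int main() {
    // Request the FFmpeg backend explicitly so the URL is not handed to
    // GStreamer (assumes an OpenCV build with FFmpeg support and a version
    // whose VideoCapture accepts an apiPreference argument).
    cv::VideoCapture vcap("rtsp://root:pass@192.168.0.90/axis-media/media.amp?camera=1",
                          cv::CAP_FFMPEG);
    if (!vcap.isOpened()) {
        std::cout << "Error opening video stream" << std::endl;
        return -1;
    }
    cv::Mat image;
    for (;;) {
        if (!vcap.read(image)) break;     // stop on stream end or error
        cv::imshow("Display", image);
        if (cv::waitKey(1) == 27) break;  // Esc quits
    }
    return 0;
}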
-
Is there a Mac OS X equivalent of Windows' named pipe
14 June 2016, by Mike Versteeg
On Windows I use named pipes to interface with a console program (ffmpeg). I thought I'd have a try at rewriting this for the Mac, but I cannot find named-pipe support in either FireMonkey or native OS X code. Does it exist at all?
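For reference, OS X does support POSIX named pipes (FIFOs) at the BSD layer via mkfifo(), even if FireMonkey does not wrap them. A minimal C++ sketch (the /tmp path, permissions, and payload are illustrative assumptions):

#include <iostream>
#include <fstream>
#include <cerrno>
#include <cstring>
#include <sys/stat.h>   // mkfifo()

int main() {
    const char* path = "/tmp/ffmpeg_pipe";  // illustrative path
    // Create the FIFO; EEXIST only means it is already there.
    if (mkfifo(path, 0666) != 0 && errno != EEXIST) {
        std::cerr << "mkfifo failed: " << std::strerror(errno) << std::endl;
        return 1;
    }
    // Opening for write blocks until a reader opens the other end,
    // e.g. ffmpeg -i /tmp/ffmpeg_pipe ...
    std::ofstream pipe(path, std::ios::binary);
    pipe << "data fed to the console program\n";
    return 0;
}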
-
Possible to output FFmpeg progress via Node?
14 June 2016, by ErraticFox
I am building a desktop app with Node.js and Electron. The app is client-side only (there are no servers) and uses FFmpeg. I've seen there's a way to display the progress via PHP, but I was wondering whether I can do this via Node.
I found this, but I can't seem to get it to output progress. The code below is what I am using with that Node module, and the console only logs [] after the conversion is complete.
var inputDir = uploadFile.files[0].path
var spawn = require('child_process').spawn
var concat = require('concat-stream')  // this require was missing from the original snippet
var ffmpegBin = require('ffmpeg-static'),
    progressStream = require('ffmpeg-progress-stream')
var results

var params = [
    '-y',
    '-i', inputDir,
    'C:\\Users\\ErraticFox\\Desktop\\output.mp3'
]

var ffmpeg = spawn(ffmpegBin.path, params)

ffmpeg.stderr
    .pipe(progressStream(120))
    .pipe(concat(function (data) {
        results = data
        console.log(data)
    }))
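If ffmpeg-progress-stream keeps producing an empty array, a fallback sketch is to parse ffmpeg's stderr directly: during a conversion ffmpeg writes stats lines containing time=HH:MM:SS.xx, which can be turned into a progress value (the input/output paths here are placeholders):

var spawn = require('child_process').spawn
var ffmpegBin = require('ffmpeg-static')

var ffmpeg = spawn(ffmpegBin.path, ['-y', '-i', 'input.mp4', 'output.mp3'])

ffmpeg.stderr.on('data', function (chunk) {
    // ffmpeg reports progress on stderr, e.g. "... time=00:01:23.45 bitrate=..."
    var match = /time=(\d+):(\d+):(\d+\.\d+)/.exec(chunk.toString())
    if (match) {
        var seconds = (+match[1]) * 3600 + (+match[2]) * 60 + (+match[3])
        console.log('processed ' + seconds.toFixed(2) + 's')
    }
})

ffmpeg.on('close', function (code) {
    console.log('ffmpeg exited with code ' + code)
})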
-
JavaCV FFmpegFrameRecorder Video output reddish color
14 June 2016, by Diego Perozo
I am trying to make an .mp4 video file out of a group of images using FFmpegFrameRecorder as part of a bigger program, so I set up a test project in which I try to make a video out of 100 instances of the same frame at 25 fps. The program seems to work; however, every time I run it the image comes out reddish, as if a red filter had been applied to it. Here's the code snippet:
public static void main(String[] args) {
    File file = new File("C:/Users/Diego/Desktop/tc-images/image0.jpg");
    BufferedImage img = null;
    try {
        img = ImageIO.read(file);
    } catch (IOException e1) {
        e1.printStackTrace();
    }
    IplImage image = IplImage.createFrom(img);
    FFmpegFrameRecorder recorder =
        new FFmpegFrameRecorder("C:/Users/Diego/Desktop/tc-images/test.mp4", 1920, 1080);
    try {
        recorder.setVideoCodec(13);
        recorder.setFormat("mp4");
        recorder.setPixelFormat(0);
        recorder.setFrameRate(25);
        recorder.start();
        for (int i = 0; i < 100; i++) {
            recorder.record(image);
        }
        recorder.stop();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I'd appreciate it if anybody could tell me what's wrong. Thanks in advance for any help.
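A common cause of this kind of reddish tint is a red/blue channel swap: the BufferedImage holds RGB data while the recorder's pixel format expects BGR. A hedged sketch of a conversion using the same old-style JavaCV bindings as the snippet above (the exact package layout depends on the JavaCV version, so treat the imports as assumptions):

import static org.bytedeco.javacpp.opencv_core.*;
import static org.bytedeco.javacpp.opencv_imgproc.*;

public class ChannelSwapFix {
    // Swap red and blue channels before handing the frame to the recorder.
    public static IplImage toBgr(IplImage rgb) {
        IplImage bgr = IplImage.create(rgb.width(), rgb.height(),
                                       rgb.depth(), rgb.nChannels());
        cvCvtColor(rgb, bgr, CV_RGB2BGR);
        return bgr;
    }
}

In the loop, recording toBgr(image) instead of image would then write frames with the expected channel order.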
-
FFmpeg extracts a different number of frames when using -filter_complex together with the split filter
14 June 2016, by Konstantin
I am fiddling with ffmpeg, extracting jpg pictures from videos. I split the input stream into two output streams with -filter_complex, because I process my videos from a direct HTTP link (free space on the VPS is scarce) and I don't want to read through the whole video twice (the traffic quota is also scarce). Furthermore, I need two series of pictures: one that gets some filters applied (fps change, scale, unsharp, crop, scale), from which I then select by eye, and the other untouched (except for the fps change and cropping the black borders), used for further processing after selecting from the first series. I call my ffmpeg command from a Ruby script, so it contains some string interpolation / substitution in the form #{}. My working command line looked like:
ffmpeg -y -fflags +genpts -loglevel verbose -i #{url} -filter_complex "[0:v]fps=fps=#{new_fps.round(5).to_s},split=2[in1][in2];[in1]crop=iw-#{crop[0]+crop[2]}:ih-#{crop[1]+crop[3]}:#{crop[0]}:#{crop[1]},scale=#{thumb_width}:-1:flags=lanczos,unsharp,lutyuv=y=gammaval(#{gammaval})[out1];[in2]crop=iw-#{crop[0]+crop[2]}:ih-#{crop[1]+crop[3]}:#{crop[0]}:#{crop[1]}[out2]" -f #{format} -c copy #{options} -map_chapters -1 - -map '[out1]' -f image2 -q 1 %06d.jpg -map '[out2]' -f image2 -q 1 big_%06d.jpg
#{options} is set when the output is MP4; its value is then "-movflags frag_keyframe+empty_moov", so I can send the video to standard output without seeking capability and upload the stream somewhere without creating huge temporary video files. So I get two series of pictures, one of them filtered and sharpened, the other in fact untouched. I also get the video as an output stream on standard output, which is handled by Open3.popen3, connecting ffmpeg's output stream to the input of two other commands.
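For context, a minimal Ruby sketch of that kind of piping, under the assumption that url is already defined and with "uploader --stdin" standing in as a placeholder for the real downstream command:

require 'open3'

# Stream ffmpeg's fragmented-MP4 stdout straight into another command
# without writing a temporary file.
Open3.popen3("ffmpeg -i #{url} -movflags frag_keyframe+empty_moov -f mp4 -") do |ff_in, ff_out, ff_err, ff_thread|
  Open3.popen3("uploader --stdin") do |up_in, up_out, up_err, up_thread|
    IO.copy_stream(ff_out, up_in)  # copy the video stream between the processes
    up_in.close
    up_thread.join
  end
  ff_thread.join
end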
The problem arises when I want to seek to a given point in the video and omit the streamed video output on STDOUT. I try to apply combined seeking: a fast seek before the given time code, then a slow seek to the exact time code, given in floating-point seconds:
ffmpeg -report -y -fflags +genpts -loglevel verbose -ss #{(seek_to-seek_before).to_s} -i #{url} -ss #{seek_before.to_s} -t #{t_duration.to_s} -filter_complex "[0:v]fps=fps=#{pics_per_sec},split=2[in1][in2];[in1]crop=iw-#{crop[0]+crop[2]}:ih-#{crop[1]+crop[3]}:#{crop[0]}:#{crop[1]},scale=#{thumb_width}:-1:flags=lanczos,unsharp,lutyuv=y=gammaval(#{gammaval})[out1];[in2]crop=iw-#{crop[0]+crop[2]}:ih-#{crop[1]+crop[3]}:#{crop[0]}:#{crop[1]}[out2]" -map '[out1]' -f image2 -q 1 %06d.jpg -map '[out2]' -f image2 -q 1 big_%06d.jpg
Running this command I get the two series of pictures I need, but they contain different numbers of images: 233 vs. 484.
Actual values can be read from this interpolated / substituted command line:
ffmpeg -report -y -fflags +genpts -loglevel verbose -ss 1619.0443599999999 -i fabf.avi -ss 50.0 -t 46.505879999999934 -filter_complex "[0:v]fps=fps=5,split=2[in1][in2];[in1]crop=iw-0:ih-0:0:0,scale=280:-1:flags=lanczos,unsharp,lutyuv=y=gammaval(0.526316)[out1];[in2]crop=iw-0:ih-0:0:0[out2]" -map '[out1]' -f image2 -q 1 %06d.jpg -map '[out2]' -f image2 -q 1 big_%06d.jpg
A detailed log can be found here: http://www.filefactory.com/file/1yih17k2hrmp/ffmpeg-20160610-223820.txt. Before the last line it shows 188 duplicated frames.
I also tried passing the "-vsync 0" option, but it didn't help. When I generate the two series of images in two consecutive steps, with two different command lines, no problem arises; I get the same number of pictures in both series, of course. So my question is: how can I use the latter command line, generating the two series of images with only one reading / parsing of the remote video file?
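One variant that might be worth trying (an untested sketch, not a confirmed fix): requesting variable-frame-rate handling with -vsync vfr instead of -vsync 0 (passthrough), so the image2 muxers drop the duplicated frames instead of writing them. Whether the option is honoured globally or per output depends on the ffmpeg version:

ffmpeg -report -y -fflags +genpts -loglevel verbose -ss 1619.0443599999999 -i fabf.avi -ss 50.0 -t 46.505879999999934 -vsync vfr -filter_complex "[0:v]fps=fps=5,split=2[in1][in2];[in1]crop=iw-0:ih-0:0:0,scale=280:-1:flags=lanczos,unsharp,lutyuv=y=gammaval(0.526316)[out1];[in2]crop=iw-0:ih-0:0:0[out2]" -map '[out1]' -f image2 -q 1 %06d.jpg -map '[out2]' -f image2 -q 1 big_%06d.jpg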