My company distributes a number of IP cameras, specifically Grandstream, and the manufacturer has changed their firmware. The normal keepalive that ffmpeg uses for RTSP streams (either ff_rtsp_send_cmd_async(s, "GET_PARAMETER", rt->control_uri, NULL); or ff_rtsp_send_cmd_async(s, "OPTIONS", "*", NULL); both in libavformat/rtspdec.c) no longer works, for two reasons:
1) The new Grandstream firmware now checks for a receiver report to determine whether the program reading the stream is actually live, rather than accepting just any request.
2) The new Grandstream firmware requires (...)
I would like to add text to a video file with FFmpeg. While I was able to do this with plain text:
$FFMPEG -y -i $SOURCE \
  -vf drawtext="fontfile=/usr/share/fonts/Lato-Reg-webfont.ttf:fontsize=40:box=1:boxcolor=black:fontcolor=white:text='$WATERMARK':x=(main_w-text_w)-10:y=(main_h-text_h)-4" \
  -threads $THREADS -f mp4 -vcodec mpeg4 -b $MOBILE_BITRATE -r $MOBILE_FRAME_RATE -strict -2 \
  -s $RESOLUTION_SD -acodec libfaac -ar $MOBILE_AUDIO_RATE -ac $MOBILE_AUDIO_CHANNELS -ab $MOBILE_AUDIO_BITRATE \
  $VIDEONAME_MOBILE-android.mp4
this doesn't look good enough. So I tried with (...)
Thanks for taking a look at my question.
To mux an AVI video stream, I have been using the DirectShow AVI Mux filter. But the DirectShow AVI Mux filter only works with media/image files. How can I add text information to an AVI file and satisfy the stream-mux (audio + video + text) requirement?
I run ffmpeg on Windows.
I try to run:

ffmpeg -i input.avi -filter:v frei0r=pixeliz0r=0.02:0.02 output.avi

and I get this error:

No such filter: 'frei0r'
Error opening filters!
When I run ffmpeg.exe I get:
ffmpeg version git-N-30610-g1929807, Copyright (c) 2000-2011 the FFmpeg developers
built on Jun 7 2011 15:55:06 with gcc 4.5.3
configuration: --enable-gpl --enable-version3 --enable-memalign-hack --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame (...)
I'm new to FFmpeg, so this may be a dumb question, but I don't see the answer in the documentation.
I want to decode frames from DVD VOB files. Opening the first VOB in the group works fine, but how do I tell ffmpeg to continue on to the next VOB and read all the VOBs on a DVD?
I have the VOB files in a folder on a hard disk.
I need to play some video files from a Cisco DMP, and I need to use mpeg2video for video and mp2 for audio.
I'm using ffmpeg -i to verify the video format.
This video plays correctly:

Input #0, mpeg, from 'ATT_Telepresence_Scheduling.mpg':
  Duration: 00:07:14.08, start: 0.522456, bitrate: 474 kb/s
    Stream #0:0[0x1e0]: Video: mpeg2video (Main), yuv420p, 600x340 [SAR 1:1 DAR 30:17], 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc
    Stream #0:1[0x1c0]: Audio: mp2, 44100 Hz, stereo, s16p, 128 kb/s

This video does not play (black screen):

Input #0, mpegts, from 'Telepresence_part2.ts':
  Duration: (...)
I would like to add forward error correction based on a per-RTP-packet parity scheme to a GStreamer flow. How can I do that?
Currently I need to convert MP4 video to WebM and Ogg. To convert MP4 to WebM I have used ffmpeg.exe. I am running the following code:

[DllImport("User32.dll")]
public static extern bool SetForegroundWindow(IntPtr hWnd);

public void mciConvertWavMP3(string fileName, bool waitFlag)
{
    string savepath = Server.MapPath(fileName);
    string destpath = Server.MapPath(fileName);
    string pworkingDir = Server.MapPath("~/ffmpeg/");
    // string outfile = "-b:a 16 --resample 24 -m j " + savepath + " " + savepath.Replace(".wav", ".mp3") + ""; //--- lame code
    // string outfile = "-b 192k -i " + savepath + " " (...)
I know how to pipe the ffmpeg rawvideo output into my program to perform some baseband processing, but how can we do that and also pass the timestamp of each frame to the program?
For example, if I write:

ffmpeg -i input stream.ts -f rawvideo -an - | myprog -w 320 -h 240 -f 24.0 >> output.yuv
I just receive each frame sequentially, and I have to assume that their time interval is constant at 1/24 seconds.
To provide the real timestamps when using a video file, I can first use ffprobe to extract the timing info, as below, and provide the pts file to my program as an additional parameter. ffprobe (...)
I want to convert YUV420P image (received from H.264 stream) to RGB, while also resizing it, using sws_scale.
The size of the original image is 480 × 800. Just converting with same dimensions works fine.
But when I try to change the dimensions, I get a distorted image, with the following pattern:
- changing to 481 × 800 yields a distorted B&W image which looks like it's cut in the middle
- 482 × 800 is even more distorted
- 483 × 800 is distorted but in color
- 484 × 800 is OK (scaled correctly)
Now this pattern follows - scaling will only work fine if the difference between divides (...)
I can successfully add a watermark to an MP4 video, but the audio is missing in the new MP4. Please help me out. Also, is there a way to compress the file during this conversion? A 60 MB file becomes 110 MB while adding the watermark.

print exec('/usr/local/bin/ffmpeg -y -i video.mp4 -vf "movie=hi.png [watermark]; [in][watermark] overlay=10:main_h-overlay_h-10 [out]" video1.mp4');
I am trying to build a simple FFmpeg MPEG-2 video PES decoder on Android, using the decoding_encoding.c example from the FFmpeg documentation.
I am using FFMPEG version 2.0!
I initialize the ffmpeg system with the following code:

int VideoPlayer::_setupFFmpeg()
{
    int rc;
    AVCodec *codec;

    av_register_all();

    codec = avcodec_find_decoder(AV_CODEC_ID_MPEG2VIDEO);
    if (NULL == codec) {
        LOGE("_setupFFmpeg. No codec!");
        return -1;
    }
    LOGI("found: %p. name: %s", codec, codec->name);

    _video_dec_ctx = avcodec_alloc_context3(codec);
    if (NULL == _video_dec_ctx) {
        LOGE("_setupFFmpeg. Could not allocate codec context (...)
I have a bunch of images (to be more specific, 1024x768, 24 bpp RGB PNG files) that I want to encode into a video file.
And I need to use the libavcodec library, not the ffmpeg tool. (I know they share the same origin; I am emphasizing this because someone may answer "use the ffmpeg tool", but that's not the solution I am looking for.)
I am using the H.264 encoder.

Target: a high-quality video at the same resolution (1024 x 768), YUV420P, where each image has a duration of 1 second, at 24 fps.

Problems: I've tried many different (...)
So I have been trying to use the FFmpeg APIs in my C project. I first looked at some examples on the FFmpeg website and have been trying to compile one of them. I'm getting:

Undefined symbols for architecture x86_64:
  "_av_frame_alloc", referenced from:
      _audio_decode in ccd4jpMz.o
ld: symbol(s) not found for architecture x86_64
collect2: ld returned 1 exit status
I looked at the documentation on the FFmpeg website, and it says av_frame_alloc is part of libavutil, but when I linked libavutil I got the same error. I tried linking all the libraries, and I tried different orders of (...)