I am running a script to tail a log file, as per the code snippet below. I am running into a problem where the line passed into $line is missing a number of bytes from the beginning when several lines are written to the log file at nearly the same time.
I can check the file afterwards and see that the offending line is complete in the file, so why is it incomplete in the script? Some kind of buffering issue, perhaps?
The processing can sometimes take several seconds to complete; would that make a difference?

```
#!/bin/bash
tail -F /var/log/mylog.log | while read line
do
    log "$line"
    (...)
done
```
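For what it's worth, a plain `read` both trims leading whitespace and interprets backslash escapes, which can make lines look shorter than they are in the file. A more defensive version of the loop might look like the sketch below; `log` here is a hypothetical stand-in for the author's own function:

```shell
#!/bin/bash
# Hypothetical stand-in for the author's log function.
log() { printf 'got: %s\n' "$1"; }

process_lines() {
    # IFS= stops read from trimming leading/trailing whitespace;
    # -r stops it from interpreting backslash escapes.
    while IFS= read -r line; do
        log "$line"
    done
}

# Usage (not run here): tail -F /var/log/mylog.log | process_lines
```

This does not rule out the writer emitting partial lines that are completed later: `read` returns as soon as it sees a newline, so a slow consumer plus a writer that flushes mid-line can still hand you a fragment.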
I get an error when receiving an RTSP stream from an Edimax IC-3030 IP camera, and I don't know what to do. Can anyone help me or point me toward a solution?

```
/home/prog12# ffplay "rtsp://192.168.1.7/ipcam_h264.sdp"
ffplay version 2.1.4 Copyright (c) 2003-2014 the FFmpeg developers
  built on Mar 22 2014 18:16:53 with gcc 4.8 (Ubuntu/Linaro 4.8.1-10ubuntu9)
  configuration: --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-libfaac --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-x11grab
(...)
```
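The error output is truncated above, but a common first thing to try with RTSP cameras is forcing TCP transport, since UDP packets from the camera are often dropped or blocked by firewalls. This is only a guess without seeing the full error:

```
ffplay -rtsp_transport tcp "rtsp://192.168.1.7/ipcam_h264.sdp"
```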
I have this simple command:

```
ffmpeg -i movie_file -f image2 ./%d.jpg
```
The result with JPG is really, really bad, and with PNG it is much better. The problem is that the PNG is 197KB and the JPG is 7.1KB.
Is there any way to compress the PNG more? What can be done about this?
See below: PNG on the right, JPG on the left. (...)
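For reference, ffmpeg's MJPEG encoder defaults to a fairly aggressive quantizer; setting it explicitly with -q:v (2 is close to best quality, 31 is worst) usually closes most of the gap to PNG:

```
ffmpeg -i movie_file -q:v 2 -f image2 ./%d.jpg
```

File sizes grow accordingly; the 7.1KB JPEGs suggest the default quantizer was compressing heavily.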
I am cutting segments out of a long MP4 file and then rejoining some of them. However, since FFMPEG apparently keeps the original file's MOOV atom in the trimmed files, the trimmed videos all look identical to FFMPEG, and it therefore uses only the first segment when trying to join them. Is there a way around this? Unfortunately, since FFMPEG is embedded in an Android app, I can only use version 0.11. Edit:
This is a sample of the process:

```
ffmpeg -i /sdcard/path/movie.mp4 -ss 00:00:06.000 -t 00:00:05.270 -c:a aac -c:v (...)
```
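One workaround that works on old ffmpeg builds is to remux each trimmed clip to MPEG-TS (which drops the MP4 MOOV atom entirely) and then join the TS files with the concat: protocol. A sketch with hypothetical file names:

```
ffmpeg -i seg1.mp4 -c copy -bsf:v h264_mp4toannexb seg1.ts
ffmpeg -i seg2.mp4 -c copy -bsf:v h264_mp4toannexb seg2.ts
ffmpeg -i "concat:seg1.ts|seg2.ts" -c copy -bsf:a aac_adtstoasc joined.mp4
```

Whether the -bsf option is spelled exactly this way in 0.11 would need checking against that version's docs; older builds used the -vbsf/-absf forms instead.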
I want to ask for help converting an MPG video file recorded from TV to an MP4 format my SAT receiver can play. Below are the format details of the source and the target, as output by ffmpeg -i. All of this is done on Linux.
Source:

```
Stream #0:1[0x1e0]: Video: mpeg2video (Main), yuv420p(tv), 720x576 [SAR 64:45 DAR 16:9], max. 9500 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0:2[0x83]: Audio: ac3, 48000 Hz, stereo, fltp, 448 kb/s
```

Target:

```
Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv), 720x406 [SAR 1:1 DAR 360:203], 1894 kb/s, 25 fps, 25 tbr, 25k tbn, 50 tbc (default)
Stream (...)
```
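A starting point matching the target listing above might be the following; the video bitrate and scale are read off the target stream info, and the audio codec is an assumption since the target's audio line is cut off:

```
ffmpeg -i recording.mpg -map 0:1 -map 0:2 \
       -c:v libx264 -profile:v main -b:v 1894k -vf scale=720:406 \
       -c:a aac -ac 2 output.mp4
```

Whether the receiver accepts the result also depends on its supported H.264 profiles and levels, so -profile:v and -level may need adjusting.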
I have a folder with these permissions: 775, owner: user, group: user.
I'm trying to convert files in the above folder to MP4 using ffmpeg and write the converted files back into the same folder.
ffmpeg writes as www-data, so every time it gives me a permission denied error.
I tried adding www-data to group user, and the folder has group write access (775), but I am still getting the error. The error goes away when I chmod the folder to 777.
Is there a way to give www-data write access to the above folder without changing current (...)
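Two things worth checking: a running service only picks up new group membership after it is restarted, and if that still fails, a per-user ACL grants www-data access without opening the folder to everyone. A sketch with a placeholder path:

```
# apply to existing files and directories
sudo setfacl -R -m u:www-data:rwX /path/to/folder
# default ACL so files created later inherit the permission
sudo setfacl -R -d -m u:www-data:rwX /path/to/folder
```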
I'm having trouble stripping the video stream from a video when converting to Ogg and WMA. I'm using the same arguments (apart from the codec) as when converting to MP3, and in that case it works as expected.
Example:

```
$ ffmpeg -y -i input -vn -ar 44100 -ac 2 -ab 192k -acodec libvorbis -threads 2 output
ffmpeg version 1.2.1 Copyright (c) 2000-2013 the FFmpeg developers
  built on May 13 2013 14:06:15 with gcc 4.0.1 (GCC) (Apple Inc. build 5493)
  configuration: --prefix=/Volumes/Ramdisk/sw --enable-gpl --enable-pthreads --enable-version3 --enable-libspeex --enable-libvpx
(...)
```
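One thing worth trying: if the output name above has no recognized extension, ffmpeg cannot guess a muxer, so forcing the container explicitly may be all that is missing (and likewise -f asf with -acodec wmav2 for WMA). This is only a guess without seeing the full error output:

```
ffmpeg -y -i input -vn -ar 44100 -ac 2 -ab 192k -acodec libvorbis -f ogg output.ogg
```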
I use an ffmpeg build with h264 and javacv on Android to stream video from the camera to an RTMP server. I have tried every possible video framerate and bitrate and set the preset to ultrafast, but I still have a stable 5-second delay. If I use the Android MediaRecorder, sending an MPEG-TS stream over RTMP to the server, I get only a 2-second delay, and if I use the -fflags nobuffer option on the client (ffmpeg), the video appears immediately.
I don't know how to reduce this latency in ffmpeg on Android. Here is the code:

```
recorder = new FFmpegFrameRecorder(ffmpeg_link, imageWidth, imageHeight, 1);
(...)
```
I use FFMPEG (command-line input) to convert my videos to a specific output format. The problem I am facing is that when I try to pass a constant bitrate (700 kbps) to FFMPEG, the result is an output video with a different bitrate (say 1000 kbps). This happens invariably for all videos. Why is this happening? I need to maintain a constant bitrate. Can anyone help me out?
My FFMPEG version is 0.5.
The command-line parameters I am passing to FFMPEG are:

```
-i inputfile -b 700k -ab 64k -vcodec libx264 -acodec libfaac -ac 2 -ar 44100 -y -s 320x240 outputfile
```
I was able to force (...)
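With libx264, -b alone only sets an average target, so the encoder is free to drift; pinning the minimum and maximum rates with a buffer size is the usual way to approximate constant bitrate. Whether all of these options exist in a build as old as 0.5 is an open question:

```
ffmpeg -i inputfile -vcodec libx264 -b 700k -minrate 700k -maxrate 700k -bufsize 1400k \
       -acodec libfaac -ab 64k -ac 2 -ar 44100 -s 320x240 -y outputfile
```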
I tried more than five open-source libraries, and I had no success building any of them on Windows.
This is my first time using the NDK, and I am wondering why I can't simply find the .so file and use it instead of compiling from scratch.
Please guide me with your knowledge; I am kind of stuck here. Thank you.
I am trying to export all audio channels from a multichannel QuickTime file with ffmpeg. The file has the audio configuration below, but I am unsure whether the command further down is correct. All the output files look and play correctly in QuickTime Player except L+R_Total.wav, which refuses to play in QuickTime Player but plays fine in VLC, so I am not sure if I am using the wrong command:

Track 1 - mono
Track 2 - mono
Track 3 - mono
Track 4 - mono
Track 5 - mono
Track 6 - mono
Track 7 - stereo
I am using:

```
/Users/me/Desktop/python/ffmpeg/ffmpeg -i /Users/me/Desktop/test.mov -acodec pcm_s24le -map 0:1 -y (...)
```
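For comparison, mapping each audio track to its own output in one invocation might look like the following; stream specifiers of the form 0:a:N count audio streams from zero, so track 7 (the stereo one) would be 0:a:6. Output file names here are placeholders:

```
ffmpeg -i /Users/me/Desktop/test.mov \
  -map 0:a:0 -acodec pcm_s24le track1.wav \
  -map 0:a:6 -acodec pcm_s24le L+R_Total.wav
```

If QuickTime Player still rejects the stereo WAV, it may be worth checking whether track 7 carries an unusual channel layout tag that survives into the WAV header.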
I am able to display the video from my webcam, or any other integrated device, in a PictureBox. I am also able to save the video to an AVI file using the FFMPEG DLL files. I want to do both things simultaneously, i.e. save the video to the AVI file and display the live feed at the same time. This is for a surveillance project where I want to monitor the live feed and record it too.

```
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using AForge.Video;
using (...)
```
Trying to execute the following ffmpeg command on Ubuntu:

```
ffmpeg -i "rtmp://IP/live/1234 live=1" -f flv rtmp://IP/live/1234_56
```

```
ffmpeg version 2.2.git Copyright (c) 2000-2014 the FFmpeg developers
  built on Apr 8 2014 13:15:21 with gcc 4.6 (Ubuntu/Linaro 4.6.3-1ubuntu5)
  configuration: --prefix=/home/encoder/ffmpeg_build --extra-cflags=-I/home/encoder/ffmpeg_build/include --extra-ldflags=-L/home/encoder/ffmpeg_build/lib --bindir=/home/encoder/bin --extra-libs=-ldl --enable-gpl --enable-libass --enable-libfdk-aac --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis
(...)
```
I'm trying to create a stream from different videos that get uploaded to the server every hour.
The problem is that I can't concatenate the whole bunch of files together to get a nice, stable stream, because the playlist gets updated with new content quite frequently. Also, I can't afford to restart FFmpeg, because then the stream would be terminated for the clients.
The only way I have thought of to overcome this is to create a continuous stream that I can change: a file that can be written and read at the same time. So far I have only managed to create loads of read errors when concatenating (...)
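One approach that avoids restarting FFmpeg is to feed it a named pipe and push MPEG-TS files into the pipe as they arrive; TS is byte-concatenatable, so the single long-running process never sees the input file change underneath it. A rough sketch with placeholder paths:

```
mkfifo /tmp/streampipe
ffmpeg -re -f mpegts -i /tmp/streampipe -c copy -f flv rtmp://server/live/stream &
# as each new upload is transcoded to .ts, append it to the pipe
for f in queue/*.ts; do
    cat "$f" > /tmp/streampipe
done
```

Timestamps across files still need to be continuous (or the output protocol tolerant of resets), so each upload would likely need remuxing to TS with adjusted timestamps first.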