22:20
Using PHP to read an RTSP stream from an ffmpeg command that uses image2pipe, I'm finding the images are truncated in some manner and only display partially. My task is to write unique JPG filenames at 30 fps. Below is the PHP code; I've simplified the filename structure in this example for clarity. The script performs as expected with no obvious errors, consistently writing out 30 fps. I can't figure out what extraneous or missing data in the piped content is causing the output images to appear corrupt.
$cmd = "ffmpeg -rtsp_transport tcp -framerate 30 -i (...)
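A common cause of partial JPEGs from image2pipe is splitting the pipe output on fixed-size reads instead of on JPEG frame boundaries. As a sketch of one way to reassemble frames (independent of the PHP code above, whose command is elided here), scan the byte stream for the SOI/EOI markers and only emit a frame once both have arrived:

```python
def split_jpeg_frames(chunks):
    """Yield complete JPEG frames from an iterable of byte chunks.

    Frames are delimited by the SOI (FF D8) and EOI (FF D9) markers,
    so a frame is only emitted once it has fully arrived. This assumes
    plain MJPEG output without embedded thumbnails (which would contain
    their own nested SOI/EOI pair)."""
    buf = bytearray()
    for chunk in chunks:
        buf += chunk
        while True:
            soi = buf.find(b"\xff\xd8")
            if soi < 0:
                break
            eoi = buf.find(b"\xff\xd9", soi + 2)
            if eoi < 0:
                break  # frame not complete yet; wait for more data
            yield bytes(buf[soi:eoi + 2])
            del buf[:eoi + 2]
```

Reads from the pipe can then be fed in as they arrive, regardless of how the OS happens to chunk them.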
22:45
I have two videos (A.MOV and B.MOV) that are supposed to be perfectly synced. However, Video A (3708 frames) has five more frames than Video B (3703 frames). When I look at the timestamps of these frames, the extra A frames are at the very end of the video. In addition, the last frame in each video is a "padded" frame, with a duration around 10 times longer than that of the other frames.
I need my videos to be perfectly synced, with the same frame count. Since the extra frames are at the end of the video, I want to trim off the extra frames in A, as well as trim the "padded" (...)
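One way to equalize the two files, sketched here as a Python command builder: keep min(3708, 3703) − 1 = 3702 frames from each, which drops A's five extra frames and the padded last frame of both. The choice of re-encode codec is an assumption; a stream copy cannot cut at arbitrary frame counts.

```python
def trim_cmd(src, dst, keep_frames):
    # Stop writing after keep_frames decoded video frames. Re-encoding
    # (libx264 here is an assumed choice) is needed because -c copy can
    # only cut on keyframe boundaries, not at an exact frame count.
    return ["ffmpeg", "-i", src,
            "-frames:v", str(keep_frames),
            "-c:v", "libx264", dst]

keep = min(3708, 3703) - 1  # 3702: drops A's 5 extras and both padded last frames
```

Running `trim_cmd("A.MOV", "A_trim.MOV", keep)` and the same for B.MOV should leave both files at 3702 frames.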
22:49
Here is my problem: I have one 1080p video source on the frontend. From the frontend, I send this video route to the backend:
const req = async () => {
    try {
        const res = await axios.get('/catalog/item', { params: { SeriesName: seriesName } });
        return { data: res.data };
    } catch (err) {
        console.log(err);
        return false;
    }
};
const fetchedData = await req();
On the backend I return seriesName. Now I can build a full path: what the video is, and where it is. Code:
const videoUrl = 'C:/Users/arMori/Desktop/RedditClone/reddit/public/videos';
console.log('IT VideoURL', videoUrl);
const (...)
19:23
I am facing issues with adding text over a video as a watermark using pbmedia/laravel-ffmpeg.
Where am I going wrong?
Code:
$format = new X264();
$format->setAudioCodec('aac');
$format->setVideoCodec('libx264');
$format->setKiloBitrate(0);
$localPath = '/' . $this->video->id . '.mp4';
$ffmpeg = FFMpeg::fromDisk("public")
->open($localPath)
->addFilter(function ($filters)
(...)
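For reference, the ffmpeg filter that ultimately draws text on video is drawtext. A minimal raw invocation, sketched here as a Python command builder, can help verify the filter string itself before wiring it through the Laravel wrapper; the font path is an assumption for your system.

```python
def watermark_cmd(src, dst, text, fontfile="/usr/share/fonts/DejaVuSans.ttf"):
    # drawtext places the text 10px in from the bottom-right corner
    # (x=w-tw-10, y=h-th-10); audio is stream-copied unchanged.
    # fontfile is a placeholder path; point it at a real font.
    draw = (f"drawtext=fontfile={fontfile}:text='{text}':"
            "fontcolor=white:fontsize=24:x=w-tw-10:y=h-th-10")
    return ["ffmpeg", "-i", src, "-vf", draw, "-c:a", "copy", dst]
```

If this command produces the expected watermark on the command line, the remaining problem is confined to how the filter is passed through laravel-ffmpeg.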
19:03
I want to record video as well as audio from my webcam using ffmpeg.
I used the following command to find out which devices are available:
ffmpeg -list_devices true -f dshow -i dummy
And got the result:
ffmpeg version N-54082-g96b33dd Copyright (c) 2000-2013 the FFmpeg developers
built on Jun 17 2013 02:05:16 with gcc 4.7.3 (GCC)
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray (...)
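Once the device names are known from the listing above, video and audio can be captured together by naming both in a single dshow input. A sketch as a Python command builder; the device names below are placeholders, so substitute the exact strings from your own listing:

```python
def dshow_record_cmd(video_dev, audio_dev, out):
    # Both devices go into one -i argument, joined by a colon. The names
    # must match the -list_devices output exactly, including spaces.
    return ["ffmpeg", "-f", "dshow",
            "-i", f"video={video_dev}:audio={audio_dev}", out]
```

For example, `dshow_record_cmd("USB2.0 Camera", "Microphone (Realtek)", "out.mp4")` builds the argument list to record both streams into one file.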
11:57
I am trying to send a video stream encoded with h264 (hardware accelerated with nvidia encoder) via WebRTC for low latency display on a browser.
More precisely, I have a thread that encodes an OpenGL framebuffer at a fixed frame rate; the resulting AVPacket's data (I encode using ffmpeg's C API) is then forwarded via WebRTC to the client (using aiortc).
The problem is that I observe significant delays, that seem to depend on the frame rate I use.
For example, running it locally, I get around 160ms delay when running at 30fps, and around 30ms when encoding at (...)
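A delay that depends on the frame rate suggests the encoder is buffering a fixed number of frames (B-frames and rate-control lookahead), so the wall-clock latency scales as buffered-frames divided by fps. For libx264 the usual mitigation is the zerolatency tune; sketched here as an options dict (exact flags are encoder-specific, and h264_nvenc uses its own low-latency presets whose names vary by ffmpeg version):

```python
def low_latency_x264_opts():
    # tune=zerolatency disables B-frames and rate-control lookahead in
    # libx264, so packets come out as soon as each frame is encoded
    # instead of sitting in a multi-frame queue.
    return {"preset": "ultrafast", "tune": "zerolatency"}
```

With these options the encoder-side queuing delay should become roughly constant rather than proportional to 1/fps.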
07:58
I have seen lots of examples on how to get the language codes using the command line interface, but how do you get them using the libraries?
07:12
I have a 360 video captured with an Insta360 X3, which is in the INSV format. I would like to extract a frame from this video using Python, FFmpeg, or any other suitable tool, without using Insta360 Studio to export the video to MP4 first.
Here is what I have tried so far:
FFmpeg: I attempted to use FFmpeg to directly convert the INSV file to images, but I encountered errors, possibly due to the proprietary nature of the INSV format.
ffmpeg -i input.insv -vf "select=eq(n,0)" -q:v 3 output.jpg
This command did not work as expected and produced an error.
Python (...)
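If ffmpeg's demuxer can open the file at all, a simpler frame grab than the select filter is to stop after one decoded frame; sketched below as a Python command builder. Whether the proprietary INSV container parses at all is the open question, so this only removes the filter expression as a possible source of error.

```python
def first_frame_cmd(src, dst):
    # -frames:v 1 stops after a single decoded video frame, which is
    # simpler and faster than filtering with select=eq(n,0).
    return ["ffmpeg", "-i", src, "-frames:v", "1", "-q:v", "3", dst]
```

If `first_frame_cmd("input.insv", "output.jpg")` still errors, the failure is in demuxing the INSV container itself, not in the frame-selection step.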
08:01
I am trying to convert an m3u8 file to WebM using ffmpeg and stream it to the browser.
I want it to play in the browser without using any JavaScript.
Stream video method:
public void streamVideo(OutputStream os) {
    String url = "m3u8file.m3u8";
    byte[] bytes = new byte[BUFFER];
    int bytesRead = -1;
    ProcessBuilder pb = new ProcessBuilder(
        "ffmpeg",
        "-i", url,
        "-c:v", "libvpx-vp9",
        "-b:v", "1M",
        "-c:a", "libopus",
        "-b:a", "128k",
        "-f", "webm",
        "pipe:1"
    );
    pb.redirectErrorStream(true);
    try (...)
05:38
I am currently building a video scrollytelling effect. I'm using GSAP, ScrollTrigger, and SvelteKit to play a video on scroll. At the beginning of the implementation, the video was very laggy on scroll across all browsers and devices.
So I did some research, found out that video encoding matters a lot, and found this ffmpeg command on CodePen: ffmpeg -i originalVideo.mp4 -movflags faststart -vcodec libx264 -crf 23 -g 1 -pix_fmt yuv420p output.mp4
When I use the video exported by the above command, it works smoothly on Windows, macOS, and iOS. But I'm still having (...)
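The part of that command doing the heavy lifting for scrubbing is -g 1: it sets the GOP size to one, making every frame a keyframe so the browser can seek to any point without decoding from a distant keyframe, at the cost of a much larger file. A sketch of the same encode as a Python command builder:

```python
def scrub_friendly_cmd(src, dst):
    # -g 1 makes every frame a keyframe (instant seeking on scroll);
    # -movflags faststart moves the index to the front for web playback;
    # yuv420p keeps the video decodable by all major browsers.
    return ["ffmpeg", "-i", src,
            "-movflags", "faststart",
            "-c:v", "libx264", "-crf", "23",
            "-g", "1", "-pix_fmt", "yuv420p", dst]
```

Raising -crf or lowering the resolution are the usual levers if the all-keyframe file ends up too large for the remaining problem devices.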
03:57
I can successfully obtain the RTSP stream and use the av_dump_format function to print the stream information, but I cannot retrieve the time_base. Why is that?
FFmpeg version: n4.0.6
This is my code:
int main(int argc, char *argv[])
{
    struct timeval systemstart_time;
    struct timeval rtspfetch_time1;
    struct timeval rtspfetch_time2;
    AVFormatContext *pFormatCtx = NULL;
    AVPacket *av_packet = NULL;
    AVDictionary *options = NULL;

    gettimeofday(&systemstart_time, NULL);
    print_time_with_microseconds(systemstart_time);
(...)
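For reference, a stream's time_base is just a rational number (num/den seconds per timestamp tick), and a timestamp in seconds is pts × num ⁄ den. A small illustration in Python with hypothetical values (RTSP video commonly uses a 1/90000 time base):

```python
from fractions import Fraction

def pts_to_seconds(pts, num, den):
    # time_base is the duration of one timestamp tick as num/den seconds,
    # so a pts of 90000 with a 1/90000 time base is exactly 1.0 second.
    return float(pts * Fraction(num, den))
```

Computing this with exact rationals (Fraction) rather than floats avoids accumulating rounding error over long streams.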