Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
FFMPEG demuxer seek error in Chrome range slider with AWS S3 audio file
4 April, by Tania Rascia
I'm encountering an issue where, if you click or drag a range slider in Chrome enough times, it will eventually stop working and give this error:
(network error): PIPELINE_ERROR_READ: FFmpegDemuxer: demuxer seek failed
If I google this error, I only find the Chrome source code:
Line 1749 of Chrome source code
First, this issue only happens in Chrome, not Firefox. Second, I can only get it to happen with an encrypted file from AWS, which I can't get into a sandbox, so the sandbox I made will never encounter that error with the random audio file I used.
The only difference I can find between this code and the failing code is the source of the audio file (AWS S3).
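For context on why the source could matter: as far as I understand it, Chrome's FFmpegDemuxer seeks by issuing HTTP byte-range requests against the media URL, so a server that stops honouring Range headers, or a presigned S3 URL that expires mid-session, can make seeks fail even though initial playback worked. A quick way to check what the bucket is actually serving (the URL below is a placeholder, not the real file):

curl -sI -H "Range: bytes=0-1" "https://your-bucket.s3.amazonaws.com/audio.mp3"
# A seekable source should answer with:
#   HTTP/1.1 206 Partial Content
#   Accept-Ranges: bytes
#   Content-Range: bytes 0-1/<total length>
# A plain 200, a missing Accept-Ranges header, a 403 from an expired presigned
# URL, or CORS that hides Content-Range would all be consistent with a
# demuxer seek failure.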
-
ffmpeg decoding mixing frames
4 April, by Paulo Morgado
I'm using FFmpeg.AutoGen in a .NET 8.0 application to decode H264 and JPEG.
Every usage has its own instance (there are no shared objects), and no single object is used simultaneously.
However, they can be used from different threads.
But I'm getting decoded results with mixed data from all sources. It happens mostly between sources with the same origin codec, but not exclusively.
I have a base class that looks like this:
public abstract unsafe class Decoder : IDisposable
{
    protected readonly object sync = new();
    protected AVCodecContext* codecContext;
    protected AVFrame* frame;
    protected AVPacket* packet;
    protected SwsContext* swsContext;
    private bool disposed = false;

    protected Decoder(
        AVCodecID codecId,
        ChannelWriter<(ArraySegment<byte> decodedData, int width, int height)> writer)
    {
        Writer = writer;
        var codec = ffmpeg.avcodec_find_decoder(codecId);
        codecContext = ffmpeg.avcodec_alloc_context3(codec);
        ffmpeg.avcodec_open2(codecContext, codec, null);
        frame = ffmpeg.av_frame_alloc();
        packet = ffmpeg.av_packet_alloc();
    }

    ...
}

This is the H264 decoding code:
public override void Decode(ReadOnlySpan<byte> data)
{
    CheckDisposed();
    lock (sync)
    {
        fixed (byte* pData = data)
        {
            ffmpeg.av_packet_unref(packet);
            packet->data = pData;
            packet->size = data.Length;
            var result = ffmpeg.avcodec_send_packet(codecContext, packet);
            if (result < 0 && result != ffmpeg.AVERROR(ffmpeg.EAGAIN))
            {
                EvalResult(result);
            }

            while (true)
            {
                ffmpeg.av_frame_unref(frame);
                result = ffmpeg.avcodec_receive_frame(codecContext, frame);
                if (result == ffmpeg.AVERROR(ffmpeg.EAGAIN))
                {
                    // Need more input data, continue sending packets
                    return;
                }
                else if (result < 0)
                {
                    EvalResult(result);
                }

                var width = frame->width;
                var height = frame->height;

                if (swsContext == null)
                {
                    swsContext = ffmpeg.sws_getContext(
                        width, height, (AVPixelFormat)frame->format,
                        width, height, AVPixelFormat.AV_PIX_FMT_BGRA,
                        ffmpeg.SWS_BILINEAR, null, null, null);
                }

                var bgraFrame = ffmpeg.av_frame_alloc();
                bgraFrame->format = (int)AVPixelFormat.AV_PIX_FMT_BGRA;
                bgraFrame->width = width;
                bgraFrame->height = height;
                ffmpeg.av_frame_get_buffer(bgraFrame, 32);

                ffmpeg.sws_scale(
                    swsContext,
                    frame->data, frame->linesize, 0, height,
                    bgraFrame->data, bgraFrame->linesize);

                var bgraSize = width * height * 4;
                var bgraArray = ArrayPool<byte>.Shared.Rent(bgraSize);
                var bgraSegment = new ArraySegment<byte>(bgraArray, 0, bgraSize);

                fixed (byte* pBGRAData = bgraSegment.Array)
                {
                    var data4 = new byte_ptr4();
                    data4.UpdateFrom(bgraFrame->data.ToArray());
                    var linesize4 = new int4();
                    linesize4.UpdateFrom(bgraFrame->linesize.ToArray());
                    ffmpeg.av_image_copy_to_buffer(
                        pBGRAData, bgraSize,
                        data4, linesize4,
                        (AVPixelFormat)bgraFrame->format, width, height, 1);
                }

                ffmpeg.av_frame_free(&bgraFrame);
                Writer.TryWrite((bgraSegment, width, height));
            }
        }
    }
}

And this is the JPEG decoding code:
public override void Decode(ReadOnlySpan<byte> data)
{
    CheckDisposed();
    lock (sync)
    {
        fixed (byte* pData = data)
        {
            ffmpeg.av_packet_unref(packet);
            packet->data = pData;
            packet->size = data.Length;
            var result = ffmpeg.avcodec_send_packet(codecContext, packet);
            if (result < 0 && result != ffmpeg.AVERROR(ffmpeg.EAGAIN))
            {
                EvalResult(result);
            }

            while (true)
            {
                ffmpeg.av_frame_unref(frame);
                result = ffmpeg.avcodec_receive_frame(codecContext, frame);
                if (result == ffmpeg.AVERROR(ffmpeg.EAGAIN))
                {
                    // Need more input data, continue sending packets
                    return;
                }
                else if (result < 0)
                {
                    EvalResult(result);
                }

                var width = frame->width;
                var height = frame->height;

                if (swsContext == null)
                {
                    swsContext = ffmpeg.sws_getContext(
                        width, height, (AVPixelFormat)frame->format,
                        width, height, AVPixelFormat.AV_PIX_FMT_BGRA,
                        ffmpeg.SWS_BILINEAR, null, null, null);
                }

                var bgraFrame = ffmpeg.av_frame_alloc();
                bgraFrame->format = (int)AVPixelFormat.AV_PIX_FMT_BGRA;
                bgraFrame->width = frame->width;
                bgraFrame->height = frame->height;
                ffmpeg.av_frame_get_buffer(bgraFrame, 32);

                ffmpeg.sws_scale(
                    swsContext,
                    frame->data, frame->linesize, 0, frame->height,
                    bgraFrame->data, bgraFrame->linesize);

                int bgraDataSize = bgraFrame->linesize[0] * bgraFrame->height;
                byte[] bgraData = ArrayPool<byte>.Shared.Rent(bgraDataSize);
                var bgraSegment = new ArraySegment<byte>(bgraData, 0, bgraDataSize);
                Marshal.Copy((IntPtr)bgraFrame->data[0], bgraData, 0, bgraDataSize);
                ffmpeg.av_frame_free(&bgraFrame);

                Writer.TryWrite((bgraSegment, width, height));
            }
        }
    }
}

What am I doing wrong here?
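One hedged observation, since the FFmpeg objects really are per-instance here: the only state these decoders share is ArrayPool<byte>.Shared. If the consumer on the other end of the ChannelWriter returns a rented array twice, or returns it while the segment is still queued or being read, the pool can hand the same backing array to two decoders at once, and they will overwrite each other's pixels; that would look exactly like frames mixing across sources. (As far as I can tell, avcodec_send_packet() copies the payload of a non-reference-counted packet, so pointing packet->data at the pinned span is probably not the culprit.) A quick diagnostic sketch to test the pool theory, shown for the H264 path but applying equally to the JPEG one:

// Diagnostic only: bypass the shared pool so no backing array can be in two
// hands at once. If the frame mixing disappears, the bug is in the
// rent/return lifecycle on the consumer side, not in the FFmpeg calls.
var bgraArray = new byte[bgraSize];  // instead of ArrayPool<byte>.Shared.Rent(bgraSize)
var bgraSegment = new ArraySegment<byte>(bgraArray, 0, bgraSize);
// ...and make sure the consumer no longer calls ArrayPool<byte>.Shared.Return()
// for these plain arrays.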
-
OpenCV FFMPEG RTSP Camera Feed Errors
4 April, by trn2020
I'm getting these errors at random times when saving frames from an RTSP camera feed. The errors happen at different times, usually after 100-200 images have been saved, and the errors themselves are not always exactly the same. They cause the images saved at the time of the error to be distorted, either to the point of being completely grey or containing corrupted pixels.
#Frame_142 - [hevc @ 0c3bf800] The cu_qp_delta 29 is outside the valid range [-26, 25].
#Frame_406 - [hevc @ 0b6bdb80] Could not find ref with POC 41
I've tried implementing the code in both Python and C++ with the same result, and also tried saving as .png instead of .jpg. The RTSP feed works fine when using imshow to display the camera; the problem only appears when trying to save the frames. From what I can gather the errors have to do with FFmpeg, but Google isn't much help for these types of errors.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <thread>
#include <chrono>

using namespace std;
using namespace cv;

int main()
{
    VideoCapture cap("rtsp://admin:admin@192.168.88.97/media/video1");
    if (!cap.isOpened())
        return -1;

    for (int i = 0; i < 500; i++)
    {
        Mat frame;
        cap >> frame;
        imwrite("C:\\Users\\Documents\\Dev\\c++\\OpenCVExample\\frames\\frame" + std::to_string(i) + ".png", frame);
        cout << i << "\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    return 0;
}
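Those two HEVC messages (an out-of-range cu_qp_delta and a missing reference POC) are the classic signature of packets being lost before they reach the decoder, which fits the symptoms here: imwrite() on large PNGs can stall the capture loop long enough for RTSP-over-UDP to drop datagrams, corrupting exactly the frames saved around that moment. A hedged workaround worth trying is forcing TCP transport through OpenCV's FFmpeg backend, via the documented OPENCV_FFMPEG_CAPTURE_OPTIONS variable ("key;value" pairs separated by "|"), set before the VideoCapture is constructed:

#include <cstdlib>
#include <opencv2/opencv.hpp>

int main()
{
    // Ask the FFmpeg backend for RTSP interleaved over TCP instead of UDP
    // (assumes the camera supports it). Use setenv() on POSIX systems.
    _putenv_s("OPENCV_FFMPEG_CAPTURE_OPTIONS", "rtsp_transport;tcp");
    cv::VideoCapture cap("rtsp://admin:admin@192.168.88.97/media/video1", cv::CAP_FFMPEG);
    // ... grab/imwrite loop as above ...
    return 0;
}

Decoupling capture from disk I/O (grab frames on one thread, write PNGs on another) should also help, since the write stalls are what let the receive buffer overflow in the first place.
-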
FFmpeg WASM Custom build: defining custom flags
4 April, by Ettur
I wish to create a custom ffmpeg.wasm build.
The official GUIDE shows four commands ("make dev", "make prd", etc.), so I cloned THE REPO and ran "make prd". It did build, but obviously this build used the default settings, whatever exactly they may be.
So pardon my stupidity: what, where, and how do I edit to set custom flags for what I want included in or excluded from the build?
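A hedged pointer rather than a definitive answer: the switches you're after are ultimately FFmpeg's own ./configure flags; the wasm build machinery essentially forwards a flag list to configure under Emscripten, so the job is to find where the repo's build scripts assemble that list and edit it there. The flags themselves follow the usual whitelist pattern (these are standard FFmpeg configure options, not ffmpeg.wasm-specific ones):

# Start from nothing, then whitelist exactly what you need.
./configure \
  --disable-everything \
  --enable-protocol=file \
  --enable-demuxer=mov \
  --enable-demuxer=matroska \
  --enable-decoder=h264 \
  --enable-decoder=aac \
  --enable-muxer=mp4 \
  --enable-encoder=aac

Running ./configure with --list-decoders, --list-encoders, --list-demuxers, or --list-muxers prints the valid names to plug in.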
-
What's FFmpeg doing with avcodec_send_packet()?
4 April, by Jim
I'm trying to optimise a piece of software for playing video, which internally uses the FFmpeg libraries for decoding. We've found that with some large (4K, 60fps) videos, it sometimes takes longer to decode a frame than that frame should be displayed for; sadly, because of the problem domain, simply buffering/skipping frames is not an option.
However, it appears that the FFmpeg executable is able to decode the video in question fine, at about 2x speed, so I've been trying to work out what we're doing wrong.
I've written a very stripped-back decoder program for testing; the source is here (it's about 200 lines). Profiling it shows that the one major bottleneck during decoding is the avcodec_send_packet() function, which can take up to 50ms per call. However, measuring the same call in FFmpeg shows strange behaviour:
[timing chart omitted: the times taken for each call to avcodec_send_packet(), in milliseconds, when decoding a 4K 25fps VP9-encoded video]
Basically, it seems that when FFmpeg uses this function, a call only takes any real amount of time to complete every N calls, where N is the number of threads being used for decoding. However, both my test decoder and the actual product use 4 threads for decoding, and this pattern doesn't appear; when using frame-based threading, the test decoder behaves like FFmpeg does with only 1 thread. This would seem to indicate that we're not using multithreading at all, yet we still see performance improvements when using more threads.
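For what it's worth, that every-Nth-call pattern is what frame threading is supposed to look like: avcodec_send_packet() mostly just hands the packet to a worker thread and returns, and only blocks once thread_count frames are in flight. So the timings suggest frame threading is active in the ffmpeg CLI but silently not engaging in the test decoder; libavcodec falls back to single-threaded decoding if the threading fields are set after avcodec_open2(), or if the decoder lacks AV_CODEC_CAP_FRAME_THREADS. A sketch of the setup order against the plain C API:

/* Threading must be configured before avcodec_open2(). */
AVCodecContext *ctx = avcodec_alloc_context3(codec);
ctx->thread_count = 0;                /* 0 = auto-detect, which is what the ffmpeg CLI uses */
ctx->thread_type  = FF_THREAD_FRAME;  /* honoured only if codec->capabilities
                                         includes AV_CODEC_CAP_FRAME_THREADS */
if (avcodec_open2(ctx, codec, NULL) < 0) {
    /* handle the error */
}

Note that frame threading buys throughput at the cost of roughly thread_count frames of latency between a send and the first receive, which is consistent with the clustering in the per-call timings.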
FFmpeg's results average out to being about twice as fast overall as our decoders, so clearly we're doing something wrong. I've been reading through FFmpeg's source to try to find any clues, but it's so far eluded me.
My question is: what's FFmpeg doing here that we're not? Alternatively, how can we increase the performance of our decoder?
Any help is greatly appreciated.