Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
How to fill an AVFrame structure in order to encode a YUY2 video (or UYVY) into H265
22 April, by Rich Deng

I want to compress a video stream in YUY2 or UYVY format to, say, H.265. If I understand the answers given in this thread correctly, I should be able to use the function av_image_fill_arrays() to fill the data and linesize arrays of an AVFrame object, call avcodec_send_frame(), and then avcodec_receive_packet() to get the encoded data:

bool VideoEncoder::Init(const AM_MEDIA_TYPE* pMediaType)
{
    // we should have a valid pointer
    if (pMediaType)
    {
        m_mtInput.Empty();
        m_mtInput.Set(*pMediaType);
    }
    else
        return false;

    // find encoder
    m_pCodec = m_spAVCodecDlls->avcodec_find_encoder(AV_CODEC_ID_HEVC);
    m_pCodecCtx = m_spAVCodecDlls->avcodec_alloc_context3(m_pCodec);
    if (!m_pCodec || !m_pCodecCtx)
    {
        Log.Log(_T("Failed to find or allocate codec context!"));
        return false;
    }

    AVPixelFormat ePixFmtInput = GetInputPixelFormat();
    if (CanConvertInputFormat(ePixFmtInput) == false)
    {
        return false;
    }

    // we are able to convert, so continue with setting it up
    int nWidth = m_mtInput.GetWidth();
    int nHeight = m_mtInput.GetHeight();

    // Set encoding parameters
    // Set bitrate (4 Mbps for 1920x1080)
    m_pCodecCtx->bit_rate = (((int64)4000000 * nWidth / 1920) * nHeight / 1080);
    m_pCodecCtx->width = nWidth;
    m_pCodecCtx->height = nHeight;

    // use reference time as time_base
    m_pCodecCtx->time_base.den = 10000000;
    m_pCodecCtx->time_base.num = 1;
    SetAVRational(m_pCodecCtx->framerate, m_mtInput.GetFrameRate());
    //m_pCodecCtx->framerate = (AVRational){ 30, 1 };
    m_pCodecCtx->gop_size = 10;    // GOP size
    m_pCodecCtx->max_b_frames = 1;

    // set pixel format
    m_pCodecCtx->pix_fmt = ePixFmtInput; // YUV 4:2:0 format or YUV 4:2:2

    // Open the codec
    if (m_spAVCodecDlls->avcodec_open2(m_pCodecCtx, m_pCodec, NULL) < 0)
    {
        return false;
    }

    return true;
}

bool VideoEncoder::AllocateFrame()
{
    m_pFrame = m_spAVCodecDlls->av_frame_alloc();
    if (m_pFrame == NULL)
    {
        Log.Log(_T("Failed to allocate frame object!"));
        return false;
    }

    m_pFrame->format = m_pCodecCtx->pix_fmt;
    m_pFrame->width = m_pCodecCtx->width;
    m_pFrame->height = m_pCodecCtx->height;
    m_pFrame->time_base.den = m_pCodecCtx->time_base.den;
    m_pFrame->time_base.num = m_pCodecCtx->time_base.num;

    return true;
}

bool VideoEncoder::Encode(IMediaSample* pSample)
{
    if (m_pFrame == NULL)
    {
        return false;
    }

    // get the time stamps
    REFERENCE_TIME rtStart, rtEnd;
    HRESULT hr = pSample->GetTime(&rtStart, &rtEnd);
    m_rtInputFrameStart = rtStart;
    m_rtInputFrameEnd = rtEnd;

    // get length
    int nLength = pSample->GetActualDataLength();

    // get pointer to actual sample data
    uint8_t* pData = NULL;
    hr = pSample->GetPointer(&pData);
    if (FAILED(hr) || NULL == pData)
        return false;

    m_pFrame->flags = (S_OK == pSample->IsSyncPoint()) ?
        (m_pFrame->flags | AV_FRAME_FLAG_KEY) : (m_pFrame->flags & ~AV_FRAME_FLAG_KEY);

    // clear old data
    for (int n = 0; n < AV_NUM_DATA_POINTERS; n++)
    {
        m_pFrame->data[n] = NULL;  // (uint8_t*)aryData[n];
        m_pFrame->linesize[n] = 0; // = aryStride[n];
    }

    int nRet = 0;
    int nStride = m_mtInput.GetStride();
    nRet = m_spAVCodecDlls->av_image_fill_arrays(m_pFrame->data, m_pFrame->linesize, pData, ePixFmt, m_pFrame->width, m_pFrame->height, 32);
    if (nRet < 0)
    {
        return false;
    }

    m_pFrame->pts = (int64_t)rtStart;
    m_pFrame->duration = rtEnd - rtStart;

    nRet = m_spAVCodecDlls->avcodec_send_frame(m_pCodecCtx, m_pFrame);
    if (nRet == AVERROR(EAGAIN))
    {
        ReceivePacket();
        nRet = m_spAVCodecDlls->avcodec_send_frame(m_pCodecCtx, m_pFrame);
    }
    if (nRet < 0)
    {
        return false;
    }

    // Receive the encoded packets
    ReceivePacket();

    return true;
}

bool VideoEncoder::ReceivePacket()
{
    bool bRet = true;
    AVPacket* pkt = m_spAVCodecDlls->av_packet_alloc();
    while (m_spAVCodecDlls->avcodec_receive_packet(m_pCodecCtx, pkt) == 0)
    {
        // Write pkt->data to output file or stream
        m_pCallback->VideoEncoderWriteEncodedSample(pkt);
        if (m_OutFile.IsOpen())
            m_OutFile.Write(pkt->data, pkt->size);
        m_spAVCodecDlls->av_packet_unref(pkt);
    }
    m_spAVCodecDlls->av_packet_free(&pkt);

    return bRet;
}
I must have done something wrong. The result is not correct. For example, rather than a video with a person's face showing in the middle of the screen, I get a mostly green screen with parts of the face showing up at the lower left and lower right corners.
Can someone help me?
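A symptom like this (content shifted into the corners, green filling the rest) usually points at a stride or pixel-format mismatch between what the encoder is told and what the buffer actually contains. Below is a minimal sketch of that idea, not a verified fix: it assumes the input really is packed YUY2 (AV_PIX_FMT_YUYV422) and reuses pData, nStride, nWidth and nHeight from the code above. The align argument of av_image_fill_arrays() describes the source buffer's row alignment, so forcing 32 can produce linesizes that do not match a DirectShow buffer; setting the single packed plane by hand sidesteps that, and sws_scale() converts to the planar yuv420p that most HEVC encoders expect (if you go this route, m_pCodecCtx->pix_fmt must be AV_PIX_FMT_YUV420P as well):

// Sketch only: describe the packed source with its true stride,
// then convert to planar yuv420p before sending to the encoder.
AVFrame* src = av_frame_alloc();
src->format = AV_PIX_FMT_YUYV422;   // packed Y0 U Y1 V, one plane
src->width  = nWidth;
src->height = nHeight;
src->data[0]     = pData;           // the sample buffer itself
src->linesize[0] = nStride;         // the buffer's actual stride, not FFALIGN(2*w, 32)

SwsContext* sws = sws_getContext(nWidth, nHeight, AV_PIX_FMT_YUYV422,
                                 nWidth, nHeight, AV_PIX_FMT_YUV420P,
                                 SWS_BILINEAR, NULL, NULL, NULL);

AVFrame* dst = av_frame_alloc();
dst->format = AV_PIX_FMT_YUV420P;
dst->width  = nWidth;
dst->height = nHeight;
av_frame_get_buffer(dst, 0);        // let FFmpeg choose a suitable alignment

sws_scale(sws, src->data, src->linesize, 0, nHeight, dst->data, dst->linesize);
// dst (after copying pts/duration) is what avcodec_send_frame() should receive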
-
mobile-ffmpeg-https (4.3.1) POD install failed
22 April, by shruti tupkari

I am getting an error on my pod install command; the package the error is about is mobile-ffmpeg-https (4.3.1). Image attached for more details.

Actually I have not used this package anywhere in my project. I tried it out and then removed it from the project, but I don't understand why it still shows up during pod install.

I tried deleting my Podfile.lock and running pod install again, but the issue remains.

Thanks in advance.
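One cleanup sequence worth trying, assuming a standard CocoaPods setup (these are stock CocoaPods commands, nothing project-specific):

pod deintegrate              # strip CocoaPods integration from the Xcode project
rm -rf Pods Podfile.lock     # remove installed pods and the lockfile
pod cache clean --all        # drop any cached copy of mobile-ffmpeg-https
pod install --repo-update    # reinstall against a refreshed spec repo

If the pod still reappears after this, search the Podfile and the podspecs of the remaining pods for a transitive dependency on mobile-ffmpeg-https; another pod may be pulling it in.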
-
libavcodec.so.58 not found when running software compiled with opencv
22 April, by Abinaya

I am using Ubuntu 22.04. Now every time I try to run software compiled with OpenCV, I get the following error:
libavcodec.so.58 => not found
libavformat.so.58 => not found
libavutil.so.56 => not found
libswscale.so.5 => not found
Looking around /lib/x86_64-linux-gnu/, I can find libavcodec.so.59, but not libavcodec.so.58.
When trying to run sudo apt-get install libavcodec58, I get:
Package 'libavcodec58' has no installation candidate
I've scoured the internet in search of an answer, but could not find anything at this point. Any help with solving this problem will be very much appreciated.
I have tried to recreate the symbolic link. ls -l libavcodec.so.59 shows:

1 root root 23 Aug 10 2024 libavcodec.so.59 -> libavcodec.so.59.37.100

and ldconfig -v likewise lists libavcodec.so.59 -> libavcodec.so.59.37.100.

But I am still stuck.
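For what it's worth: the missing sonames (libavcodec.so.58, libavutil.so.56, libswscale.so.5) belong to the FFmpeg 4.4 series, while libavcodec.so.59 is FFmpeg 5.x. A major soname bump means an incompatible ABI, so no symlink from .so.59 to .so.58 can make the binary work. Two options, sketched under the assumption of a stock Ubuntu 22.04: install the FFmpeg 4.4 runtime libraries (they live in the universe repository, which is the usual reason for "no installation candidate"), or rebuild OpenCV against the FFmpeg that is actually installed.

sudo add-apt-repository universe
sudo apt update
sudo apt install libavcodec58 libavformat58 libavutil56 libswscale5

The package names here are inferred from the missing sonames; verify them with apt-cache search libavcodec before installing.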
-
Send image and audio data to FFmpeg via named pipes
22 April, by Nicke Manarin

I'm able to send frames one by one to FFmpeg via a named pipe to create a video out of them, but if I try sending audio through a second named pipe, FFmpeg accepts only one frame on the video pipe and then starts reading from the audio pipe.
ffmpeg.exe -loglevel debug -hwaccel auto -f:v rawvideo -r 25 -pix_fmt bgra -video_size 782x601 -i \\.\pipe\video_to_ffmpeg -f:a s16le -ac 2 -ar 48000 -i \\.\pipe\audio_to_ffmpeg -c:v libx264 -preset fast -pix_fmt yuv420p -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -crf 23 -f:v mp4 -vsync vfr -c:a aac -b:a 128k -ar 48000 -ac 2 -y "C:\Users\user\Desktop\video.mp4"
I start both pipes like so:
_imagePipeServer = new NamedPipeServerStream(ImagePipeName, PipeDirection.Out, 1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous);
_imagePipeStreamWriter = new StreamWriter(_imagePipeServer);
_imagePipeServer.BeginWaitForConnection(null, null);

if (hasAudio)
{
    _audioPipeServer = new NamedPipeServerStream(AudioPipeName, PipeDirection.Out, 1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous);
    _audioPipeStreamWriter = new StreamWriter(_audioPipeServer);
    _audioPipeServer.BeginWaitForConnection(null, null);
}
And send the data to the pipes using these methods:
public void EncodeFrame(nint bufferAddress, int height, int bufferStride)
{
    var frameSize = height * bufferStride;
    var frameBytes = new byte[frameSize];
    System.Runtime.InteropServices.Marshal.Copy(bufferAddress, frameBytes, 0, frameSize);

    if (_imagePipeServer?.IsConnected != true)
        throw new FFmpegException("Pipe not connected", Arguments, Output);

    _imagePipeStreamWriter?.BaseStream.Write(frameBytes, 0, frameBytes.Length);
}
public void EncodeAudio(ISampleProvider provider, long length)
{
    if (_audioPipeServer?.IsConnected != true)
        throw new FFmpegException("Pipe not connected", Arguments, Output);

    var buffer = new byte[provider.WaveFormat.AverageBytesPerSecond * length / TimeSpan.TicksPerSecond];
    var bytesRead = provider.ToWaveProvider().Read(buffer, 0, buffer.Length);
    if (bytesRead < 1)
        return;

    _audioPipeStreamWriter?.BaseStream.Write(buffer, 0, bytesRead);
    _audioPipeStreamWriter?.BaseStream.Flush();
}
Not sending the audio (and thus not creating the audio pipe) works, with FFmpeg taking one frame at time and creating the video normally.
But if I try sending the audio via a secondary pipe, I can only send one frame. This is the output when that happens (Btw, FFmpeg v7.1):
Splitting the commandline.
Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument 'debug'.
Reading option '-hwaccel' ... matched as option 'hwaccel' (use HW accelerated decoding) with argument 'auto'.
Reading option '-f:v' ... matched as option 'f' (force container format (auto-detected otherwise)) with argument 'rawvideo'.
Reading option '-r' ... matched as option 'r' (override input framerate/convert to given output framerate (Hz value, fraction or abbreviation)) with argument '25'.
Reading option '-pix_fmt' ... matched as option 'pix_fmt' (set pixel format) with argument 'bgra'.
Reading option '-video_size' ... matched as AVOption 'video_size' with argument '782x601'.
Reading option '-i' ... matched as input url with argument '\\.\pipe\video_to_ffmpeg'.
Reading option '-f:a' ... matched as option 'f' (force container format (auto-detected otherwise)) with argument 's16le'.
Reading option '-ac' ... matched as option 'ac' (set number of audio channels) with argument '2'.
Reading option '-ar' ... matched as option 'ar' (set audio sampling rate (in Hz)) with argument '48000'.
Reading option '-i' ... matched as input url with argument '\\.\pipe\audio_to_ffmpeg'.
Reading option '-c:v' ... matched as option 'c' (select encoder/decoder ('copy' to copy stream without reencoding)) with argument 'libx264'.
Reading option '-preset' ... matched as AVOption 'preset' with argument 'fast'.
Reading option '-pix_fmt' ... matched as option 'pix_fmt' (set pixel format) with argument 'yuv420p'.
Reading option '-vf' ... matched as option 'vf' (alias for -filter:v (apply filters to video streams)) with argument 'scale=trunc(iw/2)*2:trunc(ih/2)*2'.
Reading option '-crf' ... matched as AVOption 'crf' with argument '23'.
Reading option '-f:v' ... matched as option 'f' (force container format (auto-detected otherwise)) with argument 'mp4'.
Reading option '-fps_mode' ... matched as option 'fps_mode' (set framerate mode for matching video streams; overrides vsync) with argument 'vfr'.
Reading option '-c:a' ... matched as option 'c' (select encoder/decoder ('copy' to copy stream without reencoding)) with argument 'aac'.
Reading option '-b:a' ... matched as option 'b' (video bitrate (please use -b:v)) with argument '128k'.
Reading option '-ar' ... matched as option 'ar' (set audio sampling rate (in Hz)) with argument '48000'.
Reading option '-ac' ... matched as option 'ac' (set number of audio channels) with argument '2'.
Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
Reading option 'C:\Users\user\Desktop\video.mp4' ... matched as output url.
Finished splitting the commandline.
Parsing a group of options: global.
Applying option loglevel (set logging level) with argument debug.
Applying option y (overwrite output files) with argument 1.
Successfully parsed a group of options.
Parsing a group of options: input url \\.\pipe\video_to_ffmpeg.
Applying option hwaccel (use HW accelerated decoding) with argument auto.
Applying option f:v (force container format (auto-detected otherwise)) with argument rawvideo.
Applying option r (override input framerate/convert to given output framerate (Hz value, fraction or abbreviation)) with argument 25.
Applying option pix_fmt (set pixel format) with argument bgra.
Successfully parsed a group of options.
Opening an input file: \\.\pipe\video_to_ffmpeg.
[rawvideo @ 000001c302ee08c0] Opening '\\.\pipe\video_to_ffmpeg' for reading
[file @ 000001c302ee1000] Setting default whitelist 'file,crypto,data'
[rawvideo @ 000001c302ee08c0] Before avformat_find_stream_info() pos: 0 bytes read:65536 seeks:0 nb_streams:1
[rawvideo @ 000001c302ee08c0] All info found
[rawvideo @ 000001c302ee08c0] After avformat_find_stream_info() pos: 1879928 bytes read:1879928 seeks:0 frames:1
Input #0, rawvideo, from '\\.\pipe\video_to_ffmpeg':
  Duration: N/A, start: 0.000000, bitrate: 375985 kb/s
  Stream #0:0, 1, 1/25: Video: rawvideo, 1 reference frame (BGRA / 0x41524742), bgra, 782x601, 0/1, 375985 kb/s, 25 tbr, 25 tbn
Successfully opened the file.
Parsing a group of options: input url \\.\pipe\audio_to_ffmpeg.
Applying option f:a (force container format (auto-detected otherwise)) with argument s16le.
Applying option ac (set number of audio channels) with argument 2.
Applying option ar (set audio sampling rate (in Hz)) with argument 48000.
Successfully parsed a group of options.
Opening an input file: \\.\pipe\audio_to_ffmpeg.
[s16le @ 000001c302ef5380] Opening '\\.\pipe\audio_to_ffmpeg' for reading
[file @ 000001c302ef58c0] Setting default whitelist 'file,crypto,data'
If I instead send one frame and then some bytes of audio (an arbitrary length based on the fps), the difference is that I get this extra line at the end:
[s16le @ 0000025948c96d00] Before avformat_find_stream_info() pos: 0 bytes read:15360 seeks:0 nb_streams:1
Extra calls to EncodeFrame() hang forever at the BaseStream.Write(frameBytes, 0, frameBytes.Length) call, suggesting that FFmpeg is no longer reading the data. Something is causing FFmpeg to close or stop reading the first pipe and only accept data from the second one.
Perhaps the command is missing something?
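One hedged observation before the update below: FFmpeg opens its inputs sequentially, and avformat_find_stream_info() on the second input blocks until enough audio has arrived, so the writer has to keep feeding both pipes concurrently (from separate threads) for startup to complete. Since both inputs are fully specified raw formats, it may also help to shrink probing with per-input -probesize/-analyzeduration; a variant of the command under that assumption (note the output format option is spelled -f, not -f:v):

ffmpeg.exe -loglevel debug -hwaccel auto -f:v rawvideo -r 25 -pix_fmt bgra -video_size 782x601 -probesize 32 -analyzeduration 0 -i \\.\pipe\video_to_ffmpeg -f:a s16le -ac 2 -ar 48000 -probesize 32 -analyzeduration 0 -i \\.\pipe\audio_to_ffmpeg -c:v libx264 -preset fast -pix_fmt yuv420p -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -crf 23 -vsync vfr -c:a aac -b:a 128k -ar 48000 -ac 2 -f mp4 -y "C:\Users\user\Desktop\video.mp4"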
Updated results
Using a BlockingCollection, with the consumers in another thread, I end up getting this:

Parsing a group of options: input url \\.\pipe\video_to_ffmpeg.
Applying option hwaccel (use HW accelerated decoding) with argument auto.
Applying option f:v (force container format (auto-detected otherwise)) with argument rawvideo.
Applying option r (override input framerate/convert to given output framerate (Hz value, fraction or abbreviation)) with argument 25.
Applying option pix_fmt (set pixel format) with argument bgra.
Successfully parsed a group of options.
Opening an input file: \\.\pipe\video_to_ffmpeg.
[rawvideo @ 000001d33fc00940] Opening '\\.\pipe\video_to_ffmpeg' for reading
[file @ 000001d33fc01080] Setting default whitelist 'file,crypto,data'
CODE: Sent frame
[rawvideo @ 000001d33fc00940] Before avformat_find_stream_info() pos: 0 bytes read:65536 seeks:0 nb_streams:1
[rawvideo @ 000001d33fc00940] All info found
[rawvideo @ 000001d33fc00940] After avformat_find_stream_info() pos: 1879928 bytes read:1879928 seeks:0 frames:1
Input #0, rawvideo, from '\\.\pipe\video_to_ffmpeg':
  Duration: N/A, start: 0.000000, bitrate: 375985 kb/s
  Stream #0:0, 1, 1/25: Video: rawvideo, 1 reference frame (BGRA / 0x41524742), bgra, 782x601, 0/1, 375985 kb/s, 25 tbr, 25 tbn
Successfully opened the file.
Parsing a group of options: input url \\.\pipe\audio_to_ffmpeg.
Applying option f:a (force container format (auto-detected otherwise)) with argument s16le.
Applying option ac (set number of audio channels) with argument 2.
Applying option ar (set audio sampling rate (in Hz)) with argument 48000.
Successfully parsed a group of options.
Opening an input file: \\.\pipe\audio_to_ffmpeg.
[s16le @ 000001d33fc155c0] Opening '\\.\pipe\audio_to_ffmpeg' for reading
[file @ 000001d33fc15980] Setting default whitelist 'file,crypto,data'
CODE: Sent frame
[s16le @ 000001d33fc155c0] Before avformat_find_stream_info() pos: 0 bytes read:404 seeks:0 nb_streams:1
CODE: Sent audio
[Next frame cannot be inserted, pipes get closed]
I think the issue is with the audio data being sent, since sending it causes the next frame write to end the command. I'm not sure how much audio data to send to FFmpeg; right now I'm trying to match the fps (1/25, so 40 ms of data).
public int EncodeAudio2(ISampleProvider provider, int samplesOffset, long length)
{
    var sampleCount = (int)(provider.WaveFormat.SampleRate * ((double)length / TimeSpan.TicksPerSecond));
    var floatBuffer = new float[sampleCount * provider.WaveFormat.Channels];
    var samplesRead = provider.Read(floatBuffer, samplesOffset, sampleCount * provider.WaveFormat.Channels);
    if (samplesRead < 1)
        return 0;

    //IF Float32
    //var byteBuffer = new byte[samplesRead * 4]; //4 bytes per float, f32le.
    //Buffer.BlockCopy(floatBuffer, 0, byteBuffer, 0, byteBuffer.Length);

    //IF Short16
    var byteBuffer = new byte[samplesRead * 2]; //2 bytes per sample for s16le.

    for (var i = 0; i < samplesRead; i++)
    {
        var pcmSample = (short)(Math.Clamp(floatBuffer[i], -1.0f, 1.0f) * short.MaxValue);
        byteBuffer[i * 2] = (byte)(pcmSample & 0xFF);            //Low byte.
        byteBuffer[i * 2 + 1] = (byte)((pcmSample >> 8) & 0xFF); //High byte.
    }

    _audioCollection.Add(byteBuffer);
    return samplesRead;
}
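As a sizing check, straight from the formats in the command: at s16le, 48000 Hz, 2 channels, one video frame's worth of audio (1/25 s = 40 ms) is 48000 × 0.04 = 1920 samples per channel, i.e. 1920 × 2 channels × 2 bytes = 7680 bytes of PCM per frame. The 404-byte read in the log above is far short of that, though to be clear FFmpeg does not enforce any per-write size on a pipe; it only needs a steady total flow on both pipes.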
-
FFmpeg command not streaming to YouTube [closed]
21 April, by Ahmed Seddik Bouchiba

I'm trying to stream my desktop (from an X11 session) to YouTube Live using ffmpeg. I'm running this on a Linux machine with an active X server, and I set the DISPLAY variable accordingly (:0 in most cases).
Here's the ffmpeg command I've tried:
ffmpeg -loglevel info \
  -probesize ${PROBESIZE} -analyzeduration ${ANALYZE_DURATION} \
  -f x11grab -video_size ${VIDEO_SIZE} -r ${FRAME_RATE} -draw_mouse 0 -i ${DISPLAY} \
  -f alsa -i default \
  -deinterlace -vcodec libx264 -pix_fmt yuv420p -preset fast \
  -r 30 -g 60 -b:v 2000k -bufsize 4000k \
  -acodec libmp3lame -ar 44100 -b:a 128k \
  -map 0:v:0 -map 1:a:0 -vsync 0 \
  -f flv "${RTMP_URL}" &
Environment variables are set correctly (DISPLAY, VIDEO_SIZE, FRAME_RATE, etc.), and I replaced ${RTMP_URL} with the correct YouTube RTMP endpoint (e.g., rtmp://a.rtmp.youtube.com/live2/). But nothing seems to work — the stream never starts or appears on YouTube, and sometimes I get timeout or "connection refused" errors.
I've checked:
That I'm logged into an active X session
That I have access to the display (even tried xhost +)
That ffmpeg has access to ALSA (sound seems okay)
Questions:
Am I missing something in my command?
Is there a better way to stream both screen and audio from an X server to YouTube Live?
Could this be a codec or YouTube-specific format issue?
Any help or working examples would be really appreciated. Thanks!
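Two things stand out, offered as guesses rather than a diagnosis. First, rtmp://a.rtmp.youtube.com/live2/ is only the ingest base URL; YouTube expects the stream key appended (rtmp://a.rtmp.youtube.com/live2/STREAM_KEY), and connecting without it commonly fails with exactly the timeout/refused behaviour described. Second, YouTube's recommended audio codec for RTMP ingest is AAC, so -c:a aac is a safer choice than libmp3lame. A variant of the command with those two changes (STREAM_KEY is a placeholder; the remaining flags mirror the original):

ffmpeg -loglevel info \
  -f x11grab -video_size ${VIDEO_SIZE} -framerate ${FRAME_RATE} -draw_mouse 0 -i ${DISPLAY} \
  -f alsa -i default \
  -c:v libx264 -pix_fmt yuv420p -preset fast \
  -r 30 -g 60 -b:v 2000k -maxrate 2000k -bufsize 4000k \
  -c:a aac -ar 44100 -b:a 128k \
  -map 0:v:0 -map 1:a:0 \
  -f flv "rtmp://a.rtmp.youtube.com/live2/${STREAM_KEY}"

Running it in the foreground first (without the trailing &) also makes any RTMP connection error visible immediately.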