Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
Issue with Subtitles Not Embedding in Exported Video Using FFmpeg
25 March, by Yash Chauhan

I've been encountering an issue while attempting to embed subtitles into a video using FFmpeg. Despite following the process outlined below, the exported video does not include the subtitles:
export async function embedSubtitles(ffmpeg: FFmpeg, videoFile: File, srtContent: string) {
  await ffmpeg.writeFile('input.mp4', await fetchFile(videoFile));
  await ffmpeg.writeFile('./subtitles.srt', srtContent);
  const ffmpeg_cmd = [
    '-i', 'input.mp4',
    '-vf', 'subtitles=./subtitles.srt:force_style=\'FontSize=24,FontName=Arial\'',
    '-c:a', 'copy',
    'output.mp4'
  ];
  await ffmpeg.exec(ffmpeg_cmd);
  const data = await ffmpeg.readFile('output.mp4') as Uint8Array;
  const blob = new Blob([data], { type: 'video/mp4' });
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = 'output.mp4';
  a.click();
  return { url, output: 'output.mp4' };
}
I've reviewed this code extensively and cannot determine why the subtitles are not being embedded. I've ensured that srtContent is correctly formatted and that FFmpeg executes without errors. What changes or alternative methods can I use to properly embed subtitles into the exported video?
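One way to narrow the problem down is to exercise the same subtitles filter with the desktop FFmpeg CLI, outside the browser build. Note that the subtitles filter burns text in via libass, which needs actual font files available; the filter's fontsdir option points it at a font directory. A hedged sketch for such a cross-check (the input files and the font directory path are assumptions for illustration):

```shell
# Hedged sketch: verify the subtitles filter itself on the desktop CLI.
# input.mp4, subtitles.srt and /usr/share/fonts are illustrative paths.
ffmpeg -i input.mp4 \
  -vf "subtitles=subtitles.srt:fontsdir=/usr/share/fonts:force_style='FontSize=24,FontName=Arial'" \
  -c:a copy output.mp4
```

If this works on the desktop but not in the browser, the in-memory filesystem of the WebAssembly build (e.g. a missing font file) is a plausible place to look.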
-
FFmpeg SRT: multiple callers into one listener?
24 March, by Ian

I am using this line:
exec_push /home/production/bin/ffmpeg -i rtmp://localhost:1935/live/slot4 -codec copy -g 1 -bsf:v h264_mp4toannexb -f mpegts srt://0.0.0.0:50330?mode=listener -loglevel verbose;
in nginx to launch FFmpeg and have it transmux RTMP to SRT. That said, I'm curious whether there is a flag or another way to let multiple SRT callers connect to this one stream. If not, can you suggest an alternative solution?
-
Encoding a video to AV1 for compression
24 March, by living being

I wrote code to encode a 1080p video to the AV1 codec to reduce the file size. At the same time, I encoded the same video with x265. AV1 is said to reduce the size by around 40-50% more than x265, but I didn't achieve that.
The original file size: 74 MB
The file encoded with AV1: 53 MB
The file encoded with x265: 34 MB
My AV1 command:
ffmpeg -i "input.webm" -vcodec libsvtav1 -preset 4 -crf 38 -acodec libopus -ac 1 -b:a 24K "output.mkv";
My x265 command:
ffmpeg -i "input.webm" -vcodec libx265 -preset fast -crf 31 -acodec libopus -ac 1 -b:a 24K "output.mkv";
I used a higher CRF for the AV1 encode to get a smaller file, but it didn't work. What's wrong with my AV1 command?
I used different input files and got similar results. My OS: Ubuntu 24.10
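One detail worth keeping in mind when comparing the two commands above: CRF scales are not comparable across encoders, so a CRF of 38 in SVT-AV1 does not target the same quality as a CRF of 31 in x265. Finding the size at matched quality usually means sweeping CRF per codec. A minimal sketch of such a sweep, reusing the question's audio settings (input.webm is an assumption):

```shell
# Hedged sketch: encode the same input at several SVT-AV1 CRF values
# and compare the resulting file sizes.
for crf in 35 40 45 50; do
  ffmpeg -i input.webm -c:v libsvtav1 -preset 4 -crf "$crf" \
    -c:a libopus -ac 1 -b:a 24K "av1_crf${crf}.mkv"
done
ls -l av1_crf*.mkv
```

The same sweep can be repeated for libx265, and the fairest size comparison is between outputs judged visually (or metrically) equivalent rather than between equal CRF numbers.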
-
How to Pause and Resume Screen Recording in FFmpeg on Windows?
24 March, by Iman Sajadpur

I use FFmpeg on Windows to record my screen. I want to pause and resume the recording properly. I know that pressing Ctrl + S, the Pause key on the keyboard, or suspending FFmpeg via Resource Monitor stops the process, but screen recording continues in the background. Here is an example of the command I use for screen recording:

ffmpeg -f gdigrab -probesize 100M -i desktop -f dshow -channel_layout stereo -i audio="Microphone (2- High Definition Audio Device)" output.mp4
How can I pause recording completely so that no frames are captured during the pause and resume it seamlessly?
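Since FFmpeg has no built-in pause for live capture, one commonly used workaround is to record each stretch as its own segment and join the segments losslessly afterwards with the concat demuxer. A rough sketch under that assumption (the segment names are illustrative, and the audio input from the question is omitted for brevity):

```shell
# Hedged sketch: record each stretch as a separate file (press q to stop each),
# then concatenate without re-encoding.
ffmpeg -f gdigrab -i desktop part1.mp4
ffmpeg -f gdigrab -i desktop part2.mp4
printf "file 'part1.mp4'\nfile 'part2.mp4'\n" > parts.txt
ffmpeg -f concat -safe 0 -i parts.txt -c copy output.mp4
```

Because -c copy avoids a decode/encode cycle, the join is seamless as long as all segments share the same codecs and parameters.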
-
Assigning dts values to encoded packets
24 March, by Alex

I have a dump of H264-encoded data which I need to put in an mp4 container. I verified the validity of the encoded data by running the mp4box utility against it: the mp4 file created by mp4box contained a proper 17-second video. Interestingly, if I try ffmpeg to achieve the same, the resulting video is 34 seconds long and rather poor (perhaps ffmpeg decodes the video and re-encodes it, which costs quality?). In any case, for my project I can't use a command-line approach and need to come up with a programmatic way to embed the data in the mp4 container.
Below is the code I use (I removed error checking for brevity. During execution all the calls succeed):
AVFormatContext* pInputFormatContext = avformat_alloc_context();
avformat_open_input(&pInputFormatContext, "Data.264", NULL, NULL);
avformat_find_stream_info(pInputFormatContext, NULL);
AVRational* pTime_base = &pInputFormatContext->streams[0]->time_base;
int nFrameRate = pInputFormatContext->streams[0]->r_frame_rate.num / pInputFormatContext->streams[0]->r_frame_rate.den;
int nWidth = pInputFormatContext->streams[0]->codecpar->width;
int nHeight = pInputFormatContext->streams[0]->codecpar->height;
// nWidth = 1920, nHeight = 1080, nFrameRate = 25

// Create output objects
AVFormatContext* pOutputFormatContext = NULL;
avformat_alloc_output_context2(&pOutputFormatContext, NULL, NULL, "Destination.mp4");
AVCodec* pVideoCodec = avcodec_find_encoder(pOutputFormatContext->oformat->video_codec /*AV_CODEC_ID_H264*/);
AVStream* pOutputStream = avformat_new_stream(pOutputFormatContext, NULL);
pOutputStream->id = pOutputFormatContext->nb_streams - 1;
AVCodecContext* pCodecContext = avcodec_alloc_context3(pVideoCodec);
switch (pVideoCodec->type)
{
case AVMEDIA_TYPE_VIDEO:
    pCodecContext->codec_id = pVideoCodec->id;
    pCodecContext->bit_rate = 400000;
    /* Resolution must be a multiple of two. */
    pCodecContext->width = nWidth;
    pCodecContext->height = nHeight;
    /* timebase: This is the fundamental unit of time (in seconds) in terms
     * of which frame timestamps are represented. For fixed-fps content,
     * timebase should be 1/framerate and timestamp increments should be
     * identical to 1. */
    pOutputStream->time_base.num = 1;
    pOutputStream->time_base.den = nFrameRate;
    pCodecContext->time_base = pOutputStream->time_base;
    pCodecContext->gop_size = 12; /* emit one intra frame every twelve frames at most */
    pCodecContext->pix_fmt = STREAM_PIX_FMT;
    break;
default:
    break;
}

/* copy the stream parameters to the muxer */
avcodec_parameters_from_context(pOutputStream->codecpar, pCodecContext);
avio_open(&pOutputFormatContext->pb, "Destination.mp4", AVIO_FLAG_WRITE);

// Start writing
AVDictionary* pDict = NULL;
avformat_write_header(pOutputFormatContext, &pDict);

// Process packets
AVPacket packet;
int64_t nCurrentDts = 0;
int64_t nDuration = 0;
int nReadResult = 0;
while (nReadResult == 0)
{
    nReadResult = av_read_frame(pInputFormatContext, &packet);
    // At this point, packet.dts == AV_NOPTS_VALUE.
    // The duration field of the packet contains valid data
    packet.flags |= AV_PKT_FLAG_KEY;
    nDuration = packet.duration;
    packet.dts = nCurrentDts;
    packet.dts = av_rescale_q(nCurrentDts, pOutputFormatContext->streams[0]->codec->time_base, pOutputFormatContext->streams[0]->time_base);
    av_interleaved_write_frame(pOutputFormatContext, &packet);
    nCurrentDts += nDuration;
    nDuration += packet.duration;
    av_free_packet(&packet);
}
av_write_trailer(pOutputFormatContext);
The properties of the Destination.mp4 file I receive indicate it is about 1 hour long with a frame rate of 0. I am sure the culprit is the way I calculate dts values for each packet and use av_rescale_q(), but I do not have sufficient understanding of the avformat library to figure out the proper way to do it. Any help will be appreciated!
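On the command-line result mentioned at the top: a raw H.264 elementary stream carries no container timestamps, so ffmpeg assumes a default frame rate for it unless told otherwise, and it re-encodes unless stream copy is requested, which would explain both the wrong duration and the quality loss. A hedged sketch of a lossless remux for comparison (file names taken from the question, 25 fps from the values observed above):

```shell
# Hedged sketch: remux the raw H.264 dump into mp4 without re-encoding.
# -framerate sets the input rate for the raw demuxer; -c:v copy skips
# the decode/encode cycle, so there is no quality loss.
ffmpeg -framerate 25 -i Data.264 -c:v copy Destination.mp4
```

If this produces the expected 17-second file, it confirms that the programmatic path only needs correct timestamps in the output time base rather than any re-encoding.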