18:24
I have a function for decoding audio in C using ffmpeg:
int read_and_decode(const char *filename, float **audio_buffer, int *sample_rate, int *num_samples)
{
    AVFormatContext *fmt_ctx = NULL;
    AVCodecContext *codec_ctx = NULL;
    const AVCodec *codec;  /* const: avcodec_find_decoder() returns const AVCodec* since FFmpeg 5.0 */
    AVPacket packet;
    AVFrame *frame = av_frame_alloc();
    if (!frame) {
        fprintf(stderr, "Could not allocate memory for AVFrame\n");
        return -1;
    }
    int audio_stream_index = -1, ret;
    if (avformat_open_input(&fmt_ctx, filename, NULL, NULL) != 0) {
        fprintf(stderr, "Could not open the file: %s\n", (...)
18:16
I'm trying to change the encoder / writing-application metadata with FFmpeg's -metadata option, and for whatever reason it reads the input but doesn't actually write the tag out.
I've tried -map_metadata, -metadata:s:v:0, -metadata writing_application, and basically every single Stack Overflow and Stack Exchange suggestion, but none of them write the tag to the file at all.
ffmpeg -i x.mp4 -s 1920x1080 -r 59.94 -c:v h264_nvenc -b:v 6000k -vf yadif=1 -preset fast -fflags +bitexact -flags:v +bitexact -flags:a +bitexact -ac 2 x.mp4
ffmpeg -i x.mp4 -c:v copy -c:a copy -metadata (...)
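A hedged note on two likely culprits in commands like the first one above: -fflags/-flags +bitexact exists precisely to strip writer-identifying metadata, and the mov/mp4 muxer only writes tag names it already knows unless told to keep arbitrary ones. A sketch, with "MyApp" as a placeholder value:

```shell
# Sketch: drop the bitexact flags (they intentionally suppress writer
# metadata) and let the mp4 muxer store arbitrary tags; the tag name
# writing_application is taken from the question, "MyApp" is a placeholder.
ffmpeg -i in.mp4 -c copy -movflags use_metadata_tags \
  -metadata writing_application="MyApp" out.mp4
```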
18:13
I'm developing a simple scene with A-Frame and React.js in which a videosphere is created and rendered once the video is fully loaded and ready to play.
My goal is to render 4K video on the videosphere (on devices that can play it) to show users the environment.
On desktop everything works fine, including 4K videos, while on mobile it only works up to 1920x1080.
I already checked that my phone can render a 4K video texture (it handles up to 4096), and I also checked that video.videoWidth is 4096.
The error I get comes from the decoder:
MediaError code: 4, (...)
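A hedged note on the usual culprit here: MediaError code 4 is MEDIA_ERR_SRC_NOT_SUPPORTED, and a 4096-pixel texture limit says nothing about the phone's hardware H.264 decoder, which on many mobile SoCs tops out around level 5.1/5.2. One workaround sketch (filenames and the target size are assumptions) is to serve mobile devices a variant that stays inside those limits:

```shell
# Sketch: re-encode the 4096-wide source to 3840 wide, H.264 High@5.1,
# which most mobile hardware decoders accept (filenames are placeholders).
ffmpeg -i sphere_4096.mp4 -vf "scale=3840:-2" \
  -c:v libx264 -profile:v high -level:v 5.1 -pix_fmt yuv420p \
  -c:a aac -movflags +faststart sphere_mobile.mp4
```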
16:12
I'm trying to blend a few frames from a video and output a png image using this command:
ffmpeg -i input.mp4 -vf "select=between(n\,2\,5),blend=all_mode=average" -frames:v 1 out.png
I get this error, but can't make sense of it:
Simple filtergraph ... was expected to have exactly 1 input and 1
output. However, it had 2 input(s) and 1 output(s). Please adjust, or
use a complex filtergraph (-filter_complex) instead.
What am I doing (...)
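The error means what it says: blend takes two inputs, but a plain -vf chain carries exactly one stream. If the goal is simply to average those frames (an assumption), the single-input tmix filter is a simpler fit than switching to a complex filtergraph:

```shell
# Sketch: average frames 2-5 of a single stream with tmix (blend needs two
# inputs; tmix averages N successive frames of one input).
ffmpeg -i input.mp4 -vf "select='between(n\,2\,5)',tmix=frames=4" -frames:v 1 out.png
```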
13:19
I have two H.264/AAC TS files (say a.ts and b.ts) with the same duration and packet count. However, the PCR/PTS/DTS values of their audio/video stream packets are different. How do I copy the PCR/PTS/DTS data from a.ts to b.ts using ffmpeg or other tools without overwriting the audio and video frame data? Ideally I want to do this without re-encoding.
10:28
Okay... so I'm trying to create a video-making program and I'm basically finished, but I want to change one key thing and don't know how.
Here's the code:
import time
import os

import assemblyai as aai
from colorama import Fore, init

init(autoreset=True)  # needed so the ANSI colour codes work on Windows
aai.settings.api_key = "(cant share)"

print()
print()
print(Fore.GREEN + 'Process 3: Creating subtitles...')
print()
time.sleep(1)
print(Fore.YELLOW + '>> Creating subtitles...')
transcript = aai. (...)
09:27
Right now I have:
1 - video (.mp4)
N - audio with its timestamp start and end (.wav)
I want to mix those audio files into the video at their timestamps while still preserving the video's original audio.
The final result, illustrated:
audio ----aud1------aud2------aud3--------aud4---------audn------->
video+audio ------------------------------------------------------------>
The position of each audio clip is based on its timestamps; I already have this data:
start=00:00:12.040,end=00:00:16.640 aud1
start=00:00:16.640,end=00:00:21.520 (...)
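One way to do this without re-encoding the video (a sketch: filenames are placeholders, and amix's normalize option needs a reasonably recent FFmpeg, so drop it on older builds): delay each clip to its start time in milliseconds with adelay, then amix everything together with the original track:

```shell
# Sketch: aud1 starts at 00:00:12.040 -> 12040 ms, aud2 at 16640 ms, etc.
# adelay takes one delay per channel; amix keeps the original audio mixed in.
ffmpeg -i video.mp4 -i aud1.wav -i aud2.wav -filter_complex "\
[1:a]adelay=12040|12040[a1];\
[2:a]adelay=16640|16640[a2];\
[0:a][a1][a2]amix=inputs=3:duration=first:normalize=0[aout]" \
  -map 0:v -map "[aout]" -c:v copy -c:a aac merged.mp4
```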
09:19
Is there any way to convert subtitles from "hdmv_pgs_subtitle" to "mov_text" (needed for MP4)?
I tried:
ffmpeg.exe -i "%%F" -c copy -scodec mov_text "test.mp4"
but I get:
Error while opening encoder for output stream #0:2 - maybe incorrect parameters
such as bit_rate, rate, width or height
Stream properties:
Duration: 00:01:48.14, start: 4199.920000, bitrate: 12394 kb/s
Program 1
Stream #0:0[0x1011], 169, 1/90000: Video: h264 (High), 1 reference frame (HDMV / 0x564D4448), yuv420p(top first, left), 1920x1080 (1920x1088) [SAR (...)
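The underlying problem: hdmv_pgs_subtitle is a bitmap subtitle format while mov_text is plain text, and ffmpeg has no built-in OCR, so a direct conversion does not exist; that is what the encoder error is really saying. Two workaround sketches (filenames are placeholders): burn the bitmaps into the picture, or OCR the PGS track to SRT with an external tool (e.g. Subtitle Edit) and mux that:

```shell
# Sketch 1: burn the PGS bitmaps into the video (re-encodes the picture).
ffmpeg -i input.ts -filter_complex "[0:v][0:s:0]overlay" -c:a copy burned.mp4

# Sketch 2: after OCR'ing the PGS track to subs.srt with an external tool,
# mux the text track as mov_text without touching audio/video.
ffmpeg -i input.ts -i subs.srt -map 0:v -map 0:a -map 1:0 \
  -c:v copy -c:a copy -c:s mov_text with_subs.mp4
```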
04:54
I have 601 sequential images; they change size and aspect ratio at frames 36 and 485, creating 3 distinct image sizes in the set.
I want to create a timelapse, shaving off the first 200 frames and showing only the remaining 401. But if I do a trim filter on the input, the filter treats each of the 3 sections of different-sized frames as separate 'streams' with their own reset PTS, all of which start at the exact same time. This means the final output of the command below is only 249 frames long instead of 401.
How can I fix this so I just get the final 401 (...)
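One workaround sketch (the file pattern, frame rate, and target size below are assumptions): skip the first 200 files at the demuxer with -start_number instead of a trim filter, and scale/pad every image to one fixed size so the sequence stays a single stream across the remaining size change:

```shell
# Sketch: start reading at image 201 (-start_number) rather than trimming,
# and pad everything to one size so changing input dimensions don't split
# the sequence (pattern, rate, and 1920x1080 are placeholders).
ffmpeg -start_number 201 -framerate 30 -i img%04d.png \
  -vf "scale=1920:1080:force_original_aspect_ratio=decrease,\
pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1" \
  -frames:v 401 timelapse.mp4
```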