21:48
My use case: extract just the audio from a YouTube URL directly to a .wav at 32-bit float, 48000 Hz.
Preferably without any post-processing args, secondary passes, after-the-fact conversion, or muxing.
I want f32le, aka pcm_f32le, aka PCM 32-bit floating-point little-endian, which is supported by ffmpeg. I also want a 48000 Hz sample rate, as stated.
Is this possible?
My current command:
yt-dlp -f bestaudio --extract-audio --audio-format wav --audio-quality 0
What do I need to add to achieve my use case / (...)
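A direction that may work in a single yt-dlp invocation (a sketch, not a confirmed answer — and it does rely on post-processor args, since yt-dlp's wav extraction always runs an internal ffmpeg step anyway):

```shell
# Hedged sketch: forward codec and sample-rate options to yt-dlp's
# ExtractAudio post-processor so the .wav is written as pcm_f32le at
# 48000 Hz in the same pass. "URL" is a placeholder.
yt-dlp -f bestaudio --extract-audio --audio-format wav --audio-quality 0 \
  --postprocessor-args "ExtractAudio:-c:a pcm_f32le -ar 48000" "URL"
```

The result can be checked with ffprobe: codec_name should read pcm_f32le and sample_rate 48000.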
18:55
I have an Android TV and I want to stream its screen to my Ubuntu PC.
I used this command:
adb shell screenrecord --output-format=h264 - | ffplay -
and after waiting for a while it displays a still screenshot of the TV. But I want to display a live stream of the Android TV.
I also tried the following command, but got the same result:
adb exec-out screenrecord --bit-rate=16m --output-format=h264 --size 800x600 - | ffplay -framerate 60 -framedrop -bufsize 16M -
How can I achieve this using this command?
Or is there a way to achieve it some other way by (...)
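One thing worth trying (a sketch, untested against an actual Android TV): the lag is typically ffplay's input probing and buffering rather than adb, so shrinking the probe and disabling buffering may get closer to live. Note that -bufsize is an encoder rate-control option, not something ffplay accepts:

```shell
# Hedged sketch: minimize ffplay's startup probing and internal buffering.
# screenrecord still stops after its built-in 3-minute limit per invocation.
adb exec-out screenrecord --bit-rate=16m --output-format=h264 --size 800x600 - \
  | ffplay -fflags nobuffer -flags low_delay -probesize 32 -framedrop -i -
```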
21:04
The following code works perfectly as long as I only move the crop rectangle; however, as soon as I change its size I no longer get frames out of my filter (av_buffersink_get_frame returns -11, i.e. AVERROR(EAGAIN)). Strangely, even after the size changes, if it eventually changes back to the original size that frame will go through, then it goes back to no longer providing frames.
Would anyone happen to know what I'm doing wrong?
My filter setup (note the crop & scale combination, it should (I think?) scale whatever I crop to the output video size):
// buffer source -> buffer sink (...)
16:35
I have tried many times to figure out the problem with detecting the face, and it is also not smooth enough compared to other tools out there.
So basically I am using Python and YOLO in this project, but I want to detect the person who is talking and make that person the ROI (region of interest).
Here is the code:
from ultralytics import YOLO
from ultralytics.engine.results import Results
from moviepy.editor import VideoFileClip, concatenate_videoclips
from moviepy.video.fx.crop import crop
# Load the YOLOv8 model
model = YOLO("yolov8n.pt")
# Load the input (...)
13:22
I have a sequence of exr files which I want to convert into a video using moviepy. Since the colors in the exrs need to be converted (otherwise the video appears almost black), I need to specify a color transfer characteristic. When I run ffmpeg directly using
ffmpeg -y -apply_trc iec61966_2_1 -i input_%d.exr -vcodec mpeg4 output.mp4
everything works perfectly fine. However, if I read the exrs using clip = ImageSequenceClip("folder_to_my_exrs/", fps = 24) and try to write the video using .write_videofile("output.mp4", codec = "mpeg4", ffmpeg_params = (...)
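One workaround to consider (a sketch that sidesteps the moviepy reader rather than fixing it): ffmpeg_params in write_videofile only affects the output side of moviepy's pipeline, because moviepy pipes already-decoded raw frames into ffmpeg, so an input/decoder option like -apply_trc never sees the exrs. Converting once with the known-good command and then loading the result avoids that (intermediate.mp4 is a placeholder name):

```shell
# Hedged sketch: do the transfer-characteristic conversion in a direct
# ffmpeg pass, then edit the resulting video with moviepy.
ffmpeg -y -apply_trc iec61966_2_1 -i input_%d.exr -vcodec mpeg4 intermediate.mp4
```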
13:01
I want to prepare a web page containing films from security camera recorders. Each recorder transmits video files in DAV format, so each film is converted to MP4 format by a script using this syntax:
ffmpeg -y -i movie.dav -vcodec libx264 -crf 24 movie.mp4
So I included such an entry in the HTML5 code:
It works correctly with Chrome but not with Firefox. For it to work properly in Firefox it is necessary to add a link to an OGG file. So the correct HTML5 syntax for both browsers should look like this:
Can anybody help (...)
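For reproducing the two-source setup, the fallback file can be produced from the same DAV input (a sketch; note that current Firefox versions play H.264 MP4 natively, so the OGG fallback mainly matters for older releases):

```shell
# Hedged sketch: generate the MP4 plus a Theora/OGG fallback file.
ffmpeg -y -i movie.dav -vcodec libx264 -crf 24 movie.mp4
ffmpeg -y -i movie.dav -vcodec libtheora -qscale:v 7 movie.ogv
```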
10:43
I'm converting MKV files and hardcoding subtitles into MP4 format. I have over 100 files and want to speed the process up by enabling hardware encoding. I am able to hardware encode when I don't hardcode the subtitles via -filter_complex, but as soon as I apply the filter it errors out.
Here is my command line that works perfectly fine.
ffmpeg -i input.mkv -filter_complex "[0:v:0]subtitles='input.mkv':si=0[v]" -map "[v]" -map 0:a:1 -c:a copy -map_chapters -1 "output.mp4"
Here is my command line that works with hardware encoding, without -filter_complex:
(...)
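A common cause worth checking (an assumption, since the failing command and error aren't shown): with full hardware decode (e.g. -hwaccel cuda -hwaccel_output_format cuda) the frames stay in GPU memory, while the subtitles filter runs on the CPU, so the graph fails. A sketch that keeps decoding and filtering in software and only encodes on the GPU (h264_nvenc is an example; substitute your encoder, e.g. h264_qsv or h264_vaapi):

```shell
# Hedged sketch: CPU decode + subtitles burn-in, hardware encode only.
ffmpeg -i input.mkv \
  -filter_complex "[0:v:0]subtitles='input.mkv':si=0[v]" \
  -map "[v]" -map 0:a:1 -c:v h264_nvenc -c:a copy -map_chapters -1 "output.mp4"
```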