Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
Python pydub issue : Can't reopen a wav file after it has been re-encoded by ffmpeg [Solved]
28 April, by Trent Tompkins
Solved by Michael Butscher. Updated code is now in the repo in the fixed branch. Thanks!
I have a python script that reads in all the *.wav files in a directory and saves them in a folder called output after it adjusts their volume.
lower.py:
import os
from pydub import AudioSegment as ass

# get the current working directory
current_working_directory = os.getcwd()

all_files = os.listdir()
while len(all_files) > 0:
    file = all_files.pop()
    song = ass.from_wav(file)
    song = song - 16
    song.export('out\\' + file)
https://github.com/tibberous/ChangeWavVolume/tree/fixed
The problem is (and this took me a LONG time to figure out): after the files are output once, if you try to read them again with AudioSegment.from_wav(file), ffmpeg throws an error saying it can't decode the file. The output wav files play in Windows Media Player, and even have their audio adjusted, but I am guessing there is somehow an error either in the file data itself or in the file headers.
I put the code on GitHub. Check out the branch 'fixed'.
Btw, I just installed everything, so I should have the current versions of everything. I am using the Chocolatey package manager for Windows 10. Read README.txt (in progress) to see how I set up my dev environment.
Python 3.12.2
Chocolatey v2.2.2

C:\Users\moren>C:\ProgramData\chocolatey\bin\ffmpeg
ffmpeg version 7.0-essentials_build-www.gyan.dev Copyright (c) 2000-2024 the FFmpeg developers
  built with gcc 13.2.0 (Rev5, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-zlib --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-sdl2 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-dxva2 --enable-d3d11va --enable-d3d12va --enable-ffnvcodec --enable-libvpl --enable-nvdec --enable-nvenc --enable-vaapi --enable-libgme --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libtheora --enable-libvo-amrwbenc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-librubberband
  libavutil      59.  8.100 / 59.  8.100
  libavcodec     61.  3.100 / 61.  3.100
  libavformat    61.  1.100 / 61.  1.100
  libavdevice    61.  1.100 / 61.  1.100
  libavfilter    10.  1.100 / 10.  1.100
  libswscale      8.  1.100 /  8.  1.100
  libswresample   5.  1.100 /  5.  1.100
  libpostproc    58.  1.100 / 58.  1.100
Universal media converter
I opened the wav files the program generated with Media Player, VLC Media Player and Audacity, thinking they might throw an error, but the only thing that can't seem to read the exported wav files is the script that created them!
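For what it's worth, that symptom (every player opens the exported file except the toolchain that made it) fits a bytes-vs-extension mismatch: pydub's AudioSegment.export() defaults to format='mp3' when no format is given, so 'out\\file.wav' would actually contain MP3 data that media players happily sniff but from_wav() rejects; passing format='wav' avoids that. As a stdlib-only illustration that a genuine WAV volume-reduction round-trip reopens cleanly (the filenames and tone parameters here are made up):

```python
import math
import struct
import wave

RATE = 8000

# Write a short 16-bit mono sine tone as a real WAV file.
samples = [int(20000 * math.sin(2 * math.pi * 440 * n / RATE)) for n in range(RATE)]
with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(struct.pack("<%dh" % len(samples), *samples))

# Reduce volume by 16 dB: scale amplitudes by 10 ** (-16 / 20).
factor = 10 ** (-16 / 20)
with wave.open("tone.wav", "rb") as r:
    params = r.getparams()
    frames = struct.unpack("<%dh" % r.getnframes(), r.readframes(r.getnframes()))
quiet = [int(s * factor) for s in frames]
with wave.open("out.wav", "wb") as w:
    w.setparams(params)
    w.writeframes(struct.pack("<%dh" % len(quiet), *quiet))

# Because out.wav has a real RIFF/WAVE header, reopening it works.
with wave.open("out.wav", "rb") as check:
    print(check.getnframes())  # 8000
```

In pydub terms, the equivalent one-line fix would be song.export('out\\' + file, format='wav').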
-
Using ffmpeg to resync an encoded video [closed]
27 April, by Marc Mouton
I am encoding a few videos with av1an and svt-av1-psy. I have one file whose audio goes out of sync. I usually encode the audio in Opus, but even remuxing the original stream results in out-of-sync audio. The new AV1 video seems to be 831 ms shorter than the original. Using MKVToolNix to set -831 ms with the "correct video timing..." option results in a file that is not far from being well synced, but not quite perfect, and the total length of the video is even shorter.
So is there any way to use the original h264 video file as a reference for the new AV1?
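A possible starting point (a sketch, not a confirmed fix: the filenames are assumptions, and the 831 ms figure comes from the measurement above) is to mux the AV1 video with the original file's audio and shift one stream with -itsoffset, which applies only to the input that follows it:

```shell
# Assumed filenames. -itsoffset shifts the second input's timestamps
# by +0.831 s before muxing; flip the sign if the drift runs the other way.
ffmpeg -i av1_video.mkv -itsoffset 0.831 -i original_h264.mkv \
  -map 0:v:0 -map 1:a:0 -c copy synced.mkv
```

If the AV1 stream is genuinely 831 ms shorter (e.g. frames dropped at one end) rather than merely delayed, an offset alone cannot fully reconcile the two durations, which may be why the MKVToolNix result was close but not perfect.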
-
ffplay cannot play video after being encoded and decoded with vvenc and vvdec [closed]
27 April, by TompyCZ
After encoding the 2160p version of the video sequence named crowd run, from https://media.xiph.org/video/derf/, with ffmpeg integrated with vvenc and vvdec according to this tutorial https://github.com/fraunhoferhhi/vvenc/wiki/FFmpeg-Integration, and then decoding it back, I get the following error when I want to compare it with vmaf or play it with ffplay:
Format yuv4mpegpipe detected only with low score of 1, misdetection possible!
[yuv4mpegpipe @ 000001ce93ee4ec0] Header too large.
(file path replaced with this text): Invalid argument
I encoded it using this command (but I tried other presets with the same result)
ffmpeg -benchmark -y -i INPUT.y4m -c:v vvc -preset slower -b:v 10M -threads 6 -an ENCODED.mkv
and decoded using
ffmpeg -i ENCODED.mkv -f rawvideo -pix_fmt yuv420p OUTPUT.y4m
When I ffplay the intermediate .mkv, it does play, but the console says (including the benchmark data):
Invalid value at vps_layer_id[i]: bitstream ended.    sq=    0B f=0/0
Failed to read unit 0 (type 14).
[libvvdec @ 0x7fb32c002e40] Failed to parse extradata.
Input #0, matroska,webm, from 'OUTPUT.mkv':
  Metadata:
    ENCODER         : Lavf60.17.100
  Duration: 00:00:10.00, start: 0.300000, bitrate: 9524 kb/s
  Stream #0:0: Video: vvc (Main 10) (vvi1 / 0x31697676), yuv420p10le(tv, progressive), 3840x2160, SAR 1:1 DAR 16:9, 50 fps, 50 tbr, 1k tbn
    Metadata:
      ENCODER         : Lavc60.33.102 libvvenc
      DURATION        : 00:00:10.000000000
   7.44 M-V:  0.057 fd= 236 aq=    0KB vq= 1127KB sq=    0B f=72/0
Is there anything I'm doing wrong to cause this? How could I fix this?
I tried searching for the error on the internet but didn't find anything directly related to the problem.
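One detail worth checking (an inference from the commands above, not something the error output confirms): the decode step forces -f rawvideo while naming the output .y4m. That combination writes headerless planar frames, so the file carries no YUV4MPEG2 signature, which would match both the "detected only with low score of 1" warning and the "Header too large" failure when vmaf or ffplay later probe it as y4m. Dropping the format override lets the .y4m extension select the yuv4mpegpipe muxer:

```shell
# Let the .y4m extension pick the yuv4mpegpipe muxer so a proper
# YUV4MPEG2 header is written (filenames taken from the question).
ffmpeg -i ENCODED.mkv -pix_fmt yuv420p OUTPUT.y4m
```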
-
Convert SDR-JPEG to HDR-AVIF [closed]
27 April, by Jonas Janzen
I would like to convert a jpg file into an avif file with HDR10-capable metadata (PQ curve, BT.2020 color space, 10 bit).
The idea is to save normal SDR images in HDR-capable containers so that they can be displayed in all their glory on HDR-capable displays.
I want to play with inverse tone mapping to manipulate the output, so I implemented it in Python via subprocess.
So far I just want the input image to be saved in AVIF as HDR and look the same at the end as before, so that I can then make changes in the next step.
I used the following command for this:
ffmpeg_command = [
    'ffmpeg',
    # Input file
    '-i', temp_file,
    # Used library
    '-c', 'libaom-av1',
    '-still-picture', '1',
    # Output metadata
    '-pix_fmt', 'yuv420p10le',
    '-strict', 'experimental',
    '-color_primaries', 'bt2020',
    '-color_trc', 'smpte2084',
    '-colorspace', 'bt2020nc',
    '-color_range', 'pc',
    # Output file
    output_file,
]
So far my attempts have only been successful with the HLG characteristic. With it you can see that the images really are brighter in the peaks on my HDR monitor.
With the PQ characteristic curve, the images are far too oversaturated.
I guess this is because the HLG curve is backward-compatible with the conventional gamma curve, but PQ is not.
Now my question is what I need to change.
Which curve does FFmpeg expect as input?
In Python I can change the images mathematically without any problems.
The example images are tone mapped back down to jpg to show what happened.
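For what it's worth: flags like -color_trc smpte2084 only tag the stream; they do not touch the pixel values, so sRGB-encoded samples end up being displayed through the PQ curve. HLG looks roughly right because its lower range is close to a conventional gamma, while PQ is nothing like it, which would explain the oversaturation. The samples themselves have to be converted too, e.g. with the zscale filter. A sketch (the input-side parameters, the npl value, and the filenames are assumptions about the source and may need adjusting):

```shell
# Convert the decoded JPEG from sRGB/BT.601 to PQ/BT.2020 before tagging it.
# npl=100 places SDR peak white at 100 nits; raise it for a brighter render.
ffmpeg -i input.jpg \
  -vf "zscale=tin=iec61966-2-1:min=bt470bg:pin=bt709:t=smpte2084:m=2020_ncl:p=bt2020:npl=100,format=yuv420p10le" \
  -c:v libaom-av1 -still-picture 1 \
  -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc \
  output.avif
```

With the samples actually on the PQ curve, the metadata tags and the pixel data agree, which is the precondition for the "looks the same as SDR" baseline before experimenting with inverse tone mapping.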
-
How to slide multiple images left continuously?
27 April, by mikezang
I use ffmpeg to slide images from the right to the center as below:
#slide 1st image from right to center
#countdown 10 seconds
#slide 1st image from center to left and at the same time
#slide 2nd image from right to center
#countdown 10 seconds
#...
ffmpeg -loglevel quiet -loop 1 -i input.png -filter_complex "split=2[bg][slider];[bg]drawbox=c=black:t=fill[bg];[bg][slider]overlay=x='max(W-w*t,0)':y=0" -t 10 -y output.mp4
There is a black background between the two images. I want the previous page to keep sliding from center to left while the next page slides in from right to center. What can I do?
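One approach (a sketch under assumptions: two images the same size as the frame, a roughly one-second slide, 10-second holds, and made-up filenames and dimensions) is to take the images as separate inputs, chain one overlay per image over a black base, and offset each image's time expression so the outgoing and incoming slides happen simultaneously:

```shell
# Image 1 slides in during [0,1], holds ~10 s, then slides out during
# [11,12] while image 2 slides in over that same second.
ffmpeg -loop 1 -t 22 -i 1.png -loop 1 -t 22 -i 2.png -filter_complex \
"color=c=black:s=1280x720:d=22[bg]; \
 [bg][0:v]overlay=x='if(lt(t,11), max(W-w*t,0), -w*(t-11))':y=0[v1]; \
 [v1][1:v]overlay=x='max(W-w*(t-11),0)':y=0" \
-t 22 -y output.mp4
```

Because image 2's x expression stays off-screen right until t=11, the first image is still moving left as the second one enters, so no black gap opens between them; extending to more images is a matter of adding one more input and one more overlay per image with the time offset increased by 11 s each.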