Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
Python script that runs another python script, ffmpeg2vmaf
16 May 2018, by Asusio
I want to build a little API that uses ffmpeg2vmaf to analyze videos made for video streaming (DASH, HLS).
I'm on Ubuntu Linux 16.04 and I'm using Python 3.
My API can now concatenate the files into an MP4 file. To do this I'm using the subprocess library and the cat command. But when I want to use ffmpeg2vmaf, it can't find a library that ffmpeg2vmaf uses.
This is what I do:
try:
    os.chdir("/home/USERNAME/VMAF/vmaf/")
    output_cmd = subprocess.check_output(
        ["sudo ./ffmpeg2vmaf WIDTH HEIGHT "
         "'/home/alexis/video/ref.mp4' '/home/alexis/video/dist.mp4' "
         ">> '/home/alexis/analyze/analyze.txt'"],
        shell=True)
except subprocess.CalledProcessError:
    print("Error")
The error is:
Traceback (most recent call last):
  File "./ffmpeg2vmaf", line 8, in <module>
    from vmaf.config import VmafConfig, DisplayConfig
ImportError: No module named vmaf.config
But if I use the same command without Python, in the terminal, it works.
I have tried to put my API in the same folder as "ffmpeg2vmaf", but it still doesn't work.
Thank you in advance
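A likely cause: sudo (and shell=True) typically start from a reset environment, so the PYTHONPATH that makes vmaf.config importable in an interactive terminal is lost. Below is a minimal sketch of an alternative call; the repo layout (python/src inside the checkout) and the placeholder paths are assumptions taken from the question, and WIDTH/HEIGHT stay as placeholders.

```python
import os
import subprocess

# Assumed checkout location from the question.
VMAF_ROOT = "/home/USERNAME/VMAF/vmaf"

# Pass an explicit environment instead of relying on what sudo keeps.
env = dict(os.environ)
env["PYTHONPATH"] = os.path.join(VMAF_ROOT, "python", "src")

# A list of arguments avoids shell quoting problems entirely.
cmd = [
    "sudo", "-E",  # -E asks sudo to preserve the caller's environment
    "./ffmpeg2vmaf", "WIDTH", "HEIGHT",
    "/home/alexis/video/ref.mp4",
    "/home/alexis/video/dist.mp4",
]

def analyze():
    # Capture stdout ourselves instead of using shell '>>' redirection.
    out = subprocess.check_output(cmd, cwd=VMAF_ROOT, env=env)
    with open("/home/alexis/analyze/analyze.txt", "ab") as f:
        f.write(out)
```

Using cwd= also replaces the os.chdir call, so the rest of the API keeps its own working directory.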
-
FFMPEG Segment name
16 May 2018, by user726720
I'm trying to save segments with ffmpeg but keep getting the following error:
[mpegts @ 0000000003a20560] Invalid segment filename template 'Test-%date:~7,4% %date:~3,3%-%date:~0,2%_%time:~0,2%-%time:~3,2%-%time:~6,2%.ts'
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
I do understand that the way I'm trying to get the date/time in there is not correct. Can someone help me correct it, please?
Here is the command:
ffmpeg -i rtp://10.0.0.239:1234 -vcodec copy -acodec copy -f segment -segment_time 60 -segment_format ts "Test- %date:~7,4%-%date:~3,3%-%date:~0,2%_%time:~0,2%-%time:~3,2%-%time:~6,2%.ts"
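The %date:~…% / %time:~…% substrings are cmd.exe syntax; inside a quoted argument they reach ffmpeg unexpanded, which is why the segment muxer rejects the template. The segment muxer can timestamp filenames itself with its strftime option. A sketch of the equivalent command built as a Python argument list (the RTP address is taken from the question):

```python
import subprocess

# With -strftime 1 the segment muxer expands strftime() specifiers
# in the output name, so cmd.exe never needs to touch the date/time.
cmd = [
    "ffmpeg",
    "-i", "rtp://10.0.0.239:1234",
    "-vcodec", "copy", "-acodec", "copy",
    "-f", "segment",
    "-segment_time", "60",
    "-segment_format", "ts",
    "-strftime", "1",
    "Test-%Y-%m-%d_%H-%M-%S.ts",  # one timestamped file per segment
]
# subprocess.run(cmd, check=True)  # uncomment to start recording
```

In a plain .bat file the same idea is `-strftime 1 "Test-%%Y-%%m-%%d_%%H-%%M-%%S.ts"` (percent signs doubled so cmd.exe passes them through).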
-
Start .bat after files get downloaded in torrent client
16 May 2018, by AniEncoder
-
Correct syntax for ffmpeg filter combination?
16 May 2018, by shoku
I'm playing with ffmpeg to generate a pretty video out of an mp3 + jpg.
I've managed to generate a video that takes a jpg as a background, and adds a waveform complex filter on top of it (and removes the black bg as an overlay).
This works:
ffmpeg -y -i 1.mp3 -loop 1 -i 1.jpg -filter_complex "[0:a]showwaves=s=1280x720:mode=cline,colorkey=0x000000:0.01:0.1,format=yuva420p[v];[1:v][v]overlay[outv]" -map "[outv]" -pix_fmt yuv420p -map 0:a -c:v libx264 -c:a copy -shortest output.mp4
I've been trying to add text somewhere in the generated video too. I'm trying the drawtext filter. I can't get this to work however, so it seems I don't understand the syntax, or how to combine filters.
This doesn't work:
ffmpeg -y -i 1.mp3 -loop 1 -i 1.jpg -filter_complex "[0:a]showwaves=s=1280x720:mode=line,colorkey=0x000000:0.01:0.1,format=yuva420p[v];[1:v][v]overlay[outv]" -filter_complex "[v]drawtext=text='My custom text test':fontcolor=White@0.5:fontsize=30:font=Arvo:x=(w-text_w)/5:y=(h-text_h)/5[out]" -map "[outv]" -pix_fmt yuv420p -map 0:a -c:v libx264 -c:a copy -shortest output.mp4
Would love some pointers!
-
ffmpeg - How to filter video while recording a Windows application at the same time
16 May 2018, by user3181176
I have a problem when recording a Windows application and filtering it (overlay, audio mixing...) together with other filter videos.
This code works perfectly (where "Main" is my application):
ffmpeg -rtbufsize 1500M -f dshow -i audio="virtual-audio-capturer" -f gdigrab -framerate 30 -draw_mouse 0 -i title="Main" -vf crop=850:480:156:100 -pix_fmt yuv420p -profile:v baseline -y ok.mp4
But this code below doesn't:
ffmpeg -rtbufsize 1500M -f dshow -y -i audio="virtual-audio-capturer" -f gdigrab -framerate 30 -draw_mouse 0 -i title="Main" -vf crop=850:480:156:100 -pix_fmt yuv420p -profile:v baseline -stream_loop 999 -i "filter/filter.mp4" -filter_complex "[2:v]scale=385:216,setdar=dar=16/9[v1];[2:v]scale=385:216,setdar=dar=16/9[v2];[v1][v2]blend=shortest=1:all_opacity=1[v3];movie=filter/nds_bg.mp4:loop=999,setpts=N/(FRAME_RATE*TB),scale=640:360[v4];[v4][v3]overlay=shortest=1:x=20:y=130;[2:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,asetrate=8.5/10*44100,atempo=10/8.5,lowpass=f=2500,highpass=f=400,volume=3,bass=g=-30,equalizer=f=10.5:width_type=o:width=1:g=-30,equalizer=f=31.5:width_type=o:width=1:g=-30,equalizer=f=63:width_type=o:width=1:g=-10,equalizer=f=125:width_type=o:width=1:g=-20,equalizer=f=250:width_type=o:width=1:g=-1.5,equalizer=f=500:width_type=o:width=1:g=-20,equalizer=f=1000:width_type=o:width=1:g=-20,equalizer=f=8000:width_type=o:width=3:g=1,equalizer=f=16000:width_type=o:width=3:g=1" -vcodec libx264 -pix_fmt yuv420p -r 26 -g 30 -b:30 800k -shortest -acodec libmp3lame -b:a 128k -preset:v ultrafast -ar 44100 -f flv -bufsize 3000k -s 640x360 out.mp4
I think this command needs to output to a video first:
ffmpeg -rtbufsize 1500M -f dshow -y -i audio="virtual-audio-capturer" -f gdigrab -framerate 30 -draw_mouse 0 -i title="Main" -vf crop=850:480:156:100 -pix_fmt yuv420p -profile:v baseline
Please show me how to merge these commands, or suggest any other solution. Thank you so much!
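One conflict in the failing command is that -vf and -filter_complex cannot both filter the same encode: once a complex graph is involved, the crop has to move into the graph, so no intermediate output file is needed. Below is a reduced sketch of that structure only (screen grab cropped, looped clip overlaid, final scale); the long audio/equalizer chain from the question is omitted for brevity, and the input indices follow the original command.

```python
# Reduced sketch: the -vf crop becomes the first step of the one
# complex graph, so no separate encoding pass is required.
graph = ";".join([
    "[1:v]crop=850:480:156:100[screen]",              # was the -vf option
    "[2:v]scale=385:216,setdar=dar=16/9[clip]",
    "[screen][clip]overlay=shortest=1:x=20:y=130,scale=640:360[outv]",
])
cmd = [
    "ffmpeg", "-y", "-rtbufsize", "1500M",
    "-f", "dshow", "-i", "audio=virtual-audio-capturer",
    "-f", "gdigrab", "-framerate", "30", "-draw_mouse", "0",
    "-i", "title=Main",
    "-stream_loop", "999", "-i", "filter/filter.mp4",
    "-filter_complex", graph,
    "-map", "[outv]", "-map", "0:a",
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "-c:a", "libmp3lame", "-b:a", "128k",
    "out.mp4",
]
```

The remaining audio filters from the question can be appended to the same graph as an "[2:a]…[outa]" chain and mapped with a second -map.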