Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
ffmpeg - Any way to add a transition to a list of clips to combine into a single video?
29 April, by J. Pena
Let's say I have a directory with 5 videos titled as follows:
- 1.mp4
- 2.mp4
- 3.mp4
- 4.mp4
- 5.mp4
I created a text file named mylist.txt containing the following:
file '1.mp4'
file '2.mp4'
file '3.mp4'
file '4.mp4'
file '5.mp4'
I already have an ffmpeg command that uses the text file to combine these clips into one file beautifully:
ffmpeg -f concat -safe 0 -i mylist.txt -c copy combined.mp4
My question is: how can I add a fade transition to clips 2-4 using the text file? The filter_complex parameter gives me an error:
Streamcopy requested for output stream fed from a complex filtergraph. Filtering and streamcopy cannot be used together.
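The error means exactly what it says: `-c copy` (streamcopy) cannot be combined with `-filter_complex`, so adding a fade requires re-encoding, and the concat demuxer itself cannot express transitions. A sketch using the `xfade` filter instead (it assumes the clips share resolution, pixel format, and frame rate, and, purely for illustration, that each clip is 5 seconds long; each `offset` is the running total of the previous clip durations minus the fades so far):

```shell
# Hypothetical 5-second clips with 1-second fades:
# first fade starts at 5-1=4 s; after it, the combined stream is
# 5+5-1=9 s long, so the second fade starts at 9-1=8 s.
# Re-encoding is unavoidable, hence -c:v libx264 instead of -c copy.
ffmpeg -i 1.mp4 -i 2.mp4 -i 3.mp4 -filter_complex \
  "[0:v][1:v]xfade=transition=fade:duration=1:offset=4[v01]; \
   [v01][2:v]xfade=transition=fade:duration=1:offset=8[v02]" \
  -map "[v02]" -c:v libx264 combined.mp4
```

For five clips the same pattern is chained twice more, recomputing the offset each time from the actual clip durations.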
-
Codec AVOption b (set bitrate (in bits/s)) specified for output file #0 (360p.m3u8) has not been used for any stream.
28 April, by Marc Archenault
I am receiving this warning from ffmpeg. The job runs and all the videos seem to output correctly. I have two HLS outputs and one MP4 with an overlay, plus a single thumbnail.
The warning:
Stream #1:0: Video: png, rgba(pc), 500x38, 25 tbr, 25 tbn, 25 tbc
Codec AVOption b (set bitrate (in bits/s)) specified for output file #0 (360p.m3u8) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
The Windows batch command:
ffmpeg -loglevel info -threads 2 -hide_banner -y -i SampleVideo_1280x720_10mb.mp4 -i ecocastvideo-overlay-shadow-white-500.png^
 -filter_complex "[1]colorchannelmixer=aa=0.7,scale=iw*0.7:-1[wm];[0:v][wm]overlay=(main_w-overlay_w)-10:(main_h-overlay_h)-30,split=3[a][b][c];[a]scale=w=640:h=360:force_original_aspect_ratio=decrease[a];[b]scale=w=1280:h=720:force_original_aspect_ratio=decrease[b];[c]scale=w=1280:h=720:force_original_aspect_ratio=decrease[c]"^
 -map "[a]" -map 0:v -c:v h264 -profile:v main -crf 20 -preset veryfast -sc_threshold 0 -g 72 -keyint_min 72 -hls_time 4 -hls_playlist_type vod -b:a 96k -hls_flags single_file 360p.m3u8^
 -map "[b]" -map 0:v -c:v h264 -profile:v main -crf 20 -preset veryfast -sc_threshold 0 -g 72 -keyint_min 72 -hls_time 4 -hls_playlist_type vod -b:a 96k -hls_flags single_file 720p.m3u8^
 -map "[c]" -map 0:v -c:v h264 -profile:v main -preset veryfast 720.mp4^
 -map 0:v -y -ss 0.5 -vframes 1 -an -s 120x90 -ss 30 thumbname-00001.png
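The warning is most likely triggered by `-b:a 96k`: it sets an audio bitrate for the 360p.m3u8 output, but only video streams are mapped into that output (`-map "[a]" -map 0:v`), so the option is never consumed by any stream. A minimal sketch of one possible fix, assuming the source file actually contains an audio stream (the filtergraph is shortened here for illustration): either drop `-b:a`, or map the audio and give it an encoder so the option applies to something:

```shell
# Map the source audio alongside the filtered video so -b:a is consumed.
ffmpeg -y -i SampleVideo_1280x720_10mb.mp4 \
  -filter_complex "[0:v]scale=w=640:h=360:force_original_aspect_ratio=decrease[a]" \
  -map "[a]" -map 0:a:0 -c:v h264 -c:a aac -b:a 96k \
  -hls_time 4 -hls_playlist_type vod -hls_flags single_file 360p.m3u8
```

The same applies to the 720p.m3u8 output, which also carries a `-b:a 96k` with no audio mapped.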
-
FFmpeg container is unable to communicate to my application container in Quarkus [duplicate]
28 April, by Abhi11
I'm working on an application where I run several containers, such as clamav, postgres, and my Quarkus application. All the configuration is defined in a docker-compose file. Now I want to perform some operations on files, which is why I want to use ffmpeg. I am running ffmpeg as a separate container, connected via a Docker network. I want to execute some ffmpeg commands, for example checking the version, and later more advanced ones, but I'm getting an error. Please help me solve this.
My docker-compose.yml:
version: "3.8"
services:
  clamav:
    image: clamav/clamav:latest
    container_name: clamav
    ports:
      - "3310:3310"
    restart: always
    healthcheck:
      test: ["CMD", "clamdscan", "--version"]
      interval: 30s
      timeout: 10s
      retries: 5
    networks:
      - techtonic-antivirus-net
  postgres-database:
    image: postgres:15
    container_name: postgres
    environment:
      POSTGRES_DB: antivirustt
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - "5434:5432"
    restart: always
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - techtonic-antivirus-net
  # Add Redis container
  redis:
    image: redis:latest
    container_name: redis
    ports:
      - "6379:6379"
    restart: always
    networks:
      - techtonic-antivirus-net
  ffmpeg:
    image: jrottenberg/ffmpeg:latest
    container_name: ffmpeg
    restart: always
    entrypoint: ["tail", "-f", "/dev/null"] # Keeps the container running
    networks:
      - techtonic-antivirus-net
    healthcheck:
      test: ["CMD", "ffmpeg", "-version"]
      interval: 10s
      timeout: 5s
      retries: 3
  techtonic-antivirus:
    build:
      context: .
      dockerfile: src/main/docker/Dockerfile.native-micro
    container_name: techtonic-antivirus
    depends_on:
      clamav:
        condition: service_healthy
      postgres-database:
        condition: service_started
      redis:
        condition: service_started
    ports:
      - "8080:8080"
    environment:
      - CLAMAV_HOST=${CLAMAV_HOST}
      - CLAMAV_PORT=${CLAMAV_PORT}
      - QUARKUS_PROFILE=${QUARKUS_PROFILE:-dev}
      - QUARKUS_DATASOURCE_JDBC_URL=jdbc:postgresql://postgres-database:5432/antivirustt
      - QUARKUS_DATASOURCE_USERNAME=postgres
      - QUARKUS_DATASOURCE_PASSWORD=postgres
      - QUARKUS_REDIS_HOSTS=redis://redis:6379
    restart: always
    volumes:
      - ffmpeg:/ffmpeg
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - techtonic-antivirus-net
volumes:
  pgdata:
  ffmpeg:
networks:
  techtonic-antivirus-net:
My service code:
@Slf4j
@ApplicationScoped
public class CheckFFmpegStatus {

    public static boolean isFFmpegReady() {
        try {
            // ProcessBuilder pb = new ProcessBuilder("docker", "exec", "ffmpeg", "ffmpeg", "-version");
            ProcessBuilder pb = new ProcessBuilder("ffmpeg", "-version");
            Process process = pb.start();
            int exitCode = process.waitFor();
            if (exitCode == 0) {
                log.info("✅ FFmpeg is ready and reachable inside the container.");
            } else {
                log.warn("⚠️ FFmpeg process exited with code: " + exitCode);
            }
            return exitCode == 0;
        } catch (IOException | InterruptedException e) {
            log.error("❌ Failed to check FFmpeg status", e);
            return false;
        }
    }
}
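One likely explanation, offered as an assumption rather than a diagnosis: containers share a network, not a process table, so `new ProcessBuilder("ffmpeg", "-version")` looks for an ffmpeg binary inside the techtonic-antivirus container itself, where none is installed. Since the compose file already mounts `/var/run/docker.sock`, one sketch is the commented-out `docker exec` approach (which additionally assumes a docker CLI is present in the application image):

```shell
# Run ffmpeg inside the sidecar container via the mounted Docker socket.
# Assumes the docker CLI binary is installed in the application image.
docker exec ffmpeg ffmpeg -version

# Alternative sketch: drop the sidecar and install ffmpeg directly in the
# application image (hypothetical line for a Debian/Ubuntu-based Dockerfile):
#   RUN apt-get update && apt-get install -y ffmpeg
```

Installing ffmpeg in the application image is usually the simpler design, since it avoids granting the application access to the Docker socket.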
-
Long video files are shortened when exported [closed]
28 April, by Mathilda
I'm working on videos with a maximum length of 23h59m59s (let's say 24h). I wanted to make several modifications to these videos, including dividing them into smaller parts and converting them into images. I'm doing it in Python, in the Anaconda interface.
My 24-hour videos are properly read by the Windows "Films & TV" player. They are also properly read by VLC media player. However, as soon as I try to use them:
- in editing software (Movavi);
- on an online editing site (Video-cutter);
- in Python (the cv2 and moviepy packages); or
- in ffmpeg,
the indicated video duration is correct only if the video is less than 13h15m22s. If it is longer than that, the indicated duration is equal to the real duration minus 13h15m22s, regardless of the loaded video's metadata (size, weight, fps, duration).
I have tried videos of different durations, different pixel sizes and weights in GB, and different fps. The videos had variable-fps mode enabled, so I tried to set them to fixed fps using ffmpeg, but the exported fixed-fps video still has 13h15m22s missing. For videos taken from the web (YouTube), this problem does not appear in any of the four tools tested, so the problem is not coming from my machine or software; the issue must be in the videos themselves. But I don't understand what could be in the videos for it to always be exactly 13h15m22s that is removed, regardless of video size. Moreover, if the video were corrupted, I wouldn't be able to play it in Films & TV or VLC media player.
I used ffmpeg to see if the video was corrupted:
ffmpeg.exe -v error -i "/Users/myname/Desktop/myvideo.mp4" -f null - >error.log 2>&1
ffmpeg.exe -v error -i "/Users/myname/Desktop/myvideo.mp4" -f null -
ffprobe -show_entries stream=r_frame_rate,nb_read_frames,duration -select_streams v -count_frames -of compact=p=1:nk=1 -threads 3 -v 0 "/Users/myname/Desktop/myvideo.mp4"
And nothing appears in the output, so I guess no error was detected. I tried to repair the video with ffmpeg anyway:
ffmpeg -i "/Users/myname/Desktop/myvideo.mp4" -c copy "/Users/myname/Desktop/myvideo_fixed.mp4"
And there is still 13h15m22s missing.
I finally tried to repair it with Untrunc, which gave me a completely black 55-minute video for a video supposed to last several hours. Untrunc takes as input the corrupted file plus a working video file from the same camera, so I used a 3m26s file for this, assuming that since it is shorter than 13h15m22s and does not appear truncated in any of the software, it is not corrupted. Maybe I should have used a longer video file (a few hours, for example), but I'm not sure that would change anything.
I don't know what to do to be able to cut these videos and use them in Python without them being truncated.
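A hedged observation, not stated in the question: 13h15m22s is almost exactly 2^32 ticks at 90000 ticks per second, the most common MP4 video timescale (2^32 / 90000 ≈ 47721.86 s ≈ 13h15m21.9s). If the camera's muxer stores a duration or timestamp in a 32-bit field, it would wrap at precisely this point, which would explain why players that decode the whole file (VLC, Films & TV) report the right length while tools that trust the header do not. Checking the container timescale with ffprobe may confirm or rule this out:

```shell
# If time_base is 1/90000 and the stored duration_ts looks 2^32 too small,
# a 32-bit overflow in the muxer is the likely culprit:
# 2^32 / 90000 = 47721 s = 13h15m21.86s, the exact missing amount.
ffprobe -v error -select_streams v:0 \
        -show_entries stream=time_base,duration_ts:format=duration \
        "/Users/myname/Desktop/myvideo.mp4"

# The wrap point, in integer seconds:
echo $((4294967296 / 90000))
```

If this is the cause, remuxing with `-c copy` cannot fix it, since the wrapped value is all the file contains; the duration would have to be rewritten from a full decode (frame count divided by fps).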
-
ffmpeg scale down video dynamically (squeeze-back) or zoompan out to smaller than original
27 April, by Sam
I have two videos. I'm trying to overlay one on top of the other and have it shrink down in an animated fashion until it looks like a picture-in-picture setup. Then, after a few seconds, it should scale back up.
This is what I am trying to achieve (these would be videos, not images):
This is the closest I've been able to get but, crucially, zoompanning "out" (as opposed to "in") does not appear to work, so this command fails to achieve the effect:
ffmpeg -i bg.mov -i top.mov -filter_complex "[0:v]zoompan=z='pzoom-0.1':d=1, setpts=PTS-STARTPTS[top]; [1:v]setpts=PTS-STARTPTS+2/TB, scale=1920x1080, format=yuva420p,colorchannelmixer=aa=1.0[bottom]; [top][bottom]overlay=shortest=0" -vcodec libx264 out.mp4
Is this achievable with ffmpeg?
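Probably not with zoompan alone: zoompan clamps the zoom value to a minimum of 1, so it can zoom in and back out toward the original size, but never shrink the frame below it. A sketch of a common workaround, animating the overlaid stream's size with scale's per-frame expression evaluation instead (this assumes an ffmpeg build where scale's size expressions accept the `t` variable with `eval=frame`; the timing and size constants are made up for illustration):

```shell
# Shrink top.mov from full frame to 30% over the first 2 seconds and slide
# it toward the bottom-right corner; commas inside expressions are escaped
# as \, for the filtergraph parser.
ffmpeg -i bg.mov -i top.mov -filter_complex \
  "[1:v]scale=w='iw*clip(1-0.35*t\,0.3\,1)':h=-2:eval=frame[pip]; \
   [0:v][pip]overlay=x='(W-w)-20*min(t/2\,1)':y='(H-h)-20*min(t/2\,1)'" \
  -c:v libx264 out.mp4
```

Scaling back up a few seconds later would use the mirror of the same ramp (e.g. a second `clip(...)` term that grows after a chosen timestamp); the key point is that the animation lives in scale/overlay expressions, not in zoompan.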