Newest 'ffmpeg' Questions - Stack Overflow


  • Merge individual frame to video file using Opencv and FFmpeg

    16 August, by Rohit

    I am trying to stitch individual frames into a video file using OpenCV. I want to combine two different pieces of code to do this. The following code helps me extract the individual frames:

    count = 0  # counter for the saved frame images
    while True:
        mask = object_detector.apply(frame)
        _, mask = cv2.threshold(mask, 254, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        res = cv2.bitwise_and(frame, frame, mask=mask)
        for cnt in contours:
            area = cv2.contourArea(cnt)
            if area > 1000:
                # print("Area of contour:", area)
                cv2.drawContours(frame, [cnt], -1, (0, 255, 0), 2)
                cv2.imwrite("file%d.jpg" % count, frame)  # save the annotated frame
                count += 1

    Then I assemble the frames into a video with the following ffmpeg command:

    ffmpeg -r 3 -i frame%03d.jpg -c:v libx264 -vf fps=25 -pix_fmt yuv420p video.mp4

    I tried storing the individual frames in an array, but it didn't work. It doesn't show any error, but the PC crashes.
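    One detail worth checking, as a sketch rather than a definitive fix: the Python snippet saves frames as file0.jpg, file1.jpg, …, while the ffmpeg command above reads frame%03d.jpg, so the input pattern never matches the saved files. Assuming the frames were written with the file%d.jpg pattern shown, a matching invocation would be:

```shell
# Read file0.jpg, file1.jpg, ... at 3 fps and encode a 25 fps H.264 MP4.
# %d matches the unpadded counter used by cv2.imwrite("file%d.jpg" % count, frame).
ffmpeg -framerate 3 -i file%d.jpg -c:v libx264 -vf fps=25 -pix_fmt yuv420p video.mp4
```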

  • Generate and concatenate videos from images with ffmpeg in single command

    16 August, by YulkyTulky

    My goal is to generate a video from images. Let's say I have 2 images 1.png and 2.png.

    I can do

    ffmpeg -loop 1 -i 1.png -t 3 1.mp4
    ffmpeg -loop 1 -i 2.png -t 5 2.mp4

    to create a 3 second video from the first image and 5 second video from the second image.

    Then, I merge the two videos using

    ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "concat" final.mp4

    to create my final 8 second video.

    This process seems extremely inefficient, and I feel I should not have to spend all this processing power and disk I/O creating two intermediary video files when I only want the one final video.

    Is there a way to execute this entire process in one ffmpeg command (efficiently)?
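    One way to do this in a single command, as a sketch (assuming both images have the same dimensions; the filter labels are illustrative), is to give each image its own looped input and join them with the concat filter:

```shell
# Loop each still for its desired duration, then concatenate the two
# video-only segments into one 8-second MP4 without intermediate files.
ffmpeg -loop 1 -t 3 -framerate 25 -i 1.png \
       -loop 1 -t 5 -framerate 25 -i 2.png \
       -filter_complex "[0:v][1:v]concat=n=2:v=1:a=0[v]" \
       -map "[v]" -c:v libx264 -pix_fmt yuv420p final.mp4
```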

  • How can I make sure ffmpeg is found with videohash?

    16 August, by Alarm-1202

    I have been trying to deploy an app on Digital Ocean. I have installed the necessary packages, but I keep getting an error from the videohash library: videohash.exceptions.FFmpegNotFound. I have tried adding the package directory to PATH by running:

    export PATH="$PATH:/workspace/web/.heroku/python/lib/python3.10/site-packages/ffmpeg"

    I also tried adding it as an environment variable at app and component level but nothing I do seems to work. Is there another way I could try to solve this issue?
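    For what it's worth, videohash invokes the ffmpeg executable itself, so pointing PATH at a Python site-packages directory usually won't help; the system binary has to be present. A sketch of installing it on a Debian/Ubuntu-based image (assuming root access in the build step):

```shell
# Install the actual ffmpeg binary and confirm it is reachable on PATH.
apt-get update && apt-get install -y ffmpeg
ffmpeg -version
```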

  • Cut and assemble audio clips ffmpeg

    16 August, by Kruglianin

    Can you tell me if it's possible to make a script for my task?

    Or tell me the right sequence of steps.

    An example of the task:

    There is an audio file "example.mp3" with duration of 60 seconds.

    I want to implement the following sequence of actions.

    Cut the first audio fragment from "example.mp3" from second 5 to 15 and cut the second audio fragment from second 50 to 60.

    Then these two audio fragments should be merged into one audio file and saved in a new folder, again under the name "example.mp3".

    That is, the resulting new "example.mp3" would be 20 seconds long and would consist of the first and second audio fragments.
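    The steps above can be sketched as a single command (assuming the output folder, here called newfolder/, already exists; atrim takes start:end in seconds):

```shell
# Cut 5-15 s and 50-60 s from example.mp3, reset each fragment's timestamps,
# concatenate the two fragments, and write the 20-second result.
ffmpeg -i example.mp3 -filter_complex \
  "[0:a]atrim=5:15,asetpts=PTS-STARTPTS[a0]; \
   [0:a]atrim=50:60,asetpts=PTS-STARTPTS[a1]; \
   [a0][a1]concat=n=2:v=0:a=1[out]" \
  -map "[out]" newfolder/example.mp3
```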

  • How to repackage mov/mp4 file into HLS playlist with multiple audio streams

    16 August, by Martyna

    I'm trying to convert some videos (in the different formats, e.g., mp4, mov) which contain one video stream and multiple audio streams into one HLS playlist with multiple audio streams (treated as languages) and only one video stream.

    I have already browsed a lot of Stack Overflow threads and tried many different approaches, but I was only able to find answers for creating separate HLS playlists with different audios.

    Sample scenario which I have to handle:

    1. I have one mov file, containing one video stream and 2 audio streams.
    2. I need to create an HLS playlist from this mov file, which will use this one video stream, but would encode these 2 audio streams as language tracks (so let's say it's ENG and FRA)
    3. Such prepared HLS can be later streamed in the player, and the end user would have a possibility to switch between audio tracks while watching the clip.

    What I have been able to achieve is creating multiple HLS playlists, each with a different audio track:

    ffmpeg -i "file_name.mp4" \
    -map 0:v -map 0:a -c:v copy -c:a copy -start_number 0 \
    -f hls \
    -hls_time 10 \
    -hls_playlist_type vod \
    -hls_list_size 0 \
    -master_pl_name master_playlist_name.m3u8 \
    -var_stream_map "v:0,agroup:groupname a:0,agroup:groupname,language:ENG a:1,agroup:groupname" file_name_%v_.m3u8

    My biggest issue is that I'm having a hard time understanding how the -map and -var_stream_map options should be used in my case, or whether they should be used in this scenario at all.

    Here is the output of running ffmpeg -i on the original mov file that should be converted into HLS:

      Stream #0:0[0x1](eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1920x1080, 8786 kb/s, 25 fps, 25 tbr, 12800 tbn (default)
          handler_name    : Apple Video Media Handler
          vendor_id       : [0][0][0][0]
          timecode        : 00:00:56:05
      Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 127 kb/s (default)
          handler_name    : SoundHandler
          vendor_id       : [0][0][0][0]
      Stream #0:2[0x3](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 127 kb/s
          handler_name    : SoundHandler
          vendor_id       : [0][0][0][0]

    I also checked this blog post, and I would like to achieve this exact effect, but with video, not with audio:

    For example, -var_stream_map "v:0,a:0 v:1,a:0 v:2,a:0" implies that the audio stream denoted by a:0 is used in all three video renditions.
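    For the scenario described, the usual pattern is the inverse of the quoted example: a single video rendition that references an audio group containing both language tracks. A sketch (the audio group name and stream names here are illustrative, not taken from the original post):

```shell
# One video variant plus an audio group with two selectable language tracks.
# Each a:N entry joins the "aud" group; v:0,agroup:aud ties the video to it.
ffmpeg -i file_name.mov \
  -map 0:v:0 -map 0:a:0 -map 0:a:1 -c copy \
  -f hls -hls_time 10 -hls_playlist_type vod -hls_list_size 0 \
  -master_pl_name master_playlist_name.m3u8 \
  -var_stream_map "a:0,agroup:aud,language:ENG,name:eng a:1,agroup:aud,language:FRA,name:fra v:0,agroup:aud" \
  file_name_%v_.m3u8
```

    The master playlist then advertises the two audio renditions as EXT-X-MEDIA entries in one group, so a player can switch languages while playing the single video stream.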