
On other sites (147)

  • Extremely slow rendering time using Moviepy

    15 January, by pacorisas

    I'm trying to create the following: two stacked videos (one on top of the other) with subtitles from an srt file (like those videos you see on TikTok). For this, I first take the top and bottom videos and create a CompositeVideoClip:

    clips_array([[video_clip], [random_bottom_clip]])

    Then I take this CompositeVideoClip and, using a generator, create the SubtitlesClip, which I then add to another CompositeVideoClip:

    sub = SubtitlesClip(os.path.join(temp_directory, "subtitles.srt"), generator)
    final = CompositeVideoClip([myvideo, sub.set_position(('center', 'center'))]).set_duration("00:02:40")

    Lastly, I add some more text clips (just a small title for the video) and render:

    video_with_text = CompositeVideoClip([final] + text_clips)
    video_with_text.write_videofile(part_path, fps=30, threads=12, codec="h264_nvenc")

    Here is the problem: I tried to render a video of 180 seconds (3 minutes), and it takes up to an hour and a half (80 minutes), which is wild. I tried some render settings, as you can see, like changing the codec and using all the threads of my CPU.
    I also tried not to use so many CompositeVideoClips; I read that when you nest them the final render suffers a lot, but I didn't manage to find a way to avoid that many CompositeVideoClips. Any idea?

    My PC is not that bad: 16 GB of RAM, an AMD Ryzen 5 5600 (6 cores), and an NVIDIA 1650 Super.

    My goal is to at least bring the render to under an hour. Right now it is around 1.23 s/it.
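    For scale, the numbers above can be sanity-checked with a bit of arithmetic (assuming the reported 1.23 s/it means seconds per rendered frame at the requested 30 fps):

```python
duration_s = 180           # a 3-minute video
fps = 30                   # requested output frame rate
sec_per_frame = 1.23       # reported render speed, seconds per iteration

frames = duration_s * fps            # frames to composite and encode
total_min = frames * sec_per_frame / 60
target = 60 * 60 / frames            # s/it needed to finish within one hour

print(frames)               # 5400
print(round(total_min, 1))  # 110.7 minutes at the current rate
print(round(target, 3))     # 0.667 s/it needed to stay under an hour
```

    So the per-frame compositing cost has to drop by roughly half to meet the one-hour goal; encoder settings alone will not get there.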

  • ffmpeg piped to python and displayed with cv2.imshow slides rightward and changes colors

    10 September 2021, by Michael

    Code:

    import cv2
import time
import subprocess
import numpy as np

w,h = 1920, 1080
fps = 15

def ffmpegGrab():
    """Generator to read frames from ffmpeg subprocess"""
    cmd = f'.\\Resources\\ffmpeg.exe -f gdigrab -framerate {fps} -offset_x 0 -offset_y 0 -video_size {w}x{h} -i desktop -pix_fmt bgr24 -vcodec rawvideo -an -sn -f image2pipe -' 

    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
    while True:
        raw_frame = proc.stdout.read(w*h*3)
        if len(raw_frame) < w*h*3:  # short read: ffmpeg exited or the pipe closed
            break
        # np.fromstring is deprecated (removed in newer NumPy); use frombuffer
        frame = np.frombuffer(raw_frame, np.uint8)
        frame = frame.reshape((h, w, 3))
        yield frame

# Get frame generator
gen = ffmpegGrab()

# Get start time
start = time.time()

# Read video frames from ffmpeg in loop
nFrames = 0
while True:
    # Read next frame from ffmpeg
    frame = next(gen)
    nFrames += 1

    frame = cv2.resize(frame, (w // 4, h // 4))

    cv2.imshow('screenshot', frame)

    if cv2.waitKey(1) == ord("q"):
        break

    fps = nFrames/(time.time()-start)
    print(f'FPS: {fps}')


cv2.destroyAllWindows()

    The code does display the desktop capture, but the color format seems to switch and the video scrolls rightward as if it were repeated. Am I going about this in the correct way?
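    One likely culprit (an assumption on my part, not something the post confirms): the process is started with `stderr=subprocess.STDOUT`, so ffmpeg's log messages are merged into the same pipe as the raw bgr24 pixel data. Any log bytes shift every subsequent frame read, which looks exactly like a rightward slide plus changing colors. The misalignment effect can be reproduced without ffmpeg:

```python
import numpy as np

w, h = 4, 2                          # tiny stand-in for a video frame
frame = np.zeros((h, w, 3), np.uint8)
frame[:, :, 2] = 255                 # solid color: red channel in bgr24 layout

stream = frame.tobytes()             # clean raw-video bytes on the pipe
garbage = b"log"                     # stray log bytes mixed into stdout

# Reading a fixed-size chunk from the contaminated stream misaligns the frame:
# pixels shift, and offsets not divisible by 3 also rotate the color channels.
bad = np.frombuffer((garbage + stream)[: w * h * 3], np.uint8).reshape(h, w, 3)
good = np.frombuffer(stream, np.uint8).reshape(h, w, 3)
print(np.array_equal(good, bad))     # False: the decoded frame is corrupted
```

    If this is the cause, routing the logs elsewhere (`stderr=subprocess.DEVNULL`) and treating short reads from `proc.stdout` as end-of-stream should keep the frames aligned.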

  • Convert frames to video on demand

    29 January 2020, by user3900456

    I'm working on a C++ project that generates frames that are later converted into a video.
    The project currently dumps all frames as jpg or png files into a folder, and then I run ffmpeg manually to generate an mp4 video file.

    This project runs on a web server, and an iOS/Android app (under development) will call this web server to have the video generated and downloaded.

    The web service is pretty much done and working fine.

    I don't like this approach for obvious reasons like the server dependency, cost, etc.
    I successfully created a POC that exposes the frame-generator lib to Android, and I got it to save the frames in a folder; my next step now is to convert them to a video. I considered using one of the ffmpeg libs for Android/iOS and just calling it when the frames are done.

    Although it seems like I fixed half of the problem, I found a new one: depending on the configuration, each frame could end up being 200 kB+ in size, so depending on the number of frames, this will take a lot of space on the user's device.
    I'm sure this will become a huge problem very easily.

    So I believe the ideal solution would be to build the mp4 file on demand as each frame is created; that way no storage space would be taken, as I wouldn't need to save a file for each frame.

    The problem is that I don't know how to do that. I don't know much about ffmpeg; I know it's open source, but I have no idea how to reference it from the frame generator and produce the video "on demand".
    I heard about libav as well, but again, same problem...

    I would really appreciate any suggestion on how to do it. What I need is basically a way to generate an mp4 video file from a list of frames.

    Thanks for any help!
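    For what it's worth, one common pattern that matches this requirement is to pipe each raw frame into an ffmpeg child process's stdin as it is generated, so the mp4 grows incrementally and no per-frame files are ever written. A minimal Python sketch (names, sizes, and the libx264 choice are illustrative assumptions; the same idea applies from C++ via popen or the libav* APIs):

```python
import subprocess
import numpy as np

def ffmpeg_cmd(out_path, w, h, fps):
    """Build an ffmpeg invocation that encodes raw RGB frames read from stdin."""
    return [
        "ffmpeg", "-y",
        "-f", "rawvideo", "-pix_fmt", "rgb24",   # how to interpret stdin bytes
        "-s", f"{w}x{h}", "-r", str(fps),
        "-i", "-",                               # "-" = read frames from stdin
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        out_path,
    ]

def encode_frames(frames, out_path, w, h, fps=30):
    """Stream an iterable of (h, w, 3) uint8 frames straight into an mp4."""
    proc = subprocess.Popen(ffmpeg_cmd(out_path, w, h, fps),
                            stdin=subprocess.PIPE)
    for frame in frames:
        proc.stdin.write(np.ascontiguousarray(frame, np.uint8).tobytes())
    proc.stdin.close()                           # EOF tells ffmpeg to finalize
    return proc.wait()
```

    Each frame is consumed and encoded as soon as it is written, so peak storage is the output file itself rather than thousands of intermediate jpg/png files.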