Basically I have a folder with hundreds of video files (*.avi), each roughly an hour long. What I would like to achieve is a piece of code that goes through each of those videos, randomly selects two or three frames from each file, and then either stitches them back together or, alternatively, saves the frames to a folder as JPEGs.
Initially I thought I could do this using R, but I quickly realised that I would need something else, possibly working together with R.
Is it possible to call FFmpeg from R to do the task above?
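Since FFmpeg does the heavy lifting, all the calling language needs is a way to spawn a process; in R that is `system2("ffmpeg", args)`. Below is a sketch of the idea in Python (the logic is identical either way). The function names and the `frames` output folder are my own inventions: `ffprobe` reports each file's duration, then one `ffmpeg` call per frame grabs a JPEG at a random timestamp.

```python
import random
import subprocess
from pathlib import Path

def probe_duration(video):
    """Return the duration of a video in seconds, as reported by ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", str(video)],
        capture_output=True, text=True, check=True)
    return float(out.stdout)

def frame_grab_cmd(video, t, dest):
    """Build the ffmpeg command that saves the single frame at time t as a JPEG."""
    return ["ffmpeg", "-ss", f"{t:.3f}", "-i", str(video),
            "-frames:v", "1", "-y", str(dest)]

def grab_random_frames(folder, frames_per_file=3, out_dir="frames"):
    """Save a few randomly chosen frames from every *.avi file in `folder`."""
    Path(out_dir).mkdir(exist_ok=True)
    for video in sorted(Path(folder).glob("*.avi")):
        duration = probe_duration(video)
        times = sorted(random.uniform(0, duration) for _ in range(frames_per_file))
        for i, t in enumerate(times):
            dest = Path(out_dir) / f"{video.stem}_{i}.jpg"
            subprocess.run(frame_grab_cmd(video, t, dest), check=True)
```

From R, the equivalent of `frame_grab_cmd` would be something like `system2("ffmpeg", c("-ss", t, "-i", f, "-frames:v", "1", "-y", out))`, so no Python is actually required.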
I have a script which subclips a larger video and then stitches the needed parts together using concatenation and video writing in moviepy. The problem is that every time I run the script, the audio in the resulting video is out of sync: usually it starts normally but becomes more delayed over time. I suspect this is because I am concatenating around 220 smaller mp4 videos into one big mp4, and every smaller clip produces the warning: 'UserWarning: In file test.mp4, 1555200 bytes wanted but 0 bytes read at frame index y (out of a total y+1 frames), at time x sec. Using the last valid frame instead.'
I use moviepy v2.
My code (it doesn't produce any actual errors, but does give the aforementioned UserWarning when writing video_unsilenced.mp4):
import os
import moviepy as mpy
from moviepy import VideoFileClip

n = 0
cuts = []
input_paths = []
vc = []
os.makedirs(r"ShortsBot\SUBCLIPS")
for timer in range(len(startsilence) - 1):
    w = VideoFileClip(r"ShortsBot\output\cropped_video.mp4").subclipped(endsilence[n], startsilence[n + 1] + 0.5)
    w.write_videofile(r"ShortsBot\SUBCLIPS\video" + str(n) + ".mp4")
    a = VideoFileClip(r"ShortsBot\SUBCLIPS\video" + str(n) + ".mp4")
    vc.append(a)
    n += 1
output_fname = "video_unsilenced.mp4"
clip = mpy.concatenate_videoclips(clips=vc, method='compose')
clip.write_videofile(filename=output_fname, fps=30)
_ = [a.close() for a in vc]
Because moviepy is shaving off a frame or two from every video clip while writing the audio of the concatenated clip in full (without shaving off the audio in the missing frames), the video and audio slowly drift out of sync. The more clips I concatenate, the further out of sync the audio becomes, basically confirming my suspicion that moviepy is substituting the last valid frame while writing the audio normally. My question is: how can I fix this? I've looked for similar questions but haven't found the exact answer I was looking for. Sorry if this is something basic; I'm a beginner Python programmer and would really appreciate some tips or some sort of fix. Thanks everyone!
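Each round trip through write_videofile/VideoFileClip re-encodes a subclip and can lose a trailing frame, and those small losses accumulate across ~220 clips. One way to sidestep that entirely, offered as a sketch since I can't run it against your files: cut every segment with subclipped() directly from the single source clip and concatenate the cuts in memory, so there is only one encode at the very end. cut_windows and remove_silences are names I made up; the endsilence/startsilence lists and the 0.5 s padding are from the question.

```python
def cut_windows(endsilence, startsilence, pad=0.5):
    """Speech segments: from the end of each silence to the start of the next."""
    return [(endsilence[i], startsilence[i + 1] + pad)
            for i in range(len(startsilence) - 1)]

def remove_silences(src, endsilence, startsilence, out="video_unsilenced.mp4"):
    # moviepy imported here so the timing helper above has no dependencies
    from moviepy import VideoFileClip, concatenate_videoclips

    clip = VideoFileClip(src)
    parts = [clip.subclipped(a, b)
             for a, b in cut_windows(endsilence, startsilence)]
    final = concatenate_videoclips(parts, method="compose")
    final.write_videofile(out, fps=30)  # single encode: audio and video stay aligned
    clip.close()
```

Because no intermediate mp4 files are written or re-read, the "using the last valid frame instead" warning never has a chance to occur per clip, and the audio track is cut at exactly the same timestamps as the video.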
I would like to create a video consisting of some text. The video will only be 0.5 seconds long, and the background should just be some colour. I am able to create such a video from a photo, but I can't find anywhere how this could be done without a photo, using only text.
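With moviepy this needs no photo at all: ColorClip supplies the solid background and TextClip renders the text on top. A sketch, assuming a recent moviepy 2.x; the size, colours, and font size are placeholder values of mine, and on some versions TextClip additionally requires a font= argument pointing at a .ttf/.otf file on disk:

```python
def text_card(text, out="text_clip.mp4", size=(1280, 720),
              bg_color=(20, 20, 60), duration=0.5, fps=24):
    """Write a short video of `text` centred on a solid colour background."""
    from moviepy import ColorClip, CompositeVideoClip, TextClip

    bg = ColorClip(size=size, color=bg_color, duration=duration)
    txt = (TextClip(text=text, font_size=90, color="white")
           .with_duration(duration)
           .with_position("center"))
    CompositeVideoClip([bg, txt]).write_videofile(out, fps=fps)
```

Called as `text_card("Hello!")`, this writes a 0.5-second clip. FFmpeg alone can do the same thing with its `color` lavfi source plus the `drawtext` filter, if you would rather avoid moviepy.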
I'm trying to convert audio streams to AAC in C++. FFplay plays everything fine (now) but VLC still has problems with one particular situation: 5.1(side). FFplay only plays it if I filter 5.1(side) to 5.1. Filtering to stereo or mono works well and as expected.
My setup right now is:
send packet
receive audio AVFrame
apply filter
resample to produce output AVFrame with 1024 samples (required by AAC)
send new audio frame
receive audio packet
Weirdly enough, using FFmpeg's CLI converts my file properly.
ffmpeg -i test.mp4
But FFprobe then tells me that the audio stream is 6 channels instead of 5.1(side) or 5.1. I did try to set AAC to 6 channels in both the AVStream and the AVCodecContext: setting it on the AVStream doesn't change anything in FFprobe, and the AVCodecContext for AAC doesn't allow it.