15:29
I have a program that retrieves images (PNG) and audio files from Azure Blob Storage and merges them into a video, which is then written to a temporary file and saved back to Blob Storage. I'm coding in **Python**, and here is the MoviePy call I use:
final_clip.write_videofile(temp_file_name, fps=24, codec='libx264', audio_codec='mp3')
I have containerized my code, and the **Docker** image works perfectly on my machine. However, once deployed on Azure, I encounter this problem when writing the video:
Failed: [Errno 32] Broken pipe MoviePy error: (...)
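For reference, a minimal sketch of that pipeline, assuming `azure-storage-blob` v12 and MoviePy 1.x; the container names, blob names, and connection string are placeholders:

```python
import tempfile

from azure.storage.blob import BlobServiceClient
from moviepy.editor import AudioFileClip, ImageClip

service = BlobServiceClient.from_connection_string("<CONNECTION_STRING>")

def fetch(container: str, blob: str, path: str) -> str:
    # Download a blob to a local file so MoviePy/ffmpeg can read it.
    with open(path, "wb") as f:
        f.write(service.get_blob_client(container, blob).download_blob().readall())
    return path

image_path = fetch("assets", "frame.png", "/tmp/frame.png")
audio_path = fetch("assets", "track.mp3", "/tmp/track.mp3")

audio = AudioFileClip(audio_path)
final_clip = ImageClip(image_path).set_duration(audio.duration).set_audio(audio)

with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as tmp:
    temp_file_name = tmp.name

# The call from the question; note that 'aac' is the more typical
# audio codec for an .mp4 container than 'mp3'.
final_clip.write_videofile(temp_file_name, fps=24, codec='libx264', audio_codec='mp3')

with open(temp_file_name, "rb") as f:
    service.get_blob_client("videos", "output.mp4").upload_blob(f, overwrite=True)
```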
19:47
I'm running a browser on a Linux server, and I'm trying to figure out the best way to capture the audio/video of the screen and stream it with as little latency as possible to an API on another server.
Requirements:
- Stream the audio from server A to server B so that server B can pipe it forward to an online transcription service.
- Stream the audio and video from server A to server B so that server B can store the contents in some kind of blob storage. If the stream is killed for some reason before it ends, the partial contents should be saved and (...)
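On the capture side, a minimal sketch of one common approach, assuming an X11 display and PulseAudio on server A; the host, port, and capture geometry are placeholders. MPEG-TS over UDP keeps latency low, and a truncated TS stream is still decodable, which helps with the partial-save requirement:

```python
import subprocess

# Grab the X display and PulseAudio output, encode with low-latency
# settings, and push MPEG-TS over UDP to server B.
cmd = [
    "ffmpeg",
    "-f", "x11grab", "-framerate", "25", "-video_size", "1920x1080", "-i", ":0.0",
    "-f", "pulse", "-i", "default",
    "-c:v", "libx264", "-preset", "ultrafast", "-tune", "zerolatency", "-g", "50",
    "-c:a", "aac", "-b:a", "128k",
    "-f", "mpegts", "udp://server-b.example:9000",
]
subprocess.run(cmd, check=True)
```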
19:17
I have three .wav files in my folder and I want to convert them into .mp3 with ffmpeg.
I wrote this bash script, but when I execute it, only the first one is converted to mp3.
What should I do to make the script keep going through my files?
This is the script:
#!/bin/bash
find . -name '*.wav' | while read f; do
ffmpeg -i "$f" -ab 320k -ac 2 "$f%.*.mp3"
done
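The usual culprit with this pattern is that ffmpeg reads from stdin and drains the remaining filenames out of the `while read` loop, so only the first file gets processed; adding `-nostdin` (or redirecting ffmpeg's input from `/dev/null`) inside the loop is the standard fix. For comparison, a hypothetical Python equivalent that sidesteps stdin entirely (`-b:a` is the current spelling of `-ab`):

```python
import subprocess
from pathlib import Path

for wav in Path(".").rglob("*.wav"):
    # -nostdin keeps ffmpeg from swallowing input meant for the caller,
    # which is what stops a `while read` loop after the first file.
    subprocess.run(
        ["ffmpeg", "-nostdin", "-i", str(wav), "-b:a", "320k", "-ac", "2",
         str(wav.with_suffix(".mp3"))],
        check=True,
    )
```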
17:19
I have been trying to merge a video, audio and subtitle stream in ffmpeg but, since it is my first time trying it out, I am having a bit of trouble figuring out how to do so.
Goal: The output I am looking for is the video in the background with the subtitles overlaid in the center of the screen, plus the audio from the input stream, which in this case is an MP3 file.
These are my input streams:
video_input_stream = ffmpeg.input(background_video)
audio_input_stream = ffmpeg.input(audio)
subtitle_input_stream = ffmpeg.input(subtitle_file)
and I have (...)
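For what it's worth, a hedged sketch of how these pieces tend to fit together with ffmpeg-python: the `subtitles` filter (libass) burns the subtitle file into the video, so the separate subtitle input stream isn't strictly needed, and an ASS `force_style` override can pull the text to the middle of the frame. The file names, output codecs, and `Alignment` value are assumptions to adjust:

```python
import ffmpeg

# Placeholders standing in for the question's inputs.
background_video = "background.mp4"
audio = "voiceover.mp3"
subtitle_file = "subs.srt"

video_input_stream = ffmpeg.input(background_video)
audio_input_stream = ffmpeg.input(audio)

# Burn the subtitles in; Alignment uses ASS numpad values,
# where 5 is middle-center (an assumption, tweak for your layout).
subtitled = video_input_stream.filter(
    "subtitles", subtitle_file, force_style="Alignment=5"
)

(
    ffmpeg.output(
        subtitled,
        audio_input_stream.audio,
        "output.mp4",
        vcodec="libx264",
        acodec="aac",
        shortest=None,  # bare -shortest flag: stop at the shorter stream
    ).run()
)
```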
14:14
I created an AWS Lambda function in Node.js 18 that uses a static version 7 build of FFmpeg located in a Lambda layer. Unfortunately it's just the ffmpeg binary and doesn't include ffprobe.
I have an mp4 audio file in one S3 bucket and a wav audio file in a second S3 bucket. I'm uploading the output file to a third S3 bucket.
Specs on the files (please let me know if any more info is needed):
Audio: wav, 13 kbps, aac (LC), 6:28 duration
Video: mp4, 1280x720 resolution, 25 fps, h264 codec, 3:27 duration
Goal:
Create blank video to (...)
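For the "blank video" part (the goal line is truncated above), the usual ffprobe-free trick is ffmpeg's lavfi `color` source plus `-shortest`, which ends the output at the audio's length without querying any durations. Sketched in Python for brevity; the same argument list would be passed to the layered binary from Node, and the paths and S3 handling are placeholders:

```python
import subprocess

# Black 1280x720 video at 25 fps from the lavfi color source, with the
# wav file as the audio track; -shortest trims the video to the audio.
subprocess.run(
    [
        "ffmpeg",
        "-f", "lavfi", "-i", "color=c=black:s=1280x720:r=25",
        "-i", "/tmp/input.wav",
        "-c:v", "libx264", "-c:a", "aac",
        "-shortest", "/tmp/output.mp4",
    ],
    check=True,
)
```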
14:00
I'm using LibAV to decode video (H.264) using dxva2 or d3d11va hardware decoder.
I then copy the decoded video frame from the GPU memory to system memory by calling av_hwframe_transfer_data().
I noticed that when using the modern d3d11va hardware decoder, the CPU load is twice as high as with the old dxva2 hardware decoder.
I checked with LibAV 6.1 and with the old 4.4 release; the result is the same.