I have a program where I build an ffmpeg command string to capture videos, with options input through a GTK3 GUI. Once I have all my options selected, I spawn a process with the ffmpeg command string, and I add a child watch to tell me when the process has completed.

    // Spawn child process
    ret = g_spawn_async (NULL, argin, NULL, G_SPAWN_DO_NOT_REAP_CHILD,
                         NULL, NULL, &pid1, NULL);
    if ( !ret )
    {
        g_error ("SPAWN FAILED");
        return;
    }

    /* Add watch function to catch termination of the process. This function
     * will clean any remnants of process */
    g_child_watch_add (pid1, (...)
I am building a program that uses ffmpeg to stream webcam content over the internet. I would like to know if it is possible to use the GPU for the streaming part on the Raspberry Pi Model 3. If yes, how could I implement this with ffmpeg?
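One possibility on the Pi 3 is ffmpeg's `h264_omx` encoder, which hands H.264 encoding to the Pi's OpenMAX hardware block. A minimal sketch, assuming an ffmpeg build with `h264_omx` enabled, a V4L2 webcam on `/dev/video0`, and an RTMP ingest point; the device, resolution, bitrate, and URL are all placeholder assumptions to adjust for your setup:

```python
import subprocess

def build_stream_cmd(device="/dev/video0",
                     url="rtmp://example.com/live/stream"):
    # Build the ffmpeg argv as a list so it can be inspected before spawning.
    return [
        "ffmpeg",
        "-f", "v4l2", "-framerate", "25", "-video_size", "640x480",
        "-i", device,
        "-c:v", "h264_omx",   # hardware H.264 encoder on the Pi's GPU
        "-b:v", "1M",
        "-f", "flv", url,
    ]

# Usage sketch:
# subprocess.run(build_stream_cmd(), check=True)
```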
I am new to video processing. I am trying to analyze the CUDA Video Decoder (CUVID).
Can anyone explain the decoding process in CUVID? The NVIDIA CUVID documentation has very little information regarding the decoding of slices and macroblocks. Does cuvidDecodePicture split the picture into macroblocks/slices to decode it? How is the entropy coding used in H.264 handled in CUVID? Why does CUVIDPICPARAMS have two separate structures for MPEG-4 and H.264? How do the video decoding processes of ffmpeg and CUVID compare?
Thanks in (...)
I've been trying to figure out how FFmpeg decides its dimensions after cropping a video. After dividing a width of 400 by 3 (133.3), it becomes 132. 640 by 3 (213.3) becomes 212. 426 by 2 (213) becomes 212. I thought it might be int((dimension+1)/divisor) - 1 (e.g. int((400+1)/3) - 1 becomes 132, which is correct), but this fails on 720/2, which becomes 359 when it should be 360. Any ideas?
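One hypothesis consistent with all four numbers above: the crop expression is truncated to an integer and then rounded *down* to an even value (the 4:2:0 chroma alignment). This is inferred from the examples, not confirmed against ffmpeg's source; a small Python check:

```python
def cropped_dim(width: int, divisor: float, align: int = 2) -> int:
    # Assumption: truncate the expression result, then round down to a
    # multiple of `align` (2 for yuv420p chroma subsampling).
    raw = int(width / divisor)      # e.g. 133.33 -> 133
    return raw - (raw % align)      # e.g. 133 -> 132

print(cropped_dim(400, 3))  # 132
print(cropped_dim(640, 3))  # 212
print(cropped_dim(426, 2))  # 212  (213 rounded down to even)
print(cropped_dim(720, 2))  # 360  (already even, unchanged)
```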
I'm trying to make a blend between some images using

    blend = Popen(['convert', 'test_images/*.jpg', '-delay', '10',
                   '-morph', '10', '-'], stdout=PIPE)

and pipe the output to ffmpeg to write a video from the image sequence:

    video = Popen(['ffmpeg', '-i', '-', '-f', 'image2pipe', '-r', '30',
                   '-c:v', 'libx264', '-pix_fmt', 'yuv420p', 'test.mp4'],
                  stdin=PIPE)
    for _ in range(15):
        video.stdin.write(blend.stdout.read())
    video.stdin.close()
I'm trying to do it all in memory and not write to disk. All I get currently is a 50kb mpeg4 file which does not (...)
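A hedged sketch of one way such a pipeline can work end to end, still entirely in memory. Two likely problems in the original: `-f image2pipe` (and `-r`) appear after `-i -`, so they apply to the output rather than the piped input, and a fixed number of `read()` calls can stop before `convert` has finished writing. The `convert`/`ffmpeg` binaries and `test_images/*.jpg` path come from the question; `jpeg:-` as an explicit output format is an assumption:

```python
import shutil
from subprocess import PIPE, Popen

def ffmpeg_args(out_path="test.mp4", fps=30):
    # Input options must precede `-i -` so they describe the pipe.
    return ["ffmpeg", "-y", "-f", "image2pipe", "-r", str(fps), "-i", "-",
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out_path]

def run_pipeline():
    blend = Popen(["convert", "test_images/*.jpg", "-delay", "10",
                   "-morph", "10", "jpeg:-"], stdout=PIPE)
    video = Popen(ffmpeg_args(), stdin=PIPE)
    shutil.copyfileobj(blend.stdout, video.stdin)  # copy until convert hits EOF
    video.stdin.close()
    video.wait()

# Usage sketch (needs ImageMagick, ffmpeg, and the images on disk):
# run_pipeline()
```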
I have to create a video with animation from a GIF image file, so can you please help me with how to create animated videos using the ffmpeg library?
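A minimal sketch of a GIF-to-MP4 conversion via the ffmpeg command line, wrapped in Python; the file names are placeholders. The scale filter forces even dimensions and `yuv420p` is the pixel format most players expect:

```python
import subprocess

def gif_to_mp4(gif_path: str, mp4_path: str):
    # Returns the argv so callers can inspect it or pass it to subprocess.run.
    return ["ffmpeg", "-y", "-i", gif_path,
            "-vf", "scale=trunc(iw/2)*2:trunc(ih/2)*2",  # even width/height
            "-pix_fmt", "yuv420p",
            "-movflags", "+faststart",                   # streamable MP4
            mp4_path]

# Usage sketch:
# subprocess.run(gif_to_mp4("animation.gif", "animation.mp4"), check=True)
```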
I want to handle videos from the camera roll of the iOS system with FFmpeg.
However, ffmpeg can only read videos via a path, but iOS forbids direct access to camera roll file URLs (e.g. /var/mobile/Media/DCIM/105APPLE/IMG_5638.MOV).
The only way I have found is to copy the videos into the sandbox and read them there with ffmpeg. How can I read the videos directly with ffmpeg?
I need to add an image at a different position for each millisecond of the video, using PHP and ffmpeg. For example: in the first second I add an image at the X position, and the next second another image at position X. If I use the command directly in the terminal, the conversion is successful, but in PHP I have difficulties. In PHP, I use the following command:
I used the sleep function (PHP), but it did not work. Please, as I don't have much experience with ffmpeg and PHP, could you help me?
Thank you very (...)
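One approach worth noting (sketched in Python rather than PHP, purely for illustration): instead of sleeping between separate ffmpeg calls, a single invocation can place each image in its own time window using the overlay filter's `enable` option. The file names and coordinates below are placeholders:

```python
def overlay_cmd(video, img1, img2, out):
    # First image shown from t=0s to t=1s at (100,50),
    # second image from t=1s to t=2s at (200,50).
    filter_graph = (
        "[0:v][1:v]overlay=100:50:enable='between(t,0,1)'[v1];"
        "[v1][2:v]overlay=200:50:enable='between(t,1,2)'[v2]"
    )
    return ["ffmpeg", "-y", "-i", video, "-i", img1, "-i", img2,
            "-filter_complex", filter_graph, "-map", "[v2]",
            "-c:a", "copy", out]

# Usage sketch:
# subprocess.run(overlay_cmd("in.mp4", "a.png", "b.png", "out.mp4"), check=True)
```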
I'm attempting to transcode/remux an RTSP stream in H.264 format into an MPEG-4 container, containing just the H.264 video stream. Basically, webcam output into an MP4 container.
I can get a poorly coded MP4 produced, using this code:

    // Variables here for demo
    AVFormatContext * video_file_output_format = nullptr;
    AVFormatContext * rtsp_format_context = nullptr;
    AVCodecContext * video_file_codec_context = nullptr;
    AVCodecContext * rtsp_vidstream_codec_context = nullptr;
    AVPacket packet = { 0 };
    AVStream * video_file_stream = nullptr;
    AVCodec * rtsp_decoder_codec = nullptr;
    int (...)
I'm trying to find a solution that creates caps (video screenshots) at given frame numbers and is more performant than my current way:

    ffmpeg -i "/path/to/video.mp4" -vf "select=gte(n\\,10000)" -vframes 1 "/path/to/cap.png"

I have a zillion videos, each between 10 minutes and 2 hours long, and I need exactly 6 caps per video. My current way takes ages, as ffmpeg is "walking" through the video to find the next frame where it should take the cap. This "walking time" accounts for almost 99% of the job, which currently takes about 4 minutes per video (depending on its length).
I'm wondering if there's maybe (...)
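One common speedup: input seeking, i.e. putting `-ss` *before* `-i`, makes ffmpeg jump close to the requested timestamp instead of decoding every frame up to it, which is usually far faster than a `select` filter. A sketch that spreads 6 caps evenly over a video of known duration; the paths, output pattern, and the evenly-spaced layout are assumptions:

```python
import subprocess

def cap_times(duration_s: float, n: int = 6):
    # n timestamps spread evenly, avoiding the very start and end.
    step = duration_s / (n + 1)
    return [step * (i + 1) for i in range(n)]

def grab_caps(video_path: str, duration_s: float,
              out_pattern: str = "cap_%d.png"):
    for i, t in enumerate(cap_times(duration_s)):
        subprocess.run(
            ["ffmpeg", "-y",
             "-ss", f"{t:.3f}",          # input seek: before -i, so it's fast
             "-i", video_path,
             "-frames:v", "1", out_pattern % i],
            check=True)

# Usage sketch (duration could come from ffprobe):
# grab_caps("/path/to/video.mp4", 3600.0)
```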