Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
Anyone familiar with how ffmpeg handles out-of-order MPEG-TS packets received on UDP
3 May 2017, by Rupesh
Let us assume an encoder/ffmpeg instance that is pushing MPEG-TS via UDP, and another ffmpeg instance that is receiving these MPEG-TS packets. On the receiving end, because the media is received over UDP, it is likely that some packets will be lost or arrive out of order. I am interested to know how the receiving ffmpeg handles this.
Sending process: ffmpeg -re -i xyz.mp4 -codec copy -f mpegts udp://localhost:5011
Receiving process: ffmpeg -i udp://localhost:5011 output.mov
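For what it's worth: as far as I can tell, plain udp:// input hands datagrams to ffmpeg's MPEG-TS demuxer in arrival order, so lost packets stay lost (the demuxer logs continuity-counter warnings) and reordering is not corrected. The udp protocol does expose receive-buffer options that reduce drops, and RTP adds sequence numbers plus a reorder queue. A hedged sketch (the option names are real ffmpeg options, but the values are guesses to tune yourself):

# Receiver with a larger circular buffer; overruns warn instead of aborting
ffmpeg -i "udp://localhost:5011?fifo_size=1000000&overrun_nonfatal=1" -codec copy output.mov

# If reordering matters, RTP carries sequence numbers; sender/receiver pair:
ffmpeg -re -i xyz.mp4 -codec copy -f rtp_mpegts rtp://localhost:5011
ffmpeg -reorder_queue_size 500 -i rtp://localhost:5011 -codec copy output.mov

(reorder_queue_size is documented for ffmpeg's RTSP/RTP demuxer; verify it is honored for plain rtp:// input on your build.)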
-
ffmpeg won't stop encoding, even with -shortest command
3 May 2017, by smartzer
I am having an issue with ffmpeg never stopping the encoding process, and searching the internet has gotten me no working solutions. I am calling ffmpeg on Linux through Python's subprocess module, like so:
mergeFiles = subprocess.Popen("ffmpeg -i /home/pi/Video/video.mov -i /home/pi/Audio/test.wav -acodec copy -vcodec copy -map 0:v -map 1:a -shortest /home/pi/Final/output.mkv", shell=True)
The command prompt is waiting for me to manually end the encoding process with "ctrl-c", but I won't have access to a keyboard to kill the encoding. I just want it to stop when it's done. I have even attempted to use mergeFiles.kill() from Python after a couple seconds, and that doesn't even work. Help!
EDIT: If I wasn't clear, I meant that there is no error; ffmpeg simply won't continue until I hit "ctrl-c". I just want it to stop encoding when it's done. My command prompt just sits there, waiting for me to press "ctrl-c".
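A hedged guess at the culprit, since this bites a lot of subprocess users: ffmpeg reads interactive commands from stdin by default, and when it is launched through a shell with no usable console input it can end up blocked there even after the encode finishes. ffmpeg's -nostdin flag (a real option) disables that:

ffmpeg -nostdin -i /home/pi/Video/video.mov -i /home/pi/Audio/test.wav -acodec copy -vcodec copy -map 0:v -map 1:a -shortest /home/pi/Final/output.mkv

Separately, with shell=True, mergeFiles.kill() kills the shell rather than the ffmpeg child, which would explain why kill() appears to do nothing; passing the command as an argument list without shell=True lets kill() reach ffmpeg itself.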
-
JavaFX MediaPlayer unable to play local m3u8 file
3 May 2017, by Lennart
I want to show a live stream of a webcam in my JavaFX application using MediaPlayer/MediaView. My attempt was to use ffmpeg to record an HLS stream and to play the resulting m3u8 file, but that throws the following exception (VLC plays the video without problems):
MediaException: UNKNOWN : com.sun.media.jfxmedia.MediaException: Could not create player! : com.sun.media.jfxmedia.MediaException: Could not create player!
    at javafx.scene.media.MediaException.exceptionToMediaException(MediaException.java:146)
    at javafx.scene.media.MediaPlayer.init(MediaPlayer.java:511)
    at javafx.scene.media.MediaPlayer.<init>(MediaPlayer.java:414)
    at de.fraunhofer.iosb.ias.flow.assessment.management.monitor.MonitorViewController.testStream(MonitorViewController.java:203)
    ... 58 more
Caused by: com.sun.media.jfxmedia.MediaException: Could not create player!
    at com.sun.media.jfxmediaimpl.NativeMediaManager.getPlayer(NativeMediaManager.java:274)
    at com.sun.media.jfxmedia.MediaManager.getPlayer(MediaManager.java:118)
    at javafx.scene.media.MediaPlayer.init(MediaPlayer.java:467)
    ... 60 more

I debugged the player creation, and the error occurs inside the constructor of GSTMediaPlayer when GSTMediaPlayer.gstInitPlayer() is called. This native method returns the error code 257, which JavaFX maps to MediaError.ERROR_MEDIA_NULL. I used the following ffmpeg command to record the video:
ffmpeg -hide_banner -y -rtbufsize 250MB -f dshow -pixel_format yuv420p -video_size 960x720 -i video="Logitech HD Pro Webcam C920" -c:v libx264 -crf 20 -pix_fmt yuv420p out.m3u8
I'm pretty sure that the encoding matches the requirements of JavaFX, because if I change the output container from m3u8 to mp4, the video plays without problems using the exact same ffmpeg command.
This is the output of ffprobe for the m3u8 file:
Input #0, hls,applehttp, from 'out.m3u8':
  Duration: 00:00:24.23, start: 1.466667, bitrate: 0 kb/s
  Program 0
    Metadata:
      variant_bitrate : 0
    Stream #0:0: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 960x720, 30 fps, 30 tbr, 90k tbn, 60 tbc
And for the mp4 file:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'out.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf57.41.100
  Duration: 00:01:04.93, start: 0.000000, bitrate: 1676 kb/s
  Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 960x720, 1673 kb/s, 30 fps, 30 tbr, 10000k tbn, 60 tbc (default)
    Metadata:
      handler_name    : VideoHandler
The resulting m3u8 file looks like this:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:9
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:8.333322,
out0.ts
#EXTINF:8.333333,
out1.ts
#EXTINF:7.133322,
out2.ts
#EXTINF:0.433333,
out3.ts
#EXT-X-ENDLIST
Update: After I found this reference m3u8 file, I think the problem is that the file is stored locally and isn't delivered via HTTP. The video plays fine with this:
Media media = new Media("http://download.oracle.com/otndocs/products/javafx/JavaRap/prog_index.m3u8");
MediaPlayer player = new MediaPlayer(media);
player.setAutoPlay(true);
mediaView.setMediaPlayer(player);
But after I downloaded the reference m3u and all of its segments and tried to open the local file like this, the error occurred again:
File video = new File("H://Projects//Tools//ref//prog_index.m3u8");
Media media = new Media(video.toURI().toString());
MediaPlayer player = new MediaPlayer(media);
player.setAutoPlay(true);
mediaView.setMediaPlayer(player);
I tried to change my m3u8 file so that the segments are referenced with absolute paths. I tried different notations (H:\f\out0.ts, H:/f/out0.ts, H://f//out0.ts, file:/H:/f/out0.ts, file:///H:/f/out0.ts), but I couldn't get it to work.
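Given that the identical playlist works over HTTP and fails from disk, one workaround worth trying (an untested sketch; port and directory are placeholders) is to serve the HLS output directory through any local web server and point Media at localhost instead of a file URI:

cd H:/f
python -m http.server 8000    # Python 3; use "python -m SimpleHTTPServer 8000" on Python 2

Then construct the Media with http://localhost:8000/out.m3u8, exactly as in the working example above.
-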
Trouble with hardware-assisted encoding/decoding via FFmpeg on Azure GPU VMs (Ubuntu 16.04)
3 May 2017, by user3776020
I am trying to use NVIDIA hardware acceleration with FFmpeg/libav, but can't get it to work correctly on Azure VMs running Ubuntu 16.04. As a sample case, I am trying to do a simple decode of an h264 video into a raw YUV file (as detailed here: https://developer.nvidia.com/ffmpeg).
So far, I've tried it on NC-6, NC-12, and NV-6 machines (in different regions). In each of these instances, it would take about 30-45 seconds to process a single video frame. As a comparison, I also tried it on a P2.xlarge vm on AWS (which has very similar specs to the NC-6), which was able to process about 3000 frames in about 5 seconds. Has anyone else run into this issue with Azure machines, or has any idea why this would be the case?
Here are the commands I used to install the necessary drivers/libraries/etc. (I also verified that each machine has the same NVIDIA driver version installed, 375.51):
CUDA_REPO_PKG=cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
wget -O /tmp/${CUDA_REPO_PKG} http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/${CUDA_REPO_PKG}
sudo dpkg -i /tmp/${CUDA_REPO_PKG}
sudo apt-get update
sudo apt-get install -y cuda-drivers
sudo apt-get install -y cuda
sudo apt-get install -y nvidia-cuda-toolkit
[reboot]
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get dist-upgrade -y
[reboot]
git clone https://github.com/FFmpeg/FFmpeg.git
[download the latest Video Codec SDK from NVIDIA at https://developer.nvidia.com/designworks/video_codec_sdk/downloads/v7.1]
[unzip the SDK and copy the header files from /Video_Codec_SDK_7.1.9/Samples/common/inc/ into /usr/include/]
cd ~/FFmpeg
./configure --enable-nonfree --disable-shared --enable-nvenc --enable-cuda --enable-cuvid --enable-libnpp --extra-cflags=-Ilocal/include --extra-cflags=-I../nv_sdk --extra-ldflags=-L../nv_sdk
sudo make && sudo make install
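Before timing anything, it may be worth confirming that the build actually picked up the NVIDIA components; ffmpeg's own introspection flags (these are standard options) make that quick:

ffmpeg -hwaccels                 # cuvid/cuda should be listed
ffmpeg -decoders | grep cuvid    # h264_cuvid should appear
ffmpeg -encoders | grep nvenc    # h264_nvenc should appear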
To decode a sample movie file, I used the following FFmpeg command:
sudo ffmpeg -vsync 0 -c:v h264_cuvid -i sample_vid.mp4 -f rawvideo outputvid.yuv
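One cheap thing to rule out, offered as a guess: on some cloud GPU instances the driver unloads between CUDA contexts unless persistence mode is enabled, and the repeated driver re-initialization can produce exactly this kind of pathological per-frame slowdown. nvidia-smi can switch it on (a real nvidia-smi option):

sudo nvidia-smi -pm 1    # enable persistence mode on all GPUs
nvidia-smi               # the Persistence-M column should now read "On"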
-
Read, process and save video and audio with FFmpeg
3 May 2017, by sysseon
I want to open a video source with ffmpeg from Python, get the decoded frames from the pipe, modify them (e.g. stamp the timestamp onto them with OpenCV) and write the result to an output video file. I also want to save the audio stream with no changes.
My code (with no audio and two processes):
import subprocess as sp
import numpy as np
# import time
# import cv2

FFMPEG_BIN = "C:/ffmpeg/bin/ffmpeg.exe"
INPUT_VID = 'input.avi'
res = [320, 240]

command_in = [FFMPEG_BIN,
              '-y',                  # (optional) overwrite output file if it exists
              '-i', INPUT_VID,
              '-f', 'image2pipe',    # image2pipe or rawvideo?
              '-pix_fmt', 'bgr24',
              '-vcodec', 'rawvideo',
              '-']

command_out = [FFMPEG_BIN,
               '-y',                 # (optional) overwrite output file if it exists
               '-f', 'rawvideo',
               '-vcodec', 'rawvideo',
               '-s', '320x240',
               '-pix_fmt', 'bgr24',
               '-r', '25',
               '-i', '-',
               # '-i', INPUT_VID,    # Audio
               '-vcodec', 'mpeg4',
               'output.mp4']

pipe_in = sp.Popen(command_in, stdout=sp.PIPE, stderr=sp.PIPE)
pipe_out = sp.Popen(command_out, stdin=sp.PIPE, stderr=sp.PIPE)

while True:
    # Read 320*240*3 bytes (= 1 frame)
    raw_image = pipe_in.stdout.read(res[0] * res[1] * 3)
    # Transform the bytes read into a numpy array
    image = np.fromstring(raw_image, dtype=np.uint8)
    image = image.reshape((res[1], res[0], 3))
    # Draw some text in the image
    # draw_text(image)
    # Show the image with OpenCV (not working, gray image, why?)
    # cv2.imshow("VIDEO", image)
    # Write image to output process
    pipe_out.stdin.write(image.tostring())

print 'done'
pipe_in.kill()
pipe_out.kill()
- Could it be done with just one process? (Read the input from a file, feed it into the input pipe, get the image, process it, and put it in the output pipe to be saved into a video file; see the sketch after this list.)
- How can I save the audio? In this example I could use '-i INPUT_VID' in the second process to get the audio channel, but my source will be an RTSP stream, and I don't want to create a connection for each process. Could I put video+audio into the pipe and then separate them with numpy? How?
- I use a loop to process the frames and wait until I get an error. How can I check whether all frames have already been read?
- Not important, but if I try to show the images with OpenCV (cv2.imshow(...)), I only see a gray screen. Why?
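A sketch of the single-process route raised in the first two points, assuming the overlay really is just a timestamp and the ffmpeg build includes the drawtext filter (libfreetype; on Windows you may also need an explicit fontfile=). File names are placeholders:

ffmpeg -y -i input.avi -vf "drawtext=text='%{pts\:hms}':x=10:y=10:fontcolor=white" -c:v mpeg4 -c:a copy output.mp4

This burns the presentation timestamp into each frame in one process and copies the audio stream untouched, so a single RTSP connection would suffice. If the processing genuinely needs OpenCV, the two-pipe design stands; in that case end-of-stream is simply pipe_in.stdout.read() returning fewer bytes than one full frame, and the gray cv2.imshow window is usually a missing cv2.waitKey(1), which OpenCV needs in order to pump its GUI event loop.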