Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

  • ffmpeg colorspace conversion speed

    5 May 2017, by Mikhail Novikov

    I am running 2 ffmpeg commands on a fairly fast, GPU-enabled machine (AWS g2.2xlarge instance):

    ffmpeg -i ./in.mp4 -s 1280x720 -r 30 -an -f rawvideo -pix_fmt yuv420p - | cat - >/dev/null
    

    gives 524 fps, while

    ffmpeg -i ./in.mp4 -s 1280x720 -r 30 -an -f rawvideo -pix_fmt argb - | cat - >/dev/null
    

    just 101... the conversion surely shouldn't, couldn't take as much as 8 ms per frame on a modern CPU, let alone a GPU!

    What am I doing wrong, and how can I improve the speed of this?

    PS: Now this is truly ridiculous!

    ffmpeg -i ./in.mp4 -s 1280x720 -r 30 -an -f rawvideo -pix_fmt yuv420p - | ffmpeg -s 1280x720 -r 30 -an -f rawvideo -pix_fmt yuv420p -i - -s 1280x720 -r 30 -an -f rawvideo -pix_fmt argb - | cat - >/dev/null
    

    makes 275 fps! That is far from perfect, but something I can live with.

    Why?

    Thanks!
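
    A plausible explanation, as far as I can tell: the yuv420p-to-argb conversion goes through swscale, which (to my knowledge) runs single-threaded, and an argb frame is 4 bytes per pixel versus 1.5 for yuv420p, so both the conversion and the pipe I/O cost roughly 2.7x more. The two-process pipeline is faster because decoding and the colorspace conversion then run on separate cores. One hedged thing to try: bgra, which may hit a better-optimized SIMD path in swscale than argb, measured with ffmpeg's own -benchmark flag instead of the extra cat process (the bgra speedup is an assumption to verify, not a guarantee):

    ffmpeg -benchmark -i ./in.mp4 -s 1280x720 -r 30 -an -f rawvideo -pix_fmt bgra -y /dev/null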

  • ffmpeg chromakey filter: blend parameter does not do what it should

    5 May 2017, by Mat

    I want to remove a greenscreen background, and I am not completely satisfied with what I have achieved, because I still have green borders (especially in semi-transparent areas like hair when I move my head).

    The documentation for the blend parameter of the chromakey filter says:

    blend

    Blend percentage.

    0.0 makes pixels either fully transparent, or not transparent at all.

    Higher values result in semi-transparent pixels, with a higher transparency the more similar the pixels color is to the key color.

    So I reckoned I could use this to minimise greenscreen bleeding (is that the term?) when removing the background with a command like this:

    ffmpeg -i DSCN0015.MOV -vf "[in] hqdn3d=4:4:8:8 [dn]; [dn] scale=iw*3:-1 [sc]; [sc] chromakey=0x005d0b:0.125:0.0 [out]" -r 24 -an -c:v ffvhuff 4.mov
    

    But when I use anything other than 0.0 for blend, it seems to set some kind of MINIMUM transparency, and the entire frame is affected.

    Here are some pictures to visualize the issue: the first is the raw material from the camera, the second shows what I get with blend=0.0, and the last one shows the problem: with blend=0.5, the whole frame is almost completely transparent.

    [Raw material as from camera]

    [blend=0.0]

    [blend=0.5]
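
    If I read the documented behaviour correctly, blend is not a cap on transparency: it widens the transition band around the similarity threshold, so with blend=0.5 even colours quite far from the key colour pick up partial alpha, which would explain the whole frame fading. A variant worth trying keeps blend small and tunes similarity instead; the [in]/[dn]/[sc]/[out] labels are not needed in a linear -vf chain, and the 0.15 and 0.02 values here are assumed starting points to sweep, not known-good numbers:

    ffmpeg -i DSCN0015.MOV -vf "hqdn3d=4:4:8:8, scale=iw*3:-1, chromakey=0x005d0b:0.15:0.02" -r 24 -an -c:v ffvhuff 4-test.mov

    Newer ffmpeg builds also have a despill filter for desaturating the green fringe that survives keying, which may help with hair edges.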

  • Extracting each individual frame from an H264 stream for real-time analysis with OpenCV

    5 May 2017, by exclmtnpt

    Problem Outline

    I have an h264 real-time video stream (I'll call this "the stream") being captured in Process1. My goal is to extract each frame from the stream as it comes through and use Process2 to analyze it with OpenCV. (Process1 is nodejs, Process2 is Python)

    Things I've tried, and their failure modes:

    • Send the stream directly from Process1 to Process2 over a named FIFO pipe:

    I succeeded in directing the stream from Process1 into the pipe. However, in Process2 (which is Python) I could not (a) extract individual frames from the stream, or (b) convert any extracted data from h264 into an OpenCV format (e.g. JPEG, numpy array).

    I had hoped to use OpenCV's VideoCapture() method, but it does not allow you to pass a FIFO pipe as an input. I was able to use VideoCapture by saving the h264 stream to a .h264 file and then passing that as the file path. This doesn't help me, because I need to do my analysis in real time (i.e. I can't save the stream to a file before reading it into OpenCV). (See the rawvideo sketch after this list for one way around both problems.)

    • Pipe the stream from Process1 to FFmpeg, use FFmpeg to change the stream format from h264 to MJPEG, then pipe the output to Process2:

    I attempted this using the command:

    cat pipeFromProcess1.fifo | ffmpeg -f h264 -i pipe:0 -f mjpeg pipe:1 | cat > pipeToProcess2.fifo

    The biggest issue with this approach is that FFmpeg takes input from Process1 until Process1 is killed, and only then does Process2 begin to receive the data.

    Additionally, on the Process2 side, I still don't understand how to extract individual frames from the data coming over the pipe. I open the pipe for reading (as "f") and then execute data = f.readline(). The size of data varies drastically (some reads have a length on the order of 100, others on the order of 1,000). When I use f.read() instead of f.readline(), the length is much larger, on the order of 100,000.

    Even if I knew I was getting a correctly sized chunk of data, I would still not know how to transform it into an OpenCV-compatible array, because I don't understand the format it arrives in. It's a string, but when I print it out it looks like this:

    ��_M~0A0����tQ,\%��e���f/�H�#Y�p�f#�Kus�} F����ʳa�G������+$x�%V�� }[����Wo �1'̶A���c����*�&=Z^�o'��Ͽ� SX-ԁ涶V&H|��$ ~��<�E�� ��>�����u���7�����cR� �f�=�9 ��fs�q�ڄߧ�9v�]�Ӷ���& gr]�n�IRܜ�檯���� � ����+ �I��w�}� ��9�o��� �w��M�m���IJ ��� �m�=�Soՙ}S �>j �,�ƙ�'���tad =i ��WY�FeC֓z �2�g�;EXX��S��Ҁ*, ���w� _|�&�y��H��=��)� ���Ɗ3@ �h���Ѻ�Ɋ��ZzR`��)�y�� c�ڋ.��v�!u���� �S�I#�$9R�Ԯ0py z ��8 #��A�q�� �͕� ijc �bp=��۹ c SqH

    Converting from base64 doesn't seem to help. I also tried:

    array = np.fromstring(data, dtype=np.uint8)
    

    which does convert to an array, but not one of a size that makes sense given the 640x368x3 dimensions of the frames I'm trying to decode (a raw frame should be 640 x 368 x 3 = 706,560 bytes, and the chunks I read are nowhere near that).

    • Using decoders such as Broadway.js to convert the h264 stream:

    These seem to be focused on streaming to a website, and I did not have success trying to re-purpose them for my goal.
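
    For what it's worth, a common pattern here (a sketch, not a drop-in solution: it assumes a 640x368 stream, Python 3, and ffmpeg on the PATH) is to let ffmpeg decode the h264 to rawvideo on its stdout. Frame boundaries then become trivial, because every frame is exactly width x height x 3 bytes in bgr24:

    import subprocess
    import numpy as np

    WIDTH, HEIGHT = 640, 368          # assumed frame size of the stream
    FRAME_BYTES = WIDTH * HEIGHT * 3  # bgr24 = 3 bytes per pixel

    # Decode raw h264 from the fifo into raw BGR frames on stdout.
    proc = subprocess.Popen(
        ["ffmpeg", "-f", "h264", "-i", "pipeFromProcess1.fifo",
         "-f", "rawvideo", "-pix_fmt", "bgr24", "pipe:1"],
        stdout=subprocess.PIPE)

    while True:
        raw = proc.stdout.read(FRAME_BYTES)  # block until one full frame arrives
        if len(raw) < FRAME_BYTES:           # stream ended
            break
        frame = np.frombuffer(raw, np.uint8).reshape((HEIGHT, WIDTH, 3))
        # frame is now a plain numpy array, ready for any OpenCV call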

    Clarification about what I'm NOT trying to do:

    I've found many related questions about streaming h264 video to a website. This is a solved problem, but none of the solutions help me extract individual frames and put them in an OpenCV-compatible format.

    Also, I need to use the extracted frames in real time on a continual basis. So saving each frame as a .jpg is not helpful.

    System Specs

    Raspberry Pi 3 running Raspbian Jessie

    Additional Detail

    I've tried to generalize the problem I'm having in my question. If it's useful to know, Process1 is using the node-bebop package to pull down the h264 stream (using drone.getVideoStream()) from a Parrot Bebop 2.0. I tried using the other video stream available through node-bebop (getMjpegStream()). This worked, but was not nearly real-time; I was getting very intermittent data streams. I've entered that specific problem as an Issue in the node-bebop repository.

    Thanks for reading; I really appreciate any help anyone can give!

  • How to cut a video every 2 MB using FFmpeg

    5 May 2017, by Krish

    Hi, I have a 20 MB video, and I need to split it so that each part is 2 MB. I googled this and found the FFmpeg library for splitting videos.

    But this library splits videos based on a time range (for example from 00:00:02 to 00:00:06; the video between those timestamps is extracted).

    My exact requirement is to cut the video every 2 MB, not by a time limit.

    I searched a lot on Google but did not find a solution. Can someone help me, please?

    The FFmpeg command I used for splitting:

    // Extract the segment starting at 00:00:02, 6 seconds long, without re-encoding.
    String[] cmd = new String[]{"-i", inputFileUrl, "-ss", "00:00:02",
            "-c", "copy", "-t", "00:00:06", outputFileUrl};
    executeBinaryCommand(fFmpeg, cmd);

    public void executeBinaryCommand(FFmpeg ffmpeg, String[] command) {
        try {
            if (ffmpeg != null) {
                ffmpeg.execute(command,
                        new ExecuteBinaryResponseHandler() {

                            @Override
                            public void onFailure(String response) {
                                System.out.println("failure====>" + response);
                            }

                            @Override
                            public void onSuccess(String response) {
                                System.out.println("response====>" + response);
                            }

                            @Override
                            public void onProgress(String response) {
                                System.out.println("on progress");
                            }

                            @Override
                            public void onStart() {
                                System.out.println("start");
                            }

                            @Override
                            public void onFinish() {
                                System.out.println("finish");
                            }
                        });
            }
        } catch (FFmpegCommandAlreadyRunningException exception) {
            exception.printStackTrace();
        }
    }
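
    For splitting by size rather than time: as far as I know, ffmpeg has no single option that cuts a file into pieces of a given size, but the -fs output option stops writing once the output reaches a size limit. Below is a desktop Python sketch of the loop idea, assuming a hypothetical input.mp4 and ffmpeg/ffprobe on the PATH; the same loop could be driven through the Android wrapper instead of subprocess:

    import subprocess

    SRC = "input.mp4"      # hypothetical source file
    CHUNK_BYTES = 2000000  # target size of each piece, ~2 MB

    def duration_of(path):
        """Ask ffprobe for a file's duration in seconds."""
        out = subprocess.check_output(
            ["ffprobe", "-v", "error", "-show_entries", "format=duration",
             "-of", "default=noprint_wrappers=1:nokey=1", path])
        return float(out.decode().strip())

    start, part = 0.0, 0
    total = duration_of(SRC)
    while start < total:
        dst = "part%03d.mp4" % part
        # -fs stops writing at ~2 MB; -c copy avoids re-encoding, so each
        # piece actually ends at the nearest keyframe boundary.
        subprocess.check_call(
            ["ffmpeg", "-y", "-ss", str(start), "-i", SRC,
             "-c", "copy", "-fs", str(CHUNK_BYTES), dst])
        gained = duration_of(dst)
        if gained <= 0:  # nothing written; stop rather than loop forever
            break
        start += gained
        part += 1

    Note that with -c copy the cuts land on keyframes, so the pieces are only approximately 2 MB; exact sizes would require re-encoding.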
    
  • Using ffmpeg to create looping apng

    5 May 2017, by Harry

    I'm trying to create looping apng files from mkvs. My problem is that they don't loop: the files play once, then stop.

    This is what I'm doing:

     ffmpeg -ss 16:43 -i ./10.mkv -loop 10 -t 1 -filter:v "setpts=PTS-STARTPTS, crop=1200:800, hqdn3d=1.5:1.5:6:6, scale=600:400"  10-file-2.apng
    

    I've tried -loop -1, -loop 10, and -loop 1, but no looping happens. My version is

    ffmpeg-3.3.el_capitan.bottle.tar.gz
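
    If I remember the muxer options correctly, -loop is not an apng muxer option at all; the apng muxer's looping option is -plays, where 0 means loop forever (worth verifying against your build with ffmpeg -h muxer=apng):

    ffmpeg -ss 16:43 -t 1 -i ./10.mkv -filter:v "setpts=PTS-STARTPTS, crop=1200:800, hqdn3d=1.5:1.5:6:6, scale=600:400" -plays 0 10-file-2.apng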