Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • ffmpeg chromakey filter: blend parameter does not do what it should

    5 May 2017, by Mat

    I want to remove a greenscreen background, and I am not completely satisfied with what I have achieved because I still have green borders (especially in semi-transparent areas like hair, when I move my head).

    The documentation for the blend parameter of the chromakey filter says:

    blend

    Blend percentage.

    0.0 makes pixels either fully transparent, or not transparent at all.

    Higher values result in semi-transparent pixels, with a higher transparency the more similar the pixels color is to the key color.

    So I reckoned I could use this to minimise greenscreen bleeding (is this the term?) when removing the background with a command like this:

    ffmpeg -i DSCN0015.MOV -vf "[in] hqdn3d=4:4:8:8 [dn]; [dn] scale=iw*3:-1 [sc]; [sc] chromakey=0x005d0b:0.125:0.0 [out]" -r 24 -an -c:v ffvhuff 4.mov
    

    But when I use anything other than 0.0 for blend, it seems to set some kind of MINIMUM transparency, and the entire frame is affected.

    Here are some pictures to visualize the issue: the first is the raw material from the camera, the second shows what I get with blend=0.0, and the last one shows the problem: blend=0.5, where the whole frame is almost completely transparent.

    [Raw material as from camera]

    [blend=0.0]

    [blend=0.5]
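    One practical way to attack the green borders is to keep blend small and sweep similarity/blend pairs until the fringe disappears. A minimal Python sketch that just composes candidate chromakey filtergraph strings for such a sweep (the helper names are mine, not part of ffmpeg):

    ```python
    def chromakey_filter(color="0x005d0b", similarity=0.125, blend=0.0):
        """Compose a chromakey filtergraph string for an ffmpeg -vf argument."""
        return f"chromakey={color}:{similarity}:{blend}"

    def sweep(similarities, blends):
        """Candidate filter strings to try when tuning away green edges."""
        return [chromakey_filter(similarity=s, blend=b)
                for s in similarities for b in blends]
    ```

    Each candidate string can be substituted for the chromakey stage of the filtergraph in the command above; small blend values (0.01–0.1) are usually where the useful range lies.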

  • Extracting each individual frame from an H264 stream for real-time analysis with OpenCV

    5 May 2017, by exclmtnpt

    Problem Outline

    I have an h264 real-time video stream (I'll call this "the stream") being captured in Process1. My goal is to extract each frame from the stream as it comes through and use Process2 to analyze it with OpenCV. (Process1 is nodejs, Process2 is Python)

    Things I've tried, and their failure modes:

    • Send the stream directly from Process1 to Process2 over a named FIFO pipe:

    I succeeded in directing the stream from Process1 into the pipe. However, in Process2 (which is Python) I could not (a) extract individual frames from the stream, and (b) convert any extracted data from h264 into an OpenCV format (e.g. JPEG, numpy array).

    I had hoped to use OpenCV's VideoCapture() method, but it does not allow you to pass a FIFO pipe as an input. I was able to use VideoCapture by saving the h264 stream to a .h264 file, and then passing that as the file path. This doesn't help me, because I need to do my analysis in real time (i.e. I can't save the stream to a file before reading it in to OpenCV).

    • Pipe the stream from Process1 to FFMPEG, use FFMPEG to change the stream format from h264 to MJPEG, then pipe the output to Process2:

    I attempted this using the command:

    cat pipeFromProcess1.fifo | ffmpeg -i pipe:0 -f h264 -f mjpeg pipe:1 | cat > pipeToProcess2.fifo

    The biggest issue with this approach is that FFMPEG takes inputs from Process1 until Process1 is killed, and only then does Process2 begin to receive the data.

    Additionally, on the Process2 side, I still don't understand how to extract individual frames from the data coming over the pipe. I open the pipe for reading (as "f") and then execute data = f.readline(). The size of data varies drastically (some reads have length on the order of 100, others length on the order of 1,000). When I use f.read() instead of f.readline(), the length is much larger, on the order of 100,000.

    If I were to know that I was getting the correct size chunk of data, I would still not know how to transform it into an OpenCV-compatible array because I don't understand the format it's coming over in. It's a string, but when I print it out it looks like this:

    ��_M~0A0����tQ,\%��e���f/�H�#Y�p�f#�Kus�} F����ʳa�G������+$x�%V�� }[����Wo �1'̶A���c����*�&=Z^�o'��Ͽ� SX-ԁ涶V&H|��$ ~��<�E�� ��>�����u���7�����cR� �f�=�9 ��fs�q�ڄߧ�9v�]�Ӷ���& gr]�n�IRܜ�檯���� � ����+ �I��w�}� ��9�o��� �w��M�m���IJ ��� �m�=�Soՙ}S �>j �,�ƙ�'���tad =i ��WY�FeC֓z �2�g�;EXX��S��Ҁ*, ���w� _|�&�y��H��=��)� ���Ɗ3@ �h���Ѻ�Ɋ��ZzR`��)�y�� c�ڋ.��v�!u���� �S�I#�$9R�Ԯ0py z ��8 #��A�q�� �͕� ijc �bp=��۹ c SqH

    Converting from base64 doesn't seem to help. I also tried:

    array = np.fromstring(data, dtype=np.uint8)
    

    which does convert to an array, but not one of a size that makes sense based on the 640x368x3 dimensions of the frames I'm trying to decode.
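    Because h264 is compressed, reads from the pipe will never line up with frame boundaries, so no chunk size makes sense on the raw stream. The usual workaround is to have ffmpeg decode to rawvideo (e.g. -f rawvideo -pix_fmt bgr24) so that every frame is exactly width*height*3 bytes, then read fixed-size chunks. A sketch assuming the 640x368 frames mentioned above (the function name is mine):

    ```python
    import numpy as np

    FRAME_W, FRAME_H = 640, 368
    FRAME_BYTES = FRAME_W * FRAME_H * 3  # bgr24: 3 bytes per pixel

    def read_frame(stream):
        """Read exactly one rawvideo frame from a byte stream; None at EOF."""
        buf = b""
        while len(buf) < FRAME_BYTES:
            chunk = stream.read(FRAME_BYTES - len(buf))
            if not chunk:  # EOF before a full frame arrived
                return None
            buf += chunk
        # reshape the flat buffer into an OpenCV-compatible H x W x 3 array
        return np.frombuffer(buf, dtype=np.uint8).reshape((FRAME_H, FRAME_W, 3))
    ```

    On the sending side, something like `ffmpeg -i pipe:0 -f rawvideo -pix_fmt bgr24 pipe:1` (untested here) would make the pipe carry fixed-size decoded frames instead of a compressed bitstream.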

    • Using decoders such as Broadway.js to convert the h264 stream:

    These seem to be focused on streaming to a website, and I did not have success trying to re-purpose them for my goal.

    Clarification about what I'm NOT trying to do:

    I've found many related questions about streaming h264 video to a website. This is a solved problem, but none of the solutions help me extract individual frames and put them in an OpenCV-compatible format.

    Also, I need to use the extracted frames in real time on a continual basis. So saving each frame as a .jpg is not helpful.

    System Specs

    Raspberry Pi 3 running Raspbian Jessie

    Additional Detail

    I've tried to generalize the problem I'm having in my question. If it's useful to know, Process1 is using the node-bebop package to pull down the h264 stream (using drone.getVideoStream()) from a Parrot Bebop 2.0. I tried using the other video stream available through node-bebop (getMjpegStream()). This worked, but was not nearly real-time; I was getting very intermittent data streams. I've entered that specific problem as an Issue in the node-bebop repository.

    Thanks for reading; I really appreciate any help anyone can give!

  • How to cut a video every 2MB using FFmpeg

    5 May 2017, by Krish

    Hi, I have a 20MB video and I need to split it into parts of 2MB each. I googled this and found the FFmpeg library for splitting videos.

    But this library splits videos based on a time range (e.g. from 00:00:02 to 00:00:06; the video between these timestamps is cut out).

    My requirement is to cut the video every 2MB; that is my exact requirement, not a time limit.

    I searched for this a lot on Google but did not find a solution. Can someone help me, please?
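    ffmpeg cuts by time, but a size target can be converted into a time target through the stream's average bitrate (size ≈ bitrate × duration). A rough Python sketch of that arithmetic (the helper names are mine; it assumes a fairly constant bitrate, so the pieces will only be approximately 2MB):

    ```python
    def segment_duration_s(target_bytes, bitrate_bps):
        """Seconds of video that fit in target_bytes at the given average bitrate."""
        return target_bytes * 8 / bitrate_bps

    def segment_starts(total_s, seg_s):
        """Start times (in seconds) to feed to -ss when cutting with -c copy."""
        starts = []
        t = 0.0
        while t < total_s:
            starts.append(t)
            t += seg_s
        return starts
    ```

    For the 20MB example: if the whole file lasts 80 seconds, its average bitrate is 2 Mbit/s, so each 2MB piece is about 8 seconds, and those start times can be fed to -ss with -c copy. ffmpeg's -fs output option can also stop writing at a size limit, but it produces a single truncated file rather than a series of pieces.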

    FFmpeg command I used for splitting:

    String cmd[] = new String[]{"-i", inputFileUrl, "-ss", "00:00:02", "-c", "copy", "-t", "00:00:06",
                    outputFileUrl};
            executeBinaryCommand(fFmpeg, cmd);
    
     public void executeBinaryCommand(FFmpeg ffmpeg, String[] command) {
    
            try {
    
                if (ffmpeg != null) {
    
                    ffmpeg.execute(command,
                            new ExecuteBinaryResponseHandler() {
    
                                @Override
                                public void onFailure(String response) {
                                    System.out.println("failure====>" + response.toString());
                                }
    
                                @Override
                                public void onSuccess(String response) {
                                System.out.println("response====>" + response);
                                }
    
                                @Override
                                public void onProgress(String response) {
                                    System.out.println("on progress");
                                }
    
                                @Override
                                public void onStart() {
                                    System.out.println("start");
                                }
    
                                @Override
                                public void onFinish() {
                                    System.out.println("Finish");
                                }
                            });
                }
            } catch (FFmpegCommandAlreadyRunningException exception) {
                exception.printStackTrace();
            }
        }
    
  • Using ffmpeg to create looping apng

    5 May 2017, by Harry

    I'm trying to create looping apng files from mkvs. My problem is that they don't loop. The files play once, but stop.

    This is what I'm doing:

     ffmpeg -ss 16:43 -i ./10.mkv -loop 10 -t 1 -filter:v "setpts=PTS-STARTPTS, crop=1200:800, hqdn3d=1.5:1.5:6:6, scale=600:400"  10-file-2.apng
    

    I've tried -loop -1, -loop 10 and -loop 1, but no looping happens. My version is

    ffmpeg-3.3.el_capitan.bottle.tar.gz
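    For APNG output, looping is controlled by the apng muxer's -plays option (number of plays, 0 = loop forever), not by -loop, which belongs to other muxers such as gif. A small Python sketch that just assembles such a command line (the helper name is mine; the -plays behaviour is as documented for the apng muxer, but worth verifying on your build):

    ```python
    def apng_cmd(src, dst, start, duration, vf, plays=0):
        """Assemble an ffmpeg command that writes an APNG playing `plays`
        times (0 = loop forever, per the apng muxer's -plays option)."""
        return ["ffmpeg", "-ss", start, "-i", src,
                "-t", str(duration), "-filter:v", vf,
                "-plays", str(plays), dst]
    ```

    The resulting list can be passed to subprocess.run; the key change from the command above is replacing -loop 10 with -plays 0.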

  • Extract information about video frames

    5 May 2017, by Sanduni Wickramasinghe

    I wrote code to read a video in the encoded domain, and I am able to retrieve information such as the size and duration of frames. The AVPacket struct contains a data member. I can read it, but since it is a byte array I can't use it in a readable format. I want to use this data for comparison with another video file. Please help.

    void CFfmpegmethods::VideoRead(){
        av_register_all();
        avformat_network_init();

        ofstream outdata;
        const char *url = "H:\\Sanduni_projects\\Sample_video.mp4";
        AVDictionary *options = NULL;
        AVFormatContext *s = avformat_alloc_context();
        AVPacket *pkt = new AVPacket();

        //set demuxer options before opening the input
        av_dict_set(&options, "video_size", "640x480", 0);
        av_dict_set(&options, "pixel_format", "rgb24", 0);

        //open the input stream and read the header (only once)
        if (avformat_open_input(&s, url, NULL, &options) < 0)
            abort();

        //any entries left in the dictionary were not consumed by the demuxer
        AVDictionaryEntry *e = av_dict_get(options, "", NULL, AV_DICT_IGNORE_SUFFIX);
        if (e) {
            fprintf(stderr, "Option %s not recognized by the demuxer.\n", e->key);
            abort();
        }
        av_dict_free(&options);

        //avformat_find_stream_info(s, NULL); //finding the missing information

        int i = 1;
        int64_t duration = 0;
        int size = 0;
        uint8_t *data = NULL; //unsigned integer type with a width of exactly 8 bits
        int total_size = 0;
        int64_t total_duration = 0;

        //writing data to a file
        outdata.open("H:\\Sanduni_projects\\log.txt");
        if (!outdata){
            cerr << "Error: file could not be opened" << endl;
            exit(1);
        }

        //av_read_frame() returns the next packet of the stream on each call
        while (av_read_frame(s, pkt) >= 0){
            duration = pkt->duration;
            size = pkt->size;
            data = pkt->data;

            total_size = total_size + size;
            total_duration = total_duration + duration;

            cout << "frame:" << i << " " << size << " " << duration << endl;

            //pkt->data is a raw byte array; streaming it with << would stop at
            //the first zero byte, so write each byte as a number instead
            outdata << "Frame: " << i << " ";
            for (int j = 0; j < size; j++){
                outdata << (int)data[j] << " ";
            }
            outdata << endl;

            //each packet must be unreferenced, otherwise its payload leaks
            av_packet_unref(pkt);
            i++;
        }

        delete pkt;

        cout << "total size: " << total_size << endl;
        cout << "total duration:" << total_duration << endl;

        outdata.close();

        //close the input after reading
        avformat_close_input(&s);
    }
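    For the comparison goal, raw packet payloads are usually fingerprinted with a hash rather than printed. A Python sketch of the idea (the same scheme could be implemented in the C++ above with any digest library; the function names are mine):

    ```python
    import hashlib

    def packet_digest(payload: bytes) -> str:
        """Readable, fixed-length fingerprint of one packet's byte array."""
        return hashlib.sha256(payload).hexdigest()

    def compare_streams(packets_a, packets_b):
        """Yield (packet_index, payloads_match) for two packet sequences."""
        for i, (a, b) in enumerate(zip(packets_a, packets_b), start=1):
            yield i, packet_digest(a) == packet_digest(b)
    ```

    Hashing makes the log human-readable and lets two videos be compared packet by packet without storing the raw bytes.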