Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • Encode video in reverse?

    7 March 2017, by bob

    Does anyone know if it is possible to encode a video using ffmpeg in reverse? (So the resulting video plays in reverse?)

    I think I could do it by generating an image for each frame (a folder of images labelled 1.jpg, 2.jpg, etc.), writing a script to reverse the image names, and then re-encoding the video from those files.

    Does anyone know of a quicker way?

    This is an FLV video.

    Thank you
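    A quicker route than dumping frames to disk may be ffmpeg's reverse video filter (with areverse for the audio); note that these filters buffer the entire clip in memory, so they are only practical for short videos. A minimal sketch, assuming the input file is named input.flv:

```shell
# Reverse both video and audio in a single pass. The reverse/areverse
# filters hold the whole stream in memory, so keep this to short clips.
ffmpeg -i input.flv -vf reverse -af areverse reversed.flv
```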

  • Fast seeking ffmpeg multiple times for screenshots

    7 March 2017, by user3786834

    I came across http://askubuntu.com/questions/377579/ffmpeg-output-screenshot-gallery/377630#377630, and it's perfect: it does exactly what I wanted.

    However, I'm using remote URLs to generate the screenshot timeline. I know it's possible to fast-seek in remote files using https://trac.ffmpeg.org/wiki/Seeking%20with%20FFmpeg (putting -ss before the -i), but that only seeks once.

    I'm looking for a way to use the

    ./ffmpeg -i input -vf "select=gt(scene\,0.4),scale=160:-1,tile,scale=600:-1" \
    -frames:v 1 -qscale:v 3 preview.jpg
    

    command, but using the fast-seek method, as it's currently very slow with a remote file. I use PHP, and I am aware that a C approach exists using av_seek_frame, but I barely know C, so I'm unable to implement it in the PHP script I'm writing. Hopefully it is possible to do this directly with ffmpeg in PHP's system() function.

    Currently, I run separate ffmpeg commands (with the -ss method) and then combine the screenshots in PHP. With this method, however, the metadata is refetched each time; it would be more efficient to have everything happen in a single command line, because I want to reduce the number of requests made to the remote URL so I can run more scripts in sequence.

    Thank you for your help.
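    For reference, the fast-seek form puts -ss before -i, so ffmpeg seeks by keyframe instead of decoding up to the timestamp. One thumbnail at the 90-second mark might look like the sketch below (the URL and timestamp are placeholders); a gallery built this way still needs one invocation per thumbnail, which is the repeated-fetch cost described above:

```shell
# -ss before -i performs a fast (input-side) seek on the remote file;
# each invocation produces a single screenshot.
ffmpeg -ss 00:01:30 -i "http://example.com/video.mp4" \
       -frames:v 1 -qscale:v 3 preview.jpg
```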

  • C++/C FFmpeg artifact build up across video frames

    6 March 2017, by ChiragRaman

    Context:
    I am building a recorder for capturing video and audio in separate threads (using Boost thread groups) using FFmpeg 2.8.6 on Ubuntu 16.04. I followed the demuxing_decoding example here: https://www.ffmpeg.org/doxygen/2.8/demuxing_decoding_8c-example.html

    Video capture specifics:
    I am reading H264 off a Logitech C920 webcam and writing the video to a raw file. The issue I notice is that artifacts seem to build up across frames until a particular frame resets them. Here are my frame-grabbing and decoding functions:

    // Used for injecting decoding functions for different media types, allowing
    // for a generic decode loop
    typedef std::function<int(AVPacket *, int *, int)> PacketDecoder;
    
    /**
     * Decodes a video packet.
     * If the decoding operation is successful, returns the number of bytes decoded,
     * else returns the result of the decoding process from ffmpeg
     */
    int decode_video_packet(AVPacket *packet,
                            int *got_frame,
                            int cached){
        int ret = 0;
        int decoded = packet->size;
    
        *got_frame = 0;
    
        //Decode video frame
        ret = avcodec_decode_video2(video_decode_context,
                                    video_frame, got_frame, packet);
        if (ret < 0) {
            //FFmpeg users should use av_err2str
            char errbuf[128];
            av_strerror(ret, errbuf, sizeof(errbuf));
            std::cerr << "Error decoding video frame " << errbuf << std::endl;
            decoded = ret;
        } else {
            if (*got_frame) {
                video_frame->pts = av_frame_get_best_effort_timestamp(video_frame);
    
                //Write to log file
                AVRational *time_base = &video_decode_context->time_base;
                log_frame(video_frame, time_base,
                          video_frame->coded_picture_number, video_log_stream);
    
    #if( DEBUG )
                std::cout << "Video frame " << ( cached ? "(cached)" : "" )
                      << " coded:" << video_frame->coded_picture_number
                      << " pts:" << video_frame->pts << std::endl;
    #endif
    
                /*Copy decoded frame to destination buffer:
                 *This is required since rawvideo expects non aligned data*/
                av_image_copy(video_dest_attr.video_destination_data,
                              video_dest_attr.video_destination_linesize,
                              (const uint8_t **)(video_frame->data),
                              video_frame->linesize,
                              video_decode_context->pix_fmt,
                              video_decode_context->width,
                              video_decode_context->height);
    
                //Write to rawvideo file
                fwrite(video_dest_attr.video_destination_data[0],
                       1,
                       video_dest_attr.video_destination_bufsize,
                       video_out_file);
    
                //Unref the refcounted frame
                av_frame_unref(video_frame);
            }
        }
    
        return decoded;
    }
    
    /**
     * Grabs frames in a loop and decodes them using the specified decoding function
     */
    int process_frames(AVFormatContext *context,
                       PacketDecoder packet_decoder) {
        int ret = 0;
        int got_frame;
        AVPacket packet;
    
        //Initialize packet, set data to NULL, let the demuxer fill it
        av_init_packet(&packet);
        packet.data = NULL;
        packet.size = 0;
    
        // read frames from the file
        for (;;) {
            ret = av_read_frame(context, &packet);
            if (ret < 0) {
            if (ret == AVERROR(EAGAIN)) {
                    continue;
                } else {
                    break;
                }
            }
    
            //Convert timing fields to the decoder timebase
            unsigned int stream_index = packet.stream_index;
            av_packet_rescale_ts(&packet,
                                 context->streams[stream_index]->time_base,
                                 context->streams[stream_index]->codec->time_base);
    
            AVPacket orig_packet = packet;
            do {
                ret = packet_decoder(&packet, &got_frame, 0);
                if (ret < 0) {
                    break;
                }
                packet.data += ret;
                packet.size -= ret;
            } while (packet.size > 0);
            av_free_packet(&orig_packet);
    
            if(stop_recording == true) {
                break;
            }
        }
    
        //Flush cached frames
        std::cout << "Flushing frames" << std::endl;
        packet.data = NULL;
        packet.size = 0;
        do {
            packet_decoder(&packet, &got_frame, 1);
        } while (got_frame);
    
        av_log(0, AV_LOG_INFO, "Done processing frames\n");
        return ret;
    }
    


    Questions:

    1. How do I go about debugging the underlying issue?
    2. Is it possible that running the decoding code in a thread other than the one in which the decoding context was opened is causing the problem?
    3. Am I doing something wrong in the decoding code?

    Things I have tried/found:

    1. I found a thread about the same problem: FFMPEG decoding artifacts between keyframes. (I cannot post samples of my corrupted frames due to privacy issues, but the image linked in that question depicts the same issue I have.) However, the answer was posted by the OP without specific details about how the issue was fixed; he only mentions that he wasn't 'preserving the packets correctly', but not what was wrong or how to fix it. I do not have enough reputation to post a comment seeking clarification.

    2. I was initially passing the packet into the decoding function by value, but switched to passing by pointer on the off chance that the packet freeing was being done incorrectly.

    3. I found another question about debugging decoding issues, but couldn't find anything conclusive: How is video decoding corruption debugged?

    I'd appreciate any insight. Thanks a lot!

    [EDIT] In response to Ronald's answer, I am adding a little more information that wouldn't fit in a comment:

    1. I am only calling decode_video_packet() from the thread processing video frames; the other thread processing audio frames calls a similar decode_audio_packet() function, so only one thread calls each function. I should mention that I have set thread_count in the decoding context to 1; without that, I would get a segfault in malloc.c while flushing the cached frames.

    2. I can see this being a problem if process_frames and the frame-decoder function ran on separate threads, which is not the case. Is there a specific reason why it would matter whether the freeing is done within the function or after it returns? I believe the freeing function is passed a copy of the original packet because multiple decode calls may be required for an audio packet when the decoder doesn't consume the entire packet at once.

    3. A general problem is that the corruption does not occur every time. I could debug better if it were deterministic; as it stands, I can't even tell whether a fix works.

  • How to create a .command file with Unity with execute privileges?

    6 March 2017, by EhabSherif

    In my app I'm trying to use ffmpeg through a batch file. On Windows, writing .bat files works great; however, when I write .command files on macOS, the files don't open, and the system tells me that I don't have permission.

    Also when I run the command

    chmod u+x [.command File Path]
    

    through the terminal, the file works.

    Note that when I create a new text file with the command in it and rename it to .command, it works fine.

    I write the command file in Unity using File.WriteAll; is there another way of writing the file that grants the privilege needed? Note that I run the batch file using a Process in Unity.

    Thank you
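    For context, the permission error is about the missing execute bit: writing a file never sets it, which is why chmod from the terminal fixes things. Two shell-level workarounds (the path is a placeholder; from Unity, the chmod could itself be launched as a Process before running the script):

```shell
# Option 1: grant the execute bit to the generated script before
# launching it.
chmod u+x /path/to/generated.command

# Option 2: run the script through the shell directly; this does not
# require the execute bit at all.
/bin/bash /path/to/generated.command
```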

  • Process videos on Azure VM

    6 March 2017, by sebastian.roibu

    I am building an application that will process uploaded video. Each uploaded video will be trimmed into multiple shorter parts and all the cuts will be concatenated to create a highlight of the original file. All the processing is done with ffmpeg.

    I am using Azure File Storage, to upload the videos and be able to access them via Samba layer. I also have an Azure VM where I mounted the shared folders.

    What is the best approach on the worker?

    • Should I build a console app and run it as a Windows service inside an Azure VM?

    • Is there another way of doing things?

    I am looking for a way that can be scaled up in production.

    Is there another way of doing everything I described above?
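    Whatever the worker looks like, the ffmpeg side of the trim-and-concatenate pipeline described above is commonly sketched with stream copy and the concat demuxer; file names and timestamps below are placeholders:

```shell
# Cut two segments without re-encoding (-ss before -i seeks fast,
# -t limits the duration, -c copy avoids transcoding).
ffmpeg -ss 00:00:10 -i input.mp4 -t 10 -c copy part1.mp4
ffmpeg -ss 00:01:00 -i input.mp4 -t 15 -c copy part2.mp4

# List the segments for the concat demuxer, then join them.
printf "file 'part1.mp4'\nfile 'part2.mp4'\n" > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy highlight.mp4
```

    Stream copy keeps the worker cheap, but cuts then land on keyframe boundaries; re-encoding the segments gives frame-accurate cuts at higher CPU cost.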