Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg


  • Modifying FFmpeg and OpenCV source code to capture the RTP timestamp for each packet in NTP format

    21 August, by Fr0sty

    I was trying a little experiment to get the timestamps of the RTP packets through OpenCV's VideoCapture class from Python, which also meant modifying FFmpeg to accommodate the changes in OpenCV.

    I had read about the RTP packet format and wanted to fiddle around and see if I could find a way to get the NTP timestamps. I was unable to find any reliable help on extracting them, so I tried out this little hack.

    Credit to ryantheseer on GitHub for the modified code.

    Version of FFmpeg: 3.2.3
    Version of OpenCV: 3.2.0
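
    For reference, the 64-bit value the getters below split apart is an NTP timestamp: a 32.32 fixed-point number whose upper 32 bits are whole seconds since January 1, 1900 00:00 UTC and whose lower 32 bits are the fraction of a second in units of 2^-32 s. A minimal sketch of packing and unpacking it (my illustration, not part of the patch):

        #include <cstdint>

        // NTP 32.32 fixed point: upper half = seconds since 1900-01-01 00:00 UTC,
        // lower half = fraction of a second in units of 2^-32 s.
        static inline uint64_t ntp_pack(uint32_t seconds, uint32_t fraction) {
            return ((uint64_t)seconds << 32) | fraction;
        }
        static inline uint32_t ntp_seconds(uint64_t ntp)  { return (uint32_t)(ntp >> 32); }
        static inline uint32_t ntp_fraction(uint64_t ntp) { return (uint32_t)(ntp & 0xFFFFFFFFu); }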

    In Opencv source code:

    modules/videoio/include/opencv2/videoio.hpp:

    Added two getters for the RTP timestamp:

    .....   
        /** @brief Gets the upper bytes of the RTP time stamp in NTP format (seconds).
        */
        CV_WRAP virtual int64 getRTPTimeStampSeconds() const;
    
        /** @brief Gets the lower bytes of the RTP time stamp in NTP format (fraction of seconds).
        */
        CV_WRAP virtual int64 getRTPTimeStampFraction() const;
    .....
    

    modules/videoio/src/cap.cpp:

    Added an include and the implementation of the timestamp getter:

    ....
    #include 
    ....
    ....
    static inline uint64_t icvGetRTPTimeStamp(const CvCapture* capture)
    {
      return capture ? capture->getRTPTimeStamp() : 0;
    }
    ...
    

    Added the C++ timestamp getters in the VideoCapture class:

     ....
    /**@brief Gets the upper bytes of the RTP time stamp in NTP format (seconds).
    */
    int64 VideoCapture::getRTPTimeStampSeconds() const
    {
        int64 seconds = 0;
        uint64_t timestamp = 0;
        //Get the time stamp from the capture object
        if (!icap.empty())
            timestamp = icap->getRTPTimeStamp();
        else
            timestamp = icvGetRTPTimeStamp(cap);
        //Take the top 32 bits of the time stamp
        seconds = (int64)(timestamp >> 32);
        return seconds;
    }
    
    /**@brief Gets the lower bytes of the RTP time stamp in NTP format (fraction of seconds).
    */
    int64 VideoCapture::getRTPTimeStampFraction() const
    {
        int64 fraction = 0;
        uint64_t timestamp = 0;
        //Get the time stamp from the capture object
        if (!icap.empty())
            timestamp = icap->getRTPTimeStamp();
        else
            timestamp = icvGetRTPTimeStamp(cap);
        //Take the bottom 32 bits of the time stamp
        fraction = (int64)(timestamp & 0xFFFFFFFF);
        return fraction;
    }
    ...
    

    modules/videoio/src/cap_ffmpeg.cpp:

    Added an include:

    ...
    #include 
    ...
    

    Added a method reference definition:

    ...
    static CvGetRTPTimeStamp_Plugin icvGetRTPTimeStamp_FFMPEG_p = 0;
    ...
    

    Added the method to the module initializer method:

    ...
    if( icvFFOpenCV )
    ...
    ...
      icvGetRTPTimeStamp_FFMPEG_p =
                    (CvGetRTPTimeStamp_Plugin)GetProcAddress(icvFFOpenCV, "cvGetRTPTimeStamp_FFMPEG");
    ...
    ...
    icvWriteFrame_FFMPEG_p != 0 &&
    icvGetRTPTimeStamp_FFMPEG_p !=0)
    ...
    
    icvGetRTPTimeStamp_FFMPEG_p = (CvGetRTPTimeStamp_Plugin)cvGetRTPTimeStamp_FFMPEG;
    

    Implemented the getter interface:

    ...
    virtual uint64_t getRTPTimeStamp() const
        {
            return ffmpegCapture ? icvGetRTPTimeStamp_FFMPEG_p(ffmpegCapture) : 0;
        } 
    ...
    

    In FFmpeg's source code:

    libavcodec/avcodec.h:

    Added the NTP timestamp definition to the AVPacket struct:

    typedef struct AVPacket {
    ...
    ...
    uint64_t rtp_ntp_time_stamp;
    } AVPacket;
    

    libavformat/rtpdec.c:

    Store the NTP time stamp in the packet in the finalize_packet method:

    static void finalize_packet(RTPDemuxContext *s, AVPacket *pkt, uint32_t timestamp)
    {
        uint64_t offsetTime = 0;
        uint64_t rtp_ntp_time_stamp = timestamp;
    ...
    ...
    /* RM: Sets the RTP time stamp in the AVPacket */
        if (!s->last_rtcp_ntp_time || !s->last_rtcp_timestamp)
            offsetTime = 0;
        else
            /* 65536 == 2^16: put the 32-bit RTP tick counter on the same
               scale as the 64-bit NTP 32.32 fixed-point value */
            offsetTime = s->last_rtcp_ntp_time - ((uint64_t)(s->last_rtcp_timestamp) * 65536);
        rtp_ntp_time_stamp = ((uint64_t)(timestamp) * 65536) + offsetTime;
        pkt->rtp_ntp_time_stamp = rtp_ntp_time_stamp;
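
    A note on the arithmetic (my reading of the hack, not ryantheseer's documentation): multiplying by 65536 (2^16) treats each RTP tick as 1/65536 s, so the stored value is a true NTP time only if the RTP clock actually ran at 65536 Hz; video normally uses 90000 Hz. Also, until an RTCP sender report arrives, offsetTime stays 0 and the value is just the camera's tick counter shifted left, not wall-clock time. The standard sender-report mapping scales the tick delta by the real clock rate; a sketch, assuming the packet is not older than the last report:

        #include <stdint.h>

        /* Map an RTP timestamp to NTP 32.32 fixed point using the last RTCP
           sender report pair (sr_ntp, sr_rtp) and the stream clock rate
           (90000 for video). Assumes rtp_ts is not older than sr_rtp. */
        static uint64_t rtp_to_ntp(uint32_t rtp_ts, uint64_t sr_ntp,
                                   uint32_t sr_rtp, uint32_t clock_rate)
        {
            uint64_t d   = (uint32_t)(rtp_ts - sr_rtp); /* ticks since the SR */
            uint64_t sec = d / clock_rate;              /* whole seconds */
            uint64_t rem = d % clock_rate;              /* leftover ticks */
            return sr_ntp + (sec << 32) + ((rem << 32) / clock_rate);
        }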
    

    libavformat/utils.c:

    Copy the NTP time stamp from the demuxed packet to the returned packet in the read_frame_internal method:

    static int read_frame_internal(AVFormatContext *s, AVPacket *pkt)
    {
        ...
        uint64_t rtp_ntp_time_stamp = 0;
    ...
        while (!got_packet && !s->internal->parse_queue) {
              ...
              //COPY OVER the RTP time stamp TODO: just create a local copy
              rtp_ntp_time_stamp = cur_pkt.rtp_ntp_time_stamp;
    
    
              ...
    
    
      #if FF_API_LAVF_AVCTX
        update_stream_avctx(s);
      #endif
    
      if (s->debug & FF_FDEBUG_TS)
          av_log(s, AV_LOG_DEBUG,
               "read_frame_internal stream=%d, pts=%s, dts=%s, "
               "size=%d, duration=%"PRId64", flags=%d\n",
               pkt->stream_index,
               av_ts2str(pkt->pts),
               av_ts2str(pkt->dts),
               pkt->size, pkt->duration, pkt->flags);
      pkt->rtp_ntp_time_stamp = rtp_ntp_time_stamp; /* just added this line in the if statement */
      return ret;
    

    My python code to utilise these changes:

    import cv2
    
    uri = 'rtsp://admin:password@192.168.1.67:554'
    cap = cv2.VideoCapture(uri)
    
    while True:
        frame_exists, curr_frame = cap.read()
        if not frame_exists:
            break
        k = cap.getRTPTimeStampSeconds()
        l = cap.getRTPTimeStampFraction()
        # getRTPTimeStampSeconds() kept only the upper 32 bits,
        # so shift the seconds back up by 0x100000000 (2**32)
        time_shift = 0x100000000
        m = (time_shift * k) + l
        print("Imagetimestamp: %i" % m)
    cap.release()
    

    What I am getting as my output:

        Imagetimestamp: 0
        Imagetimestamp: 212041451700224
        Imagetimestamp: 212041687629824
        Imagetimestamp: 212041923559424
        Imagetimestamp: 212042159489024
        Imagetimestamp: 212042395418624
        Imagetimestamp: 212042631348224
        ...
    

    What astounded me most was that when I powered the IP camera off and back on, the timestamp started from 0 again and then incremented quickly. I read that the NTP time format is relative to January 1, 1900 00:00. But even when I calculated the offset between now and 01-01-1900 and accounted for it, I still ended up with an absurdly large number for the date.

    I don't know if I calculated it wrong. I have a feeling it's very off, or that what I'm getting is not really a timestamp.
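
    For what it's worth, the usual conversion from NTP to a calendar date subtracts the 2208988800-second offset between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01). A sketch that applies it to the second value from the output above (my illustration; that value decodes to roughly 13.7 hours past the 1900 epoch, far too small to be wall-clock time, which fits the power-cycle behaviour described above):

        #include <cstdint>
        #include <cstdio>
        #include <ctime>

        int main() {
            // Second value printed by the Python script above.
            uint64_t ntp = 212041451700224ULL;
            uint64_t seconds  = ntp >> 32;                 // since 1900-01-01
            double   fraction = (ntp & 0xFFFFFFFFULL) / 4294967296.0;
            const uint64_t NTP_UNIX_DELTA = 2208988800ULL; // 1900 -> 1970
            if (seconds < NTP_UNIX_DELTA) {
                // Too small to be wall-clock time: likely ticks since stream start.
                printf("not wall-clock NTP: %.6f s past the 1900 epoch\n",
                       seconds + fraction);
            } else {
                time_t t = (time_t)(seconds - NTP_UNIX_DELTA);
                printf("%s+%.6f s\n", ctime(&t), fraction);
            }
            return 0;
        }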

  • ffmpeg memory leak in the avcodec_open2 method

    21 August, by unresolved_external

    I've developed an application which handles a live video stream. The problem is that it should run as a service, and over time I am noticing some memory growth. When I checked the application with valgrind, it did not find any leak-related issues, so I checked it with Google's profiling tools. This is the result (subtracting one of the first dumps from the latest) after an approximately 6-hour run:

    30.0  35.7%  35.7%     30.0  35.7% av_malloc
    28.9  34.4%  70.2%     28.9  34.4% av_reallocp
    24.5  29.2%  99.4%     24.5  29.2% x264_malloc
    

    When I look at the memory graph, I see that these allocations are related to avcodec_open2. The client code is:

    g_EncoderMutex.lock();
    ffmpeg_encoder_start(OutFileName.c_str(), AV_CODEC_ID_H264, m_FPS, width, height);
    for (pts = 0; pts < VideoImages.size(); pts++) {
        m_frame->pts = pts;
        ffmpeg_encoder_encode_frame(VideoImages[pts].RGBimage[0]);
    }
    ffmpeg_encoder_finish();
    g_EncoderMutex.unlock();
    

    The ffmpeg_encoder_start method is:

     void VideoEncoder::ffmpeg_encoder_start(const char *filename, int codec_id, int fps, int width, int height)
            {
                int ret;
                m_FPS=fps;
                AVOutputFormat * fmt = av_guess_format(filename, NULL, NULL);
                m_oc = NULL;
                avformat_alloc_output_context2(&m_oc, NULL, NULL, filename);
    
                m_stream = avformat_new_stream(m_oc, 0);
                AVCodec *codec=NULL;
    
                codec =  avcodec_find_encoder(codec_id);    
                if (!codec) 
                {
                    fprintf(stderr, "Codec not found\n");
                    return; //-1
                }
    
                m_c=m_stream->codec;
    
                avcodec_get_context_defaults3(m_c, codec);
    
                m_c->bit_rate = 400000;
                m_c->width = width;
                m_c->height = height;
                m_c->time_base.num = 1;
                m_c->time_base.den = m_FPS;
                m_c->gop_size = 10;
                m_c->max_b_frames = 1;
                m_c->pix_fmt = AV_PIX_FMT_YUV420P;
                if (codec_id == AV_CODEC_ID_H264)
                    av_opt_set(m_c->priv_data, "preset", "ultrafast", 0);
    
                if (m_oc->oformat->flags & AVFMT_GLOBALHEADER) 
                    m_c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
                avcodec_open2( m_c, codec, NULL );
    
                m_stream->time_base=(AVRational){1, m_FPS};
    
                if (avio_open(&m_oc->pb, filename, AVIO_FLAG_WRITE) < 0)
                {
                    printf( "Could not open '%s'\n", filename);
                    exit(1);
                }            
    
                avformat_write_header(m_oc, NULL);
                m_frame = av_frame_alloc();
                if (!m_frame) {
                    printf( "Could not allocate video frame\n");
                    exit(1);
                }
                m_frame->format = m_c->pix_fmt;
                m_frame->width  = m_c->width;
                m_frame->height = m_c->height;
                ret = av_image_alloc(m_frame->data, m_frame->linesize, m_c->width, m_c->height, m_c->pix_fmt, 32);
                if (ret < 0) {
                    printf("Could not allocate raw picture buffer\n");
                    exit(1);
                }
            }
    

    The ffmpeg_encoder_encode_frame is:

    void VideoEncoder::ffmpeg_encoder_encode_frame(uint8_t *rgb) 
    {
        int ret, got_output;
        ffmpeg_encoder_set_frame_yuv_from_rgb(rgb);
        av_init_packet(&m_pkt);
        m_pkt.data = NULL;
        m_pkt.size = 0;
    
        ret = avcodec_encode_video2(m_c, &m_pkt, m_frame, &got_output);
        if (ret < 0) {
            printf("Error encoding frame\n");
            exit(1);
        }
        if (got_output) 
        {
    
             av_packet_rescale_ts(&m_pkt,
                            (AVRational){1, m_FPS}, m_stream->time_base);
            m_pkt.stream_index = m_stream->index;
            int ret = av_interleaved_write_frame(m_oc, &m_pkt);
    
            av_packet_unref(&m_pkt);
    
        }
    
    }
    

    ffmpeg_encoder_finish code is:

    void VideoEncoder::ffmpeg_encoder_finish(void) 
            {
                int got_output, ret;
    
                do {
    
                    ret = avcodec_encode_video2(m_c, &m_pkt, NULL, &got_output);
                    if (ret < 0) {
                        printf( "Error encoding frame\n");
                        exit(1);
                    }
                    if (got_output) {
    
                        av_packet_rescale_ts(&m_pkt,
                                    (AVRational){1, m_FPS}, m_stream->time_base);
                        m_pkt.stream_index = m_stream->index;
                        int ret = av_interleaved_write_frame(m_oc, &m_pkt);
    
                        av_packet_unref(&m_pkt);
                    }
                } while (got_output);
    
                av_write_trailer(m_oc);
                avio_closep(&m_oc->pb);
    
                avformat_free_context(m_oc);
    
                av_freep(&m_frame->data[0]);
                av_frame_free(&m_frame);
    
                av_packet_unref(&m_pkt);
                sws_freeContext(m_sws_context);
            }
    

    This code runs multiple times in a loop. So my question is: what am I doing wrong? Maybe ffmpeg is using some kind of internal buffering? If so, how do I disable it? Such an increase in memory usage is simply unacceptable.
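
    For reference while reading the code above: avformat_free_context() frees the muxer, but in lavf versions of this era it did not necessarily close a codec context opened with avcodec_open2(), and ffmpeg_encoder_finish() never calls avcodec_close(m_c). If that is the case here, every start/finish cycle leaks the opened encoder, which would show up as exactly this av_malloc/x264_malloc growth. A sketch of an explicit teardown (ffmpeg_encoder_close_all is a hypothetical name, assuming the same members as above):

        // Hypothetical teardown variant: close the codec context explicitly so
        // x264 releases its internal pools, then free the muxer and buffers.
        void VideoEncoder::ffmpeg_encoder_close_all()
        {
            avcodec_close(m_c);           // releases encoder-internal state
            avio_closep(&m_oc->pb);
            avformat_free_context(m_oc);  // frees the streams and the muxer
            av_freep(&m_frame->data[0]);  // buffer from av_image_alloc()
            av_frame_free(&m_frame);
            sws_freeContext(m_sws_context);
            m_sws_context = NULL;
        }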

  • How to secure HLS video file for offline playback in react native

    21 August, by Rakish Frisky

    Hey, currently I want to make an application that stores our videos and downloads them for offline viewing. I have already encrypted the HLS video file, but to support full offline playback I would have to ship the decryption key along with it. So how can I make my HLS video playable only inside my apps?

    Can someone help me solve this problem? I have been stuck on it for two weeks.

  • Android NDK Cross Compile FFmpeg, dlopen failed: cannot locate symbol

    21 August, by binglingziyu

    I cross-compiled FFmpeg 4.2 with Android NDK r20 successfully, but my app crashes with:

    UnsatisfiedLinkError: dlopen failed: cannot locate symbol __aeabi_idiv

    This is the FFmpeg 4.2 source code with my build_android.sh:

    ffmpeg-android-build

    # configure the NDK r20 root path in build_android.sh
    $ cd ffmpeg-android-build
    $ ./build_android.sh
    $ make -j 4
    $ make install
    

    This is my Android project to test FFmpeg:

    ffmpeg-android-test

    This seemed like the solution, but I don't know how to apply it in my situation:

    #cannot-locate-symbols

    Expected:

    1. NDK r20 and FFmpeg 4.2 must be used (can't change the versions)
    2. My Android test project runs
  • How to create a video from images with FFmpeg?

    21 August, by user3877422
    ffmpeg -r 1/5 -start_number 2 -i img%03d.png -c:v libx264 -r 30 -pix_fmt yuv420p out.mp4
    

    This line worked fine, but I want to create a video file from images in another folder. The image names in my folder are:

    img001.jpg
    img002.jpg
    img003.jpg
    ...
    

    How can I input image files from a different folder? Example: C:\mypics

    I tried this command, but ffmpeg generated a video containing only the first image (img001.jpg):

    ffmpeg -r 1/5 -start_number 0 -i C:\myimages\img%03d.png -c:v libx264 -r 30 -pix_fmt yuv420p out.mp4