
Other articles (77)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out.

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation from users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; and translations of existing documentation into other languages.
    To contribute, register to the project users’ mailing (...)

  • User profiles

    12 April 2011, by

    Each user has a profile page on which they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
    Users can edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...)

On other sites (9630)

  • How to properly wrap H264 into FLV with FFMPEG?

    9 August 2017, by mOfl

    First of all, the "properly" in the title refers to this related question, whose answer does not solve my problem.

    tl;dr: There is a difference between encoding a video and directly storing it into FLV, and doing this in two separate steps. I need to do it separately; how do I get the same result as doing it directly?

    Nvidia’s hardware encoder NVENC produces raw H.264 data without a container, which is difficult to play in most video players. For an Adobe AIR application, I need to wrap the video into the FLV format, for which I wanted to use FFMPEG:

    ffmpeg -f h264 -i "input.h264" -c copy -f flv "output.flv"

    This did not work as expected, because the first frame of each video treated this way is simply not shown. Each video is only displayed from the second frame, which is a shame for single-frame videos (using the GPU’s hardware encoder for lightning-fast image compression only).

    For inspection, I now reencode the input video twice: once directly to FLV output

    ffmpeg -f h264 -i "input.h264" -c:v h264_nvenc -f flv "A.flv"

    and once to H.264, shoving it into an FLV container afterwards.

    ffmpeg -f h264 -i "input.h264" -c:v h264_nvenc -f h264 "reencode.h264"
    ffmpeg -f h264 -i "reencode.h264" -c copy -f flv "B.flv"

    The first video plays fine, the second does not. The resulting FLV of the direct approach (A.flv, see below) has a slightly different file structure, especially the NAL unit differs, which I suspect is the reason for the different behavior.

    So, my question is: if I already have an H.264 video and only want it copied into an FLV container without being transcoded, but with the file and frame headers filled in correctly as they are when actually transcoding, how do I tell FFMPEG? Is there a command for this, such as "-c copy butGenerateValidHeader"?
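The NAL-unit difference suspected above is most likely the difference between the Annex B byte stream (NAL units prefixed with 00 00 00 01 start codes, as in the raw reencode.h264 below) and the length-prefixed AVCC framing used inside FLV/MP4. As a rough illustration of that repackaging, not FFMPEG code, a minimal Python sketch (3-byte start codes ignored for brevity):

```python
import struct

def annexb_to_avcc(data: bytes) -> bytes:
    """Repackage Annex B NAL units (00 00 00 01 start codes) as
    AVCC-style NAL units with 4-byte big-endian length prefixes,
    the framing used inside FLV and MP4 containers."""
    # Split on 4-byte start codes; 3-byte (00 00 01) codes are ignored here
    nals = [n for n in data.split(b"\x00\x00\x00\x01") if n]
    return b"".join(struct.pack(">I", len(n)) + n for n in nals)

# First bytes of the SPS NAL unit from the hex dumps below, start-code prefixed
print(annexb_to_avcc(b"\x00\x00\x00\x01\x67\x4d\x40\x20").hex())
# 00000004674d4020
```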

    Here are the relevant portions of the files:

    Direct approach

    ffmpeg -f h264 -i "input.h264" -c:v h264_nvenc -f flv "A.flv"

    A.flv

    46 4C 56 01 01 00 00 00 09 00 00 00 00 12 00 00 // FLV header + metadata
    B8 00 00 00 00 00 00 00 02 00 0A 6F 6E 4D 65 74
    61 44 61 74 61 08 00 00 00 08 00 08 64 75 72 61
    74 69 6F 6E 00 3F A0 E5 60 41 89 37 4C 00 05 77
    69 64 74 68 00 40 93 80 00 00 00 00 00 00 06 68
    65 69 67 68 74 00 40 8E F0 00 00 00 00 00 00 0D
    76 69 64 65 6F 64 61 74 61 72 61 74 65 00 40 9E
    84 80 00 00 00 00 00 09 66 72 61 6D 65 72 61 74
    65 00 40 3E 00 00 00 00 00 00 00 0C 76 69 64 65
    6F 63 6F 64 65 63 69 64 00 40 1C 00 00 00 00 00
    00 00 07 65 6E 63 6F 64 65 72 02 00 0D 4C 61 76
    66 35 37 2E 37 31 2E 31 30 30 00 08 66 69 6C 65
    73 69 7A 65 00 40 F9 5C B0 00 00 00 00 00 00 09

    00 00 00 C3 09 00 00 2B 00 00 00 00 00 00 00 17 // AVC sequence start
    00 00 00 00
               01 4D 40 20 FF E1 00 17             // ?
                                       67 4D 40 20 // Sequence parameter set
    95 A0 13 81 F7 EB 01 10 00 00 3E 80 00 0E A6 08
    F1 C3 2A
            01 00 04                               // ?
                     68 EE 3C 80                   // Picture parameter set
                                 00 00 00 36 09 01 // AVC NALU
    94 9A 00 00 00 00 00 00 00 17 01 00 00 00
                                             00 01 // ?
    94 91
         65                                        // IDR frame
           [B8 04 1D FF ...]
    00 01 94 A5 09 00 00 05 00 00 00 00 00 00 00    // ?
                                                17 // AVC sequence end
    02 00 00 00 00 00 00 10

    Encoding first

    ffmpeg -f h264 -i "input.h264" -c:v h264_nvenc -f h264 "reencode.h264"

    reencode.h264

    00 00 00 01 67 4D 40 20 95 A0 13 81 F7 EB 01 10 // Sequence parameter set
    00 00 3E 80 00 0E A6 08 F1 C3 2A
                                    00 00 00 01 68 // Picture parameter set
    EE 3C 80
            00 00 00 01 65                         // IDR frame
                          [B8 04 1D FF ...]        // Frame data

    Squeeze into container

    ffmpeg -f h264 -i "reencode.h264" -c copy -f flv "B.flv"

    B.flv

    46 4C 56 01 01 00 00 00 09 00 00 00 00 12 00 00 // FLV header + metadata
    A4 00 00 00 00 00 00 00 02 00 0A 6F 6E 4D 65 74
    61 44 61 74 61 08 00 00 00 07 00 08 64 75 72 61
    74 69 6F 6E 00 3F A4 7A E1 47 AE 14 7B 00 05 77
    69 64 74 68 00 40 93 80 00 00 00 00 00 00 06 68
    65 69 67 68 74 00 40 8E F0 00 00 00 00 00 00 0D
    76 69 64 65 6F 64 61 74 61 72 61 74 65 00 00 00
    00 00 00 00 00 00 00 0C 76 69 64 65 6F 63 6F 64
    65 63 69 64 00 40 1C 00 00 00 00 00 00 00 07 65
    6E 63 6F 64 65 72 02 00 0D 4C 61 76 66 35 37 2E
    37 31 2E 31 30 30 00 08 66 69 6C 65 73 69 7A 65
    00 40 F9 5B 40 00 00 00 00 00 00 09
                                       00 00 00 AF // AVC sequence start
    09 00 00 05 00 00 00 00 00 00 00 17 00 00 00 00

    00 00 00 10 09 01 94 BD 00 00 00 00 00 00 00 17 // AVC NALU
    01 00 00
            00 00 00 00 01 67 4D 40 20 95 A0 13 81 // Sequence parameter set
    F7 EB 01 10 00 00 3E 80 00 0E A6 08 F1 C3 2A
                                                00 // Picture parameter set
    00 00 01 68 EE 3C 80
                        00 00 00 01 65             // IDR frame
                                      [B8 04 1D FF // Frame data
    ...]
    00 01 94 C8 09 00 00 05 00 00 00 00 00 00 00    // ?
                                                17 // AVC sequence end
    02 00 00 00 00 00 00 10
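Both dumps begin with the same nine header bytes (46 4C 56 01 01 00 00 00 09). For reference, a small Python sketch decoding that header according to the FLV file-header layout (signature, version, type flags, data offset):

```python
import struct

def parse_flv_header(data: bytes) -> dict:
    """Decode the 9-byte FLV file header."""
    sig, version, flags, offset = struct.unpack(">3sBBI", data[:9])
    if sig != b"FLV":
        raise ValueError("not an FLV file")
    return {
        "version": version,
        "has_audio": bool(flags & 0x04),  # type flags, bit 2
        "has_video": bool(flags & 0x01),  # type flags, bit 0
        "data_offset": offset,            # size of this header, normally 9
    }

# 46 4C 56 01 01 00 00 00 09 -- shared by A.flv and B.flv above
hdr = parse_flv_header(bytes.fromhex("464c56010100000009"))
print(hdr)  # version 1, video only, data offset 9
```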

    Update 08.08.2017: Added input and output files for examination.

  • dnn_backend_native_layer_mathunary: add abs support

    25 May 2020, by Ting Fu
    dnn_backend_native_layer_mathunary: add abs support
    

    more math unary operations will be added here

    It can be tested with the model file generated with the Python script below:

    import tensorflow as tf
    import numpy as np
    import imageio

    in_img = imageio.imread('input.jpeg')
    in_img = in_img.astype(np.float32)/255.0
    in_data = in_img[np.newaxis, :]

    x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
    x1 = tf.subtract(x, 0.5)
    x2 = tf.abs(x1)
    y = tf.identity(x2, name='dnn_out')

    sess=tf.Session()
    sess.run(tf.global_variables_initializer())

    graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
    tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

    print("image_process.pb generated, please use \
    path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

    output = sess.run(y, feed_dict={x: in_data})
    imageio.imsave("out.jpg", np.squeeze(output))

    Signed-off-by: Ting Fu <ting.fu@intel.com>
    Signed-off-by: Guo, Yejun <yejun.guo@intel.com>

    • [DH] libavfilter/dnn/Makefile
    • [DH] libavfilter/dnn/dnn_backend_native.h
    • [DH] libavfilter/dnn/dnn_backend_native_layer_mathunary.c
    • [DH] libavfilter/dnn/dnn_backend_native_layer_mathunary.h
    • [DH] libavfilter/dnn/dnn_backend_native_layers.c
    • [DH] tools/python/convert_from_tensorflow.py
    • [DH] tools/python/convert_header.py
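The graph in the script above simply computes y = |x − 0.5| per element. As a quick sanity check of what the converted model should produce, a NumPy equivalent with a made-up 1x2x2x3 input in the same NHWC layout:

```python
import numpy as np

# Hypothetical 1x2x2x3 input in [0, 1], same NHWC layout as dnn_in
x = np.array([[[[0.0, 0.25, 0.5],
                [0.75, 1.0, 0.1]],
               [[0.2, 0.3, 0.4],
                [0.6, 0.8, 0.9]]]], dtype=np.float32)

# What the abs layer should output: elementwise |x - 0.5|
y = np.abs(x - np.float32(0.5))
print(y[0, 0, 0])  # values 0.5, 0.25, 0.0
```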
  • Modifying FFmpeg and OpenCV source code to capture the RTP timestamp for each packet in NTP format

    22 August 2019, by Fr0sty

    I was trying a little experiment to get the timestamps of the RTP packets using the VideoCapture class from OpenCV's source code in Python; I also had to modify FFmpeg to accommodate the changes in OpenCV.

    Since I had read about the RTP packet format, I wanted to fiddle around and see if I could manage to find a way to get the NTP timestamps. I was unable to find any reliable help, so I tried out this little hack.

    Credits to ryantheseer on github for the modified code.

    Version of FFmpeg: 3.2.3
    Version of OpenCV: 3.2.0

    In the OpenCV source code:

    modules/videoio/include/opencv2/videoio.hpp:

    Added two getters for the RTP timestamp:

    .....  
       /** @brief Gets the upper bytes of the RTP time stamp in NTP format (seconds).
       */
       CV_WRAP virtual int64 getRTPTimeStampSeconds() const;

       /** @brief Gets the lower bytes of the RTP time stamp in NTP format (fraction of seconds).
       */
       CV_WRAP virtual int64 getRTPTimeStampFraction() const;
    .....

    modules/videoio/src/cap.cpp:

    Added an include and the implementation of the timestamp getter:

    ....
    #include <cstdint>
    ....
    ....
    static inline uint64_t icvGetRTPTimeStamp(const CvCapture* capture)
    {
     return capture ? capture->getRTPTimeStamp() : 0;
    }
    ...

    Added the C++ timestamp getters in the VideoCapture class:

    ....
    /**@brief Gets the upper bytes of the RTP time stamp in NTP format (seconds).
    */
    int64 VideoCapture::getRTPTimeStampSeconds() const
    {
       int64 seconds = 0;
       uint64_t timestamp = 0;
       //Get the time stamp from the capture object
       if (!icap.empty())
           timestamp = icap->getRTPTimeStamp();
       else
           timestamp = icvGetRTPTimeStamp(cap);
       //Take the top 32 bits of the time stamp
       seconds = (int64)((timestamp & 0xFFFFFFFF00000000) / 0x100000000);
       return seconds;
    }

    /**@brief Gets the lower bytes of the RTP time stamp in NTP format (fraction of seconds).
    */
    int64 VideoCapture::getRTPTimeStampFraction() const
    {
       int64 fraction = 0;
       uint64_t timestamp = 0;
       //Get the time stamp from the capture object
       if (!icap.empty())
           timestamp = icap->getRTPTimeStamp();
       else
           timestamp = icvGetRTPTimeStamp(cap);
       //Take the bottom 32 bits of the time stamp
       fraction = (int64)((timestamp & 0xFFFFFFFF));
       return fraction;
    }
    ...
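Both getters above split one 64-bit NTP timestamp, a 32.32 fixed-point value: whole seconds in the top 32 bits, fractional seconds in the bottom 32. For reference, the same split in Python:

```python
def split_ntp(timestamp: int):
    """Split a 64-bit NTP timestamp (32.32 fixed point) into its
    integer-seconds half (top 32 bits) and fraction half (bottom 32 bits)."""
    seconds = (timestamp >> 32) & 0xFFFFFFFF
    fraction = timestamp & 0xFFFFFFFF
    return seconds, fraction

ts = (1234 << 32) | 0x80000000  # 1234.5 seconds in NTP 32.32 format
s, f = split_ntp(ts)
print(s, f / 2**32)  # 1234 0.5
```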

    modules/videoio/src/cap_ffmpeg.cpp:

    Added an include:

    ...
    #include <cstdint>
    ...

    Added a method reference definition:

    ...
    static CvGetRTPTimeStamp_Plugin icvGetRTPTimeStamp_FFMPEG_p = 0;
    ...

    Added the method to the module initializer method:

    ...
    if( icvFFOpenCV )
    ...
    ...
     icvGetRTPTimeStamp_FFMPEG_p =
                   (CvGetRTPTimeStamp_Plugin)GetProcAddress(icvFFOpenCV, "cvGetRTPTimeStamp_FFMPEG");
    ...
    ...
    icvWriteFrame_FFMPEG_p != 0 &&
    icvGetRTPTimeStamp_FFMPEG_p != 0)
    ...

    icvGetRTPTimeStamp_FFMPEG_p = (CvGetRTPTimeStamp_Plugin)cvGetRTPTimeStamp_FFMPEG;

    Implemented the getter interface:

    ...
    virtual uint64_t getRTPTimeStamp() const
       {
           return ffmpegCapture ? icvGetRTPTimeStamp_FFMPEG_p(ffmpegCapture) : 0;
       }
    ...

    In FFmpeg’s source code:

    libavcodec/avcodec.h:

    Added the NTP timestamp definition to the AVPacket struct:

    typedef struct AVPacket {
    ...
    ...
    uint64_t rtp_ntp_time_stamp;
    } AVPacket;

    libavformat/rtpdec.c:

    Store the NTP time stamp in the struct in the finalize_packet method:

    static void finalize_packet(RTPDemuxContext *s, AVPacket *pkt, uint32_t timestamp)
    {
       uint64_t offsetTime = 0;
       uint64_t rtp_ntp_time_stamp = timestamp;
    ...
    ...
    /*RM: Sets the RTP time stamp in the AVPacket */
       if (!s->last_rtcp_ntp_time || !s->last_rtcp_timestamp)
           offsetTime = 0;
       else
           offsetTime = s->last_rtcp_ntp_time - ((uint64_t)(s->last_rtcp_timestamp) * 65536);
       rtp_ntp_time_stamp = ((uint64_t)(timestamp) * 65536) + offsetTime;
       pkt->rtp_ntp_time_stamp = rtp_ntp_time_stamp;
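The arithmetic in this hack anchors the RTP media timestamp to the NTP timeline: the offset between the last RTCP sender report's NTP time and its RTP timestamp (scaled by 65536) is added to each packet's RTP timestamp. A Python sketch of the same computation, with made-up values:

```python
def rtp_to_ntp(timestamp, last_rtcp_ntp_time, last_rtcp_timestamp):
    """Mirror finalize_packet above: map an RTP timestamp onto the
    NTP timeline via the most recent RTCP sender report."""
    if not last_rtcp_ntp_time or not last_rtcp_timestamp:
        offset = 0  # no sender report seen yet
    else:
        offset = last_rtcp_ntp_time - last_rtcp_timestamp * 65536
    return timestamp * 65536 + offset

# Made-up report: RTP tick 1000 corresponded to NTP value 5_000_000_000,
# so a packet at RTP tick 1090 lands 90 * 65536 later on the NTP scale.
print(rtp_to_ntp(1090, 5_000_000_000, 1000))  # 5005898240
```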

    libavformat/utils.c:

    Copy the NTP time stamp from the packet to the frame in the read_frame_internal method:

    static int read_frame_internal(AVFormatContext *s, AVPacket *pkt)
    {
       ...
       uint64_t rtp_ntp_time_stamp = 0;
    ...
   while (!got_packet && !s->internal->parse_queue) {
             ...
             //COPY OVER the RTP time stamp TODO: just create a local copy
             rtp_ntp_time_stamp = cur_pkt.rtp_ntp_time_stamp;


             ...


     #if FF_API_LAVF_AVCTX
       update_stream_avctx(s);
     #endif

     if (s->debug & FF_FDEBUG_TS)
         av_log(s, AV_LOG_DEBUG,
              "read_frame_internal stream=%d, pts=%s, dts=%s, "
              "size=%d, duration=%"PRId64", flags=%d\n",
              pkt->stream_index,
              av_ts2str(pkt->pts),
              av_ts2str(pkt->dts),
              pkt->size, pkt->duration, pkt->flags);
    pkt->rtp_ntp_time_stamp = rtp_ntp_time_stamp; // Just added this line inside the if statement.
    return ret;

    My Python code to utilise these changes:

    import cv2

    uri = 'rtsp://admin:password@192.168.1.67:554'
    cap = cv2.VideoCapture(uri)

    while True:
       frame_exists, curr_frame = cap.read()
       # if frame_exists:
       k = cap.getRTPTimeStampSeconds()
       l = cap.getRTPTimeStampFraction()
       time_shift = 0x100000000
       # because in getRTPTimeStampSeconds() the
       # timestamp was divided by 0x100000000
       seconds = time_shift * k
       m = (time_shift * k) + l
       print("Imagetimestamp: %i" % m)
    cap.release()

    What I am getting as my output:

       Imagetimestamp: 0
       Imagetimestamp: 212041451700224
       Imagetimestamp: 212041687629824
       Imagetimestamp: 212041923559424
       Imagetimestamp: 212042159489024
       Imagetimestamp: 212042395418624
       Imagetimestamp: 212042631348224
       ...

    What astounded me the most was that when I powered off the IP camera and powered it back on, the timestamp would start from 0 and then quickly increment. I read that the NTP time format is relative to January 1, 1900 00:00. Even when I tried calculating the offset between now and 01-01-1900, I still ended up with a crazily high number for the date.

    I don’t know if I calculated it wrong. I have a feeling it’s very off, or that what I am getting is not the timestamp at all.
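For what it's worth, if the printed value really is a 64-bit NTP timestamp, the top 32 bits are whole seconds since the NTP epoch (1 January 1900). Decoding the first nonzero sample from the output above suggests the camera's clock simply starts counting near the epoch at boot instead of being synchronised to wall-clock time:

```python
from datetime import datetime, timedelta

m = 212041451700224            # first nonzero Imagetimestamp from the output above
seconds = m >> 32              # whole seconds: top 32 bits
fraction = (m & 0xFFFFFFFF) / 2**32  # fractional seconds: bottom 32 bits

t = datetime(1900, 1, 1) + timedelta(seconds=seconds + fraction)
print(seconds, t)  # 49369 -> about 13.7 hours after the 1900 epoch
```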