Advanced search

Media (16)

Word: - Tags -/mp3

Other articles (57)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
    To use it, simply activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Automatic backup of SPIP channels

    1 April 2010, by

    When setting up an open platform, it is important for hosting providers to have reasonably regular backups available in order to deal with any problem that may arise.
    To carry out this task we rely on two SPIP plugins: Saveauto, which performs a regular backup of the database in the form of a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which creates a zip archive of the site's important data (the documents, the elements (...)

  • Customising categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Form mask configuration.
    For a document of type media, the fields not displayed by default are: Short description
    It is also in this configuration area that you can specify the (...)

On other sites (8432)

  • Dockerized ffmpeg stops for no reason

    2 May 2023, by Arthur Attout

    I'm trying to fire up a container that reads a video stream via ffmpeg and saves the stream as 30-second segments.

    


    When I run the container, it stops after 20-ish seconds and returns with no error.

    


    Here is my Dockerfile

    


    FROM linuxserver/ffmpeg
    ENTRYPOINT ffmpeg -i rtsp://192.168.1.85:8554/camera -f v4l2 -c copy -reset_timestamps 1 -map 0 -f segment -segment_time 30 -segment_format mp4 "output/out%03d.mp4" -loglevel debug


    


    Here is the output when I run sudo docker run -it --rm -v /data/camera:/output --name camera_recorder camera_recorder:latest

    


    [+] Building 1.8s (5/5) FINISHED
 => [internal] load build definition from Dockerfile                                                                                                                                                                                                                      0.3s
 => => transferring dockerfile: 777B                                                                                                                                                                                                                                      0.0s
 => [internal] load .dockerignore                                                                                                                                                                                                                                         0.5s
 => => transferring context: 2B                                                                                                                                                                                                                                           0.0s
 => [internal] load metadata for docker.io/linuxserver/ffmpeg:latest                                                                                                                                                                                                      1.2s
 => CACHED [1/1] FROM docker.io/linuxserver/ffmpeg@sha256:823c611e0af82b864608c21d96bf363403310d92f154e238f6d51fe3d783e53b                                                                                                                                                0.0s
 => exporting to image                                                                                                                                                                                                                                                    0.1s
 => => exporting layers                                                                                                                                                                                                                                                   0.0s
 => => writing image sha256:f0509ccf0b07ff53d4aafa0d3b80fd50ed53e96db906c9a1e0e8c44e163dce94                                                                                                                                                                              0.1s
 => => naming to docker.io/library/camera_recorder                                                                                                                                                                                                                        0.0s
ffmpeg version 5.1.2 Copyright (c) 2000-2022 the FFmpeg developers
  built with gcc 11 (Ubuntu 11.3.0-1ubuntu1~22.04)
  configuration: --disable-debug --disable-doc --disable-ffplay --enable-ffprobe --enable-cuvid --enable-gpl --enable-libaom --enable-libass --enable-libfdk_aac --enable-libfreetype --enable-libkvazaar --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libxml2 --enable-libx264 --enable-libx265 --enable-libxvid --enable-nonfree --enable-nvdec --enable-nvenc --enable-opencl --enable-openssl --enable-small --enable-stripping --enable-vaapi --enable-vdpau --enable-version3
  libavutil      57. 28.100 / 57. 28.100
  libavcodec     59. 37.100 / 59. 37.100
  libavformat    59. 27.100 / 59. 27.100
  libavdevice    59.  7.100 / 59.  7.100
  libavfilter     8. 44.100 /  8. 44.100
  libswscale      6.  7.100 /  6.  7.100
  libswresample   4.  7.100 /  4.  7.100
  libpostproc    56.  6.100 / 56.  6.100
Splitting the commandline.
Reading option '-i' ... matched as input url with argument 'rtsp://192.168.1.85:8554/camera'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'v4l2'.
Reading option '-c' ... matched as option 'c' (codec name) with argument 'copy'.
Reading option '-reset_timestamps' ... matched as AVOption 'reset_timestamps' with argument '1'.
Reading option '-map' ... matched as option 'map' (set input stream mapping) with argument '0'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'segment'.
Reading option '-segment_time' ... matched as AVOption 'segment_time' with argument '30'.
Reading option '-segment_format' ... matched as AVOption 'segment_format' with argument 'mp4'.
Reading option 'output/out%03d.mp4' ... matched as output url.
Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument 'debug'.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option loglevel (set logging level) with argument debug.
Successfully parsed a group of options.
Parsing a group of options: input url rtsp://192.168.1.85:8554/camera.
Successfully parsed a group of options.
Opening an input file: rtsp://192.168.1.85:8554/camera.
[tcp @ 0x55c15e3eb040] No default whitelist set
[tcp @ 0x55c15e3eb040] Original list of addresses:
[tcp @ 0x55c15e3eb040] Address 192.168.1.85 port 8554
[tcp @ 0x55c15e3eb040] Interleaved list of addresses:
[tcp @ 0x55c15e3eb040] Address 192.168.1.85 port 8554
[tcp @ 0x55c15e3eb040] Starting connection attempt to 192.168.1.85 port 8554
[tcp @ 0x55c15e3eb040] Successfully connected to 192.168.1.85 port 8554
[rtsp @ 0x55c15e3e8300] SDP:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=Stream
c=IN IP4 0.0.0.0
t=0 0
m=video 0 RTP/AVP 96
a=control:rtsp://192.168.1.85:8554/camera/trackID=0
a=rtpmap:96 MP4V-ES/90000
a=fmtp:96 config=000001B001000001B58913000001000000012000C48D88002D3C04871443000001B24C61766335392E33372E313030; profile-level-id=1

[rtsp @ 0x55c15e3e8300] video codec set to: mpeg4
[rtp @ 0x55c15e3ef600] No default whitelist set
[udp @ 0x55c15e3f0200] No default whitelist set
[udp @ 0x55c15e3f0200] end receive buffer size reported is 425984
[udp @ 0x55c15e3eff40] No default whitelist set
[udp @ 0x55c15e3eff40] end receive buffer size reported is 425984
[rtsp @ 0x55c15e3e8300] setting jitter buffer size to 500
[rtsp @ 0x55c15e3e8300] hello state=0
[rtsp @ 0x55c15e3e8300] Could not find codec parameters for stream 0 (Video: mpeg4, 1 reference frame, none(left), 1920x1080 [SAR 1:1 DAR 16:9], 1/5): unspecified pixel format
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, rtsp, from 'rtsp://192.168.1.85:8554/camera':
  Metadata:
    title           : Stream
  Duration: N/A, bitrate: N/A
  Stream #0:0, 0, 1/90000: Video: mpeg4, 1 reference frame, none(left), 1920x1080 [SAR 1:1 DAR 16:9], 0/1, 5 tbr, 90k tbn
Successfully opened the file.
Parsing a group of options: output url output/out%03d.mp4.
Applying option f (force format) with argument v4l2.
Applying option c (codec name) with argument copy.
Applying option map (set input stream mapping) with argument 0.
Applying option f (force format) with argument segment.
Successfully parsed a group of options.
Opening an output file: output/out%03d.mp4.
Successfully opened the file.
[segment @ 0x55c15e415a80] Selected stream id:0 type:video
[segment @ 0x55c15e415a80] Opening 'output/out000.mp4' for writing
[file @ 0x55c15e42d840] Setting default whitelist 'file,crypto,data'
Output #0, segment, to 'output/out%03d.mp4':
  Metadata:
    title           : Stream
    encoder         : Lavf59.27.100
  Stream #0:0, 0, 1/10240: Video: mpeg4, 1 reference frame, none(left), 1920x1080 (0x0) [SAR 1:1 DAR 16:9], 0/1, q=2-31, 5 tbr, 10240 tbn
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
No more output streams to write to, finishing.:00.00 bitrate=N/A speed=   0x
[segment @ 0x55c15e415a80] segment:'output/out000.mp4' count:0 ended
[AVIOContext @ 0x55c15e42d8c0] Statistics: 292 bytes written, 2 seeks, 3 writeouts
frame=    0 fps=0.0 q=-1.0 Lsize=N/A time=00:00:00.00 bitrate=N/A speed=   0x
video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Input file #0 (rtsp://192.168.1.85:8554/camera):
  Input stream #0:0 (video): 0 packets read (0 bytes);
  Total: 0 packets (0 bytes) demuxed
Output file #0 (output/out%03d.mp4):
  Output stream #0:0 (video): 0 packets muxed (0 bytes);
  Total: 0 packets (0 bytes) muxed
0 frames successfully decoded, 0 decoding errors


    


    Additional info:

    • The stream is up and running. ffplay rtsp://192.168.1.85:8554/camera opens normally.
    • The exact command (from the ENTRYPOINT), run directly on the host, works perfectly fine (it generates a file every 30 seconds).
    • From inside the container, I can ping 192.168.1.85 (it is actually localhost).
    • Setting -analyzeduration 1000 does not fix the issue.


    


    Why is the container stopping for no reason?

    


  • Save a series of images (cv::Mat) to an mp4 file in Variable Frame Rate mode using the ffmpeg library: how to set the pts?

    2 May 2023, by ollydbg23

    In C++, I can correctly save a series of images (OpenCV's cv::Mat) to an mp4 file using the ffmpeg library; see the question and answer here: avformat_write_header() function call crashed when I try to save several RGB data to a output.mp4 file

    


    Now here comes another question:

    


    Rotem's answer to that question produces a correctly saved output.mp4. When I play the mp4 file, I see the frames (OpenCV's cv::Mat images) displayed at a constant rate.

    


    What can I do if the frames were not captured at a constant rate? For example, I got the first frame at 0 ms, the second frame at 50 ms and the third frame at 75 ms, so that each frame has an associated time stamp; the time stamp array looks something like this:

    


    int timestamp[100] = {0, 50, 75, ...};


    


    How should Rotem's answer be modified to reflect this? It looks like I have to change the pts field of each frame. I have just tested the code: if I change this:

    


    yuvFrame->pts = av_rescale_q(frame_count*frame_count, outCodecCtx->time_base, outStream->time_base); //Set PTS timestamp

// note I change from frame_count to frame_count*frame_count


    


    Then the output.mp4 plays more and more slowly, because later frames get increasingly large pts values.
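
    For reference, av_rescale_q(a, bq, cq) rescales the value a from time base bq to time base cq (conceptually a * bq / cq, with rounding). The short sketch below only illustrates that arithmetic; the {1, 1000} and {1, 90000} rationals are illustrative values, not taken from the code above.

#include <iostream>
extern "C" {
#include <libavutil/mathematics.h> // av_rescale_q()
}

int main()
{
    // Rescale a timestamp expressed in milliseconds (time base 1/1000)
    // into 90 kHz ticks (time base 1/90000).
    AVRational ms_base   = {1, 1000};
    AVRational mpeg_base = {1, 90000};

    // 50 ms = 0.050 s, which is 4500 ticks of 1/90000 s.
    std::cout << av_rescale_q(50, ms_base, mpeg_base) << std::endl; // prints 4500
    return 0;
}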

    


    Thanks.

    


    EDIT

    


    This is the code I am currently using:

    


#include <iostream>
#include <vector>
#include <cstring>
#include <fstream>
#include <sstream>
#include <stdexcept>
#include <opencv2/opencv.hpp>
extern "C" {
#include <libavutil/imgutils.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
}

#include <cstdlib> // to generate time stamps

using namespace std;
using namespace cv;

int main()
{
    // Set up input frames as BGR byte arrays
    vector<Mat> frames;

    int width = 640;
    int height = 480;
    int num_frames = 100;
    Scalar black(0, 0, 0);
    Scalar white(255, 255, 255);
    int font = FONT_HERSHEY_SIMPLEX;
    double font_scale = 1.0;
    int thickness = 2;

    for (int i = 0; i < num_frames; i++) {
        Mat frame = Mat::zeros(height, width, CV_8UC3);
        putText(frame, std::to_string(i), Point(width / 2 - 50, height / 2), font, font_scale, white, thickness);
        frames.push_back(frame);
    }

    // generate a serial of time stamps which is used to set the PTS value
    // suppose they are in ms unit, the time interval is between 30ms to 59ms
    vector<int> timestamps;

    for (int i = 0; i < num_frames; i++) {
        int timestamp;
        if (i == 0)
            timestamp = 0;
        else
        {
            int random = 30 + (rand() % 30);
            timestamp = timestamps[i-0] + random;
        }

        timestamps.push_back(timestamp);
    }

    // Populate frames with BGR byte arrays

    // Initialize FFmpeg
    //av_register_all();

    // Set up output file
    AVFormatContext* outFormatCtx = nullptr;
    //AVCodec* outCodec = nullptr;
    AVCodecContext* outCodecCtx = nullptr;
    //AVStream* outStream = nullptr;
    //AVPacket outPacket;

    const char* outFile = "output.mp4";
    int outWidth = frames[0].cols;
    int outHeight = frames[0].rows;
    int fps = 25;

    // Open the output file context
    avformat_alloc_output_context2(&outFormatCtx, nullptr, nullptr, outFile);
    if (!outFormatCtx) {
        cerr << "Error: Could not allocate output format context" << endl;
        return -1;
    }

    // Open the output file
    if (avio_open(&outFormatCtx->pb, outFile, AVIO_FLAG_WRITE) < 0) {
        cerr << "Error opening output file" << std::endl;
        return -1;
    }

    // Set up output codec
    const AVCodec* outCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
    if (!outCodec) {
        cerr << "Error: Could not find H.264 codec" << endl;
        return -1;
    }

    outCodecCtx = avcodec_alloc_context3(outCodec);
    if (!outCodecCtx) {
        cerr << "Error: Could not allocate output codec context" << endl;
        return -1;
    }
    outCodecCtx->codec_id = AV_CODEC_ID_H264;
    outCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
    outCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
    outCodecCtx->width = outWidth;
    outCodecCtx->height = outHeight;
    outCodecCtx->time_base = { 1, fps*1000 };   // 25000
    outCodecCtx->framerate = {fps, 1};          // 25
    outCodecCtx->bit_rate = 4000000;

    //https://github.com/leandromoreira/ffmpeg-libav-tutorial
    //We set the flag AV_CODEC_FLAG_GLOBAL_HEADER which tells the encoder that it can use the global headers.
    if (outFormatCtx->oformat->flags & AVFMT_GLOBALHEADER)
    {
        outCodecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER; //
    }

    // Open output codec
    if (avcodec_open2(outCodecCtx, outCodec, nullptr) < 0) {
        cerr << "Error: Could not open output codec" << endl;
        return -1;
    }

    // Create output stream
    AVStream* outStream = avformat_new_stream(outFormatCtx, outCodec);
    if (!outStream) {
        cerr << "Error: Could not allocate output stream" << endl;
        return -1;
    }

    // Configure output stream parameters (e.g., time base, codec parameters, etc.)
    // ...

    // Connect output stream to format context
    outStream->codecpar->codec_id = outCodecCtx->codec_id;
    outStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    outStream->codecpar->width = outCodecCtx->width;
    outStream->codecpar->height = outCodecCtx->height;
    outStream->codecpar->format = outCodecCtx->pix_fmt;
    outStream->time_base = outCodecCtx->time_base;

    int ret = avcodec_parameters_from_context(outStream->codecpar, outCodecCtx);
    if (ret < 0) {
        cerr << "Error: Could not copy codec parameters to output stream" << endl;
        return -1;
    }

    outStream->avg_frame_rate = outCodecCtx->framerate;
    //outStream->id = outFormatCtx->nb_streams++;  <--- We shouldn't modify outStream->id

    ret = avformat_write_header(outFormatCtx, nullptr);
    if (ret < 0) {
        cerr << "Error: Could not write output header" << endl;
        return -1;
    }

    // Convert frames to YUV format and write to output file
    int frame_count = -1;
    for (const auto& frame : frames) {
        frame_count++;
        AVFrame* yuvFrame = av_frame_alloc();
        if (!yuvFrame) {
            cerr << "Error: Could not allocate YUV frame" << endl;
            return -1;
        }
        av_image_alloc(yuvFrame->data, yuvFrame->linesize, outWidth, outHeight, AV_PIX_FMT_YUV420P, 32);

        yuvFrame->width = outWidth;
        yuvFrame->height = outHeight;
        yuvFrame->format = AV_PIX_FMT_YUV420P;

        // Convert BGR frame to YUV format
        Mat yuvMat;
        cvtColor(frame, yuvMat, COLOR_BGR2YUV_I420);
        memcpy(yuvFrame->data[0], yuvMat.data, outWidth * outHeight);
        memcpy(yuvFrame->data[1], yuvMat.data + outWidth * outHeight, outWidth * outHeight / 4);
        memcpy(yuvFrame->data[2], yuvMat.data + outWidth * outHeight * 5 / 4, outWidth * outHeight / 4);

        // Set up output packet
        //av_init_packet(&outPacket); //error C4996: 'av_init_packet': was declared deprecated
        AVPacket* outPacket = av_packet_alloc();
        memset(outPacket, 0, sizeof(outPacket)); //Use memset instead of av_init_packet (probably unnecessary).
        //outPacket->data = nullptr;
        //outPacket->size = 0;

        yuvFrame->pts = av_rescale_q(timestamps[frame_count], outCodecCtx->time_base, outStream->time_base); //Set PTS timestamp

        // Encode frame and write to output file
        int ret = avcodec_send_frame(outCodecCtx, yuvFrame);
        if (ret < 0) {
            cerr << "Error: Could not send frame to output codec" << endl;
            return -1;
        }
        while (ret >= 0) {
            ret = avcodec_receive_packet(outCodecCtx, outPacket);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                break;
            } else if (ret < 0) {
                cerr << "Error: Could not receive packet from output codec" << endl;
                return -1;
            }

            //av_packet_rescale_ts(&outPacket, outCodecCtx->time_base, outStream->time_base);

            outPacket->stream_index = outStream->index;

            outPacket->duration = av_rescale_q(1, outCodecCtx->time_base, outStream->time_base);   // Set packet duration

            ret = av_interleaved_write_frame(outFormatCtx, outPacket);
            av_packet_unref(outPacket);
            if (ret < 0) {
                cerr << "Error: Could not write packet to output file" << endl;
                return -1;
            }
        }

        av_frame_free(&yuvFrame);
    }

    // Flush the encoder
    ret = avcodec_send_frame(outCodecCtx, nullptr);
    if (ret < 0) {
        std::cerr << "Error flushing encoder: " << std::endl;
        return -1;
    }

    while (ret >= 0) {
        AVPacket* pkt = av_packet_alloc();
        if (!pkt) {
            std::cerr << "Error allocating packet" << std::endl;
            return -1;
        }
        ret = avcodec_receive_packet(outCodecCtx, pkt);

        // Write the packet to the output file
        if (ret == 0)
        {
            pkt->stream_index = outStream->index;
            pkt->duration = av_rescale_q(1, outCodecCtx->time_base, outStream->time_base);   // <---- Set packet duration
            ret = av_interleaved_write_frame(outFormatCtx, pkt);
            av_packet_unref(pkt);
            if (ret < 0) {
                std::cerr << "Error writing packet to output file: " << std::endl;
                return -1;
            }
        }
    }

    // Write output trailer
    av_write_trailer(outFormatCtx);

    // Clean up
    avcodec_close(outCodecCtx);
    avcodec_free_context(&outCodecCtx);
    avformat_free_context(outFormatCtx);

    return 0;
}


    In particular, I made the following changes to the original Rotem's answer:


    First, I have some code to generate a time stamp array:


    // generate a serial of time stamps which is used to set the PTS value
    // suppose they are in ms unit, the time interval is between 30ms to 59ms
    vector<int> timestamps;

    for (int i = 0; i < num_frames; i++) {
        int timestamp;
        if (i == 0)
            timestamp = 0;
        else
        {
            int random = 30 + (rand() % 30);
            timestamp = timestamps[i-0] + random;
        }

        timestamps.push_back(timestamp);
    }


    Second, I simply set the PTS from those values:


        yuvFrame->pts = av_rescale_q(timestamps[frame_count], outCodecCtx->time_base, outStream->time_base); //Set PTS timestamp


    Note that I have set the fps as below:


    outCodecCtx->time_base = { 1, fps*1000 };   // 25000
    outCodecCtx->framerate = {fps, 1};          // 25
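
    With this configuration, the encoder interprets a frame's pts as ticks of 1/(fps*1000) seconds, i.e. 1/25000 s here, not as milliseconds. As a small sketch of what the same number means in the two time bases (the ms_base rational below is an assumption used only to label millisecond values; it does not appear in the code above):

extern "C" {
#include <libavutil/rational.h> // AVRational, av_q2d()
}
#include <iostream>

int main()
{
    AVRational codec_tb = {1, 25 * 1000}; // the {1, fps*1000} time base set above (fps = 25)
    AVRational ms_base  = {1, 1000};      // hypothetical time base for millisecond timestamps

    // The value 50 read in each time base:
    std::cout << 50 * av_q2d(codec_tb) << " s" << std::endl; // 0.002 s
    std::cout << 50 * av_q2d(ms_base)  << " s" << std::endl; // 0.05 s
    return 0;
}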


    Now, when I run the program, I get a lot of warnings in the console:


[libx264 @ 0000022e7fa621c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0000022e7fa621c0] profile High, level 3.0, 4:2:0, 8-bit
[libx264 @ 0000022e7fa621c0] 264 - core 164 r3094M bfc87b7 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=abr mbtree=1 bitrate=4000 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] non-strictly-monotonic PTS
[libx264 @ 0000022e7fa621c0] invalid DTS: PTS is less than DTS
[mp4 @ 0000022e090b2300] pts (592) < dts (1129348497) in stream 0
Error: Could not write packet to output file


    Any ideas?
    Thanks.


  • lavfi/dnn: Remove DNN native backend

    27 April 2023, by Ting Fu
    lavfi/dnn: Remove DNN native backend
    

    According to discussion in
    https://etherpad.mit.edu/p/FF_dev_meeting_20221202 and the proposal in
    http://ffmpeg.org/pipermail/ffmpeg-devel/2022-December/304534.html,
    the DNN native backend should be removed at first step.
    All the DNN native backend related codes are deleted.

    Signed-off-by: Ting Fu <ting.fu@intel.com>

    • [DH] libavfilter/Makefile
    • [DH] libavfilter/dnn/Makefile
    • [DH] libavfilter/dnn/dnn_backend_native.c
    • [DH] libavfilter/dnn/dnn_backend_native.h
    • [DH] libavfilter/dnn/dnn_backend_native_layer_avgpool.c
    • [DH] libavfilter/dnn/dnn_backend_native_layer_avgpool.h
    • [DH] libavfilter/dnn/dnn_backend_native_layer_conv2d.c
    • [DH] libavfilter/dnn/dnn_backend_native_layer_conv2d.h
    • [DH] libavfilter/dnn/dnn_backend_native_layer_dense.c
    • [DH] libavfilter/dnn/dnn_backend_native_layer_dense.h
    • [DH] libavfilter/dnn/dnn_backend_native_layer_depth2space.c
    • [DH] libavfilter/dnn/dnn_backend_native_layer_depth2space.h
    • [DH] libavfilter/dnn/dnn_backend_native_layer_mathbinary.c
    • [DH] libavfilter/dnn/dnn_backend_native_layer_mathbinary.h
    • [DH] libavfilter/dnn/dnn_backend_native_layer_mathunary.c
    • [DH] libavfilter/dnn/dnn_backend_native_layer_mathunary.h
    • [DH] libavfilter/dnn/dnn_backend_native_layer_maximum.c
    • [DH] libavfilter/dnn/dnn_backend_native_layer_maximum.h
    • [DH] libavfilter/dnn/dnn_backend_native_layer_pad.c
    • [DH] libavfilter/dnn/dnn_backend_native_layer_pad.h
    • [DH] libavfilter/dnn/dnn_backend_native_layers.c
    • [DH] libavfilter/dnn/dnn_backend_native_layers.h
    • [DH] libavfilter/dnn/dnn_backend_tf.c
    • [DH] libavfilter/dnn_interface.h
    • [DH] libavfilter/tests/dnn-layer-avgpool.c
    • [DH] libavfilter/tests/dnn-layer-conv2d.c
    • [DH] libavfilter/tests/dnn-layer-dense.c
    • [DH] libavfilter/tests/dnn-layer-depth2space.c
    • [DH] libavfilter/tests/dnn-layer-mathbinary.c
    • [DH] libavfilter/tests/dnn-layer-mathunary.c
    • [DH] libavfilter/tests/dnn-layer-maximum.c
    • [DH] libavfilter/tests/dnn-layer-pad.c
    • [DH] libavfilter/vf_derain.c
    • [DH] libavfilter/vf_dnn_processing.c
    • [DH] libavfilter/vf_sr.c
    • [DH] tests/Makefile
    • [DH] tests/fate/dnn.mak