
Other articles (72)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers - including:
      • critique of existing features and functions
      • articles contributed by developers, administrators, content producers and editors
      • screenshots to illustrate the above
      • translations of existing documentation into other languages
    To contribute, register for the project users’ mailing (...)

  • Adding notes and captions to images

    7 February 2011, by

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area in order to change the rights for creating, editing and deleting notes. By default, only the site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Automatic installation script for MediaSPIP

    25 April 2011, by

    To work around installation difficulties caused mainly by server-side software dependencies, an "all-in-one" bash installation script was created to make this step easier on a server running a compatible Linux distribution.
    To use it you must have SSH access to your server and a "root" account, which is needed to install the dependencies. Contact your hosting provider if you do not have these.
    The documentation on using the installation script (...)

On other sites (6241)

  • FFMPEG Presentation Time Stamps (PTS) calculation in RTSP stream

    8 December 2020, by BadaBudaBudu

    Below is a raw example of my code so you can better understand what it does. Note that this is example code from the official FFmpeg documentation which I updated myself (removed deprecated methods, etc.) and complemented with my encoder.

    


    /// STD
    #include <iostream>
    #include <string>
    #include <vector>       // added: std::vector is used below
    #include <algorithm>    // added: std::copy_n
    #include <thread>       // added: std::this_thread::sleep_for
    #include <chrono>       // added: std::chrono::seconds

    /// FFMPEG
    extern "C"
    {
        #include <libavformat/avformat.h>
        #include <libswscale/swscale.h>
        #include <libavutil/imgutils.h>
    }

    /// VideoLib
    #include <tools/multimediaprocessing.h>
    #include    // (header name lost in the original post)
    #include    // (header name lost in the original post)
    #include <enums/codec.h>
    #include <enums/pixelformat.h>

    /// OpenCV
    #include <opencv2/opencv.hpp>

    inline static const char *inputRtspAddress = "rtsp://192.168.0.186:8080/video/h264";

    int main()
    {
        AVFormatContext* formatContext = nullptr;

        AVStream* audioStream = nullptr;
        AVStream* videoStream = nullptr;
        AVCodec* audioCodec = nullptr;
        AVCodec* videoCodec = nullptr;
        AVCodecContext* audioCodecContext = nullptr;
        AVCodecContext* videoCodecContext = nullptr;
        vl::AudioSettings audioSettings;
        vl::VideoSettings videoSettings;

        int audioIndex = -1;
        int videoIndex = -1;

        SwsContext* swsContext = nullptr;
        std::vector<uint8_t> frameBuffer;   // element type lost in the original post; assumed uint8_t
        AVFrame* frame = av_frame_alloc();
        AVFrame* decoderFrame = av_frame_alloc();

        AVPacket packet;
        cv::Mat mat;

        vl::tools::MultimediaProcessing multimediaProcessing("rtsp://127.0.0.1:8080/stream", vl::configs::rtspStream, 0, vl::enums::EPixelFormat::ABGR);

        // *** OPEN STREAM *** //
        if(avformat_open_input(&formatContext, inputRtspAddress, nullptr, nullptr) < 0)
        {
            std::cout << "Failed to open input." << std::endl;
            return EXIT_FAILURE;
        }

        if(avformat_find_stream_info(formatContext, nullptr) < 0)
        {
            std::cout << "Failed to find stream info." << std::endl;
            return EXIT_FAILURE;
        }

        // *** FIND DECODER FOR BOTH AUDIO AND VIDEO STREAM *** //
        audioCodec = avcodec_find_decoder(AVCodecID::AV_CODEC_ID_AAC);
        videoCodec = avcodec_find_decoder(AVCodecID::AV_CODEC_ID_H264);

        if(audioCodec == nullptr || videoCodec == nullptr)
        {
            std::cout << "No AUDIO or VIDEO in stream." << std::endl;
            return EXIT_FAILURE;
        }

        // *** FIND STREAM FOR BOTH AUDIO AND VIDEO STREAM *** //
        audioIndex = av_find_best_stream(formatContext, AVMEDIA_TYPE_AUDIO, -1, -1, &audioCodec, 0);
        videoIndex = av_find_best_stream(formatContext, AVMEDIA_TYPE_VIDEO, -1, -1, &videoCodec, 0);

        if(audioIndex < 0 || videoIndex < 0)
        {
            std::cout << "Failed to find AUDIO or VIDEO stream." << std::endl;
            return EXIT_FAILURE;
        }

        audioStream = formatContext->streams[audioIndex];
        videoStream = formatContext->streams[videoIndex];

        // *** ALLOC CODEC CONTEXT FOR BOTH AUDIO AND VIDEO STREAM *** //
        audioCodecContext = avcodec_alloc_context3(audioCodec);
        videoCodecContext = avcodec_alloc_context3(videoCodec);

        if(audioCodecContext == nullptr || videoCodecContext == nullptr)
        {
            std::cout << "Can not allocate AUDIO or VIDEO context." << std::endl;
            return EXIT_FAILURE;
        }

        if(avcodec_parameters_to_context(audioCodecContext, formatContext->streams[audioIndex]->codecpar) < 0 || avcodec_parameters_to_context(videoCodecContext, formatContext->streams[videoIndex]->codecpar) < 0)
        {
            std::cout << "Can not fill AUDIO or VIDEO codec context." << std::endl;
            return EXIT_FAILURE;
        }

        if(avcodec_open2(audioCodecContext, audioCodec, nullptr) < 0 || avcodec_open2(videoCodecContext, videoCodec, nullptr) < 0)
        {
            std::cout << "Failed to open AUDIO codec" << std::endl;
            return EXIT_FAILURE;
        }

        // *** INITIALIZE MULTIMEDIA PROCESSING *** //
        std::vector<unsigned char> extraData(audioStream->codecpar->extradata_size);
        std::copy_n(audioStream->codecpar->extradata, extraData.size(), extraData.begin());

        audioSettings.sampleRate         = audioStream->codecpar->sample_rate,
        audioSettings.bitrate            = audioStream->codecpar->bit_rate,
        audioSettings.codec              = vl::enums::EAudioCodec::AAC,
        audioSettings.channels           = audioStream->codecpar->channels,
        audioSettings.bitsPerCodedSample = audioStream->codecpar->bits_per_coded_sample,
        audioSettings.bitsPerRawSample   = audioStream->codecpar->bits_per_raw_sample,
        audioSettings.blockAlign         = audioStream->codecpar->block_align,
        audioSettings.channelLayout      = audioStream->codecpar->channel_layout,
        audioSettings.format             = audioStream->codecpar->format,
        audioSettings.frameSize          = audioStream->codecpar->frame_size,
        audioSettings.codecExtraData     = std::move(extraData);

        videoSettings.width              = 1920;
        videoSettings.height             = 1080;
        videoSettings.framerate          = 25;
        videoSettings.pixelFormat        = vl::enums::EPixelFormat::ARGB;
        videoSettings.bitrate            = 8000 * 1000;
        videoSettings.codec              = vl::enums::EVideoCodec::H264;

        multimediaProcessing.initEncoder(videoSettings, audioSettings);

        // *** INITIALIZE SWS CONTEXT *** //
        swsContext = sws_getCachedContext(nullptr, videoCodecContext->width, videoCodecContext->height, videoCodecContext->pix_fmt, videoCodecContext->width, videoCodecContext->height, AV_PIX_FMT_RGBA, SWS_FAST_BILINEAR, nullptr, nullptr, nullptr);

        if (const auto inReturn = av_image_get_buffer_size(AV_PIX_FMT_RGBA, videoCodecContext->width, videoCodecContext->height, 1); inReturn > 0)
        {
            frameBuffer.reserve(inReturn);
        }
        else
        {
            std::cout << "Can not get buffer size." << std::endl;
            return EXIT_FAILURE;
        }

        if (const auto inReturn = av_image_fill_arrays(frame->data, frame->linesize, frameBuffer.data(), AV_PIX_FMT_RGBA, videoCodecContext->width, videoCodecContext->height, 1); inReturn < 0)
        {
            std::cout << "Can not fill buffer arrays." << std::endl;
            return EXIT_FAILURE;
        }

        // *** MAIN LOOP *** //
        while(true)
        {
            // Return the next frame of a stream.
            if(av_read_frame(formatContext, &packet) == 0)
            {
                if(packet.stream_index == videoIndex) // Check if it is video packet.
                {
                    // Send packet to decoder.
                    if(avcodec_send_packet(videoCodecContext, &packet) == 0)
                    {
                        int returnCode = avcodec_receive_frame(videoCodecContext, decoderFrame); // Get Frame from decoder.

                        if (returnCode == 0) // Transform frame and send it to encoder. And re-stream that.
                        {
                            sws_scale(swsContext, decoderFrame->data, decoderFrame->linesize, 0, decoderFrame->height, frame->data, frame->linesize);

                            mat = cv::Mat(videoCodecContext->height, videoCodecContext->width, CV_8UC4, frameBuffer.data(), frame->linesize[0]);

                            cv::resize(mat, mat, cv::Size(1920, 1080), cv::INTER_NEAREST);

                            multimediaProcessing.encode(mat.data, packet.dts, packet.dts, packet.flags == AV_PKT_FLAG_KEY); // This line sends cv::Mat to encoder and re-streams it.

                            av_packet_unref(&packet);
                        }
                        else if(returnCode == AVERROR(EAGAIN))
                        {
                            av_frame_unref(decoderFrame);
                            av_freep(decoderFrame);
                        }
                        else
                        {
                            av_frame_unref(decoderFrame);
                            av_freep(decoderFrame);

                            std::cout << "Error during decoding." << std::endl;
                            return EXIT_FAILURE;
                        }
                    }
                }
                else if(packet.stream_index == audioIndex) // Check if it is audio packet.
                {
                    std::vector<uint8_t> vectorPacket(packet.data, packet.data + packet.size); // element type lost in the original post; assumed uint8_t

                    multimediaProcessing.addAudioPacket(vectorPacket, packet.dts, packet.dts);
                }
                else
                {
                    av_packet_unref(&packet);
                }
            }
            else
            {
                std::cout << "Can not send video packet to decoder." << std::endl;
                std::this_thread::sleep_for(std::chrono::seconds(1));
            }
        }

        return EXIT_SUCCESS;
    }

    What does it do?

    It takes a single RTSP stream, decodes its data so that I can, for example, draw something onto its frames, and then streams it under a different address.

    Basically, I open the RTSP stream, check whether it contains both audio and video streams, and find a decoder for each. Then I create an encoder and tell it what the output stream should look like, and that's it.

    At this point I create an endless loop in which I read all packets coming from the input stream, decode them, do something with them, then encode them again and re-stream them.

    What is the issue?

    If you take a closer look, I am sending both video and audio frames to the encoder together with the most recently received PTS and DTS contained in the AVPacket.

    From the point where I receive the first AVPacket, the PTS and DTS look, for example, like this.

    IN AUDIO STREAM:

    -22783, -21759, -20735, -19711, -18687, -17663, -16639, -15615, -14591, -13567, -12543, -11519, -10495, -9471, -8447, -7423, -6399, -5375, -4351, -3327, -2303, -1279, -255, 769, 1793, 2817, 3841, 4865, 5889, 6913, 7937, 8961, 9985, 11009, 12033, 13057, 14081, 15105, 16129, 17153

    As you can see, it is incremented by 1024 every time, which is the number of samples per AAC frame. Quite clear here.
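
    (A quick sanity check of that reading - a minimal sketch assuming the audio stream uses the typical 1/sample_rate time base and 1024-sample AAC frames, neither of which the code above verifies:)

    // Sketch: if the time base is 1/sample_rate, a packet holding frame_size samples
    // (1024 for AAC) advances the PTS by exactly frame_size ticks.
    AVRational samplesTb = {1, audioStream->codecpar->sample_rate};
    int64_t ticksPerPacket = av_rescale_q(audioStream->codecpar->frame_size,    // may be 0 until a decoder fills it
                                          samplesTb, audioStream->time_base);   // == 1024 here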

    IN VIDEO STREAM:

    86400, 90000, 93600, 97200, 100800, 104400, 108000, 111600, 115200, 118800, 122400, 126000, 129600, 133200, 136800, 140400, 144000, 147600, 151200, 154800, 158400, 162000, 165600

    As you can see, it is incremented by 3600 every time, but WHY? What does this number actually mean?
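
    (For reference, a minimal sketch of where such an increment could come from, assuming the usual 90 kHz RTP/RTSP video time base and a 25 fps source; neither is confirmed by the code above:)

    // Sketch: RTP/RTSP video normally uses a 90 kHz clock, so at 25 fps one frame
    // lasts 90000 / 25 = 3600 ticks of the stream time base.
    AVRational frameDuration = av_inv_q(videoStream->r_frame_rate);                 // {1, 25} for 25 fps
    int64_t ticksPerFrame = av_rescale_q(1, frameDuration, videoStream->time_base); // 3600 when the time base is {1, 90000}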

    From what I can understand, the received PTS and DTS are there for the following:

    DTS should tell the encoder when it should start encoding the frame, so that the frames are in the correct order in time and not mixed up.

    PTS should give the correct time at which the frame should be played/displayed in the output stream, so that the frames are in the correct order in time and not mixed up.
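
    (For completeness, a minimal sketch of the usual libav way of carrying timestamps from one time base to another when re-muxing; encoderTimeBase is an illustrative name, not a variable from the code above:)

    // Sketch: rescale a packet's pts/dts/duration from the input stream's time base
    // to whatever time base the encoder/muxer expects, instead of passing raw values through.
    av_packet_rescale_ts(&packet, videoStream->time_base, encoderTimeBase /* hypothetical */);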

    What am I trying to achieve?

    As I said, I need to re-stream an RTSP stream. I cannot use the PTS and DTS that come with the received AVPackets, because at some point the input stream can randomly close and I have to open it again. The problem is that when I actually do that, the PTS and DTS start again from negative values, just as you can see in the samples above. I CAN NOT send those "new" PTS and DTS to the encoder because they are now lower than what the encoder/muxer expects.

    I need to continuously stream something (both audio and video), even if it is just a blank black screen or silent audio, and with each frame the PTS and DTS should rise by a specific amount. I need to figure out how that increment is calculated.
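
    (A minimal sketch of one way this could be done: keep a running offset per stream and rebase every incoming timestamp onto the outgoing timeline. The names below are illustrative and not part of the code above:)

    // Sketch: rebase incoming DTS/PTS so the outgoing timeline never goes backwards,
    // even when the input stream reconnects and restarts at negative values.
    struct TimestampRebaser
    {
        int64_t offset  = AV_NOPTS_VALUE; // maps the input timeline onto the output timeline
        int64_t lastOut = 0;              // last timestamp handed to the encoder

        int64_t rebase(int64_t in, int64_t ticksPerFrame)
        {
            if(offset == AV_NOPTS_VALUE)                 // first packet after a (re)connect:
                offset = lastOut + ticksPerFrame - in;   // continue right after the last output value
            lastOut = in + offset;
            return lastOut;
        }

        void onReconnect() { offset = AV_NOPTS_VALUE; }
    };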

    ----------------------------------

    The final result should look like a mosaic of multiple input streams in a single output stream. One input stream (the main one) has both audio and video, and the rest (the side ones) have just video. Any of those streams can randomly close at some point, and I need to ensure that it comes back again as soon as possible.

  • Encoding of raw frames (D3D11Texture2D) to an rtsp stream using libav*

    16 July 2021, by uzer

    I have managed to create an RTSP stream using libav* and a DirectX texture (which I am obtaining from the GDI API using the BitBlt method). Here is my approach for creating a live RTSP stream:

    1. Create output context and stream (skipping the checks here)

      • avformat_alloc_output_context2(&ofmt_ctx, NULL, "rtsp", rtsp_url); //RTSP
      • vid_codec = avcodec_find_encoder(ofmt_ctx->oformat->video_codec);
      • vid_stream = avformat_new_stream(ofmt_ctx, vid_codec);
      • vid_codec_ctx = avcodec_alloc_context3(vid_codec);

    2. Set codec params

      codec_ctx->codec_tag = 0;
      codec_ctx->codec_id = ofmt_ctx->oformat->video_codec;
      //codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;
      codec_ctx->width = width;
      codec_ctx->height = height;
      codec_ctx->gop_size = 12;
      //codec_ctx->gop_size = 40;
      //codec_ctx->max_b_frames = 3;
      codec_ctx->pix_fmt = target_pix_fmt; // AV_PIX_FMT_YUV420P
      codec_ctx->framerate = { stream_fps, 1 };
      codec_ctx->time_base = { 1, stream_fps };
      if (fctx->oformat->flags & AVFMT_GLOBALHEADER)
      {
          codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
      }

    3. Initialize video stream

      if (avcodec_parameters_from_context(stream->codecpar, codec_ctx) < 0)
      {
          Debug::Error("Could not initialize stream codec parameters!");
          return false;
      }

      AVDictionary* codec_options = nullptr;
      if (codec->id == AV_CODEC_ID_H264) {
          av_dict_set(&codec_options, "profile", "high", 0);
          av_dict_set(&codec_options, "preset", "fast", 0);
          av_dict_set(&codec_options, "tune", "zerolatency", 0);
      }
      // open video encoder
      int ret = avcodec_open2(codec_ctx, codec, &codec_options);
      if (ret < 0) {
          Debug::Error("Could not open video encoder: ", avcodec_get_name(codec->id), " error ret: ", AVERROR(ret));
          return false;
      }

      stream->codecpar->extradata = codec_ctx->extradata;
      stream->codecpar->extradata_size = codec_ctx->extradata_size;

    4. Start streaming

      // Create new frame and allocate buffer
      AVFrame* AllocateFrameBuffer(AVCodecContext* codec_ctx, double width, double height)
      {
          AVFrame* frame = av_frame_alloc();
          std::vector<uint8_t> framebuf(av_image_get_buffer_size(codec_ctx->pix_fmt, width, height, 1)); // element type lost in the original post; assumed uint8_t
          av_image_fill_arrays(frame->data, frame->linesize, framebuf.data(), codec_ctx->pix_fmt, width, height, 1);
          frame->width = width;
          frame->height = height;
          frame->format = static_cast<int>(codec_ctx->pix_fmt);
          //Debug::Log("framebuf size: ", framebuf.size(), "  frame format: ", frame->format);
          return frame;
      }

      void RtspStream(AVFormatContext* ofmt_ctx, AVStream* vid_stream, AVCodecContext* vid_codec_ctx, char* rtsp_url)
      {
          printf("Output stream info:\n");
          av_dump_format(ofmt_ctx, 0, rtsp_url, 1);

          const int width = WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetTextureWidth();
          const int height = WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetTextureHeight();

          //DirectX BGRA to h264 YUV420p
          SwsContext* conversion_ctx = sws_getContext(width, height, src_pix_fmt,
              vid_stream->codecpar->width, vid_stream->codecpar->height, target_pix_fmt,
              SWS_BICUBIC | SWS_BITEXACT, nullptr, nullptr, nullptr);
          if (!conversion_ctx)
          {
              Debug::Error("Could not initialize sample scaler!");
              return;
          }

          AVFrame* frame = AllocateFrameBuffer(vid_codec_ctx, vid_codec_ctx->width, vid_codec_ctx->height);
          if (!frame) {
              Debug::Error("Could not allocate video frame\n");
              return;
          }

          if (avformat_write_header(ofmt_ctx, NULL) < 0) {
              Debug::Error("Error occurred when writing header");
              return;
          }
          if (av_frame_get_buffer(frame, 0) < 0) {
              Debug::Error("Could not allocate the video frame data\n");
              return;
          }

          int frame_cnt = 0;
          //av start time in microseconds
          int64_t start_time_av = av_gettime();
          AVRational time_base = vid_stream->time_base;
          AVRational time_base_q = { 1, AV_TIME_BASE };

          // frame pixel data info
          int data_size = width * height * 4;
          uint8_t* data = new uint8_t[data_size];
          //    AVPacket* pkt = av_packet_alloc();

          while (RtspStreaming::IsStreaming())
          {
              /* make sure the frame data is writable */
              if (av_frame_make_writable(frame) < 0)
              {
                  Debug::Error("Can't make frame writable");
                  break;
              }

              //get copy/ref of the texture
              //uint8_t* data = WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetBuffer();
              if (!WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetPixels(data, 0, 0, width, height))
              {
                  Debug::Error("Failed to get frame buffer. ID: ", RtspStreaming::WindowId());
                  std::this_thread::sleep_for(std::chrono::seconds(2));
                  continue;
              }
              //printf("got pixels data\n");
              // convert BGRA to yuv420 pixel format
              int srcStrides[1] = { 4 * width };
              if (sws_scale(conversion_ctx, &data, srcStrides, 0, height, frame->data, frame->linesize) < 0)
              {
                  Debug::Error("Unable to scale d3d11 texture to frame. ", frame_cnt);
                  break;
              }
              //Debug::Log("frame pts: ", frame->pts, "  time_base:", av_rescale_q(1, vid_codec_ctx->time_base, vid_stream->time_base));
              frame->pts = frame_cnt++;
              //frame_cnt++;
              //printf("scale conversion done\n");

              //encode to the video stream
              int ret = avcodec_send_frame(vid_codec_ctx, frame);
              if (ret < 0)
              {
                  Debug::Error("Error sending frame to codec context! ", frame_cnt);
                  break;
              }

              AVPacket* pkt = av_packet_alloc();
              //av_init_packet(pkt);
              ret = avcodec_receive_packet(vid_codec_ctx, pkt);
              if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
              {
                  //av_packet_unref(pkt);
                  av_packet_free(&pkt);
                  continue;
              }
              else if (ret < 0)
              {
                  Debug::Error("Error during receiving packet: ", AVERROR(ret));
                  //av_packet_unref(pkt);
                  av_packet_free(&pkt);
                  break;
              }

              if (pkt->pts == AV_NOPTS_VALUE)
              {
                  //Write PTS
                  //Duration between 2 frames (us)
                  int64_t calc_duration = (double)AV_TIME_BASE / av_q2d(vid_stream->r_frame_rate);
                  //Parameters
                  pkt->pts = (double)(frame_cnt * calc_duration) / (double)(av_q2d(time_base) * AV_TIME_BASE);
                  pkt->dts = pkt->pts;
                  pkt->duration = (double)calc_duration / (double)(av_q2d(time_base) * AV_TIME_BASE);
              }
              int64_t pts_time = av_rescale_q(pkt->dts, time_base, time_base_q);
              int64_t now_time = av_gettime() - start_time_av;

              if (pts_time > now_time)
                  av_usleep(pts_time - now_time);

              //pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
              //pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
              //pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
              //pkt->pos = -1;

              //write frame and send
              if (av_interleaved_write_frame(ofmt_ctx, pkt) < 0)
              {
                  Debug::Error("Error muxing packet, frame number:", frame_cnt);
                  break;
              }

              //Debug::Log("RTSP streaming...");
              //std::this_thread::sleep_for(std::chrono::milliseconds(1000/20));
              //av_packet_unref(pkt);
              av_packet_free(&pkt);
          }

          //av_free_packet(pkt);
          delete[] data;

          /* Write the trailer, if any. The trailer must be written before you
           * close the CodecContexts open when you wrote the header; otherwise
           * av_write_trailer() may try to use memory that was freed on
           * av_codec_close(). */
          av_write_trailer(ofmt_ctx);
          av_frame_unref(frame);
          av_frame_free(&frame);
          printf("streaming thread CLOSED!\n");
      }


    Now, this allows me to connect to my RTSP server and maintain the connection. However, on the RTSP client side I am getting either a gray frame or a single static frame, as shown below:

    static frame on client side

    I would appreciate it if you could help with the following questions:

    1. Firstly, why is the stream not working in spite of the continued connection to the server and the frames being updated?

    2. Video codec. By default the rtsp format uses the MPEG-4 codec; is it possible to use H.264? When I manually set it to AV_CODEC_ID_H264 the program fails at avcodec_open2 with a return value of -22 (see the sketch after this list).

    3. Do I need to create and allocate a new "AVFrame" and "AVPacket" for every frame? Or can I just reuse a global variable for this?

    4. Do I need to explicitly write some code for real-time streaming? (Like the "-re" flag we use in ffmpeg.)
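
    (For question 2, a minimal sketch of what explicitly requesting H.264, instead of relying on ofmt_ctx->oformat->video_codec, could look like; whether this alone resolves the -22 error is not confirmed:)

      const AVCodec* vid_codec = avcodec_find_encoder(AV_CODEC_ID_H264); // ask for H.264 explicitly
      avformat_alloc_output_context2(&ofmt_ctx, nullptr, "rtsp", rtsp_url);
      AVStream* vid_stream = avformat_new_stream(ofmt_ctx, vid_codec);
      AVCodecContext* vid_codec_ctx = avcodec_alloc_context3(vid_codec);
      vid_codec_ctx->codec_id = AV_CODEC_ID_H264; // keep codec_id consistent with the chosen encoder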

    It would be great if you could point out some example code for creating a livestream. I have checked the following resources:


    Update

    While testing I found that I am able to play the stream using ffplay, while it gets stuck in the VLC player. Here is a snapshot of the ffplay log:

    ffplay log

  • I want to control the bitrate for 720p

    10 December 2020, by shadymelad

    I want to control the bitrate for my files. I use this code:

    for %i in (Fargo.*.mkv) do echo %~ni.mkv && ffmpeg -i %~ni.mkv -i shady.png -filter_complex "[0:v][1:v]overlay=main_w-overlay_w-10:10,subtitles=%~ni.srt" -codec:a copy -s hd720 -b:v 800k -maxrate 800k -bufsize 800k -preset medium %~ni.new.mp4

    With this code the bitrate always comes out too high, around 2000-2500 kbps on average, so the final files are too large for online watching; for example one episode of a series ends up around 1 GB. I want to keep the average around 1000-1200 kbps so the file size stays reasonable for online watching.
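
    (One possible direction, not tested on these files: force libx264 with a target bitrate and a maxrate/bufsize cap around the desired average, keeping the same filters - a sketch of the adjusted command:)

    for %i in (Fargo.*.mkv) do ffmpeg -i %~ni.mkv -i shady.png -filter_complex "[0:v][1:v]overlay=main_w-overlay_w-10:10,subtitles=%~ni.srt" -codec:a copy -s hd720 -c:v libx264 -b:v 1000k -maxrate 1200k -bufsize 2400k -preset medium %~ni.new.mp4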

    Can anyone help me?
