Advanced search

Media (2)


Other articles (16)

  • Accepted formats

    28 January 2010, by

    The following commands provide information about the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    First of all, we (...)
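    The two commands in the teaser above print long capability listings; a quick way to check support for a single codec or container (a sketch, assuming a local ffmpeg install with grep available) is to filter the output:

    ```shell
    # Does this build of ffmpeg know the H.264 codec? (D = decode, E = encode)
    ffmpeg -hide_banner -codecs | grep -i h264
    # Is the FLV container supported as a muxer/demuxer?
    ffmpeg -hide_banner -formats | grep -i flv
    ```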

  • Adding notes and captions to images

    7 February 2011, by

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area in order to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • General document management

    13 May 2011, by

    MédiaSPIP never modifies the original document that is uploaded.
    For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original downloadable in case it cannot be read in a web browser; and retrieving the original document's metadata in order to describe the file textually.
    The tables below explain what MédiaSPIP can do (...)

On other sites (4942)

  • Record audio data to an existing media file using FFMPEG API

    3 February 2021, by bbdd

    My task is to record the received audio data in a media file. I have no problem with this; everything works fine. But once I have closed the audio file, I can no longer re-open it and write audio data to the end of it. How do I solve this problem? And in general, is it possible to write new data to the end of an existing media file?

    


    This is the piece of code where I write the trailer and close the media file:

    


    // Write the stream trailer to the output
// media file and free its private data.
av_write_trailer(p_oFrmCtx);
// NB: avformat_close_input() is intended for demuxer (input) contexts;
// an output context is normally released with avio_closep() and avformat_free_context().
avformat_close_input(&p_oFrmCtx);
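    For what it's worth, libavformat has no general way to re-open a finalized file and append packets to it; a common workaround is to record the new data into a separate file and then concatenate. A CLI sketch using ffmpeg's concat demuxer (the file names part1.m4a/part2.m4a are hypothetical):

    ```shell
    # list the finished segments in playback order
    printf "file 'part1.m4a'\nfile 'part2.m4a'\n" > list.txt
    # -c copy remuxes the packets into a fresh output without re-encoding
    ffmpeg -f concat -safe 0 -i list.txt -c copy joined.m4a
    ```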



    


  • FFMPEG keeping quality when reducing FPS and streaming over RTSP with rtsp-simple-server

    25 January 2021, by Jens Schneider

    I'm using rtsp-simple-server (https://github.com/aler9/rtsp-simple-server) and feed the RTSP server with an FFMPEG stream.

    


    I use a docker compose file to start the stream :

    


    version: "3.8"

services:

  ffmpeg:
    container_name: ffmpeg-base
    restart: always
    image: "jenssgb/rtspffmpeg:base"
    depends_on:
      - rtsp-server
    volumes:
      - $PWD/:/video
    network_mode: "host"
    command: "ffmpeg -re -stream_loop -1 -i /video/footage-1-b.mp4 -c copy -f rtsp rtsp://localhost:8554/compose-rtsp"
  
  rtsp-server:
    container_name: rtsp-server-base
    restart: always
    image: "aler9/rtsp-simple-server"
    network_mode: "host"


    


    Now I'm trying to reduce the FPS of my video by transcoding it:

    


    command: -re -stream_loop -1 -i ${VIDEO_FILE} -vf "fps=${FPS_COMPOSE}" -f rtsp rtsp://localhost:8554/compose-rtsp


    


    This is basically working, but the quality of the output video becomes pretty bad. I tried a lot of things, such as -c:v libx264, which helped for a minute but then made ffmpeg crash.

    


    av_interleaved_write_frame(): Broken pipe0:00:09.99 bitrate=N/A speed=0.985x    
[rtsp @ 0x5563b1755640] Packets poorly interleaved, failed to avoid negative timestamp -33660 in stream 0.
Try -max_interleave_delta 0 as a possible workaround.
av_interleaved_write_frame(): Broken pipe
Error writing trailer of rtsp://localhost:8554/compose-rtsp: Broken pipe


    


    Any idea how I can reduce the FPS and send the stream to the server, but keep the video quality? Later I'm going to reduce the resolution as well, but for now I want to keep resolution and quality and only reduce the FPS.
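    One thing that may help here (a sketch, not verified against this exact setup): give x264 an explicit quality target instead of its defaults, and add the -max_interleave_delta 0 workaround that the log itself suggests:

    ```shell
    # -crf 18 is a near-transparent quality target for x264;
    # -tune zerolatency suits live streaming, -c:a copy leaves the audio untouched
    ffmpeg -re -stream_loop -1 -i footage-1-b.mp4 \
      -vf "fps=5" -c:v libx264 -preset veryfast -crf 18 -tune zerolatency \
      -c:a copy \
      -max_interleave_delta 0 \
      -f rtsp rtsp://localhost:8554/compose-rtsp
    ```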

    


    Full logs from my test with -c:v libx264 :

    


        ffmpeg -re -stream_loop -1 -i footage-1-b.mp4 -vf "fps=5" -c:v libx264 -f rtsp rtsp://localhost:8554/compose-rtsp
ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)
  configuration: --prefix=/usr --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'footage-1-b.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    title           : Session streamed by "nessyMediaServer"
    encoder         : Lavf58.29.100
    comment         : h264_3
  Duration: 00:59:59.63, start: 0.000000, bitrate: 2099 kb/s
    Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuvj420p(pc), 1280x720 [SAR 1:1 DAR 16:9], 2061 kb/s, 24.96 fps, 25 tbr, 12800 tbn, 25 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 8000 Hz, mono, fltp, 35 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
[aac @ 0x56277bc7f840] Too many bits 8832.000000 > 6144 per frame requested, clamping to max
[libx264 @ 0x56277bbc33c0] using SAR=1/1
[libx264 @ 0x56277bbc33c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x56277bbc33c0] profile High, level 3.1
[libx264 @ 0x56277bbc33c0] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=5 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, rtsp, to 'rtsp://localhost:8554/compose-rtsp':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    title           : Session streamed by "nessyMediaServer"
    comment         : h264_3
    encoder         : Lavf58.29.100
    Stream #0:0(und): Video: h264 (libx264), yuvj420p(pc), 1280x720 [SAR 1:1 DAR 16:9], q=-1--1, 5 fps, 90k tbn, 5 tbc (default)
    Metadata:
      handler_name    : VideoHandler
      encoder         : Lavc58.54.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
    Stream #0:1(und): Audio: aac (LC), 8000 Hz, mono, fltp, 48 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
      encoder         : Lavc58.54.100 aac
av_interleaved_write_frame(): Broken pipe0:00:09.87 bitrate=N/A speed=0.978x    
[rtsp @ 0x56277bba0640] Packets poorly interleaved, failed to avoid negative timestamp -33660 in stream 0.
Try -max_interleave_delta 0 as a possible workaround.
av_interleaved_write_frame(): Broken pipe
Error writing trailer of rtsp://localhost:8554/compose-rtsp: Broken pipe
frame=   50 fps=4.6 q=23.0 Lsize=N/A time=00:00:10.21 bitrate=N/A speed=0.947x    
video:162kB audio:8kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
[libx264 @ 0x56277bbc33c0] frame I:1     Avg QP:19.85  size:165667
[libx264 @ 0x56277bbc33c0] frame P:13    Avg QP:20.88  size:140481
[libx264 @ 0x56277bbc33c0] frame B:36    Avg QP:24.58  size: 55445
[libx264 @ 0x56277bbc33c0] consecutive B-frames:  4.0%  0.0%  0.0% 96.0%
[libx264 @ 0x56277bbc33c0] mb I  I16..4:  4.4% 30.8% 64.8%
[libx264 @ 0x56277bbc33c0] mb P  I16..4:  4.1% 10.6% 20.0%  P16..4: 24.4% 24.8% 13.3%  0.0%  0.0%    skip: 2.6%
[libx264 @ 0x56277bbc33c0] mb B  I16..4:  0.8%  2.0%  4.0%  B16..8: 40.3% 14.5%  5.2%  direct:11.8%  skip:21.4%  L0:77.1% L1: 7.9% BI:14.9%
[libx264 @ 0x56277bbc33c0] 8x8 transform intra:30.1% inter:11.9%
[libx264 @ 0x56277bbc33c0] coded y,uvDC,uvAC intra: 82.5% 60.9% 26.6% inter: 55.0% 42.4% 2.7%
[libx264 @ 0x56277bbc33c0] i16 v,h,dc,p: 17% 26% 34% 23%
[libx264 @ 0x56277bbc33c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 33% 15%  3%  4%  5%  4%  3%  9%
[libx264 @ 0x56277bbc33c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20% 22% 13%  3% 17% 11%  5%  3%  6%
[libx264 @ 0x56277bbc33c0] i8c dc,h,v,p: 54% 25% 16%  5%
[libx264 @ 0x56277bbc33c0] Weighted P-Frames: Y:7.7% UV:7.7%
[libx264 @ 0x56277bbc33c0] ref P L0: 33.2% 11.6% 29.0% 23.9%  2.4%
[libx264 @ 0x56277bbc33c0] ref B L0: 79.6% 11.9%  8.5%
[libx264 @ 0x56277bbc33c0] ref B L1: 95.9%  4.1%
[libx264 @ 0x56277bbc33c0] kb/s:3190.34
[aac @ 0x56277bc7f840] Qavg: 65536.000
Conversion failed!


    


    Thank you,
J

    


  • Encoding of raw frames (D3D11Texture2D) to an rtsp stream using libav*

    16 July 2021, by uzer

    I have managed to create an rtsp stream using libav* and a DirectX texture (which I am obtaining from the GDI API using the BitBlt method). Here is my approach for creating a live rtsp stream:

    


      

    1. Create output context and stream (skipping the checks here)

       avformat_alloc_output_context2(&ofmt_ctx, NULL, "rtsp", rtsp_url); // RTSP
       vid_codec = avcodec_find_encoder(ofmt_ctx->oformat->video_codec);
       vid_stream = avformat_new_stream(ofmt_ctx, vid_codec);
       vid_codec_ctx = avcodec_alloc_context3(vid_codec);

    2. Set codec params

      codec_ctx->codec_tag = 0;
codec_ctx->codec_id = ofmt_ctx->oformat->video_codec;
//codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;
codec_ctx->width = width;
codec_ctx->height = height;
codec_ctx->gop_size = 12;
//codec_ctx->gop_size = 40;
//codec_ctx->max_b_frames = 3;
codec_ctx->pix_fmt = target_pix_fmt; // AV_PIX_FMT_YUV420P
codec_ctx->framerate = { stream_fps, 1 };
codec_ctx->time_base = { 1, stream_fps };
if (fctx->oformat->flags & AVFMT_GLOBALHEADER)
{
    codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}

    3. Initialize video stream

      if (avcodec_parameters_from_context(stream->codecpar, codec_ctx) < 0)
{
    Debug::Error("Could not initialize stream codec parameters!");
    return false;
}

AVDictionary* codec_options = nullptr;
if (codec->id == AV_CODEC_ID_H264) {
    av_dict_set(&codec_options, "profile", "high", 0);
    av_dict_set(&codec_options, "preset", "fast", 0);
    av_dict_set(&codec_options, "tune", "zerolatency", 0);
}
// open video encoder
int ret = avcodec_open2(codec_ctx, codec, &codec_options);
if (ret < 0) {
    Debug::Error("Could not open video encoder: ", avcodec_get_name(codec->id), " error ret: ", AVERROR(ret));
    return false;
}

// NB: this shares one extradata buffer between the stream and the codec context,
// which risks a double free on cleanup; duplicating it with av_memdup() is safer.
stream->codecpar->extradata = codec_ctx->extradata;
stream->codecpar->extradata_size = codec_ctx->extradata_size;

    4. Start streaming

      


      // Create a new frame and allocate its buffer
AVFrame* AllocateFrameBuffer(AVCodecContext* codec_ctx, double width, double height)
{
    AVFrame* frame = av_frame_alloc();
    // NB: framebuf is a local std::vector, so its storage is released when this
    // function returns; av_frame_get_buffer() below replaces these pointers.
    std::vector<uint8_t> framebuf(av_image_get_buffer_size(codec_ctx->pix_fmt, width, height, 1));
    av_image_fill_arrays(frame->data, frame->linesize, framebuf.data(), codec_ctx->pix_fmt, width, height, 1);
    frame->width = width;
    frame->height = height;
    frame->format = static_cast<int>(codec_ctx->pix_fmt);
    //Debug::Log("framebuf size: ", framebuf.size(), "  frame format: ", frame->format);
    return frame;
}

void RtspStream(AVFormatContext* ofmt_ctx, AVStream* vid_stream, AVCodecContext* vid_codec_ctx, char* rtsp_url)
{
    printf("Output stream info:\n");
    av_dump_format(ofmt_ctx, 0, rtsp_url, 1);

    const int width = WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetTextureWidth();
    const int height = WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetTextureHeight();

    // DirectX BGRA to h264 YUV420p
    SwsContext* conversion_ctx = sws_getContext(width, height, src_pix_fmt,
        vid_stream->codecpar->width, vid_stream->codecpar->height, target_pix_fmt,
        SWS_BICUBIC | SWS_BITEXACT, nullptr, nullptr, nullptr);
    if (!conversion_ctx)
    {
        Debug::Error("Could not initialize sample scaler!");
        return;
    }

    AVFrame* frame = AllocateFrameBuffer(vid_codec_ctx, vid_codec_ctx->width, vid_codec_ctx->height);
    if (!frame) {
        Debug::Error("Could not allocate video frame\n");
        return;
    }

    if (avformat_write_header(ofmt_ctx, NULL) < 0) {
        Debug::Error("Error occurred when writing header");
        return;
    }
    if (av_frame_get_buffer(frame, 0) < 0) {
        Debug::Error("Could not allocate the video frame data\n");
        return;
    }

    int frame_cnt = 0;
    // av start time in microseconds
    int64_t start_time_av = av_gettime();
    AVRational time_base = vid_stream->time_base;
    AVRational time_base_q = { 1, AV_TIME_BASE };

    // frame pixel data info
    int data_size = width * height * 4;
    uint8_t* data = new uint8_t[data_size];

    while (RtspStreaming::IsStreaming())
    {
        /* make sure the frame data is writable */
        if (av_frame_make_writable(frame) < 0)
        {
            Debug::Error("Can't make frame writable");
            break;
        }

        // get copy/ref of the texture
        //uint8_t* data = WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetBuffer();
        if (!WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetPixels(data, 0, 0, width, height))
        {
            Debug::Error("Failed to get frame buffer. ID: ", RtspStreaming::WindowId());
            std::this_thread::sleep_for(std::chrono::seconds(2));
            continue;
        }

        // convert BGRA to yuv420 pixel format
        int srcStrides[1] = { 4 * width };
        if (sws_scale(conversion_ctx, &data, srcStrides, 0, height, frame->data, frame->linesize) < 0)
        {
            Debug::Error("Unable to scale d3d11 texture to frame. ", frame_cnt);
            break;
        }
        //Debug::Log("frame pts: ", frame->pts, "  time_base:", av_rescale_q(1, vid_codec_ctx->time_base, vid_stream->time_base));
        frame->pts = frame_cnt++;

        // encode to the video stream
        int ret = avcodec_send_frame(vid_codec_ctx, frame);
        if (ret < 0)
        {
            Debug::Error("Error sending frame to codec context! ", frame_cnt);
            break;
        }

        AVPacket* pkt = av_packet_alloc();
        ret = avcodec_receive_packet(vid_codec_ctx, pkt);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        {
            av_packet_free(&pkt);
            continue;
        }
        else if (ret < 0)
        {
            Debug::Error("Error during receiving packet: ", AVERROR(ret));
            av_packet_free(&pkt);
            break;
        }

        if (pkt->pts == AV_NOPTS_VALUE)
        {
            // Write PTS: duration between two frames (us)
            int64_t calc_duration = (double)AV_TIME_BASE / av_q2d(vid_stream->r_frame_rate);
            pkt->pts = (double)(frame_cnt * calc_duration) / (double)(av_q2d(time_base) * AV_TIME_BASE);
            pkt->dts = pkt->pts;
            pkt->duration = (double)calc_duration / (double)(av_q2d(time_base) * AV_TIME_BASE);
        }
        int64_t pts_time = av_rescale_q(pkt->dts, time_base, time_base_q);
        int64_t now_time = av_gettime() - start_time_av;

        if (pts_time > now_time)
            av_usleep(pts_time - now_time);

        //pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
        //pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
        //pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
        //pkt->pos = -1;

        // write frame and send
        if (av_interleaved_write_frame(ofmt_ctx, pkt) < 0)
        {
            Debug::Error("Error muxing packet, frame number:", frame_cnt);
            break;
        }

        av_packet_free(&pkt);
    }

    delete[] data;

    /* Write the trailer, if any. The trailer must be written before you
     * close the CodecContexts open when you wrote the header; otherwise
     * av_write_trailer() may try to use memory that was freed on
     * av_codec_close(). */
    av_write_trailer(ofmt_ctx);
    av_frame_unref(frame);
    av_frame_free(&frame);
    printf("streaming thread CLOSED!\n");
}


    Now, this allows me to connect to my rtsp server and maintain the connection. However, on the rtsp client side I am getting either a gray image or a single static frame, as shown below:


    static frame on client side


    I would appreciate your help with the following questions:

    1. Why is the stream not working, in spite of the continued connection to the server and the frames being updated?

    2. Video codec: by default the rtsp format uses the MPEG-4 codec; is it possible to use h264? When I manually set it to AV_CODEC_ID_H264, the program fails at avcodec_open2 with a return value of -22.

    3. Do I need to create and allocate a new "AVFrame" and "AVPacket" for every frame, or can I just reuse a global variable for this?

    4. Do I need to explicitly add some code for real-time streaming? (Like the "-re" flag in ffmpeg.)
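    On the codec question, it may be worth first checking from the CLI that the server accepts an H.264 stream at all; a sketch with a synthetic test source and a placeholder URL:

    ```shell
    # testsrc generates a synthetic video; -pix_fmt yuv420p matches what libx264 expects here
    ffmpeg -f lavfi -i testsrc=size=1280x720:rate=30 -t 10 \
      -c:v libx264 -pix_fmt yuv420p -tune zerolatency \
      -f rtsp rtsp://localhost:8554/test
    ```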

    It would be great if you could point out some example code for creating a livestream. I have checked the following resources:


    Update


    While testing I found that I am able to play the stream using ffplay, while it gets stuck in the VLC player. Here is a snapshot of the ffplay log:
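    When comparing players, a low-latency ffplay invocation (the URL is a placeholder) can help rule out client-side buffering as the difference:

    ```shell
    # force TCP transport and disable input buffering on the client side
    ffplay -rtsp_transport tcp -fflags nobuffer rtsp://localhost:8554/test
    ```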


    ffplay log
