
Media (1)
-
The Slip - Artworks
26 September 2011, by
Updated: September 2011
Language: English
Type: Text
Other articles (57)
-
Updating from version 0.1 to 0.2
24 June 2013, by
An explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.2. What's new?
Regarding software dependencies: the latest versions of FFmpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favor of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...) -
Customizing by adding your logo, banner, or background image
5 September 2013, by
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013, by
Present changes to your MediaSPIP, or news about your projects, using the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News-creation form: for a document of the news type, the default fields are: publication date (customize the publication date) (...)
On other sites (8233)
-
Segmentation Analytics : How to Leverage It on Your Site
27 October 2023, by Erin — Analytics Tips
-
How to Stream RTP (IP camera) Into React App setup
10 November 2024, by sharon2469
I am trying to relay a live broadcast from an IP camera, or any other RTP/RTSP source, to my React application. BUT IT MUST BE LIVE.


My setup at the moment is:


IP Camera -> (RTP) -> FFmpeg -> (udp) -> Server(nodeJs) -> (WebRTC) -> React app


With the current setup there is almost no delay, but there are a few things here that I cannot avoid and do not understand why, so here are my questions:


1) First, is this setup even correct? Is this the only way to stream RTP video into a web app?


2) Is it possible to avoid re-encoding the stream? The RTP transmission already arrives as H.264, so I should not really need to run the following command:


return spawn('ffmpeg', [
  '-re',                         // Read input at its native frame rate (important for live streaming)
  '-probesize', '32',            // Set probing size to 32 bytes (32 is the minimum)
  '-analyzeduration', '1000000', // Analyze up to 1 second of input
  '-c:v', 'h264',                // Video codec of the input
  '-i', 'rtp://238.0.0.2:48888', // Input stream URL
  '-map', '0:v?',                // Select video from the input stream
  '-c:v', 'libx264',             // Video codec of the output stream
  '-preset', 'ultrafast',        // Faster encoding for lower latency
  '-tune', 'zerolatency',        // Optimize for zero latency
  // '-s', '768x480',            // Adjust the resolution (experiment with values)
  '-f', 'rtp', `rtp://127.0.0.1:${udpPort}` // Output stream URL
]);



As you can see, this command re-encodes with libx264. But if I pass FFmpeg '-c:v', 'copy' instead of '-c:v', 'libx264', FFmpeg throws an error saying it does not know how to encode h264 and only knows libx264. Basically, I want to stop the re-encoding, because the stream is already encoded as H.264 and there is no real need for it. Are there any recommendations?
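For what it's worth, if the camera really does send H.264, a plain stream copy should be possible: '-c:v copy' is an output option and must come after '-i'. The '-c:v h264' placed before '-i' selects a decoder for the input, and 'copy' is not a decoder, which would explain the error above. A minimal sketch under that assumption (the helper name createCopyProcess is hypothetical; addresses and ports are the same as in the server code below):

import { spawn } from 'child_process';

const udpPort = 48888; // same forwarding port as in the server code below

// Sketch: remux the camera's H.264 without re-encoding (assumes H.264 input).
// Note that '-c:v copy' is an output option, so it comes after '-i'.
const createCopyProcess = () =>
  spawn('ffmpeg', [
    '-probesize', '32',            // Keep startup probing short
    '-analyzeduration', '1000000', // Analyze up to 1 second of input
    '-i', 'rtp://238.0.0.2:48888', // Input stream URL
    '-map', '0:v?',                // Select video from the input stream
    '-c:v', 'copy',                // Pass the H.264 bitstream through untouched
    '-f', 'rtp', `rtp://127.0.0.1:${udpPort}` // Output stream URL
  ]);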


3) I thought about giving up FFmpeg completely, but the RTP packets arrive at 1200+ bytes while WebRTC is limited to about 1280 bytes. Is there a way to handle this mismatch without damaging the video, and is it worth going down that path? I guess the whole jitter-buffer story comes into play here as well.
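On 3), one hedged option if FFmpeg stays in the pipeline: the rtp output protocol accepts a pkt_size option on the output URL that caps the size of outgoing RTP packets, which should keep them under the WebRTC limit mentioned above. A sketch, assuming pkt_size is honored here (createCappedProcess is a hypothetical name):

import { spawn } from 'child_process';

const udpPort = 48888; // same forwarding port as in the server code below

// Sketch: cap outgoing RTP packets at 1200 bytes via the pkt_size URL option.
const createCappedProcess = () =>
  spawn('ffmpeg', [
    '-i', 'rtp://238.0.0.2:48888', // Input stream URL
    '-map', '0:v?',                // Select video from the input stream
    '-c:v', 'copy',                // No re-encode (see the sketch above)
    '-f', 'rtp', `rtp://127.0.0.1:${udpPort}?pkt_size=1200` // Capped output packets
  ]);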


This is my server-side code (THIS IS JUST TEST CODE):


import {
 MediaStreamTrack,
 randomPort,
 RTCPeerConnection,
 RTCRtpCodecParameters,
 RtpPacket,
} from 'werift'
import {Server} from "ws";
import {createSocket} from "dgram";
import {spawn} from "child_process";
import LoggerFactory from "./logger/loggerFactory";

//

const log = LoggerFactory.getLogger('ServerMedia')

// Websocket server -> WebRTC
const serverPort = 8888
const server = new Server({port: serverPort});
log.info(`Server Media started on port: ${serverPort}`);

// UDP server -> ffmpeg
const udpPort = 48888
const udp = createSocket("udp4");
// udp.bind(udpPort, () => {
// udp.addMembership("238.0.0.2");
// })
udp.bind(udpPort)
log.info(`UDP port: ${udpPort}`)


const createFFmpegProcess = () => {
  log.info(`Start ffmpeg process`)
  return spawn('ffmpeg', [
    '-re',                         // Read input at its native frame rate (important for live streaming)
    '-probesize', '32',            // Set probing size to 32 bytes (32 is the minimum)
    '-analyzeduration', '1000000', // Analyze up to 1 second of input
    '-c:v', 'h264',                // Video codec of the input
    '-i', 'rtp://238.0.0.2:48888', // Input stream URL
    '-map', '0:v?',                // Select video from the input stream
    '-c:v', 'libx264',             // Video codec of the output stream
    '-preset', 'ultrafast',        // Faster encoding for lower latency
    '-tune', 'zerolatency',        // Optimize for zero latency
    // '-s', '768x480',            // Adjust the resolution (experiment with values)
    '-f', 'rtp', `rtp://127.0.0.1:${udpPort}` // Output stream URL
  ]);
}

let ffmpegProcess = createFFmpegProcess();


const attachFFmpegListeners = () => {
  // Capture standard output and print it
  ffmpegProcess.stdout.on('data', (data) => {
    log.info(`FFMPEG process stdout: ${data}`);
  });

  // Capture standard error and print it
  ffmpegProcess.stderr.on('data', (data) => {
    console.error(`ffmpeg stderr: ${data}`);
  });

  // Listen for the exit event
  ffmpegProcess.on('exit', (code, signal) => {
    if (code !== null) {
      log.info(`ffmpeg process exited with code ${code}`);
    } else if (signal !== null) {
      log.info(`ffmpeg process killed with signal ${signal}`);
    }
  });
};


attachFFmpegListeners();


server.on("connection", async (socket) => {
  const payloadType = 96; // Numerical value assigned to the codec in the SDP offer/answer exchange (here: H264)
  // Create a peer connection with the codec parameters set in advance.
  const pc = new RTCPeerConnection({
    codecs: {
      audio: [],
      video: [
        new RTCRtpCodecParameters({
          mimeType: "video/H264",
          clockRate: 90000, // 90000 is the default value for H264
          payloadType: payloadType,
        }),
      ],
    },
  });

  const track = new MediaStreamTrack({kind: "video"});

  udp.on("message", (data) => {
    console.log(data)
    const rtp = RtpPacket.deSerialize(data);
    rtp.header.payloadType = payloadType;
    track.writeRtp(rtp);
  });

  udp.on("error", (err) => {
    console.log(err)
  });

  udp.on("close", () => {
    console.log("close")
  });

  pc.addTransceiver(track, {direction: "sendonly"});

  await pc.setLocalDescription(await pc.createOffer());
  const sdp = JSON.stringify(pc.localDescription);
  socket.send(sdp);

  socket.on("message", (data: any) => {
    if (data.toString() === 'resetFFMPEG') {
      ffmpegProcess.kill('SIGINT');
      log.info(`FFMPEG process killed`)
      setTimeout(() => {
        ffmpegProcess = createFFmpegProcess();
        attachFFmpegListeners();
      }, 5000)
    } else {
      pc.setRemoteDescription(JSON.parse(data));
    }
  });
});



And this is the frontend:





 
 
<script crossorigin src="https://unpkg.com/react@16/umd/react.development.js"></script>
<script crossorigin src="https://unpkg.com/react-dom@16/umd/react-dom.development.js"></script>
<script crossorigin src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.34/browser.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/babel-regenerator-runtime@6.5.0/runtime.min.js"></script>

<div id="app1"></div>

<script type="text/babel">
  let rtc;

  const App = () => {
    const [log, setLog] = React.useState([]);
    const videoRef = React.useRef();
    const socket = new WebSocket("ws://localhost:8888");
    const [peer, setPeer] = React.useState(null); // Add state to keep track of the peer connection

    React.useEffect(() => {
      (async () => {
        await new Promise((r) => (socket.onopen = r));
        console.log("open websocket");

        const handleOffer = async (offer) => {
          console.log("new offer", offer.sdp);

          const updatedPeer = new RTCPeerConnection({
            iceServers: [],
            sdpSemantics: "unified-plan",
          });

          updatedPeer.onicecandidate = ({ candidate }) => {
            if (!candidate) {
              const sdp = JSON.stringify(updatedPeer.localDescription);
              console.log(sdp);
              socket.send(sdp);
            }
          };

          updatedPeer.oniceconnectionstatechange = () => {
            console.log("oniceconnectionstatechange", updatedPeer.iceConnectionState);
          };

          updatedPeer.ontrack = (e) => {
            console.log("ontrack", e);
            videoRef.current.srcObject = e.streams[0];
          };

          await updatedPeer.setRemoteDescription(offer);
          const answer = await updatedPeer.createAnswer();
          await updatedPeer.setLocalDescription(answer);

          setPeer(updatedPeer);
        };

        socket.onmessage = (ev) => {
          const data = JSON.parse(ev.data);
          if (data.type === "offer") {
            handleOffer(data);
          } else if (data.type === "resetFFMPEG") {
            // Handle the resetFFMPEG message
            console.log("FFmpeg reset requested");
          }
        };
      })();
    }, []); // Added socket as a dependency to the useEffect hook

    const sendRequestToResetFFmpeg = () => {
      socket.send("resetFFMPEG");
    };

    return (
      <div>
        Video:
        <video ref={videoRef} autoPlay muted />
        <button onClick={() => sendRequestToResetFFmpeg()}>Reset FFMPEG</button>
      </div>
    );
  };

  ReactDOM.render(<App />, document.getElementById("app1"));
</script>





-
closed (H264 track 1 is not valid: sprop-parameter-sets is missing (96 packetization-mode=1))
8 January 2024, by MMingYI
I used FFmpeg 6.1 to stream over RTSP, but I received the following errors on the server:




closed (H264 track 1 is not valid: sprop-parameter-sets is missing (96 packetization-mode=1)), client: Error occurred when opening output: Server returned 400 Bad Request.




#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
#include <libavutil/imgutils.h>
#include <libavutil/time.h>

static void encode(AVCodecContext *enc_ctx, AVFrame *frame, AVPacket *pkt,
 AVFormatContext *outFormatCtx) {
 int ret;

 /* send the frame to the encoder */
 if (frame)
 printf("Send frame %3"PRId64"\n", frame->pts);

 ret = avcodec_send_frame(enc_ctx, frame);
 if (ret < 0) {
 char errbuf[AV_ERROR_MAX_STRING_SIZE];
 av_strerror(ret, errbuf, AV_ERROR_MAX_STRING_SIZE);
 fprintf(stderr, "Error sending a frame for encoding ,%s\n", errbuf);
 exit(1);
 }

 while (ret >= 0) {
 ret = avcodec_receive_packet(enc_ctx, pkt);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 return;
 else if (ret < 0) {
 fprintf(stderr, "Error during encoding\n");
 exit(1);
 }

 printf("Write packet %3"PRId64" (size=%5d)\n", pkt->pts, pkt->size);
 av_write_frame(outFormatCtx, pkt); // Write the packet to the RTMP stream
 av_packet_unref(pkt);
 }
}

int main(int argc, char **argv) {
 av_log_set_level(AV_LOG_DEBUG);
 const char *rtmp_url, *codec_name;
 const AVCodec *codec;
 AVCodecContext *codecContext = NULL;
 int i, ret, x, y;
 AVFormatContext *outFormatCtx;
 AVStream *st;
 AVFrame *frame;
 AVPacket *pkt;
 uint8_t endcode[] = {0, 0, 1, 0xb7};

 if (argc <= 2) {
 fprintf(stderr, "Usage: %s <rtmp_url> <codec>\n", argv[0]);
 exit(0);
 }
 rtmp_url = argv[1];
 codec_name = argv[2];
 avformat_network_init();
 /* find the H.264 encoder */
// codec = avcodec_find_encoder_by_name(codec_name);
// codec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
// codec = avcodec_find_encoder(AV_CODEC_ID_VP9);
// codec = avcodec_find_encoder(AV_CODEC_ID_MPEG2VIDEO);
 codec = avcodec_find_encoder(AV_CODEC_ID_H264);
// codec = avcodec_find_encoder(AV_CODEC_ID_AV1);
// codec = avcodec_find_encoder(AV_CODEC_ID_H265);
 if (!codec) {
 fprintf(stderr, "Codec '%s' not found\n", codec_name);
 exit(1);
 }
 codecContext = avcodec_alloc_context3(codec);
 if (!codecContext) {
 fprintf(stderr, "Could not allocate video codec context\n");
 exit(1);
 }

 /* ... (rest of the setup code) ... */
/* put sample parameters */
 codecContext->bit_rate = 400000;
 /* resolution must be a multiple of two */
 codecContext->width = 352;
 codecContext->height = 288;
 /* frames per second */
 codecContext->time_base = (AVRational) {1, 25};
 codecContext->framerate = (AVRational) {25, 1};

 /* emit one intra frame every ten frames
 * check frame pict_type before passing frame
 * to encoder, if frame->pict_type is AV_PICTURE_TYPE_I
 * then gop_size is ignored and the output of encoder
 * will always be I frame irrespective to gop_size
 */
 codecContext->gop_size = 10;
 codecContext->max_b_frames = 1;
 codecContext->pix_fmt = AV_PIX_FMT_YUV420P;



 /* Open the RTSP output */
// const AVOutputFormat *ofmt = av_guess_format("tcp", NULL, NULL);
 const AVOutputFormat *ofmt = av_guess_format("rtsp", rtmp_url, NULL);
// const AVOutputFormat *ofmt = av_guess_format("flv", NULL, NULL);
// const AVOutputFormat *ofmt = av_guess_format("rtmp", NULL, NULL);
// const AVOutputFormat *ofmt = av_guess_format("mpegts", NULL, NULL);
// const AVOutputFormat *ofmt = av_guess_format("mp4", NULL, NULL);
 if (!ofmt) {
 fprintf(stderr, "Could not find output format\n");
 exit(1);
 }

 /* Allocate the output context */

/* outFormatCtx = avformat_alloc_context();
 if (!outFormatCtx) {
 fprintf(stderr, "Could not allocate output context\n");
 exit(1);
 }*/

 // Opening the output this way can leave the stream list in outFormatCtx empty and produce: [rtsp @ 00000204f6218b80] No streams to mux were specified
 if (avformat_alloc_output_context2(&outFormatCtx, ofmt, "rtsp", rtmp_url) != 0) {
 fprintf(stderr, "Could not allocate output context\n");
 return 1;
 }


 outFormatCtx->oformat = ofmt;
 outFormatCtx->url = av_strdup(rtmp_url);

 /* Add a video stream */
 st = avformat_new_stream(outFormatCtx, codec);
 if (!st) {
 fprintf(stderr, "Could not allocate stream\n");
 exit(1);
 }
 st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
 st->codecpar->codec_id = codec->id;
 st->codecpar->width = 352;
 st->codecpar->height = 288;
 st->codecpar->format = AV_PIX_FMT_YUV420P;
// st->codecpar = c;
// st->codecpar->format = AV_PIX_FMT_YUV420P;
 // Set video stream parameters
// st->codecpar->framerate = (AVRational) {25, 1};
// st->id=outFormatCtx->nb_streams-1;
 /* Set the output URL */
 av_dict_set(&outFormatCtx->metadata, "url", rtmp_url, 0);


 pkt = av_packet_alloc();
 if (!pkt)
 exit(1);


 if (codec->id == AV_CODEC_ID_H264)
 av_opt_set(codecContext->priv_data, "preset", "slow", 0);

 AVDictionary *opt = NULL;
/* av_dict_set(&opt, "rtsp_transport", "udp", 0);
 av_dict_set(&opt, "announce_port", "1935", 0);
 av_dict_set(&opt, "enable-protocol", "rtsp", 0);
 av_dict_set(&opt, "protocol_whitelist", "file,udp,tcp,rtp,rtsp", 0);
 av_dict_set(&opt, "enable-protocol", "rtp", 0);
 av_dict_set(&opt, "enable-protocol", "rtsp", 0);
 av_dict_set(&opt, "enable-protocol", "udp", 0);
 av_dict_set(&opt, "enable-muxer", "rtsp", 0);
 av_dict_set(&opt, "enable-muxer", "rtp", 0);*/
 av_dict_set(&opt, "rtsp_transport", "tcp", 0);
 av_dict_set(&opt, "stimeout", "2000000", 0);
 av_dict_set(&opt, "max_delay", "500000", 0);
 av_dict_set(&opt, "sprop-parameter-sets", "asdgasdfs", AV_DICT_APPEND);
 /* open it */
 ret = avcodec_open2(codecContext, codec, &opt);
 if (ret < 0) {
 fprintf(stderr, "Could not open codec: %s\n", av_err2str(ret));
 exit(1);
 }
/* // Open the RTSP output URL (code suggested by Microsoft AI)
 if (!(outFormatCtx->oformat->flags & AVFMT_NOFILE)) {
 int ret = avio_open(&outFormatCtx->pb, rtmp_url, AVIO_FLAG_WRITE);
 if (ret < 0) {
// std::cerr << "Could not open output URL " << out_url << std::endl;
 fprintf(stderr, "Could not open output URL %s\n", av_err2str(ret));
 return -1;
 }
 }*/

 avcodec_parameters_to_context(codecContext, st->codecpar);

/* AVDictionary *options = NULL;
 av_dict_set(&options, "rtsp_transport", "tcp", 0);
 av_dict_set(&options, "stimeout", "2000000", 0);
 av_dict_set(&options, "max_delay", "500000", 0);
 // Initialize the output
 av_dict_set(&options, "rtsp_transport", "tcp", 0);
// Set the maximum delay between received packets, in microseconds
 av_dict_set(&options, "max_delay", "200000", 0);
// Keep RTMP/RTSP latency to a minimum
 av_dict_set(&options, "fflags", "nobuffer", 0);
// Maximum wait time allowed for network operations: 5 seconds
 av_dict_set(&options, "timeout", "5000000", 0);
// Set a blocking timeout, otherwise the connection may block when the stream drops, in microseconds
 av_dict_set(&options, "stimeout", "3000000", 0);
// Set the maximum duration for find_stream_info, in microseconds
 av_dict_set(&options, "analyzeduration", "1000000", 0);*/
 av_dict_set(&opt, "preset", "medium", 0);
 av_dict_set(&opt, "tune", "zerolatency", 0);
 av_dict_set(&opt, "profile", "baseline", 0);
 av_dump_format(outFormatCtx, 0, rtmp_url, 1);

 if (avformat_init_output(outFormatCtx, &opt) != 0) {
 fprintf(stderr, "Error initializing output\n");
 return 1;
 }
 if (!(ofmt->flags & AVFMT_NOFILE)) {
 ret = avio_open(&outFormatCtx->pb, rtmp_url, AVIO_FLAG_WRITE);
 if (ret < 0) {
 fprintf(stderr, "Could not open output file '%s'", rtmp_url);
 exit(1);
 }
 }
 /* Modifying things this way has no effect; it cannot add to or change the SDP:
 * av_dict_set(&st->metadata, "title", "Cool video", 0);
 av_dict_set(&st->metadata, "Content-Base", " rtsp://10.45.12.141/h264/ch1/main/av_stream/", 0);
 av_dict_set(&st->metadata, "sprop-parameter-sets", "sdsfwedeo", 0);*/
 AVCodecParameters *codecParams = st->codecpar;
 const char *spropParameterSets = "Z0IACpZTBYmI,aMlWsA=="; // Replace with the actual sprop-parameter-sets value
 av_dict_set(&st->metadata, "sprop-parameter-sets", spropParameterSets, 0);
 avcodec_parameters_to_context(codecContext, st->codecpar);
 AVFormatContext *avFormatContext[1];
 avFormatContext[0] = outFormatCtx;
 char spd[2048];
 av_sdp_create(avFormatContext, 1, spd, sizeof(spd));
 printf("%s\n", spd);
/* ret = avio_open(&outFormatCtx->pb, rtmp_url, AVIO_FLAG_WRITE);
 if (ret < 0) {
 fprintf(stderr, "Could not open output ,%s\n", av_err2str(ret));
 exit(1);
 }*/

/*// Set H.264 parameters
 AVDictionary *params = NULL;
 av_dict_set(&params, "profile", "main", 0);
 av_dict_set(&params, "level", "3.1", 0);

// Get the `sprop-parameter-sets` parameter
 AVPacket *extradata = av_packet_alloc();
// avcodec_parameters_from_context(extradata->data, codecContext);

// Get the size of the `sprop-parameter-sets` parameter
 int sprop_parameter_sets_size = extradata->size;

// Free resources
 av_packet_free(&extradata);

// Set the `sprop-parameter-sets` parameter
 uint8_t *sprop_parameter_sets = extradata->data;
 codecContext->extradata = sprop_parameter_sets;
 codecContext->extradata_size = sprop_parameter_sets_size;*/

 /* Write the header */
// ret = avformat_write_header(outFormatCtx, NULL);
 ret = avformat_write_header(outFormatCtx, &opt);
 if (ret != 0) {
 fprintf(stderr, "Error occurred when opening output %s\n", av_err2str(ret));
 exit(1);
 }

 frame = av_frame_alloc();
 if (!frame) {
 fprintf(stderr, "Could not allocate video frame\n");
 exit(1);
 }
// frame->format = c->pix_fmt;
// frame->format = AV_PIX_FMT_YUV420P;
 frame->format = 0;
 frame->width = codecContext->width;
 frame->height = codecContext->height;

 ret = av_frame_get_buffer(frame, 0);
 if (ret < 0) {
 fprintf(stderr, "Could not allocate the video frame data ,%s\n", av_err2str(ret));
 exit(1);
 }

 /* encode 100 seconds of video (2500 frames at 25 fps) */
 for (i = 0; i < 2500; i++) {
 /* ... (rest of the encoding loop) ... */
 fflush(stdout);

 /* make sure the frame data is writable */
 ret = av_frame_make_writable(frame);
 if (ret < 0)
 exit(1);

 /* prepare a dummy image */
 /* Y */
 for (y = 0; y < codecContext->height; y++) {
 for (x = 0; x < codecContext->width; x++) {
 frame->data[0][y * frame->linesize[0] + x] = x + y + i * 3;
 }
 }

 /* Cb and Cr */
 for (y = 0; y < codecContext->height / 2; y++) {
 for (x = 0; x < codecContext->width / 2; x++) {
 frame->data[1][y * frame->linesize[1] + x] = 128 + y + i * 2;
 frame->data[2][y * frame->linesize[2] + x] = 64 + x + i * 5;
 }
 }

 frame->pts = i;

 /* encode the image */
 encode(codecContext, frame, pkt, outFormatCtx);
 }

 /* flush the encoder */
 encode(codecContext, NULL, pkt, outFormatCtx);

 /* Write the trailer */
 av_write_trailer(outFormatCtx);

 /* Close the output */
 avformat_free_context(outFormatCtx);

 avcodec_free_context(&codecContext);
 av_frame_free(&frame);
 av_packet_free(&pkt);

 return 0;
}


I searched online for how to add "sprop-parameter-sets", but none of the methods I found worked. I also used Wireshark to capture packets, and during the communication there was still no "sprop-parameter-sets". Here is one of the methods I tried:


AVDictionary *opt = NULL;
 av_dict_set(&opt, "sprop-parameter-sets", "asdgasdfs", 0);
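For context: sprop-parameter-sets in the SDP is derived from the H.264 extradata (the SPS/PPS) attached to the stream, not from metadata dictionaries, which is why the av_dict_set attempts above have no effect. A sketch of the usual approach, assuming FFmpeg 6.x: request global headers before avcodec_open2(), then copy the encoder parameters, including extradata, into the stream before writing the header.

/* Sketch (untested): make the RTSP muxer emit sprop-parameter-sets.
 * The fmtp line in the SDP is built from the SPS/PPS stored in the
 * encoder's extradata, so the encoder must be asked for global headers
 * and the extradata must reach st->codecpar before the header is written. */
if (outFormatCtx->oformat->flags & AVFMT_GLOBALHEADER)
    codecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

ret = avcodec_open2(codecContext, codec, &opt);
if (ret < 0) {
    fprintf(stderr, "Could not open codec: %s\n", av_err2str(ret));
    exit(1);
}

/* Copy parameters *from* the context *to* the stream. The code above calls
 * avcodec_parameters_to_context(), which goes the other way and overwrites
 * the encoder settings with the stream's still-empty parameters. */
ret = avcodec_parameters_from_context(st->codecpar, codecContext);
if (ret < 0) {
    fprintf(stderr, "Could not copy codec parameters: %s\n", av_err2str(ret));
    exit(1);
}

/* avformat_write_header() / av_sdp_create() can now produce an SDP whose
 * fmtp line carries sprop-parameter-sets derived from the extradata. */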