
Other articles (88)
-
Websites made with MediaSPIP
2 May 2011 — This page lists some websites based on MediaSPIP.
-
Use, discuss, criticize
13 April 2011 — Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
-
MediaSPIP v0.2
21 June 2013 — MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, as announced here.
The zip file provided here contains only the MediaSPIP sources, in standalone form.
As with the previous version, all software dependencies must be installed manually on the server.
If you wish to use this archive for a farm-mode installation, further modifications will also be required (...)
On other sites (13690)
-
Web Analytics : The Quick Start Guide
25 January 2024, by Erin
-
How to Stream RTP (IP camera) Into React App setup
10 November 2024, by sharon2469 — I am trying to transfer a live broadcast from an IP camera, or any other RTP/RTSP source, into my React application. BUT IT MUST BE LIVE.


My setup at the moment is:


IP Camera -> (RTP) -> FFmpeg -> (udp) -> Server(nodeJs) -> (WebRTC) -> React app


In the current setup there is almost no delay, but there are a few things here that I can't avoid and don't understand why, so here are my questions:


1) First, is the setup even correct? And is this the only way to stream RTP video into a web app?


2) Is it possible to avoid re-encoding the stream? The RTP transmission already arrives as H.264, so I shouldn't really need to run the following command:


return spawn('ffmpeg', [
  '-re',                         // Read input at its native frame rate (important for live streaming)
  '-probesize', '32',            // Set probing size to 32 bytes (32 is the minimum)
  '-analyzeduration', '1000000', // Analyze at most 1 second (1,000,000 us) of input
  '-c:v', 'h264',                // Codec of the input video
  '-i', 'rtp://238.0.0.2:48888', // Input stream URL
  '-map', '0:v?',                // Select the video stream from the input, if present
  '-c:v', 'libx264',             // Codec of the output stream
  '-preset', 'ultrafast',        // Faster encoding for lower latency
  '-tune', 'zerolatency',        // Optimize for zero latency
  // '-s', '768x480',            // Adjust the resolution (experiment with values)
  '-f', 'rtp', `rtp://127.0.0.1:${udpPort}` // Output stream URL
]);



As you can see, this command re-encodes to libx264. But if I pass '-c:v', 'copy' instead of '-c:v', 'libx264', FFmpeg throws an error saying it doesn't know how to encode h264 and only knows libx264. Basically, I want to drop the re-encode because there is really no need for it: the stream is already H.264. Are there any recommendations?
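
For reference, here is roughly the stream-copy variant I have in mind (a sketch, assuming the camera really does deliver H.264; the helper name createCopyFFmpegProcess is mine):

// Sketch of a copy-based variant (assumption: the RTP source is H.264).
// '-c:v copy' is an output option; the input-side '-c:v h264' hint is removed,
// since nothing needs to be decoded when the bitstream is forwarded as-is.
const createCopyFFmpegProcess = () =>
  spawn('ffmpeg', [
    '-probesize', '32',
    '-analyzeduration', '1000000',
    '-i', 'rtp://238.0.0.2:48888',            // input stream URL
    '-map', '0:v?',                           // select the video stream, if present
    '-c:v', 'copy',                           // forward H.264 without re-encoding
    '-f', 'rtp', `rtp://127.0.0.1:${udpPort}` // output stream URL
  ]);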


3) I thought about dropping FFmpeg completely, but the RTP packets arrive at 1200+ bytes while WebRTC is limited to about 1280 bytes per packet. Is there a way to handle this mismatch without damaging the video, and is it worth going down that road? I guess the whole jitter-buffer story comes into play here as well.
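
One idea from FFmpeg's protocol documentation (so far only an assumption on my part, not something I have verified end-to-end) is to cap the packet size at the RTP muxer so the packets already fit the WebRTC budget:

// pkt_size is a documented option of FFmpeg's RTP protocol; appending it to the
// output URL should keep emitted RTP packets at or below 1200 bytes.
const rtpOut = `rtp://127.0.0.1:${udpPort}?pkt_size=1200`; // hypothetical capped output
// ...used as the last spawn() arguments: '-f', 'rtp', rtpOut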


This is my server-side code (this is just test code):


import {
 MediaStreamTrack,
 randomPort,
 RTCPeerConnection,
 RTCRtpCodecParameters,
 RtpPacket,
} from 'werift'
import {Server} from "ws";
import {createSocket} from "dgram";
import {spawn} from "child_process";
import LoggerFactory from "./logger/loggerFactory";

//

const log = LoggerFactory.getLogger('ServerMedia')

// Websocket server -> WebRTC
const serverPort = 8888
const server = new Server({port: serverPort});
log.info(`Server Media started on port: ${serverPort}`);

// UDP server -> ffmpeg
const udpPort = 48888
const udp = createSocket("udp4");
// udp.bind(udpPort, () => {
// udp.addMembership("238.0.0.2");
// })
udp.bind(udpPort)
log.info(`UDP port: ${udpPort}`)


const createFFmpegProcess = () => {
  log.info(`Start ffmpeg process`)
  return spawn('ffmpeg', [
    '-re',                         // Read input at its native frame rate (important for live streaming)
    '-probesize', '32',            // Set probing size to 32 bytes (32 is the minimum)
    '-analyzeduration', '1000000', // Analyze at most 1 second (1,000,000 us) of input
    '-c:v', 'h264',                // Codec of the input video
    '-i', 'rtp://238.0.0.2:48888', // Input stream URL
    '-map', '0:v?',                // Select the video stream from the input, if present
    '-c:v', 'libx264',             // Codec of the output stream
    '-preset', 'ultrafast',        // Faster encoding for lower latency
    '-tune', 'zerolatency',        // Optimize for zero latency
    // '-s', '768x480',            // Adjust the resolution (experiment with values)
    '-f', 'rtp', `rtp://127.0.0.1:${udpPort}` // Output stream URL
  ]);
}

let ffmpegProcess = createFFmpegProcess();


const attachFFmpegListeners = () => {
  // Capture standard output and print it
  ffmpegProcess.stdout.on('data', (data) => {
    log.info(`FFMPEG process stdout: ${data}`);
  });

  // Capture standard error and print it
  ffmpegProcess.stderr.on('data', (data) => {
    console.error(`ffmpeg stderr: ${data}`);
  });

  // Listen for the exit event
  ffmpegProcess.on('exit', (code, signal) => {
    if (code !== null) {
      log.info(`ffmpeg process exited with code ${code}`);
    } else if (signal !== null) {
      log.info(`ffmpeg process killed with signal ${signal}`);
    }
  });
};


attachFFmpegListeners();


server.on("connection", async (socket) => {
  const payloadType = 96; // Dynamic RTP payload type assigned to H264 in the SDP offer/answer exchange
  // Create a peer connection with the codec parameters set in advance.
  const pc = new RTCPeerConnection({
    codecs: {
      audio: [],
      video: [
        new RTCRtpCodecParameters({
          mimeType: "video/H264",
          clockRate: 90000, // 90000 is the default clock rate for H264
          payloadType: payloadType,
        }),
      ],
    },
  });

  const track = new MediaStreamTrack({kind: "video"});

  udp.on("message", (data) => {
    console.log(data)
    const rtp = RtpPacket.deSerialize(data);
    rtp.header.payloadType = payloadType;
    track.writeRtp(rtp);
  });

  udp.on("error", (err) => {
    console.log(err)
  });

  udp.on("close", () => {
    console.log("close")
  });

  pc.addTransceiver(track, {direction: "sendonly"});

  await pc.setLocalDescription(await pc.createOffer());
  const sdp = JSON.stringify(pc.localDescription);
  socket.send(sdp);

  socket.on("message", (data: any) => {
    if (data.toString() === 'resetFFMPEG') {
      ffmpegProcess.kill('SIGINT');
      log.info(`FFMPEG process killed`)
      setTimeout(() => {
        ffmpegProcess = createFFmpegProcess();
        attachFFmpegListeners();
      }, 5000)
    } else {
      pc.setRemoteDescription(JSON.parse(data));
    }
  });
});



And this is the frontend:





 
 
<!DOCTYPE html>
<html>
  <head>
    <script
      crossorigin
      src="https://unpkg.com/react@16/umd/react.development.js"
    ></script>
    <script
      crossorigin
      src="https://unpkg.com/react-dom@16/umd/react-dom.development.js"
    ></script>
    <script
      crossorigin
      src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.34/browser.min.js"
    ></script>
    <script src="https://cdn.jsdelivr.net/npm/babel-regenerator-runtime@6.5.0/runtime.min.js"></script>
  </head>
  <body>
    <div id="app1"></div>
    <script type="text/babel">
      let rtc;

      const App = () => {
        const [log, setLog] = React.useState([]);
        const videoRef = React.useRef();
        const socket = new WebSocket("ws://localhost:8888");
        const [peer, setPeer] = React.useState(null); // Add state to keep track of the peer connection

        React.useEffect(() => {
          (async () => {
            await new Promise((r) => (socket.onopen = r));
            console.log("open websocket");

            const handleOffer = async (offer) => {
              console.log("new offer", offer.sdp);

              const updatedPeer = new RTCPeerConnection({
                iceServers: [],
                sdpSemantics: "unified-plan",
              });

              updatedPeer.onicecandidate = ({ candidate }) => {
                if (!candidate) {
                  const sdp = JSON.stringify(updatedPeer.localDescription);
                  console.log(sdp);
                  socket.send(sdp);
                }
              };

              updatedPeer.oniceconnectionstatechange = () => {
                console.log(
                  "oniceconnectionstatechange",
                  updatedPeer.iceConnectionState
                );
              };

              updatedPeer.ontrack = (e) => {
                console.log("ontrack", e);
                videoRef.current.srcObject = e.streams[0];
              };

              await updatedPeer.setRemoteDescription(offer);
              const answer = await updatedPeer.createAnswer();
              await updatedPeer.setLocalDescription(answer);

              setPeer(updatedPeer);
            };

            socket.onmessage = (ev) => {
              const data = JSON.parse(ev.data);
              if (data.type === "offer") {
                handleOffer(data);
              } else if (data.type === "resetFFMPEG") {
                // Handle the resetFFMPEG message
                console.log("FFmpeg reset requested");
              }
            };
          })();
        }, []); // Run once on mount

        const sendRequestToResetFFmpeg = () => {
          socket.send("resetFFMPEG");
        };

        return (
          <div>
            Video:
            <video ref={videoRef} autoPlay muted />
            <button onClick={() => sendRequestToResetFFmpeg()}>Reset FFMPEG</button>
          </div>
        );
      };

      ReactDOM.render(<App />, document.getElementById("app1"));
    </script>
  </body>
</html>





-
Problems with outputting stream format as RTMP via FFmpeg C-API
9 January 2024, by dongrixinyu — I am using FFmpeg's C API to push video streams (rtmp://....) into an SRS server.

The input stream is an MP4 file named juren-30s.mp4.

The output stream is also an MP4 file, named juren-30s-5.mp4.

My piece of code (see further down) works fine for the following pipeline: mp4 -> demux -> decode -> rgb images -> encode -> mux -> mp4.

Problem:


When I changed the output stream to an online RTMP URL such as rtmp://ip:port/live/stream_nb_23 (just an example; adapt it to your server and rules), the result was a corrupted stream: mp4 -> rtmp(flv).

What I've tried:


Changing the output format

I changed the output format parameter to flv when initializing it via avformat_alloc_output_context2, but this didn't help.
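
For reference, the change looked roughly like this (a sketch; the URL is a placeholder):

// Sketch of the attempted change, mirroring the Main.c call: for an RTMP output
// the FLV muxer is selected explicitly, since there is no file extension to guess from.
const char *out_url = "rtmp://ip:port/live/stream_nb_23"; // placeholder URL
err = avformat_alloc_output_context2(&fmt_ctx_out, NULL, "flv", out_url);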

Debugging the output

When I executed ffprobe rtmp://ip:port/live/xxxxxxx, I got the following errors and don't know why:

[h264 @ 0x55a925e3ba80] luma_log2_weight_denom 12 is out of range
[h264 @ 0x55a925e3ba80] Missing reference picture, default is 2
[h264 @ 0x55a925e3ba80] concealing 8003 DC, 8003 AC, 8003 MV errors in P frame
[h264 @ 0x55a925e3ba80] QP 4294966938 out of range
[h264 @ 0x55a925e3ba80] decode_slice_header error
[h264 @ 0x55a925e3ba80] no frame!
[h264 @ 0x55a925e3ba80] luma_log2_weight_denom 21 is out of range
[h264 @ 0x55a925e3ba80] luma_log2_weight_denom 10 is out of range
[h264 @ 0x55a925e3ba80] chroma_log2_weight_denom 12 is out of range
[h264 @ 0x55a925e3ba80] Missing reference picture, default is 0
[h264 @ 0x55a925e3ba80] decode_slice_header error
[h264 @ 0x55a925e3ba80] QP 4294967066 out of range
[h264 @ 0x55a925e3ba80] decode_slice_header error
[h264 @ 0x55a925e3ba80] no frame!
[h264 @ 0x55a925e3ba80] QP 341 out of range
[h264 @ 0x55a925e3ba80] decode_slice_header error



I am confused about the difference between MP4 and RTMP, and about how to use the FFmpeg C API to produce a correctly formatted output stream.


Besides, I also want to learn how to convert video and audio streams into other formats using the FFmpeg C API, such as flv, ts, rtsp, etc.
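
As far as I understand (an assumption on my part, not verified for every case), the same call selects those muxers by their short names:

AVFormatContext *oc = NULL;
// Sketch: selecting different muxers by short name (URLs are placeholders).
avformat_alloc_output_context2(&oc, NULL, "flv", "rtmp://host/app/stream");  // FLV over RTMP
// or:
avformat_alloc_output_context2(&oc, NULL, "mpegts", "udp://127.0.0.1:1234"); // MPEG-TS over UDP
avformat_alloc_output_context2(&oc, NULL, "rtsp", "rtsp://host:8554/stream"); // RTSP output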

Code to reproduce the problem:


-
I have also put the project files (code, video files) on GitHub.
-
The C code shown below can be found at Main.c; it is a minimal version that reproduces the problem and can be compiled and run directly.








So how can I make this code output to RTMP without ending up with an unplayable video?


#include <stdio.h>
#include "libavformat/avformat.h"
int main()
{
 int ret = 0; int err;

 //Open input file
 char filename[] = "juren-30s.mp4";
 AVFormatContext *fmt_ctx = avformat_alloc_context();
 if (!fmt_ctx) {
 printf("error code %d \n",AVERROR(ENOMEM));
 return ENOMEM;
 }
 if((err = avformat_open_input(&fmt_ctx, filename,NULL,NULL)) < 0){
 printf("can not open file %d \n",err);
 return err;
 }

 //Open the decoder
 AVCodecContext *avctx = avcodec_alloc_context3(NULL);
 ret = avcodec_parameters_to_context(avctx, fmt_ctx->streams[0]->codecpar);
 if (ret < 0){
 printf("error code %d \n",ret);
 return ret;
 }
 AVCodec *codec = avcodec_find_decoder(avctx->codec_id);
 if ((ret = avcodec_open2(avctx, codec, NULL)) < 0) {
 printf("open codec failed %d \n",ret);
 return ret;
 }

 //Open the output file container
 char filename_out[] = "juren-30s-5.mp4";
 AVFormatContext *fmt_ctx_out = NULL;
 err = avformat_alloc_output_context2(&fmt_ctx_out, NULL, NULL, filename_out);
 if (!fmt_ctx_out) {
 printf("error code %d \n",AVERROR(ENOMEM));
 return ENOMEM;
 }
 //Add all the way to the container context
 AVStream *st = avformat_new_stream(fmt_ctx_out, NULL);
 st->time_base = fmt_ctx->streams[0]->time_base;

 AVCodecContext *enc_ctx = NULL;
 
 AVPacket *pkt = av_packet_alloc();
 AVFrame *frame = av_frame_alloc();
 AVPacket *pkt_out = av_packet_alloc();

 int frame_num = 0; int read_end = 0;
 
 for(;;){
 if( 1 == read_end ){ break;}

 ret = av_read_frame(fmt_ctx, pkt);
 //Skip and do not process audio packets
 if( 1 == pkt->stream_index ){
 av_packet_unref(pkt);
 continue;
 }

 if ( AVERROR_EOF == ret) {
 //After reading the file, the data and size of pkt should be null at this time
 avcodec_send_packet(avctx, NULL);
 }else {
 if( 0 != ret){
 printf("read error code %d \n",ret);
 return ENOMEM;
 }else{
 retry:
 if (avcodec_send_packet(avctx, pkt) == AVERROR(EAGAIN)) {
 printf("Receive_frame and send_packet both returned EAGAIN, which is an API violation.\n");
 //Here you can consider sleeping for 0.1 seconds and returning EAGAIN. This is usually because there is a bug in ffmpeg's internal API.
 goto retry;
 }
 //Release the encoded data in pkt
 av_packet_unref(pkt);
 }

 }

 //The loop keeps reading data from the decoder until there is no more data to read.
 for(;;){
 //Read AVFrame
 ret = avcodec_receive_frame(avctx, frame);
 /* Release the YUV data in the frame.
 * Since av_frame_unref is already called inside avcodec_receive_frame, the following line can stay commented out;
 * we don't need to manually unref this AVFrame.
 */
 //av_frame_unref(frame);

 if( AVERROR(EAGAIN) == ret ){
 //Prompt EAGAIN means the decoder needs more AVPackets
 //Jump out of the first layer of for and let the decoder get more AVPackets
 break;
 }else if( AVERROR_EOF == ret ){
 /* The prompt AVERROR_EOF means that an AVPacket with both data and size NULL has been sent to the decoder before.
 * Sending NULL AVPacket prompts the decoder to flush out all cached frames.
 * Usually a NULL AVPacket is sent only after reading the input file, or when another video stream needs to be decoded with an existing decoder.
 *
 * */

 /* Send null AVFrame to the encoder and let the encoder flush out the remaining data.
 * */
 ret = avcodec_send_frame(enc_ctx, NULL);
 for(;;){
 ret = avcodec_receive_packet(enc_ctx, pkt_out);
 //EAGAIN should be impossible here; if it occurs anyway, exit directly.
 if (ret == AVERROR(EAGAIN)){
 printf("avcodec_receive_packet error code %d \n",ret);
 return ret;
 }
 
 if ( AVERROR_EOF == ret ){ break; }
 
 //Encode the AVPacket, print some information first, and then write it to the file.
 printf("pkt_out size : %d \n",pkt_out->size);
 //Set the stream_index of AVPacket so that you know which stream it is.
 pkt_out->stream_index = st->index;
 //Convert the time base of AVPacket to the time base of the output stream.
 pkt_out->pts = av_rescale_q_rnd(pkt_out->pts, fmt_ctx->streams[0]->time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
 pkt_out->dts = av_rescale_q_rnd(pkt_out->dts, fmt_ctx->streams[0]->time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
 pkt_out->duration = av_rescale_q_rnd(pkt_out->duration, fmt_ctx->streams[0]->time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);


 ret = av_interleaved_write_frame(fmt_ctx_out, pkt_out);
 if (ret < 0) {
 printf("av_interleaved_write_frame failed %d \n",ret);
 return ret;
 }
 av_packet_unref(pkt_out);
 }
 av_write_trailer(fmt_ctx_out);
 //Jump out of the second layer of for, the file has been decoded.
 read_end = 1;
 break;
 }else if( ret >= 0 ){
 //Only when a frame is decoded can the encoder be initialized.
 if( NULL == enc_ctx ){
 //Open the encoder and set encoding information.
 AVCodec *encode = avcodec_find_encoder(AV_CODEC_ID_H264);
 enc_ctx = avcodec_alloc_context3(encode);
 enc_ctx->codec_type = AVMEDIA_TYPE_VIDEO;
 enc_ctx->bit_rate = 400000;
 enc_ctx->framerate = avctx->framerate;
 enc_ctx->gop_size = 30;
 enc_ctx->max_b_frames = 10;
 enc_ctx->profile = FF_PROFILE_H264_MAIN;
 
 /*
 * In fact, the following information is also available from the container, and the encoder could be opened
 * directly from the container parameters at the start.
 * I take these encoder parameters from the AVFrame instead, because the decoded frame is what is final:
 * an AVFrame may have passed through a filter that transforms this information (though this article uses no filters).
 */
 
 //The time base of the encoder should be the time base of AVFrame, because AVFrame is the input. The time base of AVFrame is the time base of the stream.
 enc_ctx->time_base = fmt_ctx->streams[0]->time_base;
 enc_ctx->width = fmt_ctx->streams[0]->codecpar->width;
 enc_ctx->height = fmt_ctx->streams[0]->codecpar->height;
 enc_ctx->sample_aspect_ratio = st->sample_aspect_ratio = frame->sample_aspect_ratio;
 enc_ctx->pix_fmt = frame->format;
 enc_ctx->color_range = frame->color_range;
 enc_ctx->color_primaries = frame->color_primaries;
 enc_ctx->color_trc = frame->color_trc;
 enc_ctx->colorspace = frame->colorspace;
 enc_ctx->chroma_sample_location = frame->chroma_location;

 /* Note that field_order differs from video to video; it is hard-coded here
 * because the video used in this article is AV_FIELD_PROGRESSIVE.
 * A production environment needs to handle different videos.
 */
 enc_ctx->field_order = AV_FIELD_PROGRESSIVE;
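
 /* Note (an assumption worth checking, not a confirmed diagnosis): streaming muxers
 * such as FLV over RTMP set AVFMT_GLOBALHEADER, i.e. they expect SPS/PPS in the
 * codec extradata rather than in-band. If so, the encoder must be asked for global
 * headers before avcodec_open2, and avcodec_parameters_from_context should run
 * after avcodec_open2 so the extradata actually reaches st->codecpar. Missing
 * extradata would match the kind of ffprobe errors shown above.
 */
 if (fmt_ctx_out->oformat->flags & AVFMT_GLOBALHEADER)
 enc_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;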

 /* Now we need to copy the encoder parameters to the stream. When decoding, assign parameters from the stream to the decoder.
 * Now let’s do it in reverse.
 * */
 ret = avcodec_parameters_from_context(st->codecpar,enc_ctx);
 if (ret < 0){
 printf("error code %d \n",ret);
 return ret;
 }
 if ((ret = avcodec_open2(enc_ctx, encode, NULL)) < 0) {
 printf("open codec failed %d \n",ret);
 return ret;
 }

 //Formally open the output file
 if ((ret = avio_open2(&fmt_ctx_out->pb, filename_out, AVIO_FLAG_WRITE,&fmt_ctx_out->interrupt_callback,NULL)) < 0) {
 printf("avio_open2 fail %d \n",ret);
 return ret;
 }

 //Write the file header first.
 ret = avformat_write_header(fmt_ctx_out,NULL);
 if (ret < 0) {
 printf("avformat_write_header fail %d \n",ret);
 return ret;
 }

 }

 //Send AVFrame to the encoder, and then continuously read AVPacket
 ret = avcodec_send_frame(enc_ctx, frame);
 if (ret < 0) {
 printf("avcodec_send_frame fail %d \n",ret);
 return ret;
 }
 for(;;){
 ret = avcodec_receive_packet(enc_ctx, pkt_out);
 if (ret == AVERROR(EAGAIN)){ break; }
 
 if (ret < 0){
 printf("avcodec_receive_packet fail %d \n",ret);
 return ret;
 }
 
 //Encode the AVPacket, print some information first, and then write it to the file.
 printf("pkt_out size : %d \n",pkt_out->size);

 //Set the stream_index of AVPacket so that you know which stream it is.
 pkt_out->stream_index = st->index;
 
 //Convert the time base of AVPacket to the time base of the output stream.
 pkt_out->pts = av_rescale_q_rnd(pkt_out->pts, fmt_ctx->streams[0]->time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
 pkt_out->dts = av_rescale_q_rnd(pkt_out->dts, fmt_ctx->streams[0]->time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
 pkt_out->duration = av_rescale_q_rnd(pkt_out->duration, fmt_ctx->streams[0]->time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);

 ret = av_interleaved_write_frame(fmt_ctx_out, pkt_out);
 if (ret < 0) {
 printf("av_interleaved_write_frame failed %d \n",ret);
 return ret;
 }
 av_packet_unref(pkt_out);
 }

 }
 else{ printf("other fail \n"); return ret;}
 }
 }
 
 av_frame_free(&frame); av_packet_free(&pkt); av_packet_free(&pkt_out);
 
 //Close the encoder and decoder.
 avcodec_close(avctx); avcodec_close(enc_ctx);

 //Release container memory.
 avformat_free_context(fmt_ctx);

 //Must call avio_closep here, otherwise buffered data may not be flushed and the output file will be 0 KB.
 avio_closep(&fmt_ctx_out->pb);
 avformat_free_context(fmt_ctx_out);
 printf("done \n");

 return 0;
}



This problem has been haunting me for about three weeks and I still have no idea where the key bug lies. I would really appreciate it if any FFmpeg expert could help me.

