
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011
Updated: October 2011
Language: English
Type: Text
Other articles (12)
-
Permissions overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page -
Farm management
2 March 2010
The farm as a whole is managed by "super admins".
Some settings can be adjusted to regulate the needs of the different channels.
To begin with, it uses the "Gestion de mutualisation" plugin -
The farm's regular Cron tasks
1 December 2010
Managing the farm involves running, at regular intervals, several repetitive tasks known as Cron tasks.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of every instance in the shared hosting on a regular basis. Coupled with a system Cron on the central site of the shared hosting, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)
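As an illustration of that system Cron, a single crontab entry on the central server is usually enough to trigger the super Cron; the URL and the use of curl below are assumptions for the sketch, not taken from MediaSPIP's documentation.

# Hypothetical crontab entry on the central site of the farm: request a page
# every minute so that the super Cron (gestion_mutu_super_cron) runs and, in
# turn, wakes up the Cron of every instance in the shared hosting.
* * * * * curl -s -o /dev/null http://central-site.example.org/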
On other sites (2604)
-
Using ffserver to stream older IP cam MJPEG to RTSP
26 May 2016, by tmar89
I have an older Sony IP camera that has an MJPEG stream. I need to connect it to an NVR that only accepts ONVIF or RTSP, and I'm trying to use ffserver and ffmpeg to convert the MJPEG stream to RTSP, but it's not working. Does anyone have an idea of what I may be doing wrong? I saw an error about an unsupported protocol in the attempted playback.
Here is my ffserver config:

Port 8090
RTSPPort 5544
BindAddress 0.0.0.0
RTSPBindAddress 0.0.0.0
MaxClients 100
MaxBandwidth 10000
<Feed feed27.ffm>
File /tmp/feed27.ffm
FileMaxSize 5M
ACL allow 127.0.0.1
</Feed>
<Stream stream27.mpg>
Format rtp
Feed feed27.ffm
NoAudio
VideoCodec mjpeg
VideoFrameRate 30
VideoSize 736x480
</Stream>

And here is the ffmpeg command I am using:
[tm@tele ffserver-rtsp]# ffmpeg -f mjpeg -r 30 -s 736x480 -i http://[CAMIP]/image http://localhost:8090/feed27.ffm
FFmpeg version 0.6.5, Copyright (c) 2000-2010 the FFmpeg developers
built on Jan 29 2012 17:52:15 with gcc 4.4.5 20110214 (Red Hat 4.4.5-6)
configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --disable-avisynth --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --enable-avfilter --enable-avfilter-lavf --enable-libdc1394 --enable-libdirac --enable-libfaac --enable-libfaad --enable-libfaadbin --enable-libgsm --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-vdpau --enable-version3 --enable-x11grab
libavutil 50.15. 1 / 50.15. 1
libavcodec 52.72. 2 / 52.72. 2
libavformat 52.64. 2 / 52.64. 2
libavdevice 52. 2. 0 / 52. 2. 0
libavfilter 1.19. 0 / 1.19. 0
libswscale 0.11. 0 / 0.11. 0
libpostproc 51. 2. 0 / 51. 2. 0
[mjpeg @ 0x1ece670]Estimating duration from bitrate, this may be inaccurate
Input #0, mjpeg, from 'http://[CAMIP]/image':
Duration: N/A, bitrate: N/A
Stream #0.0: Video: mjpeg, yuvj422p, 736x480, 30 fps, 30 tbr, 1200k tbn, 30 tbc
Output #0, ffm, to 'http://localhost:8090/feed27.ffm':
Metadata:
encoder : Lavf52.64.2
Stream #0.0: Video: mjpeg, yuvj420p, 736x480, q=2-31, 200 kb/s, 1000k tbn, 30 tbc
Stream mapping:
Stream #0.0 -> #0.0
Press [q] to stop encoding
[mjpeg @ 0x222d110]rc buffer underflow
frame= 640 fps= 17 q=31.4 size= 12884kB time=21.33 bitrate=4947.5kbits/s

When I use VLC to open the stream, it cannot be found:
Your input can't be opened:
VLC is unable to open the MRL 'rtsp://localhost:5544/stream27.mpg'. Check the log for details.

Finally, using ffplay on the same machine:
[tm@tele tmp]# ffplay rtsp://localhost:5544/stream27.sdp
FFplay version 0.6.5, Copyright (c) 2003-2010 the FFmpeg developers
built on Jan 29 2012 17:52:15 with gcc 4.4.5 20110214 (Red Hat 4.4.5-6)
configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --disable-avisynth --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --enable-avfilter --enable-avfilter-lavf --enable-libdc1394 --enable-libdirac --enable-libfaac --enable-libfaad --enable-libfaadbin --enable-libgsm --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-vdpau --enable-version3 --enable-x11grab
libavutil 50.15. 1 / 50.15. 1
libavcodec 52.72. 2 / 52.72. 2
libavformat 52.64. 2 / 52.64. 2
libavdevice 52. 2. 0 / 52. 2. 0
libavfilter 1.19. 0 / 1.19. 0
libswscale 0.11. 0 / 0.11. 0
libpostproc 51. 2. 0 / 51. 2. 0
ALSA lib pulse.c:229:(pulse_connect) PulseAudio: Unable to connect: Connection refused
rtsp://localhost:5544/stream27.sdp: Protocol not supported

And here was the log from ffserver:
127.0.0.1:5000 - - "PLAY stream27.mpg/streamid=0 RTP/UDP"
[rtp @ 0x721dc0]Unsupported codec 8
127.0.0.1:0 - - "PLAY stream27.mpg/streamid=0 RTP/TCP"
[rtp @ 0x728cb0]Unsupported codec 8
127.0.0.1 - - [SETUP] "rtsp://localhost:5544/stream27.mpg/streamid=0 RTSP/1.0" 200 641 -
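The "Unsupported codec 8" lines are the telling clue here: in the codec table of that FFmpeg 0.6-era build, id 8 corresponds to MJPEG, and its RTP muxer has no MJPEG payloadizer, so ffserver cannot packetize the feed for RTSP playback. A hedged sketch of a stream block that re-encodes to a codec the RTP muxer does support (MPEG-4); the bitrate is a guess and this is untested against that 2012 build:

<Stream stream27.mpg>
Format rtp
Feed feed27.ffm
NoAudio
# MPEG-4 instead of MJPEG, since the rtp muxer can packetize it
VideoCodec mpeg4
VideoBitRate 2000
VideoFrameRate 30
VideoSize 736x480
</Stream>
-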
How to Stream RTP (IP camera) Into React App setup
10 November 2024, by sharon2469
I am trying to transfer a live broadcast from an IP camera, or any other broadcast coming from an RTP/RTSP source, to my React application. BUT IT MUST BE LIVE


My setup at the moment is :


IP Camera -> (RTP) -> FFmpeg -> (udp) -> Server(nodeJs) -> (WebRTC) -> React app


In the current situation there is almost no delay, but there are some things here that I can't avoid and don't understand why, which brings me to my questions:


1) First, is the setup even correct? Is this the only way to stream RTP video into a web app?


2) Is it possible to avoid re-encoding the stream? The RTP transmission already arrives as H.264, so I shouldn't really need to execute the following command:


return spawn('ffmpeg', [
 '-re', // Read input at its native frame rate Important for live-streaming
 '-probesize', '32', // Set probing size to 32 bytes (32 is minimum)
 '-analyzeduration', '1000000', // An input duration of 1 second
 '-c:v', 'h264', // Video codec of input video
 '-i', 'rtp://238.0.0.2:48888', // Input stream URL
 '-map', '0:v?', // Select video from input stream
 '-c:v', 'libx264', // Video codec of output stream
 '-preset', 'ultrafast', // Faster encoding for lower latency
 '-tune', 'zerolatency', // Optimize for zero latency
 // '-s', '768x480', // Adjust the resolution (experiment with values)
 '-f', 'rtp', `rtp://127.0.0.1:${udpPort}` // Output stream URL
]);



As you can see, this command re-encodes to libx264. But if I pass FFmpeg '-c:v copy' instead of '-c:v libx264', it throws an error saying that it doesn't know how to encode h264 and only knows libx264. Basically, I want to stop the re-encoding because there is really no need for it; the stream is already encoded as H.264. Are there any recommendations here?
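For context on that error: 'h264' is the name of FFmpeg's H.264 decoder, and there is no encoder by that name (libx264 is the encoder), so asking FFmpeg to encode with 'h264' fails by design; a pass-through instead uses '-c:v copy' on the output side. A minimal sketch, assuming the camera really sends H.264 and that the incoming RTP session is described by a hypothetical input.sdp file, since ffmpeg generally needs an SDP to read a bare RTP stream:

# Sketch: copy the H.264 elementary stream without re-encoding (not verified here)
ffmpeg -protocol_whitelist file,rtp,udp -i input.sdp -c:v copy -f rtp rtp://127.0.0.1:48888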


3) I thought about giving up FFmpeg completely, but the RTP packets arrive at sizes of 1200+ bytes while WebRTC is limited to 1280 bytes. Is there a way to manage this mismatch without damaging the video, and is it worth going down that road? I guess the whole jitter-buffer story comes into play here.
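On the packet-size point, ffmpeg's UDP and RTP outputs accept a pkt_size URL option, so the muxer can be told to keep packets under the roughly 1200-byte payload that WebRTC stacks commonly expect. A hedged variant of the output URL from the command above; the exact safe value depends on the path MTU:

# Sketch: cap outgoing RTP packets at 1100 bytes
ffmpeg -i rtp://238.0.0.2:48888 -c:v libx264 -preset ultrafast -tune zerolatency -f rtp 'rtp://127.0.0.1:48888?pkt_size=1100'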


This is my server-side code (THIS IS JUST TEST CODE):


import {
 MediaStreamTrack,
 randomPort,
 RTCPeerConnection,
 RTCRtpCodecParameters,
 RtpPacket,
} from 'werift'
import {Server} from "ws";
import {createSocket} from "dgram";
import {spawn} from "child_process";
import LoggerFactory from "./logger/loggerFactory";

//

const log = LoggerFactory.getLogger('ServerMedia')

// Websocket server -> WebRTC
const serverPort = 8888
const server = new Server({port: serverPort});
log.info(`Server Media start om port: ${serverPort}`);

// UDP server -> ffmpeg
const udpPort = 48888
const udp = createSocket("udp4");
// udp.bind(udpPort, () => {
// udp.addMembership("238.0.0.2");
// })
udp.bind(udpPort)
log.info(`UDP port: ${udpPort}`)


const createFFmpegProcess = () => {
 log.info(`Start ffmpeg process`)
 return spawn('ffmpeg', [
 '-re', // Read input at its native frame rate Important for live-streaming
 '-probesize', '32', // Set probing size to 32 bytes (32 is minimum)
 '-analyzeduration', '1000000', // An input duration of 1 second
 '-c:v', 'h264', // Video codec of input video
 '-i', 'rtp://238.0.0.2:48888', // Input stream URL
 '-map', '0:v?', // Select video from input stream
 '-c:v', 'libx264', // Video codec of output stream
 '-preset', 'ultrafast', // Faster encoding for lower latency
 '-tune', 'zerolatency', // Optimize for zero latency
 // '-s', '768x480', // Adjust the resolution (experiment with values)
 '-f', 'rtp', `rtp://127.0.0.1:${udpPort}` // Output stream URL
 ]);

}

let ffmpegProcess = createFFmpegProcess();


const attachFFmpegListeners = () => {
 // Capture standard output and print it
 ffmpegProcess.stdout.on('data', (data) => {
 log.info(`FFMPEG process stdout: ${data}`);
 });

 // Capture standard error and print it
 ffmpegProcess.stderr.on('data', (data) => {
 console.error(`ffmpeg stderr: ${data}`);
 });

 // Listen for the exit event
 ffmpegProcess.on('exit', (code, signal) => {
 if (code !== null) {
 log.info(`ffmpeg process exited with code ${code}`);
 } else if (signal !== null) {
 log.info(`ffmpeg process killed with signal ${signal}`);
 }
 });
};


attachFFmpegListeners();


server.on("connection", async (socket) => {
 const payloadType = 96; // It is a numerical value that is assigned to each codec in the SDP offer/answer exchange -> for H264
 // Create a peer connection with the codec parameters set in advance.
 const pc = new RTCPeerConnection({
 codecs: {
 audio: [],
 video: [
 new RTCRtpCodecParameters({
 mimeType: "video/H264",
 clockRate: 90000, // 90000 is the default value for H264
 payloadType: payloadType,
 }),
 ],
 },
 });

 const track = new MediaStreamTrack({kind: "video"});


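 // Note: this 'message' listener is registered once per websocket connection
 // and never removed, so listeners accumulate as clients connect and each
 // one writes the same RTP packet to its own track.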
 udp.on("message", (data) => {
 console.log(data)
 const rtp = RtpPacket.deSerialize(data);
 rtp.header.payloadType = payloadType;
 track.writeRtp(rtp);
 });

 udp.on("error", (err) => {
 console.log(err)

 });

 udp.on("close", () => {
 console.log("close")
 });

 pc.addTransceiver(track, {direction: "sendonly"});

 await pc.setLocalDescription(await pc.createOffer());
 const sdp = JSON.stringify(pc.localDescription);
 socket.send(sdp);

 socket.on("message", (data: any) => {
 if (data.toString() === 'resetFFMPEG') {
 ffmpegProcess.kill('SIGINT');
 log.info(`FFMPEG process killed`)
 setTimeout(() => {
 ffmpegProcess = createFFmpegProcess();
 attachFFmpegListeners();
 }, 5000)
 } else {
 pc.setRemoteDescription(JSON.parse(data));
 }
 });
});



And this is the frontend:





 
 
 <code class="echappe-js"><script&#xA; crossorigin&#xA; src="https://unpkg.com/react@16/umd/react.development.js"&#xA; ></script>

<script&#xA; crossorigin&#xA; src="https://unpkg.com/react-dom@16/umd/react-dom.development.js"&#xA; ></script>

<script&#xA; crossorigin&#xA; src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.34/browser.min.js"&#xA; ></script>

<script src="https://cdn.jsdelivr.net/npm/babel-regenerator-runtime@6.5.0/runtime.min.js"></script>








<script type="text/babel">&#xA; let rtc;&#xA;&#xA; const App = () => {&#xA; const [log, setLog] = React.useState([]);&#xA; const videoRef = React.useRef();&#xA; const socket = new WebSocket("ws://localhost:8888");&#xA; const [peer, setPeer] = React.useState(null); // Add state to keep track of the peer connection&#xA;&#xA; React.useEffect(() => {&#xA; (async () => {&#xA; await new Promise((r) => (socket.onopen = r));&#xA; console.log("open websocket");&#xA;&#xA; const handleOffer = async (offer) => {&#xA; console.log("new offer", offer.sdp);&#xA;&#xA; const updatedPeer = new RTCPeerConnection({&#xA; iceServers: [],&#xA; sdpSemantics: "unified-plan",&#xA; });&#xA;&#xA; updatedPeer.onicecandidate = ({ candidate }) => {&#xA; if (!candidate) {&#xA; const sdp = JSON.stringify(updatedPeer.localDescription);&#xA; console.log(sdp);&#xA; socket.send(sdp);&#xA; }&#xA; };&#xA;&#xA; updatedPeer.oniceconnectionstatechange = () => {&#xA; console.log(&#xA; "oniceconnectionstatechange",&#xA; updatedPeer.iceConnectionState&#xA; );&#xA; };&#xA;&#xA; updatedPeer.ontrack = (e) => {&#xA; console.log("ontrack", e);&#xA; videoRef.current.srcObject = e.streams[0];&#xA; };&#xA;&#xA; await updatedPeer.setRemoteDescription(offer);&#xA; const answer = await updatedPeer.createAnswer();&#xA; await updatedPeer.setLocalDescription(answer);&#xA;&#xA; setPeer(updatedPeer);&#xA; };&#xA;&#xA; socket.onmessage = (ev) => {&#xA; const data = JSON.parse(ev.data);&#xA; if (data.type === "offer") {&#xA; handleOffer(data);&#xA; } else if (data.type === "resetFFMPEG") {&#xA; // Handle the resetFFMPEG message&#xA; console.log("FFmpeg reset requested");&#xA; }&#xA; };&#xA; })();&#xA; }, []); // Added socket as a dependency to the useEffect hook&#xA;&#xA; const sendRequestToResetFFmpeg = () => {&#xA; socket.send("resetFFMPEG");&#xA; };&#xA;&#xA; return (&#xA; <div>&#xA; Video: &#xA; <video ref={videoRef} autoPlay muted />&#xA; <button onClick={() => sendRequestToResetFFmpeg()}>Reset FFMPEG</button>&#xA; </div>&#xA; );&#xA; };&#xA;&#xA; ReactDOM.render(<App />, document.getElementById("app1"));&#xA;</script>





-
What's the purpose of the END field in files containing metadata for ffmpeg?
15 July 2023, by rdrg109
Table of contents

- The context
- Minimal working example
  - no. 1
  - no. 2
  - no. 3
- The questions

The context


I'm aware that it is possible to insert chapter metadata into a video using ffmpeg (link to documentation). Minimal working example no. 1 shows this.


When I insert chapters that are not consecutive, ffmpeg doesn't use the end time that is specified in the metadata file through the END field. Minimal working example no. 2 shows this. In minimal working example no. 3, the END of every chapter is set 1 millisecond after its START, yet we get the same result as in minimal working example no. 2. What is worth noting is that the END of the last chapter is the only one that is applied correctly.



Minimal working example




no. 1


Consider the following ffmpeg metadata file. It creates chapters in the following time ranges:

- From millisecond 0 to millisecond 9999, the chapter is "This is my chapter one"
- From millisecond 10000 to millisecond 19999, the chapter is "This is my chapter two"
- From millisecond 20000 to millisecond 29999, the chapter is "This is my chapter three"

;FFMETADATA1
title=The title of my video

[CHAPTER]
TIMEBASE=1/1000
START=0
END=10000
title=This is my chapter one

[CHAPTER]
TIMEBASE=1/1000
START=10000
END=20000
title=This is my chapter two

[CHAPTER]
TIMEBASE=1/1000
START=20000
END=30000
title=This is my chapter three



The following command creates a simple video and inserts the metadata shown above into that video.


ffmpeg -v error -y -f lavfi -i 'testsrc=d=120:size=100x100' -i metadata.txt -map_metadata 1 output.mp4



The following command shows information about the metadata in the video file that was created by the previous command.


ffprobe -v error -show_chapters -print_format json output.mp4 | jq



{
 "chapters": [
 {
 "id": 0,
 "time_base": "1/1000",
 "start": 0,
 "start_time": "0.000000",
 "end": 10000,
 "end_time": "10.000000",
 "tags": {
 "title": "This is my chapter one"
 }
 },
 {
 "id": 1,
 "time_base": "1/1000",
 "start": 10000,
 "start_time": "10.000000",
 "end": 20000,
 "end_time": "20.000000",
 "tags": {
 "title": "This is my chapter two"
 }
 },
 {
 "id": 2,
 "time_base": "1/1000",
 "start": 20000,
 "start_time": "20.000000",
 "end": 30000,
 "end_time": "30.000000",
 "tags": {
 "title": "This is my chapter three"
 }
 }
 ]
}



No problem so far.




no. 2


;FFMETADATA1
title=The title of my video

[CHAPTER]
TIMEBASE=1/1000
START=0
END=5000
title=This is my chapter one

[CHAPTER]
TIMEBASE=1/1000
START=10000
END=15000
title=This is my chapter two

[CHAPTER]
TIMEBASE=1/1000
START=20000
END=25000
title=This is my chapter three



The following command creates a simple video and inserts the metadata shown above into that video.


ffmpeg -v error -y -f lavfi -i 'testsrc=d=120:size=100x100' -i metadata.txt -map_metadata 1 output.mp4



The following command shows information about the metadata in the video file that was created by the previous command.


ffprobe -v error -show_chapters -print_format json output.mp4 | jq



{
 "chapters": [
 {
 "id": 0,
 "time_base": "1/1000",
 "start": 0,
 "start_time": "0.000000",
 "end": 10000,
 "end_time": "10.000000",
 "tags": {
 "title": "This is my chapter one"
 }
 },
 {
 "id": 1,
 "time_base": "1/1000",
 "start": 10000,
 "start_time": "10.000000",
 "end": 20000,
 "end_time": "20.000000",
 "tags": {
 "title": "This is my chapter two"
 }
 },
 {
 "id": 2,
 "time_base": "1/1000",
 "start": 20000,
 "start_time": "20.000000",
 "end": 25000,
 "end_time": "25.000000",
 "tags": {
 "title": "This is my chapter three"
 }
 }
 ]
}



From the output above, we can notice the following:


- The end time of the chapter with title "This is my chapter one" is 10000, but it should be 5000, since that is the value we specified in the END field
- The end time of the chapter with title "This is my chapter two" is 20000, but it should be 15000, since that is the value we specified in the END field
- The end time of the last chapter is the only one that matches the value we wrote in the metadata file

I consider this unexpected behavior because ffmpeg should use the end time that we have specified in the metadata file.




no. 3


;FFMETADATA1
title=The title of my video

[CHAPTER]
TIMEBASE=1/1000
START=0
END=1
title=This is my chapter one

[CHAPTER]
TIMEBASE=1/1000
START=10000
END=10001
title=This is my chapter two

[CHAPTER]
TIMEBASE=1/1000
START=20000
END=20001
title=This is my chapter three



The following command creates a simple video and inserts the metadata shown above into that video.


ffmpeg -v error -y -f lavfi -i 'testsrc=d=120:size=100x100' -i metadata.txt -map_metadata 1 output.mp4



The following command shows information about the metadata in the video file that was created by the previous command.


ffprobe -v error -show_chapters -print_format json output.mp4 | jq



{
 "chapters": [
 {
 "id": 0,
 "time_base": "1/1000",
 "start": 0,
 "start_time": "0.000000",
 "end": 10000,
 "end_time": "10.000000",
 "tags": {
 "title": "This is my chapter one"
 }
 },
 {
 "id": 1,
 "time_base": "1/1000",
 "start": 10000,
 "start_time": "10.000000",
 "end": 20000,
 "end_time": "20.000000",
 "tags": {
 "title": "This is my chapter two"
 }
 },
 {
 "id": 2,
 "time_base": "1/1000",
 "start": 20000,
 "start_time": "20.000000",
 "end": 20001,
 "end_time": "20.001000",
 "tags": {
 "title": "This is my chapter three"
 }
 }
 ]
}



From the output above, we can notice the following:


- The end time of the chapter with title "This is my chapter one" is 10000, but it should be 1, since that is the value we specified in the END field in the metadata file
- The end time of the chapter with title "This is my chapter two" is 20000, but it should be 10001, since that is the value we specified in the END field in the metadata file
- The end time of the last chapter is the only one that matches the value we wrote in the metadata file

I consider this unexpected behavior because ffmpeg should use the end time that we have specified in the metadata file.




The questions


In minimal working examples no. 2 and no. 3, we could see that although the metadata file sets END for each chapter, a different time is used (except for the last chapter). My questions are:

- What's the purpose of having a field called END when it only serves a purpose in the last chapter?
- Given that ffmpeg only applies the correct END value to the last chapter, wouldn't a better name for END be END_ONLY_FOR_LAST_CHAPTER?
- Am I doing something wrong that is causing ffmpeg to use a different value for END in all chapters but the last one?
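A plausible reading of this behavior, offered as a hedged sketch rather than a confirmed answer: the MP4 muxer stores chapters as a QuickTime-style chapter track in which chapters are contiguous by construction, so each chapter's end is effectively the next chapter's start, and only the last chapter, having no successor, keeps its END value. Under that assumption, gaps can be preserved by inserting explicit filler chapters, e.g.:

;FFMETADATA1
title=The title of my video

[CHAPTER]
TIMEBASE=1/1000
START=0
END=5000
title=This is my chapter one

[CHAPTER]
; filler chapter covering the gap so the previous END is respected
TIMEBASE=1/1000
START=5000
END=10000
title=(no chapter)

[CHAPTER]
TIMEBASE=1/1000
START=10000
END=15000
title=This is my chapter two

; ...and likewise for the remaining gaps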