
Other articles (63)

  • Publier sur MédiaSpip (Publishing on MediaSPIP)

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

        Distribution name   Version name           Version number
        Debian              Squeeze                6.x.x
        Debian              Wheezy                 7.x.x
        Debian              Jessie                 8.x.x
        Ubuntu              The Precise Pangolin   12.04 LTS
        Ubuntu              The Trusty Tahr        14.04

    If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send us the fixes needed to add it (...)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded to MP4, OGV and WebM (supported by HTML5), with MP4 also served to Flash.
    Audio files are encoded to MP3 and Ogg (supported by HTML5), with MP3 also served to Flash.
    Where possible, text is analyzed to retrieve the data needed for search-engine indexing, and the document is then exported as a series of image files (see the ffmpeg sketch after this list).
    All uploaded files are stored online in their original format, so you can (...)
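
    For illustration only, here is a hedged sketch of ffmpeg invocations that produce the formats listed above. MediaSPIP drives its own encoding pipeline, so the codec choices and settings below are assumptions, not its actual configuration:

        # Video: one HTML5-friendly output per container (MP4 also serves Flash playback)
        ffmpeg -i input.mov -c:v libx264 -c:a aac output.mp4
        ffmpeg -i input.mov -c:v libtheora -c:a libvorbis output.ogv
        ffmpeg -i input.mov -c:v libvpx -c:a libvorbis output.webm
        # Audio: MP3 and Ogg Vorbis (MP3 also serves Flash playback)
        ffmpeg -i input.wav -c:a libmp3lame output.mp3
        ffmpeg -i input.wav -c:a libvorbis output.ogg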

On other sites (9366)

  • How to Stream RTP (IP camera) Into React App setup

    10 November 2024, by sharon2469

    I am trying to stream a live broadcast from an IP camera, or any other RTP/RTSP source, into my React application, and it must stay live.
    My setup at the moment is:
    IP Camera -> (RTP) -> FFmpeg -> (UDP) -> Server (Node.js) -> (WebRTC) -> React app
    In the current situation there is almost no delay, but there are some things here that I can't avoid and don't understand, and here are my questions:
    1) First, is this setup even correct, and is it the only way to stream RTP video into a web app?
    2) Is it possible to avoid re-encoding the stream? The RTP transmission already arrives as H.264, so I should not really need to run the following command:
    return spawn('ffmpeg', [
        '-re',                              // Read input at its native frame rate; important for live streaming
        '-probesize', '32',                 // Set probing size to 32 bytes (32 is the minimum)
        '-analyzeduration', '1000000',      // Analyze up to 1 second of input (value is in microseconds)
        '-c:v', 'h264',                     // Decoder for the input video
        '-i', 'rtp://238.0.0.2:48888',      // Input stream URL
        '-map', '0:v?',                     // Select video from the input stream, if present
        '-c:v', 'libx264',                  // Encoder for the output stream
        '-preset', 'ultrafast',             // Faster encoding for lower latency
        '-tune', 'zerolatency',             // Optimize for zero latency
        // '-s', '768x480',                 // Adjust the resolution (experiment with values)
        '-f', 'rtp', `rtp://127.0.0.1:${udpPort}` // Output stream URL
    ]);

    As you can see in this command, I re-encode to libx264. But if I give FFmpeg the parameter '-c:v copy' instead of '-c:v libx264', FFmpeg throws an error saying that it doesn't know how to encode h264 and only knows libx264. Basically, I want to stop the re-encoding because there is really no need for it: the stream is already encoded as H.264. Are there any recommendations?
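
    For reference, a stream-copy variant of the command above might look like the sketch below (addresses kept from the question; it assumes the camera really delivers H.264). One pitfall worth noting: options placed before -i apply to the input, where '-c:v h264' selects a decoder, while 'copy' is only meaningful as an output option, i.e. after -i; '-re' can also be dropped, since a live RTP input is already paced in real time.

        ffmpeg -probesize 32 -analyzeduration 1000000 \
               -i rtp://238.0.0.2:48888 \
               -map 0:v? \
               -c:v copy \
               -f rtp rtp://127.0.0.1:48888    # i.e. rtp://127.0.0.1:${udpPort} in the code below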

    3) I thought about giving up FFmpeg completely, but the RTP packets arrive at 1200+ bytes while WebRTC is limited to about 1280 bytes. Is there a way to handle these oversized packets without damaging the video, and is it worth going down that path? I guess the whole jitter-buffer story comes into play here as well.
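
    If the only blocker is packet size, ffmpeg's RTP output accepts a pkt_size URL option that caps outgoing packets, so large H.264 NAL units are fragmented by the RTP packetizer instead of exceeding the limit. A sketch, reusing the addresses above:

        ffmpeg -i rtp://238.0.0.2:48888 -map 0:v? -c:v copy \
               -f rtp 'rtp://127.0.0.1:48888?pkt_size=1200'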

    This is my server-side code (this is just test code):

    import {
    MediaStreamTrack,
    randomPort,
    RTCPeerConnection,
    RTCRtpCodecParameters,
    RtpPacket,
} from 'werift'
import {Server} from "ws";
import {createSocket} from "dgram";
import {spawn} from "child_process";
import LoggerFactory from "./logger/loggerFactory";

//

const log = LoggerFactory.getLogger('ServerMedia')

// Websocket server -> WebRTC
const serverPort = 8888
const server = new Server({port: serverPort});
log.info(`Server Media started on port: ${serverPort}`);

// UDP server -> ffmpeg
const udpPort = 48888
const udp = createSocket("udp4");
// udp.bind(udpPort, () => {
//     udp.addMembership("238.0.0.2");
// })
udp.bind(udpPort)
log.info(`UDP port: ${udpPort}`)


const createFFmpegProcess = () => {
    log.info(`Start ffmpeg process`)
    return spawn('ffmpeg', [
        '-re',                              // Read input at its native frame rate; important for live streaming
        '-probesize', '32',                 // Set probing size to 32 bytes (32 is the minimum)
        '-analyzeduration', '1000000',      // Analyze up to 1 second of input (value is in microseconds)
        '-c:v', 'h264',                     // Decoder for the input video
        '-i', 'rtp://238.0.0.2:48888',      // Input stream URL
        '-map', '0:v?',                     // Select video from the input stream, if present
        '-c:v', 'libx264',                  // Encoder for the output stream
        '-preset', 'ultrafast',             // Faster encoding for lower latency
        '-tune', 'zerolatency',             // Optimize for zero latency
        // '-s', '768x480',                 // Adjust the resolution (experiment with values)
        '-f', 'rtp', `rtp://127.0.0.1:${udpPort}` // Output stream URL
    ]);

}

let ffmpegProcess = createFFmpegProcess();


const attachFFmpegListeners = () => {
    // Capture standard output and print it
    ffmpegProcess.stdout.on('data', (data) => {
        log.info(`FFMPEG process stdout: ${data}`);
    });

    // Capture standard error and print it
    ffmpegProcess.stderr.on('data', (data) => {
        console.error(`ffmpeg stderr: ${data}`);
    });

    // Listen for the exit event
    ffmpegProcess.on('exit', (code, signal) => {
        if (code !== null) {
            log.info(`ffmpeg process exited with code ${code}`);
        } else if (signal !== null) {
            log.info(`ffmpeg process killed with signal ${signal}`);
        }
    });
};


attachFFmpegListeners();


server.on("connection", async (socket) => {
    const payloadType = 96; // RTP payload type for H264, as negotiated in the SDP offer/answer exchange (dynamic range 96-127)
    // Create a peer connection with the codec parameters set in advance.
    const pc = new RTCPeerConnection({
        codecs: {
            audio: [],
            video: [
                new RTCRtpCodecParameters({
                    mimeType: "video/H264",
                    clockRate: 90000, // 90000 is the default value for H264
                    payloadType: payloadType,
                }),
            ],
        },
    });

    const track = new MediaStreamTrack({kind: "video"});


    udp.on("message", (data) => {
        console.log(data)
        const rtp = RtpPacket.deSerialize(data);
        rtp.header.payloadType = payloadType;
        track.writeRtp(rtp);
    });

    udp.on("error", (err) => {
        console.log(err)

    });

    udp.on("close", () => {
        console.log("close")
    });

    pc.addTransceiver(track, {direction: "sendonly"});

    await pc.setLocalDescription(await pc.createOffer());
    const sdp = JSON.stringify(pc.localDescription);
    socket.send(sdp);

    socket.on("message", (data: any) => {
        if (data.toString() === 'resetFFMPEG') {
            ffmpegProcess.kill('SIGINT');
            log.info(`FFMPEG process killed`)
            setTimeout(() => {
                ffmpegProcess = createFFmpegProcess();
                attachFFmpegListeners();
            }, 5000)
        } else {
            pc.setRemoteDescription(JSON.parse(data));
        }
    });
});

    And this is the frontend:

<script crossorigin src="https://unpkg.com/react@16/umd/react.development.js"></script>
<script crossorigin src="https://unpkg.com/react-dom@16/umd/react-dom.development.js"></script>
<script crossorigin src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.34/browser.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/babel-regenerator-runtime@6.5.0/runtime.min.js"></script>

<!-- Mount point for the React app -->
<div id="app1"></div>

<script type="text/babel">
    let rtc;

    const App = () => {
        const [log, setLog] = React.useState([]);
        const videoRef = React.useRef();
        const socket = new WebSocket("ws://localhost:8888");
        const [peer, setPeer] = React.useState(null); // Keep track of the peer connection

        React.useEffect(() => {
            (async () => {
                await new Promise((r) => (socket.onopen = r));
                console.log("open websocket");

                const handleOffer = async (offer) => {
                    console.log("new offer", offer.sdp);

                    const updatedPeer = new RTCPeerConnection({
                        iceServers: [],
                        sdpSemantics: "unified-plan",
                    });

                    updatedPeer.onicecandidate = ({ candidate }) => {
                        if (!candidate) {
                            const sdp = JSON.stringify(updatedPeer.localDescription);
                            console.log(sdp);
                            socket.send(sdp);
                        }
                    };

                    updatedPeer.oniceconnectionstatechange = () => {
                        console.log(
                            "oniceconnectionstatechange",
                            updatedPeer.iceConnectionState
                        );
                    };

                    updatedPeer.ontrack = (e) => {
                        console.log("ontrack", e);
                        videoRef.current.srcObject = e.streams[0];
                    };

                    await updatedPeer.setRemoteDescription(offer);
                    const answer = await updatedPeer.createAnswer();
                    await updatedPeer.setLocalDescription(answer);

                    setPeer(updatedPeer);
                };

                socket.onmessage = (ev) => {
                    const data = JSON.parse(ev.data);
                    if (data.type === "offer") {
                        handleOffer(data);
                    } else if (data.type === "resetFFMPEG") {
                        console.log("FFmpeg reset requested");
                    }
                };
            })();
        }, []); // Empty dependency array: run once on mount

        const sendRequestToResetFFmpeg = () => {
            socket.send("resetFFMPEG");
        };

        return (
            <div>
                Video:
                <video ref={videoRef} autoPlay muted />
                <button onClick={() => sendRequestToResetFFmpeg()}>Reset FFMPEG</button>
            </div>
        );
    };

    ReactDOM.render(<App />, document.getElementById("app1"));
</script>

  • What's the purpose of the END field in files containing metadata for ffmpeg?

    15 July 2023, by rdrg109

    Table of contents

      • The context
      • Minimal working example
          • no. 1
          • no. 2
          • no. 3
      • The questions

    The context

    I'm aware that it is possible to insert chapter metadata into a video using ffmpeg (link to documentation). Minimal working example no. 1 shows this.

    When I insert chapters that are not consecutive, ffmpeg doesn't use the end time specified in the metadata file through the END field; minimal working example no. 2 shows this. In minimal working example no. 3, each chapter's END is only 1 millisecond after its START, yet we get the same result as in no. 2. What is worth noting is that only the last chapter's END is applied correctly.

    Minimal working example

    no. 1

    Consider the following ffmpeg metadata file. It creates chapters in the following time ranges:

      • From millisecond 0 to millisecond 9999, the chapter is This is my chapter one
      • From millisecond 10000 to millisecond 19999, the chapter is This is my chapter two
      • From millisecond 20000 to millisecond 29999, the chapter is This is my chapter three

    ;FFMETADATA1
    title=The title of my video

    [CHAPTER]
    TIMEBASE=1/1000
    START=0
    END=10000
    title=This is my chapter one

    [CHAPTER]
    TIMEBASE=1/1000
    START=10000
    END=20000
    title=This is my chapter two

    [CHAPTER]
    TIMEBASE=1/1000
    START=20000
    END=30000
    title=This is my chapter three

    The following command creates a simple video and inserts the metadata shown above into that video.

    ffmpeg -v error -y -f lavfi -i 'testsrc=d=120:size=100x100' -i metadata.txt -map_metadata 1 output.mp4

    The following command shows information about the metadata in the video file that was created by the previous command.

    ffprobe -v error -show_chapters -print_format json output.mp4 | jq

    {
      "chapters": [
        {
          "id": 0,
          "time_base": "1/1000",
          "start": 0,
          "start_time": "0.000000",
          "end": 10000,
          "end_time": "10.000000",
          "tags": {
            "title": "This is my chapter one"
          }
        },
        {
          "id": 1,
          "time_base": "1/1000",
          "start": 10000,
          "start_time": "10.000000",
          "end": 20000,
          "end_time": "20.000000",
          "tags": {
            "title": "This is my chapter two"
          }
        },
        {
          "id": 2,
          "time_base": "1/1000",
          "start": 20000,
          "start_time": "20.000000",
          "end": 30000,
          "end_time": "30.000000",
          "tags": {
            "title": "This is my chapter three"
          }
        }
      ]
    }

    So far, so good.

    no. 2

    ;FFMETADATA1
    title=The title of my video

    [CHAPTER]
    TIMEBASE=1/1000
    START=0
    END=5000
    title=This is my chapter one

    [CHAPTER]
    TIMEBASE=1/1000
    START=10000
    END=15000
    title=This is my chapter two

    [CHAPTER]
    TIMEBASE=1/1000
    START=20000
    END=25000
    title=This is my chapter three

    The following command creates a simple video and inserts the metadata shown above into that video.

    ffmpeg -v error -y -f lavfi -i 'testsrc=d=120:size=100x100' -i metadata.txt -map_metadata 1 output.mp4

    The following command shows information about the metadata in the video file that was created by the previous command.

    ffprobe -v error -show_chapters -print_format json output.mp4 | jq

    {
      "chapters": [
        {
          "id": 0,
          "time_base": "1/1000",
          "start": 0,
          "start_time": "0.000000",
          "end": 10000,
          "end_time": "10.000000",
          "tags": {
            "title": "This is my chapter one"
          }
        },
        {
          "id": 1,
          "time_base": "1/1000",
          "start": 10000,
          "start_time": "10.000000",
          "end": 20000,
          "end_time": "20.000000",
          "tags": {
            "title": "This is my chapter two"
          }
        },
        {
          "id": 2,
          "time_base": "1/1000",
          "start": 20000,
          "start_time": "20.000000",
          "end": 25000,
          "end_time": "25.000000",
          "tags": {
            "title": "This is my chapter three"
          }
        }
      ]
    }

    From the output above, we can notice the following:

      • The end time of the chapter titled This is my chapter one is 10000, but it should be 5000, since that is the value specified in its END field
      • The end time of the chapter titled This is my chapter two is 20000, but it should be 15000, since that is the value specified in its END field
      • The end time of the last chapter is the only one that matches the value written in the metadata file

    I consider this unexpected behavior, because ffmpeg should use the end times specified in the metadata file.

    no. 3

    ;FFMETADATA1
    title=The title of my video

    [CHAPTER]
    TIMEBASE=1/1000
    START=0
    END=1
    title=This is my chapter one

    [CHAPTER]
    TIMEBASE=1/1000
    START=10000
    END=10001
    title=This is my chapter two

    [CHAPTER]
    TIMEBASE=1/1000
    START=20000
    END=20001
    title=This is my chapter three

    The following command creates a simple video and inserts the metadata shown above into that video.

    ffmpeg -v error -y -f lavfi -i 'testsrc=d=120:size=100x100' -i metadata.txt -map_metadata 1 output.mp4

    The following command shows information about the metadata in the video file that was created by the previous command.

    ffprobe -v error -show_chapters -print_format json output.mp4 | jq

    {
      "chapters": [
        {
          "id": 0,
          "time_base": "1/1000",
          "start": 0,
          "start_time": "0.000000",
          "end": 10000,
          "end_time": "10.000000",
          "tags": {
            "title": "This is my chapter one"
          }
        },
        {
          "id": 1,
          "time_base": "1/1000",
          "start": 10000,
          "start_time": "10.000000",
          "end": 20000,
          "end_time": "20.000000",
          "tags": {
            "title": "This is my chapter two"
          }
        },
        {
          "id": 2,
          "time_base": "1/1000",
          "start": 20000,
          "start_time": "20.000000",
          "end": 20001,
          "end_time": "20.001000",
          "tags": {
            "title": "This is my chapter three"
          }
        }
      ]
    }

    From the output above, we can notice the following:

      • The end time of the chapter titled This is my chapter one is 10000, but it should be 1, since that is the value specified in its END field
      • The end time of the chapter titled This is my chapter two is 20000, but it should be 10001, since that is the value specified in its END field
      • The end time of the last chapter is the only one that matches the value written in the metadata file

    I consider this unexpected behavior, because ffmpeg should use the end times specified in the metadata file.

    The questions

    In minimal working examples no. 2 and no. 3, we could see that although the metadata file sets END for each chapter, a different time is used (except for the last chapter). My questions are:

      • What is the purpose of a field called END when it only has an effect on the last chapter?
      • Given that ffmpeg only honors END for the last chapter, wouldn't END_ONLY_FOR_LAST_CHAPTER be a better name for it?
      • Am I doing something wrong that is causing ffmpeg to use a different value for END in all chapters but the last one?

  • ffmpeg HLS does not append -hls_base_url to m3u8

    22 June 2023, by user3378326

    When I use -hls_base_url http://10.10.10.1:5000/hls, it is not appended in the m3u8 playlist.

    The command I use to generate the HLS output, on Windows cmd with ffmpeg 6.0-essentials_build:

    ffmpeg -y -i  Movie.ts  -c:v copy  -c:a:1 copy  -bsf:v h264_mp4toannexb  -f segment  -hls_base_url http:/10.10.10.1:5000/hls/  -segment_time 10  -segment_list  Movie.m3u8 hls\Movie-%03d.ts


    I tried with:

    -hls_base_url http://10.10.10.1:5000/hls   -hls_base_url 'http://10.10.10.1:5000/hls'   -hls_base_url "http://10.10.10.1:5000/hls"
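
    A note that may help here: -hls_base_url is an option of the hls muxer, while the command above uses the segment muxer (-f segment), which ignores it; the segment muxer's counterpart is -segment_list_entry_prefix. Two sketches reusing the paths from the question (for orientation only, not verified against this exact build):

        rem hls muxer: -hls_base_url is honored
        ffmpeg -y -i Movie.ts -c:v copy -c:a copy -bsf:v h264_mp4toannexb ^
          -f hls -hls_time 10 -hls_base_url http://10.10.10.1:5000/hls/ Movie.m3u8

        rem segment muxer: use -segment_list_entry_prefix instead
        ffmpeg -y -i Movie.ts -c:v copy -c:a copy -bsf:v h264_mp4toannexb ^
          -f segment -segment_time 10 -segment_list_type m3u8 ^
          -segment_list_entry_prefix http://10.10.10.1:5000/hls/ ^
          -segment_list Movie.m3u8 hls\Movie-%03d.ts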
