
Other articles (12)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Managing the farm

    2 March 2010, by

    The farm as a whole is managed by "super admins".
    Certain settings can be made in order to regulate the needs of the various channels.
    Initially it uses the "Gestion de mutualisation" plugin

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of all the instances of the shared installation on a regular basis. Combined with a system Cron on the central site of the installation, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)

On other sites (2604)

  • Using ffserver to stream older IP cam MJPEG to RTSP

    26 May 2016, by tmar89

    I have an older Sony IP camera that has an MJPEG stream. I need to connect it to an NVR that only takes ONVIF or RTSP, and I'm trying to use ffserver and ffmpeg to convert the MJPEG stream to RTSP, but it's not working. Does anyone have an idea of what I may be doing wrong? I saw an error about an unsupported protocol in the attempted playback.
    Here is my ffserver config:

    Port 8090
    RTSPPort 5544
    BindAddress 0.0.0.0
    RTSPBindAddress 0.0.0.0
    MaxClients 100
    MaxBandwidth 10000

    <Feed feed27.ffm>
    File /tmp/feed27.ffm
    FileMaxSize 5M
    ACL allow 127.0.0.1
    </Feed>

    <Stream stream27.mpg>
    Format rtp
    Feed feed27.ffm
    NoAudio
    VideoCodec mjpeg
    VideoFrameRate 30
    VideoSize 736x480
    </Stream>

    And here is the ffmpeg command I am using:

    [tm@tele ffserver-rtsp]# ffmpeg -f mjpeg -r 30 -s 736x480 -i http://[CAMIP]/image http://localhost:8090/feed27.ffm
       FFmpeg version 0.6.5, Copyright (c) 2000-2010 the FFmpeg developers
         built on Jan 29 2012 17:52:15 with gcc 4.4.5 20110214 (Red Hat 4.4.5-6)
         configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --disable-avisynth --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --enable-avfilter --enable-avfilter-lavf --enable-libdc1394 --enable-libdirac --enable-libfaac --enable-libfaad --enable-libfaadbin --enable-libgsm --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-vdpau --enable-version3 --enable-x11grab
         libavutil     50.15. 1 / 50.15. 1
         libavcodec    52.72. 2 / 52.72. 2
         libavformat   52.64. 2 / 52.64. 2
         libavdevice   52. 2. 0 / 52. 2. 0
         libavfilter    1.19. 0 /  1.19. 0
         libswscale     0.11. 0 /  0.11. 0
         libpostproc   51. 2. 0 / 51. 2. 0
       [mjpeg @ 0x1ece670]Estimating duration from bitrate, this may be inaccurate
       Input #0, mjpeg, from 'http://[CAMIP]/image':
         Duration: N/A, bitrate: N/A
           Stream #0.0: Video: mjpeg, yuvj422p, 736x480, 30 fps, 30 tbr, 1200k tbn, 30 tbc
       Output #0, ffm, to 'http://localhost:8090/feed27.ffm':
         Metadata:
           encoder         : Lavf52.64.2
           Stream #0.0: Video: mjpeg, yuvj420p, 736x480, q=2-31, 200 kb/s, 1000k tbn, 30 tbc
       Stream mapping:
         Stream #0.0 -> #0.0
       Press [q] to stop encoding
       [mjpeg @ 0x222d110]rc buffer underflow
       frame=  640 fps= 17 q=31.4 size=   12884kB time=21.33 bitrate=4947.5kbits/s

    When I use VLC to open the stream, it cannot be opened:

    Your input can't be opened:
       VLC is unable to open the MRL 'rtsp://localhost:5544/stream27.mpg'. Check the log for details.

    Finally, using ffplay on the same machine:

    [tm@tele tmp]# ffplay rtsp://localhost:5544/stream27.sdp
    FFplay version 0.6.5, Copyright (c) 2003-2010 the FFmpeg developers
     built on Jan 29 2012 17:52:15 with gcc 4.4.5 20110214 (Red Hat 4.4.5-6)
     configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --disable-avisynth --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --enable-avfilter --enable-avfilter-lavf --enable-libdc1394 --enable-libdirac --enable-libfaac --enable-libfaad --enable-libfaadbin --enable-libgsm --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-vdpau --enable-version3 --enable-x11grab
     libavutil     50.15. 1 / 50.15. 1
     libavcodec    52.72. 2 / 52.72. 2
     libavformat   52.64. 2 / 52.64. 2
     libavdevice   52. 2. 0 / 52. 2. 0
     libavfilter    1.19. 0 /  1.19. 0
     libswscale     0.11. 0 /  0.11. 0
     libpostproc   51. 2. 0 / 51. 2. 0
    ALSA lib pulse.c:229:(pulse_connect) PulseAudio: Unable to connect: Connection refused

    rtsp://localhost:5544/stream27.sdp: Protocol not supported

    And here is the log from ffserver:

    127.0.0.1:5000 - - "PLAY stream27.mpg/streamid=0 RTP/UDP"
    [rtp @ 0x721dc0]Unsupported codec 8
    127.0.0.1:0 - - "PLAY stream27.mpg/streamid=0 RTP/TCP"
    [rtp @ 0x728cb0]Unsupported codec 8
    127.0.0.1 - - [SETUP] "rtsp://localhost:5544/stream27.mpg/streamid=0 RTSP/1.0" 200 641
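
    An editor's note on that log (a hypothesis, not from the original post): codec id 8 is CODEC_ID_MJPEG in FFmpeg 0.6.x, and the RTP muxer of that generation has no MJPEG (RFC 2435) packetizer, which would explain why SETUP succeeds while both the RTP/UDP and RTP/TCP PLAY attempts fail with "Unsupported codec 8". A sketch of a workaround, assuming a re-encode to a codec the old RTP muxer can packetize (such as mpeg4) is acceptable:

    <Stream stream27.mpg>
    Format rtp
    Feed feed27.ffm
    NoAudio
    # mpeg4, unlike mjpeg, has an RTP payload format this muxer supports
    VideoCodec mpeg4
    # VideoBitRate is in kbit/s; 2000 is an arbitrary starting point
    VideoBitRate 2000
    VideoFrameRate 30
    VideoSize 736x480
    </Stream>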
  • How to Stream RTP (IP camera) Into React App setup

    10 November 2024, by sharon2469

    I am trying to get a live broadcast from an IP camera, or any other broadcast coming from an RTP/RTSP source, into my React application. BUT IT MUST BE LIVE.

    My setup at the moment is:

    IP Camera -> (RTP) -> FFmpeg -> (UDP) -> Server (Node.js) -> (WebRTC) -> React app

    In the current situation there is almost no delay, but there are some things here that I can't avoid and can't understand why, so here are my questions:

    1) First, is the setup even correct? Is this the only way to stream RTP video into a web app?

    2) Is it possible to avoid re-encoding the stream? The RTP transmission necessarily comes in as H.264, so I shouldn't really need to run the following command:

    return spawn('ffmpeg', [
        '-re',                              // Read input at its native frame rate; important for live streaming
        '-probesize', '32',                 // Set probing size to 32 bytes (32 is the minimum)
        '-analyzeduration', '1000000',      // Analyze at most 1 second of input
        '-c:v', 'h264',                     // Video codec of the input video
        '-i', 'rtp://238.0.0.2:48888',      // Input stream URL
        '-map', '0:v?',                     // Select video from the input stream
        '-c:v', 'libx264',                  // Video codec of the output stream
        '-preset', 'ultrafast',             // Faster encoding for lower latency
        '-tune', 'zerolatency',             // Optimize for zero latency
        // '-s', '768x480',                 // Adjust the resolution (experiment with values)
        '-f', 'rtp', `rtp://127.0.0.1:${udpPort}` // Output stream URL
    ]);

    As you can see in this command, I re-encode to libx264. But if I pass ffmpeg '-c:v', 'copy' instead of '-c:v', 'libx264', then ffmpeg throws an error saying that it doesn't know how to encode h264 and only knows libx264. Basically, I want to stop the re-encoding because there is really no need for it; the stream is already encoded as H.264. Are there certain recommendations that can be made?
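
    An editor's sketch (not from the original post): in ffmpeg, options placed before -i apply to the input and options placed after apply to the output; 'h264' names only a decoder (the encoder is libx264), and 'copy' is only meaningful as an output option, which is one plausible reading of the error above. A minimal stream-copy variant, assuming the camera really does deliver H.264 over RTP:

    return spawn('ffmpeg', [
        '-re',                              // Read input at its native frame rate
        '-probesize', '32',                 // Minimal probing for lower startup latency
        '-analyzeduration', '1000000',      // Analyze at most 1 second of input
        '-i', 'rtp://238.0.0.2:48888',      // Input stream URL (unchanged)
        '-map', '0:v?',                     // Select video from the input stream
        '-c:v', 'copy',                     // Stream copy: no decode, no re-encode
        '-f', 'rtp', `rtp://127.0.0.1:${udpPort}` // Output stream URL (unchanged)
    ]);

    Note that the input-side '-c:v h264' is dropped: when copying, nothing is decoded, so forcing a decoder serves no purpose.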


    3) I thought about giving up FFmpeg completely, but the RTP packets arrive at sizes of 1200+ bytes while WebRTC is limited to about 1280 bytes. Is there a way to manage this fragmentation without damaging the video, and is it worth entering that world? I guess the whole jitter-buffer story comes into play here.
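
    On the packet-size point, a hedged editor's aside (untested against this setup): ffmpeg's RTP output accepts a pkt_size URL option that caps the size of the packets it emits, so the H.264 packetizer fragments NAL units to fit. Something like the following may keep packets under the WebRTC-friendly limit without touching the video:

        '-f', 'rtp', `rtp://127.0.0.1:${udpPort}?pkt_size=1200` // Emit RTP packets of at most ~1200 bytes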


    This is my server-side code (THIS IS JUST TEST CODE):

    import {
        MediaStreamTrack,
        randomPort,
        RTCPeerConnection,
        RTCRtpCodecParameters,
        RtpPacket,
    } from 'werift'
    import {Server} from "ws";
    import {createSocket} from "dgram";
    import {spawn} from "child_process";
    import LoggerFactory from "./logger/loggerFactory";

    const log = LoggerFactory.getLogger('ServerMedia')

    // Websocket server -> WebRTC
    const serverPort = 8888
    const server = new Server({port: serverPort});
    log.info(`Server Media started on port: ${serverPort}`);

    // UDP server -> ffmpeg
    const udpPort = 48888
    const udp = createSocket("udp4");
    // udp.bind(udpPort, () => {
    //     udp.addMembership("238.0.0.2");
    // })
    udp.bind(udpPort)
    log.info(`UDP port: ${udpPort}`)

    const createFFmpegProcess = () => {
        log.info(`Start ffmpeg process`)
        return spawn('ffmpeg', [
            '-re',                              // Read input at its native frame rate; important for live streaming
            '-probesize', '32',                 // Set probing size to 32 bytes (32 is the minimum)
            '-analyzeduration', '1000000',      // Analyze at most 1 second of input
            '-c:v', 'h264',                     // Video codec of the input video
            '-i', 'rtp://238.0.0.2:48888',      // Input stream URL
            '-map', '0:v?',                     // Select video from the input stream
            '-c:v', 'libx264',                  // Video codec of the output stream
            '-preset', 'ultrafast',             // Faster encoding for lower latency
            '-tune', 'zerolatency',             // Optimize for zero latency
            // '-s', '768x480',                 // Adjust the resolution (experiment with values)
            '-f', 'rtp', `rtp://127.0.0.1:${udpPort}` // Output stream URL
        ]);
    }

    let ffmpegProcess = createFFmpegProcess();

    const attachFFmpegListeners = () => {
        // Capture standard output and print it
        ffmpegProcess.stdout.on('data', (data) => {
            log.info(`FFMPEG process stdout: ${data}`);
        });

        // Capture standard error and print it
        ffmpegProcess.stderr.on('data', (data) => {
            console.error(`ffmpeg stderr: ${data}`);
        });

        // Listen for the exit event
        ffmpegProcess.on('exit', (code, signal) => {
            if (code !== null) {
                log.info(`ffmpeg process exited with code ${code}`);
            } else if (signal !== null) {
                log.info(`ffmpeg process killed with signal ${signal}`);
            }
        });
    };

    attachFFmpegListeners();

    server.on("connection", async (socket) => {
        const payloadType = 96; // Numerical value assigned to the codec in the SDP offer/answer exchange -> for H264
        // Create a peer connection with the codec parameters set in advance.
        const pc = new RTCPeerConnection({
            codecs: {
                audio: [],
                video: [
                    new RTCRtpCodecParameters({
                        mimeType: "video/H264",
                        clockRate: 90000, // 90000 is the default value for H264
                        payloadType: payloadType,
                    }),
                ],
            },
        });

        const track = new MediaStreamTrack({kind: "video"});

        udp.on("message", (data) => {
            console.log(data)
            const rtp = RtpPacket.deSerialize(data);
            rtp.header.payloadType = payloadType;
            track.writeRtp(rtp);
        });

        udp.on("error", (err) => {
            console.log(err)
        });

        udp.on("close", () => {
            console.log("close")
        });

        pc.addTransceiver(track, {direction: "sendonly"});

        await pc.setLocalDescription(await pc.createOffer());
        const sdp = JSON.stringify(pc.localDescription);
        socket.send(sdp);

        socket.on("message", (data: any) => {
            if (data.toString() === 'resetFFMPEG') {
                ffmpegProcess.kill('SIGINT');
                log.info(`FFMPEG process killed`)
                setTimeout(() => {
                    ffmpegProcess = createFFmpegProcess();
                    attachFFmpegListeners();
                }, 5000)
            } else {
                pc.setRemoteDescription(JSON.parse(data));
            }
        });
    });

    And this is the frontend:

    <script
            crossorigin
            src="https://unpkg.com/react@16/umd/react.development.js"
    ></script>
    <script
            crossorigin
            src="https://unpkg.com/react-dom@16/umd/react-dom.development.js"
    ></script>
    <script
            crossorigin
            src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.34/browser.min.js"
    ></script>
    <script src="https://cdn.jsdelivr.net/npm/babel-regenerator-runtime@6.5.0/runtime.min.js"></script>

    <script type="text/babel">
        let rtc;

        const App = () => {
            const [log, setLog] = React.useState([]);
            const videoRef = React.useRef();
            const socket = new WebSocket("ws://localhost:8888");
            const [peer, setPeer] = React.useState(null); // State to keep track of the peer connection

            React.useEffect(() => {
                (async () => {
                    await new Promise((r) => (socket.onopen = r));
                    console.log("open websocket");

                    const handleOffer = async (offer) => {
                        console.log("new offer", offer.sdp);

                        const updatedPeer = new RTCPeerConnection({
                            iceServers: [],
                            sdpSemantics: "unified-plan",
                        });

                        updatedPeer.onicecandidate = ({ candidate }) => {
                            if (!candidate) {
                                const sdp = JSON.stringify(updatedPeer.localDescription);
                                console.log(sdp);
                                socket.send(sdp);
                            }
                        };

                        updatedPeer.oniceconnectionstatechange = () => {
                            console.log(
                                "oniceconnectionstatechange",
                                updatedPeer.iceConnectionState
                            );
                        };

                        updatedPeer.ontrack = (e) => {
                            console.log("ontrack", e);
                            videoRef.current.srcObject = e.streams[0];
                        };

                        await updatedPeer.setRemoteDescription(offer);
                        const answer = await updatedPeer.createAnswer();
                        await updatedPeer.setLocalDescription(answer);

                        setPeer(updatedPeer);
                    };

                    socket.onmessage = (ev) => {
                        const data = JSON.parse(ev.data);
                        if (data.type === "offer") {
                            handleOffer(data);
                        } else if (data.type === "resetFFMPEG") {
                            // Handle the resetFFMPEG message
                            console.log("FFmpeg reset requested");
                        }
                    };
                })();
            }, []);

            const sendRequestToResetFFmpeg = () => {
                socket.send("resetFFMPEG");
            };

            return (
                <div>
                    Video:
                    <video ref={videoRef} autoPlay muted />
                    <button onClick={() => sendRequestToResetFFmpeg()}>Reset FFMPEG</button>
                </div>
            );
        };

        ReactDOM.render(<App />, document.getElementById("app1"));
    </script>


  • What's the purpose of the END field in files containing metadata for ffmpeg?

    15 July 2023, by rdrg109

    Table of contents

    • The context
    • Minimal working example
      • no. 1
      • no. 2
      • no. 3
    • The questions

    The context


    I'm aware that it is possible to insert metadata of chapters into a video using ffmpeg (link to documentation). Minimal working example no. 1 shows this point.


    When I insert chapters that are not consecutive, ffmpeg doesn't use the end time specified in the metadata file through the END field. "Minimal working example no. 2" shows this point. In "Minimal working example no. 3", each chapter's END is only 1 millisecond after its START, yet we get the same result as in minimal working example no. 2. What is worth noting is that the END of the last chapter is the only one that is applied correctly.


    Minimal working example


    no. 1


    Consider the following ffmpeg metadata file. It creates chapters in the following time ranges:


    • From millisecond 0 to millisecond 9999, the chapter is This is my chapter one
    • From millisecond 10000 to millisecond 19999, the chapter is This is my chapter two
    • From millisecond 20000 to millisecond 29999, the chapter is This is my chapter three

    ;FFMETADATA1
    title=The title of my video

    [CHAPTER]
    TIMEBASE=1/1000
    START=0
    END=10000
    title=This is my chapter one

    [CHAPTER]
    TIMEBASE=1/1000
    START=10000
    END=20000
    title=This is my chapter two

    [CHAPTER]
    TIMEBASE=1/1000
    START=20000
    END=30000
    title=This is my chapter three

    The following command creates a simple video and inserts the metadata shown above into that video.

    ffmpeg -v error -y -f lavfi -i 'testsrc=d=120:size=100x100' -i metadata.txt -map_metadata 1 output.mp4

    The following command shows information about the metadata in the video file that was created by the previous command.

    ffprobe -v error -show_chapters -print_format json output.mp4 | jq

    {&#xA;  "chapters": [&#xA;    {&#xA;      "id": 0,&#xA;      "time_base": "1/1000",&#xA;      "start": 0,&#xA;      "start_time": "0.000000",&#xA;      "end": 10000,&#xA;      "end_time": "10.000000",&#xA;      "tags": {&#xA;        "title": "This is my chapter one"&#xA;      }&#xA;    },&#xA;    {&#xA;      "id": 1,&#xA;      "time_base": "1/1000",&#xA;      "start": 10000,&#xA;      "start_time": "10.000000",&#xA;      "end": 20000,&#xA;      "end_time": "20.000000",&#xA;      "tags": {&#xA;        "title": "This is my chapter two"&#xA;      }&#xA;    },&#xA;    {&#xA;      "id": 2,&#xA;      "time_base": "1/1000",&#xA;      "start": 20000,&#xA;      "start_time": "20.000000",&#xA;      "end": 30000,&#xA;      "end_time": "30.000000",&#xA;      "tags": {&#xA;        "title": "This is my chapter three"&#xA;      }&#xA;    }&#xA;  ]&#xA;}&#xA;

    &#xA;

    No problems so far.


    no. 2


    ;FFMETADATA1
    title=The title of my video

    [CHAPTER]
    TIMEBASE=1/1000
    START=0
    END=5000
    title=This is my chapter one

    [CHAPTER]
    TIMEBASE=1/1000
    START=10000
    END=15000
    title=This is my chapter two

    [CHAPTER]
    TIMEBASE=1/1000
    START=20000
    END=25000
    title=This is my chapter three

    The following command creates a simple video and inserts the metadata shown above into that video.

    ffmpeg -v error -y -f lavfi -i 'testsrc=d=120:size=100x100' -i metadata.txt -map_metadata 1 output.mp4

    The following command shows information about the metadata in the video file that was created by the previous command.

    ffprobe -v error -show_chapters -print_format json output.mp4 | jq

    {&#xA;  "chapters": [&#xA;    {&#xA;      "id": 0,&#xA;      "time_base": "1/1000",&#xA;      "start": 0,&#xA;      "start_time": "0.000000",&#xA;      "end": 10000,&#xA;      "end_time": "10.000000",&#xA;      "tags": {&#xA;        "title": "This is my chapter one"&#xA;      }&#xA;    },&#xA;    {&#xA;      "id": 1,&#xA;      "time_base": "1/1000",&#xA;      "start": 10000,&#xA;      "start_time": "10.000000",&#xA;      "end": 20000,&#xA;      "end_time": "20.000000",&#xA;      "tags": {&#xA;        "title": "This is my chapter two"&#xA;      }&#xA;    },&#xA;    {&#xA;      "id": 2,&#xA;      "time_base": "1/1000",&#xA;      "start": 20000,&#xA;      "start_time": "20.000000",&#xA;      "end": 25000,&#xA;      "end_time": "25.000000",&#xA;      "tags": {&#xA;        "title": "This is my chapter three"&#xA;      }&#xA;    }&#xA;  ]&#xA;}&#xA;

    &#xA;

    From the output above, we can notice the following:

    • The end time of the chapter with title This is my chapter one is 10000, but it should be 5000, since that is the value we specified in the END field
    • The end time of the chapter with title This is my chapter two is 20000, but it should be 15000, since that is the value we specified in the END field
    • The end time of the last chapter is the only one that matches the value we wrote in the metadata file


    I consider this unexpected behavior, because ffmpeg should use the end times that we specified in the metadata file.
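
    A hedged editor's note: this looks like a property of the MP4 container rather than of the metadata parser; QuickTime-style chapter markers effectively carry only start times, so on muxing each chapter's end is snapped to the next chapter's start, and only the last chapter keeps its own END. One way to test that hypothesis is to write the same metadata into a container that stores explicit chapter end times, such as Matroska:

    ffmpeg -v error -y -f lavfi -i 'testsrc=d=120:size=100x100' -i metadata.txt -map_metadata 1 output.mkv
    ffprobe -v error -show_chapters -print_format json output.mkv | jq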


    no. 3


    ;FFMETADATA1
    title=The title of my video

    [CHAPTER]
    TIMEBASE=1/1000
    START=0
    END=1
    title=This is my chapter one

    [CHAPTER]
    TIMEBASE=1/1000
    START=10000
    END=10001
    title=This is my chapter two

    [CHAPTER]
    TIMEBASE=1/1000
    START=20000
    END=20001
    title=This is my chapter three

    The following command creates a simple video and inserts the metadata shown above into that video.

    ffmpeg -v error -y -f lavfi -i 'testsrc=d=120:size=100x100' -i metadata.txt -map_metadata 1 output.mp4

    The following command shows information about the metadata in the video file that was created by the previous command.

    ffprobe -v error -show_chapters -print_format json output.mp4 | jq

    {&#xA;  "chapters": [&#xA;    {&#xA;      "id": 0,&#xA;      "time_base": "1/1000",&#xA;      "start": 0,&#xA;      "start_time": "0.000000",&#xA;      "end": 10000,&#xA;      "end_time": "10.000000",&#xA;      "tags": {&#xA;        "title": "This is my chapter one"&#xA;      }&#xA;    },&#xA;    {&#xA;      "id": 1,&#xA;      "time_base": "1/1000",&#xA;      "start": 10000,&#xA;      "start_time": "10.000000",&#xA;      "end": 20000,&#xA;      "end_time": "20.000000",&#xA;      "tags": {&#xA;        "title": "This is my chapter two"&#xA;      }&#xA;    },&#xA;    {&#xA;      "id": 2,&#xA;      "time_base": "1/1000",&#xA;      "start": 20000,&#xA;      "start_time": "20.000000",&#xA;      "end": 20001,&#xA;      "end_time": "20.001000",&#xA;      "tags": {&#xA;        "title": "This is my chapter three"&#xA;      }&#xA;    }&#xA;  ]&#xA;}&#xA;

    &#xA;

    From the output above, we can notice the following:

    • The end time of the chapter with title This is my chapter one is 10000, but it should be 1, since that is the value we specified in the END field in the metadata file
    • The end time of the chapter with title This is my chapter two is 20000, but it should be 10001, since that is the value we specified in the END field in the metadata file
    • The end time of the last chapter is the only one that matches the value we wrote in the metadata file


    I consider this unexpected behavior, because ffmpeg should use the end times that we specified in the metadata file.


    The questions


    In minimal working examples no. 2 and no. 3, we saw that although the metadata file sets END for each chapter, a different time is used (except for the last chapter). My questions are:

    • What's the purpose of having a field called END when it only has an effect in the last chapter?
    • Given that ffmpeg only applies the END value for the last chapter, wouldn't a better name for END be END_ONLY_FOR_LAST_CHAPTER?
    • Am I doing something wrong that is causing ffmpeg to use a different value for END in all chapters but the last one?