Advanced search

Media (1)

Keyword: - Tags - / net art

Other articles (34)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behavior: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (5385)

  • How to broadcast a livestream from YouTube to Telegram using VLC from the command line?

    22 February 2024, by shackra

    I want to re-transmit a live stream from YouTube (pulled with streamlink) and broadcast it on a Telegram channel using VLC on the command line. I think VLC is my best option for getting data from the source and sending it to the new destination.

    


    The thing is, I don't know how to configure the output correctly for the Telegram channel, nor where to put the stream key.

    


    I was reading this answer https://stackoverflow.com/a/40461349/2020214 and using it as a guide, without success (Telegram does not detect any livestream).
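
    A sketch of one possible pipeline (my addition, not from the post): instead of VLC, pipe streamlink's output into ffmpeg and push it to Telegram's RTMPS ingest. Telegram shows the real ingest URL and stream key in the channel's live-stream dialog; the URL and key below are placeholders.

        # Pull the YouTube live stream and forward it, unmodified, to Telegram.
        # VIDEO_ID, the rtmps:// server, and STREAM_KEY are placeholders; copy
        # the actual values from Telegram's "Stream With..." dialog.
        streamlink --stdout "https://www.youtube.com/watch?v=VIDEO_ID" best \
            | ffmpeg -re -i pipe:0 -c copy -f flv "rtmps://dcX-Y.rtmp.t.me/s/STREAM_KEY"

    The -c copy avoids re-encoding; if Telegram rejects the source codecs, swap it for -c:v libx264 -c:a aac.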

    


  • How to Stream RTP (IP camera) Into React App setup

    10 November 2024, by sharon2469

    I am trying to deliver a live broadcast from an IP camera, or any other RTP/RTSP source, to my React application, and it MUST be live.

    


    My setup at the moment is:

    


    IP Camera -> (RTP) -> FFmpeg -> (UDP) -> Server (Node.js) -> (WebRTC) -> React app

    


    In the current situation there is almost no delay, but there are a few things here that I can't avoid and don't understand why, so here are my questions:

    


    1) First, is this setup even correct, and is it the only way to stream RTP video into a web app?

    


    2) Is it possible to avoid re-encoding the stream? The RTP transmission necessarily comes in as H.264, hence I shouldn't really need to execute the following command:

    


    return spawn('ffmpeg', [
        '-re',                              // Read input at its native frame rate; important for live streaming
        '-probesize', '32',                 // Set probing size to 32 bytes (32 is the minimum)
        '-analyzeduration', '1000000',      // Analyze up to 1 second of input
        '-c:v', 'h264',                     // Video codec of the input stream
        '-i', 'rtp://238.0.0.2:48888',      // Input stream URL
        '-map', '0:v?',                     // Select the video stream, if present
        '-c:v', 'libx264',                  // Video codec of the output stream
        '-preset', 'ultrafast',             // Faster encoding for lower latency
        '-tune', 'zerolatency',             // Optimize for zero latency
        // '-s', '768x480',                 // Adjust the resolution (experiment with values)
        '-f', 'rtp', `rtp://127.0.0.1:${udpPort}` // Output stream URL
    ]);


    


    As you can see, in this command I re-encode to libx264. But if I pass FFmpeg '-c:v', 'copy' instead of '-c:v', 'libx264', then FFmpeg throws an error saying it doesn't know how to encode h264 and only knows libx264. Basically, I want to stop the re-encoding because there is really no need for it; the stream is already encoded as H.264. Are there any recommendations here?
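
    A sketch of the pass-through variant (my addition, untested against this camera): in FFmpeg, "h264" names only a decoder (the encoder is libx264), which is why asking for it as an output codec fails. "-c:v copy" placed after the input forwards the already-encoded H.264 packets without touching them; as a command line:

        # Same demuxer settings as the original, but no encoder in the chain.
        ffmpeg -probesize 32 -analyzeduration 1000000 \
            -i rtp://238.0.0.2:48888 \
            -map '0:v?' \
            -c:v copy \
            -f rtp rtp://127.0.0.1:48888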

    


    3) I thought about giving up FFmpeg completely, but the RTP packets arrive at sizes of 1200+ bytes while WebRTC is limited to at most 1280 bytes. Is there a way to manage this mismatch without damaging the video, and is it worth entering that world? I guess the whole jitter-buffer story comes into play here.
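
    One option to check (my addition; verify the behavior on your stream): even with -c:v copy, FFmpeg's RTP muxer re-packetizes the stream and fragments NAL units larger than the packet size (FU-A), so pass-through can coexist with packets capped below WebRTC's limit:

        # pkt_size caps each outgoing RTP packet; oversized NALs get fragmented.
        ffmpeg -i rtp://238.0.0.2:48888 \
            -map '0:v?' -c:v copy \
            -f rtp 'rtp://127.0.0.1:48888?pkt_size=1200'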

    


    This is my server-side code (this is just test code):

    


    import {
    MediaStreamTrack,
    randomPort,
    RTCPeerConnection,
    RTCRtpCodecParameters,
    RtpPacket,
} from 'werift'
import {Server} from "ws";
import {createSocket} from "dgram";
import {spawn} from "child_process";
import LoggerFactory from "./logger/loggerFactory";

//

const log = LoggerFactory.getLogger('ServerMedia')

// Websocket server -> WebRTC
const serverPort = 8888
const server = new Server({port: serverPort});
log.info(`Server Media started on port: ${serverPort}`);

// UDP server -> ffmpeg
const udpPort = 48888
const udp = createSocket("udp4");
// udp.bind(udpPort, () => {
//     udp.addMembership("238.0.0.2");
// })
udp.bind(udpPort)
log.info(`UDP port: ${udpPort}`)


const createFFmpegProcess = () => {
    log.info(`Start ffmpeg process`)
    return spawn('ffmpeg', [
        '-re',                              // Read input at its native frame rate Important for live-streaming
        '-probesize', '32',                 // Set probing size to 32 bytes (32 is minimum)
        '-analyzeduration', '1000000',      // An input duration of 1 second
        '-c:v', 'h264',                     // Video codec of input video
        '-i', 'rtp://238.0.0.2:48888',      // Input stream URL
        '-map', '0:v?',                     // Select video from input stream
        '-c:v', 'libx264',                  // Video codec of output stream
        '-preset', 'ultrafast',             // Faster encoding for lower latency
        '-tune', 'zerolatency',             // Optimize for zero latency
        // '-s', '768x480',                    // Adjust the resolution (experiment with values)
        '-f', 'rtp', `rtp://127.0.0.1:${udpPort}` // Output stream URL
    ]);

}

let ffmpegProcess = createFFmpegProcess();


const attachFFmpegListeners = () => {
    // Capture standard output and print it
    ffmpegProcess.stdout.on('data', (data) => {
        log.info(`FFMPEG process stdout: ${data}`);
    });

    // Capture standard error and print it
    ffmpegProcess.stderr.on('data', (data) => {
        console.error(`ffmpeg stderr: ${data}`);
    });

    // Listen for the exit event
    ffmpegProcess.on('exit', (code, signal) => {
        if (code !== null) {
            log.info(`ffmpeg process exited with code ${code}`);
        } else if (signal !== null) {
            log.info(`ffmpeg process killed with signal ${signal}`);
        }
    });
};


attachFFmpegListeners();


server.on("connection", async (socket) => {
    const payloadType = 96; // It is a numerical value that is assigned to each codec in the SDP offer/answer exchange -> for H264
    // Create a peer connection with the codec parameters set in advance.
    const pc = new RTCPeerConnection({
        codecs: {
            audio: [],
            video: [
                new RTCRtpCodecParameters({
                    mimeType: "video/H264",
                    clockRate: 90000, // 90000 is the default value for H264
                    payloadType: payloadType,
                }),
            ],
        },
    });

    const track = new MediaStreamTrack({kind: "video"});


    // Forward RTP packets arriving from ffmpeg into the WebRTC track.
    // Note: this handler is registered once per WebSocket connection, so with
    // more than one client each packet would be written multiple times.
    udp.on("message", (data) => {
        console.log(data)
        const rtp = RtpPacket.deSerialize(data);
        rtp.header.payloadType = payloadType;
        track.writeRtp(rtp);
    });

    udp.on("error", (err) => {
        console.log(err)

    });

    udp.on("close", () => {
        console.log("close")
    });

    pc.addTransceiver(track, {direction: "sendonly"});

    await pc.setLocalDescription(await pc.createOffer());
    const sdp = JSON.stringify(pc.localDescription);
    socket.send(sdp);

    socket.on("message", (data: any) => {
        if (data.toString() === 'resetFFMPEG') {
            ffmpegProcess.kill('SIGINT');
            log.info(`FFMPEG process killed`)
            setTimeout(() => {
                ffmpegProcess = createFFmpegProcess();
                attachFFmpegListeners();
            }, 5000)
        } else {
            pc.setRemoteDescription(JSON.parse(data));
        }
    });
});


    


    And this is the frontend:

    


    <script
            crossorigin
            src="https://unpkg.com/react@16/umd/react.development.js"
    ></script>
    <script
            crossorigin
            src="https://unpkg.com/react-dom@16/umd/react-dom.development.js"
    ></script>
    <script
            crossorigin
            src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.34/browser.min.js"
    ></script>
    <script src="https://cdn.jsdelivr.net/npm/babel-regenerator-runtime@6.5.0/runtime.min.js"></script>

    <script type="text/babel">
        let rtc;

        const App = () => {
            const [log, setLog] = React.useState([]);
            const videoRef = React.useRef();
            const socket = new WebSocket("ws://localhost:8888");
            const [peer, setPeer] = React.useState(null); // Add state to keep track of the peer connection

            React.useEffect(() => {
                (async () => {
                    await new Promise((r) => (socket.onopen = r));
                    console.log("open websocket");

                    const handleOffer = async (offer) => {
                        console.log("new offer", offer.sdp);

                        const updatedPeer = new RTCPeerConnection({
                            iceServers: [],
                            sdpSemantics: "unified-plan",
                        });

                        updatedPeer.onicecandidate = ({ candidate }) => {
                            if (!candidate) {
                                const sdp = JSON.stringify(updatedPeer.localDescription);
                                console.log(sdp);
                                socket.send(sdp);
                            }
                        };

                        updatedPeer.oniceconnectionstatechange = () => {
                            console.log(
                                "oniceconnectionstatechange",
                                updatedPeer.iceConnectionState
                            );
                        };

                        updatedPeer.ontrack = (e) => {
                            console.log("ontrack", e);
                            videoRef.current.srcObject = e.streams[0];
                        };

                        await updatedPeer.setRemoteDescription(offer);
                        const answer = await updatedPeer.createAnswer();
                        await updatedPeer.setLocalDescription(answer);

                        setPeer(updatedPeer);
                    };

                    socket.onmessage = (ev) => {
                        const data = JSON.parse(ev.data);
                        if (data.type === "offer") {
                            handleOffer(data);
                        } else if (data.type === "resetFFMPEG") {
                            // Handle the resetFFMPEG message
                            console.log("FFmpeg reset requested");
                        }
                    };
                })();
            }, []); // Added socket as a dependency to the useEffect hook

            const sendRequestToResetFFmpeg = () => {
                socket.send("resetFFMPEG");
            };

            return (
                <div>
                    Video:
                    <video ref={videoRef} autoPlay muted />
                    <button onClick={() => sendRequestToResetFFmpeg()}>Reset FFMPEG</button>
                </div>
            );
        };

        ReactDOM.render(<App />, document.getElementById("app1"));
    </script>

  • Using FFmpeg to stitch together H.264 videos and variably-spaced JPEG pictures; dealing with ffmpeg warnings

    19 October 2022, by LB2

    Context


    I have a process flow that may output H.264 Annex B streams, variably-spaced JPEGs, or a mixture of the two. By variably-spaced I mean that the elapsed time between any two adjacent JPEGs may be (and likely is) different from that between any other two adjacent JPEGs. So examples of possible inputs are:

    1. stream1.h264
    2. {Set of JPEGs}
    3. stream1.h264 + stream2.h264
    4. stream1.h264 + {Set of JPEGs}
    5. stream1.h264 + {Set of JPEGs} + stream2.h264
    6. stream1.h264 + {Set of JPEGs} + stream2.h264 + {Set of JPEGs} + ...
    7. stream1.h264 + stream2.h264 + {Set of JPEGs} + ...

    The output needs to be a single stitched (i.e. concatenated) file in an MPEG-4 container.

    Requirements: no re-encoding or transcoding of the existing video compression (a one-time conversion of JPEG sets to a video format is okay).

    Solution Prototype


    To prototype the solution, I found that ffmpeg has a concat demuxer that lets me specify an ordered sequence of inputs for ffmpeg to concatenate, but all inputs must be of the same format. So, to meet that requirement, I:

    1. Convert every JPEG set to an .mp4 using concat (using the duration # directive to specify the time spacing between each JPEG).
    2. Convert every .h264 to .mp4 using -c copy to avoid transcoding.
    3. Stitch all generated interim .mp4 files into the single final .mp4 using -f concat and -c copy.


    Here's the bash script, in parts, that performs the above:

    1. Ignore the curl comment; it's left over from originally generating 100 JPEG images with numbers, which are simply saved locally. The loop generates a concat input file with file sequence#.jpeg directives and duration # directives, where each successive JPEG delay is incremented by 0.1 seconds (0.1 between the 1st and 2nd, 0.2 between the 2nd and 3rd, 0.3 between the 3rd and 4th, and so on). Then it runs the ffmpeg command to convert the set of JPEGs to an interim .mp4 file:

           echo "ffconcat version 1.0" >ffconcat-jpeg.txt
           echo >>ffconcat-jpeg.txt

           for i in {1..100}
           do
               echo "file $i.jpeg" >>ffconcat-jpeg.txt
               d=$(echo "$i" | awk '{printf "%f", $1 / 10}')
               # d=$(echo "scale=2; $i/10" | bc)
               echo "duration $d" >>ffconcat-jpeg.txt
               echo "" >>ffconcat-jpeg.txt
               # curl -o "$i.jpeg" "https://math.tools/equation/get_equaimages?equation=$i&fontsize=256"
           done

           ffmpeg \
               -hide_banner \
               -vsync vfr \
               -f concat \
               -i ffconcat-jpeg.txt \
               -r 30 \
               -video_track_timescale 90000 \
               video-jpeg.mp4

    2. Convert the two streams from .h264 to .mp4 via copy (no transcoding):

           ffmpeg \
               -hide_banner \
               -i low-motion-video.h264 \
               -c copy \
               -vsync vfr \
               -video_track_timescale 90000 \
               low-motion-video.mp4

           ffmpeg \
               -hide_banner \
               -i full-video.h264 \
               -c copy \
               -video_track_timescale 90000 \
               -vsync vfr \
               full-video.mp4

    3. Stitch everything together by generating another concat directive file:

           echo "ffconcat version 1.0" >ffconcat-h264.txt
           echo >>ffconcat-h264.txt
           echo "file low-motion-video.mp4" >>ffconcat-h264.txt
           echo >>ffconcat-h264.txt
           echo "file full-video.mp4" >>ffconcat-h264.txt
           echo >>ffconcat-h264.txt
           echo "file video-jpeg.mp4" >>ffconcat-h264.txt
           echo >>ffconcat-h264.txt

           ffmpeg \
               -hide_banner \
               -f concat \
               -i ffconcat-h264.txt \
               -pix_fmt yuv420p \
               -c copy \
               -video_track_timescale 90000 \
               -vsync vfr \
               video-out.mp4

    Problem (and attempted troubleshooting)


    The above does produce a reasonable output: it plays the first video, then the second video with no timing/rate issues AFAICT, then plays the JPEGs with the time between each JPEG "frame" growing successively, as expected.

    But the conversion process produces warnings that concern me (for compatibility with players, or for other real-life streams that may hit issues my prototyping content doesn't make obvious). Initial attempts generated hundreds of warnings, but with some added arguments I got it down to just a handful; that handful is stubborn, though, and nothing I tried helps.

    The first conversion of JPEGs to .mp4 goes fine, with the following output:

    Input #0, concat, from 'ffconcat-jpeg.txt':
      Duration: 00:08:25.00, start: 0.000000, bitrate: 0 kb/s
      Stream #0:0: Video: png, pal8(pc), 176x341 [SAR 3780:3780 DAR 16:31], 25 fps, 25 tbr, 25 tbn, 25 tbc
    Stream mapping:
      Stream #0:0 -> #0:0 (png (native) -> h264 (libx264))
    Press [q] to stop, [?] for help
    [libx264 @ 0x7fe418008e00] using SAR=1/1
    [libx264 @ 0x7fe418008e00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2
    [libx264 @ 0x7fe418008e00] profile High 4:4:4 Predictive, level 1.3, 4:4:4, 8-bit
    [libx264 @ 0x7fe418008e00] 264 - core 163 r3060 5db6aa6 - H.264/MPEG-4 AVC codec - Copyleft 2003-2021 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=11 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to 'video-jpeg.mp4':
      Metadata:
        encoder         : Lavf58.76.100
      Stream #0:0: Video: h264 (avc1 / 0x31637661), yuv444p(tv, progressive), 176x341 [SAR 1:1 DAR 16:31], q=2-31, 30 fps, 90k tbn
        Metadata:
          encoder         : Lavc58.134.100 libx264
        Side data:
          cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
    frame=  100 fps=0.0 q=-1.0 Lsize=     157kB time=00:07:55.33 bitrate=   2.7kbits/s speed=2.41e+03x
    video:155kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.800846%
    [libx264 @ 0x7fe418008e00] frame I:1     Avg QP:20.88  size:   574
    [libx264 @ 0x7fe418008e00] frame P:43    Avg QP:14.96  size:  2005
    [libx264 @ 0x7fe418008e00] frame B:56    Avg QP:21.45  size:  1266
    [libx264 @ 0x7fe418008e00] consecutive B-frames: 14.0% 24.0% 30.0% 32.0%
    [libx264 @ 0x7fe418008e00] mb I  I16..4: 36.4% 55.8%  7.9%
    [libx264 @ 0x7fe418008e00] mb P  I16..4:  5.1%  7.5% 11.2%  P16..4:  5.6%  8.1%  4.5%  0.0%  0.0%    skip:57.9%
    [libx264 @ 0x7fe418008e00] mb B  I16..4:  2.4%  0.9%  3.9%  B16..8: 16.2%  8.8%  4.6%  direct: 1.2%  skip:62.0%  L0:56.6% L1:38.7% BI: 4.7%
    [libx264 @ 0x7fe418008e00] 8x8 transform intra:28.3% inter:3.7%
    [libx264 @ 0x7fe418008e00] coded y,u,v intra: 26.5% 0.0% 0.0% inter: 9.8% 0.0% 0.0%
    [libx264 @ 0x7fe418008e00] i16 v,h,dc,p: 82% 13%  4%  0%
    [libx264 @ 0x7fe418008e00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20%  8% 71%  1%  0%  0%  0%  0%  0%
    [libx264 @ 0x7fe418008e00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 41% 11% 29%  4%  2%  3%  1%  7%  1%
    [libx264 @ 0x7fe418008e00] Weighted P-Frames: Y:0.0% UV:0.0%
    [libx264 @ 0x7fe418008e00] ref P L0: 44.1%  4.2% 28.4% 23.3%
    [libx264 @ 0x7fe418008e00] ref B L0: 56.2% 32.1% 11.6%
    [libx264 @ 0x7fe418008e00] ref B L1: 92.4%  7.6%
    [libx264 @ 0x7fe418008e00] kb/s:2.50

    The conversion of the individual streams from .h264 to .mp4 generates two types of warnings, each repeated many times. One is [mp4 @ 0x7faee3040400] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly, and the other is [mp4 @ 0x7faee3040400] pts has no value.

    Some posts on SO (I can't find my original references now) suggested that it's safe to ignore, and that it comes from H.264 being an elementary stream that supposedly doesn't contain timestamps. That surprises me a bit, since I produce the stream using the NVENC API and clearly supply timing information for each frame via the PIC_PARAMS structure: NV_STRUCT(PIC_PARAMS, pp); ...; pp.inputTimeStamp = _frameIndex++ * (H264_CLOCK_RATE / _params.frameRate);, where #define H264_CLOCK_RATE 9000 and _params.frameRate = 30.

    Input #0, h264, from 'low-motion-video.h264':
      Duration: N/A, bitrate: N/A
      Stream #0:0: Video: h264 (High), yuv420p(progressive), 1440x3040 [SAR 1:1 DAR 9:19], 30 fps, 30 tbr, 1200k tbn, 60 tbc
    Output #0, mp4, to 'low-motion-video.mp4':
      Metadata:
        encoder         : Lavf58.76.100
      Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1440x3040 [SAR 1:1 DAR 9:19], q=2-31, 30 fps, 30 tbr, 90k tbn, 1200k tbc
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    [mp4 @ 0x7faee3040400] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
    [mp4 @ 0x7faee3040400] pts has no value
    [mp4 @ 0x7faee3040400] pts has no value0kB time=-00:00:00.03 bitrate=N/A speed=N/A
        Last message repeated 17985 times
    frame=17987 fps=0.0 q=-1.0 Lsize=   79332kB time=00:09:59.50 bitrate=1084.0kbits/s speed=1.59e+03x
    video:79250kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.103804%
    Input #0, h264, from 'full-video.h264':
      Duration: N/A, bitrate: N/A
      Stream #0:0: Video: h264 (High), yuv420p(progressive), 1440x3040 [SAR 1:1 DAR 9:19], 30 fps, 30 tbr, 1200k tbn, 60 tbc
    Output #0, mp4, to 'full-video.mp4':
      Metadata:
        encoder         : Lavf58.76.100
      Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1440x3040 [SAR 1:1 DAR 9:19], q=2-31, 30 fps, 30 tbr, 90k tbn, 1200k tbc
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    [mp4 @ 0x7f9381864600] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
    [mp4 @ 0x7f9381864600] pts has no value
    [mp4 @ 0x7f9381864600] pts has no value0kB time=-00:00:00.03 bitrate=N/A speed=N/A
        Last message repeated 17981 times
    frame=17983 fps=0.0 q=-1.0 Lsize=   52976kB time=00:09:59.36 bitrate= 724.1kbits/s speed=1.33e+03x
    video:52893kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.156232%
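
    One mitigation that may be worth trying for these two warnings (my addition, unverified against these files): a raw Annex B stream carries no container timestamps, so declare the input frame rate and let ffmpeg synthesize them at mux time; -framerate is the raw H.264 demuxer's input-rate option:

        # Declare the rate of the raw stream so the muxer gets real timestamps.
        # Flags mirror the original command; -framerate 30 is the addition.
        ffmpeg \
            -hide_banner \
            -framerate 30 \
            -i full-video.h264 \
            -c copy \
            -vsync vfr \
            -video_track_timescale 90000 \
            full-video.mp4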


    But the most worrisome warnings for me come from stitching all the interim .mp4 files into one:

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ff2010e00] Auto-inserting h264_mp4toannexb bitstream filter
    Input #0, concat, from 'ffconcat-h264.txt':
      Duration: N/A, bitrate: 1082 kb/s
      Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1440x3040 [SAR 1:1 DAR 9:19], 1082 kb/s, 30 fps, 30 tbr, 90k tbn, 60 tbc
        Metadata:
          handler_name    : VideoHandler
          vendor_id       : [0][0][0][0]
    Output #0, mp4, to 'video-out.mp4':
      Metadata:
        encoder         : Lavf58.76.100
      Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1440x3040 [SAR 1:1 DAR 9:19], q=2-31, 1082 kb/s, 30 fps, 30 tbr, 90k tbn, 90k tbc
        Metadata:
          handler_name    : VideoHandler
          vendor_id       : [0][0][0][0]
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9fe1009c00] Auto-inserting h264_mp4toannexb bitstream filter
    [mp4 @ 0x7f9ff2023400] Non-monotonous DTS in output stream 0:0; previous: 53954460, current: 53954460; changing to 53954461. This may result in incorrect timestamps in the output file.
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9fd1008a00] Auto-inserting h264_mp4toannexb bitstream filter
    [mp4 @ 0x7f9ff2023400] Non-monotonous DTS in output stream 0:0; previous: 107900521, current: 107874150; changing to 107900522. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x7f9ff2023400] Non-monotonous DTS in output stream 0:0; previous: 107900522, current: 107886150; changing to 107900523. This may result in incorrect timestamps in the output file.
    frame=36070 fps=0.0 q=-1.0 Lsize=  132464kB time=00:27:54.26 bitrate= 648.1kbits/s speed=6.54e+03x
    video:132296kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.126409%

    I'm not sure how to deal with those non-monotonous DTS warnings, and no matter what I try, nothing budges. I analyzed the interim .mp4 files using ffprobe -show_frames and found that the last frame of each interim .mp4 has no DTS, while the previous frames do. E.g.:

    ...
    [FRAME]
    media_type=video
    stream_index=0
    key_frame=0
    pkt_pts=53942461
    pkt_pts_time=599.360678
    pkt_dts=53942461
    pkt_dts_time=599.360678
    best_effort_timestamp=53942461
    best_effort_timestamp_time=599.360678
    pkt_duration=3600
    pkt_duration_time=0.040000
    pkt_pos=54161377
    pkt_size=1034
    width=1440
    height=3040
    pix_fmt=yuv420p
    sample_aspect_ratio=1:1
    pict_type=B
    coded_picture_number=17982
    display_picture_number=0
    interlaced_frame=0
    top_field_first=0
    repeat_pict=0
    color_range=unknown
    color_space=unknown
    color_primaries=unknown
    color_transfer=unknown
    chroma_location=left
    [/FRAME]
    [FRAME]
    media_type=video
    stream_index=0
    key_frame=0
    pkt_pts=53927461
    pkt_pts_time=599.194011
    pkt_dts=N/A
    pkt_dts_time=N/A
    best_effort_timestamp=53927461
    ...

    My guess is that as the concat demuxer reads the inputs (or somewhere else in ffmpeg's conversion pipeline), it sees no DTS set on the last frame and substitutes a value equal to the last one seen. Further down the pipeline, the muxer notices the DTS value repeating, issues the warning, and offsets it by an increment of one, which may be a somewhat nonsensical/unrealistic timing value.

    I tried using -fflags +genpts as suggested in this SO answer, but that doesn't change anything.

    Following other posts suggesting the issue lies with incompatible tbn and tbc values and possible timebase problems, I tried adding -time_base 1:90000, -enc_time_base 1:90000, and -copytb 1, and nothing budges. The -video_track_timescale 90000 is there because it helped reduce those DTS warnings from hundreds down to 3, but it doesn't eliminate them all.
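
    Another avenue that may sidestep the per-file boundaries for the H.264 parts entirely (my addition, untested; it assumes both streams share compatible SPS/PPS parameter sets): Annex B elementary streams are plain byte streams, so they can be concatenated at the byte level and wrapped into MP4 once, leaving only the JPEG segment for the concat demuxer:

        # Raw Annex B streams concatenate by simple byte appending; mux once,
        # with -framerate supplying the timestamps the raw stream lacks.
        cat low-motion-video.h264 full-video.h264 > combined.h264
        ffmpeg \
            -hide_banner \
            -framerate 30 \
            -i combined.h264 \
            -c copy \
            -video_track_timescale 90000 \
            combined.mp4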

    Question


    What is missing, and how can I get ffmpeg to perform these conversions without warnings, so that I can be sure it produces proper, well-formed output?