Advanced search

Media (0)

Word: - Tags - /serveur

No media matching your criteria is available on the site.

Other articles (74)

  • Updating from version 0.1 to 0.3

    24 June 2013

    An explanation of the notable changes made between MediaSPIP version 0.1 and version 0.3. What's new?
    Software dependencies: the latest versions of FFmpeg are used (>= v1.2.1); the dependencies for Smush are installed; MediaInfo and FFprobe are installed for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising the site by adding a logo, a banner or a background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present the changes in your MediaSPIP, or news about your projects, via the news section.
    In spipeo, MediaSPIP's default theme, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of the news type, the default fields are: publication date (customising the publication date) (...)

On other sites (12563)

  • VLC and ffplay not receiving video from RTMP stream on Nginx

    14 January, by Ekobadd

    I'm streaming from OBS 30.1.2 to an RTMP server on a DigitalOcean droplet. The server runs nginx 1.26.0 with the RTMP module (libnginx-mod-rtmp in apt).

    OBS is configured to output H.264-encoded, 1200 kbps, 24 fps, 1920x1080 video and AAC-encoded, stereo, 44.1 kHz, 160 kbps audio.

    Below is the minimal reproducible example. When I attempt to play the RTMP stream with ffplay or VLC, whether I get video is down to chance; the audio is always fine. The output from ffplay or ffprobe (example below) occasionally shows no video stream at all.

    rtmp {
            server {
                    listen 1935;
                    chunk_size 4096;

                    application ingest {
                            live on;
                            record off;

                            allow publish <my IP address>;
                            deny publish all;

                            allow play all;
                    }
            }
    }

    The server has two applications, "ingest" and "live". The former uses the following ffmpeg command to transcode the stream and publish a corresponding stream to the latter application:

    exec_push ffmpeg -i rtmp://localhost/ingest/$name -b:v 1200k -c:v libx264 -c:a aac -ar 44100 -ac 1 -f flv -preset veryfast -tune zerolatency rtmp://localhost/live/$name 2>>/tmp/rtmp-ingest-$name.log;

    As you can see, this produces a log, which shows the following:

    Output #0, flv, to 'rtmp://localhost/live/ekobadd':
      Metadata:
        |RtmpSampleAccess: true
        Server          : NGINX RTMP (github.com/arut/nginx-rtmp-module)
        displayWidth    : 1920
        displayHeight   : 1080
        fps             : 23
        profile         :
        level           :
        encoder         : Lavf61.1.100
      Stream #0:0: Audio: aac (LC) ([10][0][0][0] / 0x000A), 44100 Hz, mono, fltp, 69 kb/s
          Metadata:
            encoder         : Lavc61.3.100 aac

    The video stream is missing, even though the DigitalOcean control panel shows the server pulling about 1.2 Mbps inbound, which is roughly my OBS video bitrate. And although the transcoding ffmpeg instance does not appear to see the video stream from the ingest application, ffprobe from my local machine does, sometimes:

    > ffprobe rtmp://mydomain.com/ingest/ekobadd
    ...
    Input #0, flv, from 'rtmp://mydomain.com/ingest/ekobadd':   0B f=0/0
      Metadata:
        |RtmpSampleAccess: true
        Server          : NGINX RTMP (github.com/arut/nginx-rtmp-module)
        displayWidth    : 1920
        displayHeight   : 1080
        fps             : 23
        profile         :
        level           :
      Duration: 00:00:00.00, start: 122.045000, bitrate: N/A
      Stream #0:0: Audio: aac (LC), 48000 Hz, stereo, fltp, 163 kb/s
      Stream #0:1: Video: h264 (High), yuv420p(tv, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 1228 kb/s, 23 fps, 23.98 tbr, 1k tbn
         126.24 A-V: -1.071 fd=   0 aq=   54KB vq=  161KB sq=    0B f=0/0

    Sometimes, however, it doesn't see the stream at all:

    [rtmp @ 0000022d87d0fe00] Server error: No such stream
    rtmp://mydomain.com/ingest/ekobadd: Operation not permitted

    Testing the stream with VLC gives the same results.


    Of course, the "live" application doesn't have video either. I have, however, streamed video from it before. I assume that if I restart nginx enough times, the exec_push ffmpeg command will randomly see the video stream, much like ffprobe does. I also have HLS and DASH configured, and both work perfectly, if you're a radio talk show host.
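nginx-rtmp's exec_push starts ffmpeg the moment publishing begins, so one guess (an assumption, not something established above) is a startup race: ffmpeg probes the ingest stream before OBS has sent its first video data, locks onto audio only, and never re-probes. A sketch of a wrapper that would only start the transcode once a probe actually reports video; the probe is stubbed here so the control flow can be shown on its own, and the real probe would be something like `ffprobe -v error -select_streams v -show_entries stream=codec_type -of csv=p=0 rtmp://localhost/ingest/$name`:

```shell
# Hypothetical exec_push wrapper logic (names and retry counts are made up).
probe_has_video() {
    # Stub standing in for the ffprobe call: pretend video appears on probe 3.
    [ "$1" -ge 3 ]
}

start_when_video_present() {
    tries=1
    while [ "$tries" -le 5 ]; do
        if probe_has_video "$tries"; then
            # Here the real script would exec the ffmpeg transcode command.
            echo "starting transcode after $tries probe(s)"
            return 0
        fi
        tries=$((tries + 1))
    done
    echo "no video stream seen" >&2
    return 1
}

start_when_video_present
```

This is only a way to mask the race for debugging; if a single restart of the wrapper reliably fixes the stream, that would support the race theory.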

    /etc/nginx/nginx.conf (I'm quite sure I never touched anything in the http section, but it's included just in case):

    rtmp {
            server {
                    listen 1935;
                    chunk_size 8192;

                    idle_streams off;

                    application ingest {
                            live on;
                            record off;

                            # Transcode to h264/aac via flv, 1.2Mbps 24fps 44.1kHz, single audio channel video (HLS Ready)
                            exec_push ffmpeg -i rtmp://localhost/ingest/$name -b:v 1200k -c:v libx264 -c:a aac -ar 44100 -ac 1 -f flv -preset veryfast -tune zerolatency rtmp://localhost/live/$name 2>>/tmp/rtmp-ingest-$name.log;

                            allow publish <my IP address>;
                            deny publish all;

                            allow play all; # This was added for debugging.
                    }

                    application live {
                            live on;
                            record off;

                            hls on;
                            hls_path /var/www/mydomain.com/html/live/hls;
                            hls_fragment 6s;
                            hls_playlist_length 60;

                            dash on;
                            dash_path /var/www/mydomain.com/html/live/dash;

                            allow publish 127.0.0.1;
                            deny publish all;

                            allow play all;
                    }
            }
    }

    http {

            ##
            # Basic Settings
            ##

            sendfile on;
            tcp_nopush on;
            types_hash_max_size 2048;
            server_tokens build; # Recommended practice is to turn this off

            server_names_hash_bucket_size 64;
            # server_name_in_redirect off;

            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # SSL Settings
            ##

            ssl_protocols TLSv1.2 TLSv1.3; # Dropping SSLv3 (POODLE), TLS 1.0, 1.1
            ssl_prefer_server_ciphers off; # Don't force server cipher order.

            ##
            # Logging Settings
            ##

            access_log /var/log/nginx/access.log;

            ##
            # Gzip Settings
            ##

            gzip on;

            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # Virtual Host Configs
            ##

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
    }

    /etc/nginx/sites-available/mydomain.com:

    server {
            listen 443 ssl;

            ssl_certificate     /etc/letsencrypt/live/mydomain.com/fullchain.pem;
            ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
            ssl_protocols       TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
            ssl_ciphers         HIGH:!aNULL:!MD5;

            root /var/www/mydomain.com/html;

            server_name mydomain.com www.mydomain.com;

            location / {
                    root /var/www/mydomain.com/html/live;

    #               add_header Cache-Control no-cache;
                    add_header Access-Control-Allow-Origin *;
            }
    }

    types {
    #       application/vnd.apple.mpegurl m3u8;
            application/dash+xml mpd;
    }

  • ffmpeg-next potential bug in write_header causes timebase to be set to Rational(1/15360)

    7 September 2024, by Huhngut

    I am trying to encode a video using the ffmpeg_next crate. I have everything working, and it successfully creates an output video. The only problem is that the time_base of my stream is written to the file incorrectly. I can confirm that I set the timebase correctly on both the encoder and the stream.

    Debug prints let me narrow the problem down: octx.write_header().unwrap(); causes the stream timebase to be reset from Rational(1/30) to Rational(1/15360). Changing the timebase back afterwards has no effect; the wrong value has already been written to the header.
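For what it's worth, this matches documented libavformat behaviour rather than necessarily being a crate bug: avformat_write_header allows the muxer to overwrite the stream time_base, and the MP4 muxer picks its own tick rate (15360 is exactly 512 × 30). The usual remedy is not to fight the header but to rescale packet timestamps from the encoder time base into the post-header stream time base (ffmpeg-next exposes Packet::rescale_ts for this, mirroring av_packet_rescale_ts). The arithmetic of that rescale, as a sketch:

```shell
# A pts expressed in 1/30 ticks maps to 512x larger values in 1/15360 ticks.
enc_den=30        # encoder time base 1/30
mux_den=15360     # muxer-chosen time base 1/15360
pts=100           # frame timestamp counted in 1/30 ticks
rescaled=$(( pts * mux_den / enc_den ))
echo "$rescaled"  # 51200
```

So frame 100 at 1/30 becomes tick 51200 at 1/15360: the same instant in time, just a finer clock.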

    I modified the source code of ffmpeg-next and recompiled it. I can confirm that the correct value is still set immediately before the call to avformat_write_header:

    pub fn write_header(&mut self) -> Result<(), Error> {
        println!(
            "_________________ {:?}",
            self.stream(0).unwrap().time_base()
        );
        unsafe {
            match avformat_write_header(self.as_mut_ptr(), ptr::null_mut()) {
                0 => Ok(()),
                e => Err(Error::from(e)),
            }
        }
    }

    To my understanding this must be a bug in the crate, but I don't want to accuse anyone on the strength of my nonexistent ffmpeg knowledge. Also, the examples in the GitHub repo don't seem to have this problem, so is it my fault? Unfortunately I was not able to get transcode-x264 to run; most of my code comes from that example.

    The relevant code is below. I don't know how much set_parameters influences anything; my testing suggests it has no influence. I also tried setting the timebase again at the very end of the function, in case it gets reset by the parameters, but that did not work either:

    let mut ost = octx.add_stream(codec)?;
    ost.set_time_base(Rational::new(1, FPS));

    ost.set_parameters(&encoder);
    encoder.set_time_base(Rational::new(1, FPS));
    ost.set_parameters(&opened_encoder);

    By default, and in the example mentioned above, the stream's timebase is 0/0. If I leave mine unset, or set it to 0/0 manually, it makes no difference.

    I also noticed that changing the value passed to set_pts influences the output fps, although not the timebase. I think this is more of a side effect.
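That observation would follow from the same time-base mix-up: if the stream is stuck at 1/15360, the pts step alone determines the effective frame rate, since each frame lasts step/15360 seconds. A quick sketch of the arithmetic:

```shell
# fps implied by a pts increment when the muxer clock is 1/15360.
tb_den=15360
step=512                   # pts increment per frame; 512 = 15360 / 30
fps=$(( tb_den / step ))
echo "$fps"                # 30
```

A step of 1 in that clock would instead imply 15360 fps, which is why tweaking set_pts visibly changes the playback rate.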

    I will leave a minimal reproducible example below. Any help or hints would be appreciated.

    The abridged main function:

    fn main() {
        let output_file = "output.mp4";
        let x264_opts = parse_opts("preset=medium".to_string()).expect("invalid x264 options string");

        ffmpeg_next::init().unwrap();
        let mut octx = format::output(output_file).unwrap();

        let mut encoder = Encoder::new(&mut octx, x264_opts).unwrap();

        format::context::output::dump(&octx, 0, Some(&output_file));
        // This line somehow clears the stream's time base
        octx.write_header().unwrap();

        // Without this line, the next log prints Rational(1/30) Rational(1/15360), indicating the
        // stream's timebase is wrong even though I set it above. This line changes it back, but only
        // for the print, not the actual output, because the faulty data is written into the header.
        // octx.stream_mut(0)
        //     .unwrap()
        //     .set_time_base(Rational::new(1, FPS));

        println!(
            "---------------- {:?} {:?}",
            encoder.encoder.time_base(),
            octx.stream(0).unwrap().time_base(),
        );

        for frame_num in 0..100 {
            let mut frame = encoder.create_frame();
            frame.set_pts(Some(frame_num));
            encoder.add_frame(&frame, &mut octx);
        }

        encoder.close(&mut octx);
        octx.write_trailer().unwrap();
    }

    The Encoder struct containing the implementation logic:

    struct Encoder {
        encoder: encoder::Video,
    }

    impl Encoder {
        fn new(
            octx: &mut format::context::Output,
            x264_opts: Dictionary,
        ) -> Result<Self, ffmpeg_next::Error> {
            let set_header = octx
                .format()
                .flags()
                .contains(ffmpeg_next::format::flag::Flags::GLOBAL_HEADER);

            let codec = encoder::find(codec::Id::H264);
            let mut ost = octx.add_stream(codec)?;
            ost.set_time_base(Rational::new(1, FPS));

            let mut encoder = codec::context::Context::new_with_codec(
                encoder::find(codec::Id::H264)
                    .ok_or(ffmpeg_next::Error::InvalidData)
                    .unwrap(),
            )
            .encoder()
            .video()
            .unwrap();
            ost.set_parameters(&encoder);

            encoder.set_width(WIDTH);
            encoder.set_height(HEIGHT);
            encoder.set_aspect_ratio(WIDTH as f64 / HEIGHT as f64);
            encoder.set_format(util::format::Pixel::YUV420P);
            encoder.set_frame_rate(Some(Rational::new(FPS, 1)));
            encoder.set_time_base(Rational::new(1, FPS));

            if set_header {
                encoder.set_flags(ffmpeg_next::codec::flag::Flags::GLOBAL_HEADER);
            }

            let opened_encoder = encoder
                .open_with(x264_opts.to_owned())
                .expect("error opening x264 with supplied settings");
            ost.set_parameters(&opened_encoder);

            println!(
                "\nost time_base: {}; encoder time_base: {}; encoder frame_rate: {}\n",
                ost.time_base(),
                &opened_encoder.time_base(),
                &opened_encoder.frame_rate()
            );

            Ok(Self {
                encoder: opened_encoder,
            })
        }

        fn add_frame(&mut self, frame: &frame::Video, octx: &mut format::context::Output) {
            self.encoder.send_frame(frame).unwrap();
            self.process_packets(octx);
        }

        fn close(&mut self, octx: &mut format::context::Output) {
            self.encoder.send_eof().unwrap();
            self.process_packets(octx);
        }

        fn process_packets(&mut self, octx: &mut format::context::Output) {
            let mut encoded = Packet::empty();
            while self.encoder.receive_packet(&mut encoded).is_ok() {
                encoded.set_stream(0);
                encoded.write_interleaved(octx).unwrap();
            }
        }

        fn create_frame(&self) -> frame::Video {
            return frame::Video::new(
                self.encoder.format(),
                self.encoder.width(),
                self.encoder.height(),
            );
        }
    }

    Other utility code:

    use ffmpeg_next::{
        codec::{self},
        encoder, format, frame, util, Dictionary, Packet, Rational,
    };

    const FPS: i32 = 30;
    const WIDTH: u32 = 720;
    const HEIGHT: u32 = 1080;

    fn parse_opts<'a>(s: String) -> Option<Dictionary<'a>> {
        let mut dict = Dictionary::new();
        for keyval in s.split_terminator(',') {
            let tokens: Vec<&str> = keyval.split('=').collect();
            match tokens[..] {
                [key, val] => dict.set(key, val),
                _ => return None,
            }
        }
        Some(dict)
    }

  • MJPEG decoding is 3x slower when opening a V4L2 input device [closed]

    26 October 2024, by Xenonic

    I'm trying to decode an MJPEG video stream coming from a webcam, but I'm hitting some performance blockers when using FFmpeg's C API in my application. I've recreated the problem using the example video decoder, where I simply open the V4L2 input device, read packets, and push them to the decoder. What's strange is that if I try to get my input packets from the V4L2 device instead of from a file, the avcodec_send_packet call into the decoder is nearly 3x slower. After further poking around, I narrowed the issue down to whether or not I open the V4L2 device at all.

    Let's look at a minimal example demonstrating this behavior:

    extern "C"
    {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/opt.h>
    #include <libavdevice/avdevice.h>
    }

    #define INBUF_SIZE 4096

    static void decode(AVCodecContext *dec_ctx, AVFrame *frame, AVPacket *pkt)
    {
        if (avcodec_send_packet(dec_ctx, pkt) < 0)
            exit(1);

        int ret = 0;
        while (ret >= 0) {
            ret = avcodec_receive_frame(dec_ctx, frame);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                return;
            else if (ret < 0)
                exit(1);

            // Here we'd save off the decoded frame, but that's not necessary for the example.
        }
    }

    int main(int argc, char **argv)
    {
        const char *filename;
        const AVCodec *codec;
        AVCodecParserContext *parser;
        AVCodecContext *c = NULL;
        FILE *f;
        AVFrame *frame;
        uint8_t inbuf[INBUF_SIZE + AV_INPUT_BUFFER_PADDING_SIZE];
        uint8_t *data;
        size_t   data_size;
        int ret;
        int eof;
        AVPacket *pkt;

        filename = argv[1];

        pkt = av_packet_alloc();
        if (!pkt)
            exit(1);

        /* set end of buffer to 0 (this ensures that no overreading happens for damaged MPEG streams) */
        memset(inbuf + INBUF_SIZE, 0, AV_INPUT_BUFFER_PADDING_SIZE);

        // Use MJPEG instead of the example's MPEG1
        //codec = avcodec_find_decoder(AV_CODEC_ID_MPEG1VIDEO);
        codec = avcodec_find_decoder(AV_CODEC_ID_MJPEG);
        if (!codec) {
            fprintf(stderr, "Codec not found\n");
            exit(1);
        }

        parser = av_parser_init(codec->id);
        if (!parser) {
            fprintf(stderr, "parser not found\n");
            exit(1);
        }

        c = avcodec_alloc_context3(codec);
        if (!c) {
            fprintf(stderr, "Could not allocate video codec context\n");
            exit(1);
        }

        if (avcodec_open2(c, codec, NULL) < 0) {
            fprintf(stderr, "Could not open codec\n");
            exit(1);
        }

        c->pix_fmt = AV_PIX_FMT_YUVJ422P;

        f = fopen(filename, "rb");
        if (!f) {
            fprintf(stderr, "Could not open %s\n", filename);
            exit(1);
        }

        frame = av_frame_alloc();
        if (!frame) {
            fprintf(stderr, "Could not allocate video frame\n");
            exit(1);
        }

        avdevice_register_all();
        auto* inputFormat = av_find_input_format("v4l2");
        AVDictionary* options = nullptr;
        av_dict_set(&options, "input_format", "mjpeg", 0);
        av_dict_set(&options, "video_size", "1920x1080", 0);

        AVFormatContext* fmtCtx = nullptr;

        // Commenting this line out results in fast encoding!
        // Notice how fmtCtx is not even used anywhere; we still read packets from the file
        avformat_open_input(&fmtCtx, "/dev/video0", inputFormat, &options);

        // Just parse packets from a file and send them to the decoder.
        do {
            data_size = fread(inbuf, 1, INBUF_SIZE, f);
            if (ferror(f))
                break;
            eof = !data_size;

            data = inbuf;
            while (data_size > 0 || eof) {
                ret = av_parser_parse2(parser, c, &pkt->data, &pkt->size,
                                       data, data_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
                if (ret < 0) {
                    fprintf(stderr, "Error while parsing\n");
                    exit(1);
                }
                data      += ret;
                data_size -= ret;

                if (pkt->size)
                    decode(c, frame, pkt);
                else if (eof)
                    break;
            }
        } while (!eof);

        return 0;
    }

    Here's a histogram of the CPU time spent in the avcodec_send_packet call with and without opening the device (toggled by commenting out the avformat_open_input call above).

    Without opening the V4L2 device:

    [histogram image: fread_cpu]

    With opening the V4L2 device:

    [histogram image: webcam_cpu]

    Interestingly, we can see a significant number of calls in the 25 ms time bin. But most of them take 78 ms. Why?
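As a back-of-the-envelope check on those numbers (not from the question itself):

```shell
# Real-time frame budget at the capture rate: each frame must decode in under
# 1000/fps milliseconds, so 25 ms/frame keeps up and 78 ms/frame cannot.
fps=24
budget_ms=$(( 1000 / fps ))
echo "${budget_ms} ms per frame"   # 41 ms per frame
```

So the bimodal histogram is the difference between a pipeline that can run in real time and one that drops to roughly 13 fps.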

    So what's going on here? Why does opening the device destroy my decode performance?

    Additionally, if I run a seemingly equivalent pipeline through the ffmpeg tool itself, I don't hit this problem. Running this command:

    ffmpeg -f v4l2 -input_format mjpeg -video_size 1920x1080 -r 30 -c:v mjpeg -i /dev/video0 -c:v copy out.mjpeg

    generates an output file at a reported speed of just barely over 1.0x, i.e. 30 FPS. Perfect. So why doesn't the C API give me the same results? One thing to note: I do get periodic errors from the MJPEG decoder (about one per second); I'm not sure whether they are a concern:

    [mjpeg @ 0x5590d6b7b0] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 27 >= 27
    [mjpeg @ 0x5590d6b7b0] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 30 >= 30
    ...
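One hedged way to split the problem in two (file names and durations below are placeholders, not from the question): capture with stream copy first, then time the decode offline with ffmpeg's own -benchmark reporting, so the V4L2 device is entirely out of the picture while decoding. The block only prints the two commands, since they need real hardware to run:

```shell
# Hypothetical A/B experiment: if offline decode of the captured file is fast,
# the slowdown is tied to having the device open, not to the MJPEG data itself.
cmds='ffmpeg -f v4l2 -input_format mjpeg -video_size 1920x1080 -i /dev/video0 -t 10 -c:v copy sample.mjpeg
ffmpeg -benchmark -i sample.mjpeg -f null -'
printf '%s\n' "$cmds"
```

Comparing the -benchmark CPU-time summary of the offline run against the in-app numbers would show whether the decoder itself ever changes speed, or only the environment around it does.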

    I'm running on a Raspberry Pi CM4 with FFmpeg 6.1.1.