Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (32)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document will be attached automatically; objet, the type of object to which (...)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other formats (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

On other sites (6789)

  • Using the ffmpeg library to write to an mp4, ffprobe shows there are 100 frames and 100 packets, but av_interleaved_write_frame is only called 50 times

    2 May 2023, by ollydbg23

    Here is my code to generate an mp4 file using the ffmpeg and OpenCV libraries. OpenCV is only used to generate 100 images (frames), and ffmpeg compresses those images into an mp4 file.

    


    Here is the working code:

    


#include <iostream>
#include <vector>
#include <cstring>
#include <fstream>
#include <sstream>
#include <stdexcept>
#include <opencv2/opencv.hpp>
extern "C" {
#include <libavutil/imgutils.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
}

#include <cstdlib> // to generate time stamps

using namespace std;
using namespace cv;

int main()
{
    // Set up input frames as BGR byte arrays
    vector<Mat> frames;

    int width = 640;
    int height = 480;
    int num_frames = 100;
    Scalar black(0, 0, 0);
    Scalar white(255, 255, 255);
    int font = FONT_HERSHEY_SIMPLEX;
    double font_scale = 1.0;
    int thickness = 2;

    for (int i = 0; i < num_frames; i++) {
        Mat frame = Mat::zeros(height, width, CV_8UC3);
        putText(frame, std::to_string(i), Point(width / 2 - 50, height / 2), font, font_scale, white, thickness);
        frames.push_back(frame);
    }

    // generate a serial of time stamps which is used to set the PTS value
    // suppose they are in ms unit, the time interval is between 30ms to 59ms
    vector<int> timestamps;

    for (int i = 0; i < num_frames; i++) {
        int timestamp;
        if (i == 0)
            timestamp = 0;
        else
        {
            int random = 30 + (rand() % 30);
            timestamp = timestamps[i-0] + random;
        }

        timestamps.push_back(timestamp);
    }

    // Populate frames with BGR byte arrays

    // Initialize FFmpeg
    //av_register_all();

    // Set up output file
    AVFormatContext* outFormatCtx = nullptr;
    //AVCodec* outCodec = nullptr;
    AVCodecContext* outCodecCtx = nullptr;
    //AVStream* outStream = nullptr;
    //AVPacket outPacket;

    const char* outFile = "output.mp4";
    int outWidth = frames[0].cols;
    int outHeight = frames[0].rows;
    int fps = 25;

    // Open the output file context
    avformat_alloc_output_context2(&outFormatCtx, nullptr, nullptr, outFile);
    if (!outFormatCtx) {
        cerr << "Error: Could not allocate output format context" << endl;
        return -1;
    }

    // Open the output file
    if (avio_open(&outFormatCtx->pb, outFile, AVIO_FLAG_WRITE) < 0) {
        cerr << "Error opening output file" << std::endl;
        return -1;
    }

    // Set up output codec
    const AVCodec* outCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
    if (!outCodec) {
        cerr << "Error: Could not find H.264 codec" << endl;
        return -1;
    }

    outCodecCtx = avcodec_alloc_context3(outCodec);
    if (!outCodecCtx) {
        cerr << "Error: Could not allocate output codec context" << endl;
        return -1;
    }
    outCodecCtx->codec_id = AV_CODEC_ID_H264;
    outCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
    outCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
    outCodecCtx->width = outWidth;
    outCodecCtx->height = outHeight;
    //outCodecCtx->time_base = { 1, fps*1000 };   // 25000
    outCodecCtx->time_base = { 1, fps};   // 25000
    outCodecCtx->framerate = {fps, 1};          // 25
    outCodecCtx->bit_rate = 4000000;

    //https://github.com/leandromoreira/ffmpeg-libav-tutorial
    //We set the flag AV_CODEC_FLAG_GLOBAL_HEADER which tells the encoder that it can use the global headers.
    if (outFormatCtx->oformat->flags & AVFMT_GLOBALHEADER)
    {
        outCodecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER; //
    }

    // Open output codec
    if (avcodec_open2(outCodecCtx, outCodec, nullptr) < 0) {
        cerr << "Error: Could not open output codec" << endl;
        return -1;
    }

    // Create output stream
    AVStream* outStream = avformat_new_stream(outFormatCtx, outCodec);
    if (!outStream) {
        cerr << "Error: Could not allocate output stream" << endl;
        return -1;
    }

    // Configure output stream parameters (e.g., time base, codec parameters, etc.)
    // ...

    // Connect output stream to format context
    outStream->codecpar->codec_id = outCodecCtx->codec_id;
    outStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    outStream->codecpar->width = outCodecCtx->width;
    outStream->codecpar->height = outCodecCtx->height;
    outStream->codecpar->format = outCodecCtx->pix_fmt;
    outStream->time_base = outCodecCtx->time_base;

    int ret = avcodec_parameters_from_context(outStream->codecpar, outCodecCtx);
    if (ret < 0) {
        cerr << "Error: Could not copy codec parameters to output stream" << endl;
        return -1;
    }

    outStream->avg_frame_rate = outCodecCtx->framerate;
    //outStream->id = outFormatCtx->nb_streams++;  <--- We shouldn't modify outStream->id

    ret = avformat_write_header(outFormatCtx, nullptr);
    if (ret < 0) {
        cerr << "Error: Could not write output header" << endl;
        return -1;
    }

    // Convert frames to YUV format and write to output file
    int frame_count = -1;
    for (const auto& frame : frames) {
        frame_count++;
        AVFrame* yuvFrame = av_frame_alloc();
        if (!yuvFrame) {
            cerr << "Error: Could not allocate YUV frame" << endl;
            return -1;
        }
        av_image_alloc(yuvFrame->data, yuvFrame->linesize, outWidth, outHeight, AV_PIX_FMT_YUV420P, 32);

        yuvFrame->width = outWidth;
        yuvFrame->height = outHeight;
        yuvFrame->format = AV_PIX_FMT_YUV420P;

        // Convert BGR frame to YUV format
        Mat yuvMat;
        cvtColor(frame, yuvMat, COLOR_BGR2YUV_I420);
        memcpy(yuvFrame->data[0], yuvMat.data, outWidth * outHeight);
        memcpy(yuvFrame->data[1], yuvMat.data + outWidth * outHeight, outWidth * outHeight / 4);
        memcpy(yuvFrame->data[2], yuvMat.data + outWidth * outHeight * 5 / 4, outWidth * outHeight / 4);

        // Set up output packet
        //av_init_packet(&outPacket); //error C4996: 'av_init_packet': was declared deprecated
        AVPacket* outPacket = av_packet_alloc();
        memset(outPacket, 0, sizeof(outPacket)); //Use memset instead of av_init_packet (probably unnecessary).
        //outPacket->data = nullptr;
        //outPacket->size = 0;

        // set the frame pts, do I have to set the package pts?

        // yuvFrame->pts = av_rescale_q(timestamps[frame_count]*25, outCodecCtx->time_base, outStream->time_base); //Set PTS timestamp
        yuvFrame->pts = av_rescale_q(frame_count*frame_count, outCodecCtx->time_base, outStream->time_base); //Set PTS timestamp

        // Encode frame and write to output file
        int ret = avcodec_send_frame(outCodecCtx, yuvFrame);
        if (ret < 0) {
            cerr << "Error: Could not send frame to output codec" << endl;
            return -1;
        }
        while (ret >= 0)
        {
            ret = avcodec_receive_packet(outCodecCtx, outPacket);

            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            {
                int abc;
                abc++;
                break;
            }
            else if (ret < 0)
            {
                cerr << "Error: Could not receive packet from output codec" << endl;
                return -1;
            }

            //av_packet_rescale_ts(&outPacket, outCodecCtx->time_base, outStream->time_base);

            outPacket->stream_index = outStream->index;

            outPacket->duration = av_rescale_q(1, outCodecCtx->time_base, outStream->time_base);   // Set packet duration

            ret = av_interleaved_write_frame(outFormatCtx, outPacket);

            static int call_write = 0;

            call_write++;
            printf("av_interleaved_write_frame %d\n", call_write);

            av_packet_unref(outPacket);
            if (ret < 0) {
                cerr << "Error: Could not write packet to output file" << endl;
                return -1;
            }
        }

        av_frame_free(&yuvFrame);
    }

    // Flush the encoder
    ret = avcodec_send_frame(outCodecCtx, nullptr);
    if (ret < 0) {
        std::cerr << "Error flushing encoder: " << std::endl;
        return -1;
    }

    while (ret >= 0) {
        AVPacket* pkt = av_packet_alloc();
        if (!pkt) {
            std::cerr << "Error allocating packet" << std::endl;
            return -1;
        }
        ret = avcodec_receive_packet(outCodecCtx, pkt);

        // Write the packet to the output file
        if (ret == 0)
        {
            pkt->stream_index = outStream->index;
            pkt->duration = av_rescale_q(1, outCodecCtx->time_base, outStream->time_base);   // <---- Set packet duration
            ret = av_interleaved_write_frame(outFormatCtx, pkt);
            av_packet_unref(pkt);
            if (ret < 0) {
                std::cerr << "Error writing packet to output file: " << std::endl;
                return -1;
            }
        }
    }

    // Write output trailer
    av_write_trailer(outFormatCtx);

    // Clean up
    avcodec_close(outCodecCtx);
    avcodec_free_context(&outCodecCtx);
    avformat_free_context(outFormatCtx);

    return 0;
}

    Note that I have used the ffprobe tool (one of the tools that ships with ffmpeg) to inspect the generated mp4 file.

    I see that the mp4 file has 100 frames and 100 packets, but in my code I have these lines:

    static int call_write = 0;

    call_write++;
    printf("av_interleaved_write_frame %d\n", call_write);

    I see that av_interleaved_write_frame is only called 50 times, not the expected 100 times. Can anyone explain why?

    Thanks.

    BTW, from the ffmpeg documentation (see here: "For video, it should typically contain one compressed frame"), I see that a packet normally carries one video frame, so ffprobe's result looks correct.
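
    For what it is worth, packets that the encoder only emits during the final drain (after avcodec_send_frame(ctx, nullptr)) are written in the flush loop above, where the call_write counter is never incremented; that alone could account for a printf count of 50 while ffprobe still sees 100 packets. The helper below is only an illustrative sketch, not the code above (write_all_ready_packets is a hypothetical name): it counts every av_interleaved_write_frame() call in one place, so both the per-frame phase and the flush phase are included.

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}

// Sketch only: drain whatever the encoder has ready and count every write,
// whether we are in the normal per-frame loop or the final flush.
static int write_all_ready_packets(AVCodecContext* enc, AVFormatContext* fmt,
                                   AVStream* st, AVPacket* pkt, int* write_calls)
{
    int ret;
    while ((ret = avcodec_receive_packet(enc, pkt)) >= 0) {
        av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
        pkt->stream_index = st->index;
        ret = av_interleaved_write_frame(fmt, pkt);
        ++*write_calls;                      // counted in BOTH phases
        av_packet_unref(pkt);
        if (ret < 0)
            return ret;
    }
    // EAGAIN means the encoder wants more input, EOF means it is fully drained;
    // neither is an error for the caller.
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}

    Intended use (still a sketch): call write_all_ready_packets() after every avcodec_send_frame(enc, frame), then send a nullptr frame and call it one last time; the final counter should then match the packet count that ffprobe reports.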

    Here is the command I used to inspect the mp4 file:

    ffprobe -show_frames output.mp4 >> frames.txt
    ffprobe -show_packets output.mp4 >> packets.txt

    My testing code is derived from an answer to another question here: avformat_write_header() function call crashed when I try to save several RGB data to a output.mp4 file

  • How to record the Bluetooth audio from my phone on my PC using ffmpeg?

    12 September 2023, by Juan Inzunza

    I have my phone connected to my PC via Bluetooth and I am playing audio on the phone, so I can hear that audio on the PC. I am trying to record that audio on the PC using ffmpeg; this is the command I have so far:

    ffmpeg -f pulse -ac 1 -ar 44100 -i alsa_input.usb-KTMicro_KT_USB_Audio_2021-06-25-0000-0000-0000--00.mono-fallback -f pulse -ac 2 -ar 44100 -i bluez_source.84_5F_04_75_B9_F6.a2dp_source -filter_complex amix=inputs=2 -i /dev/video5 -s 640x480 -vcodec libx264 -preset veryfast -crf 18 -acodec libmp3lame -ar 44100 -q:a 1 -pix_fmt yuv420p -aq 0 ~/Videos

    "bluez_source.84_5F_04_75_B9_F6.a2dp_source" is the name of the Bluetooth audio source (my phone). The problem is that when no audio is playing on the device, it simply disappears from the output of "pactl list sources", and it reappears when I play audio. So it is only listed while audio is playing, and I have to keep the phone playing audio so that ffmpeg recognizes the device and does not fail when the command is run.

    Finally, my question is: how can I keep that device in the RUNNING or SUSPENDED state without having to play audio all the time?

gandalf@Mordor:~$ pactl list sources
Source #64
    State: IDLE
    Name: alsa_output.usb-KTMicro_KT_USB_Audio_2021-06-25-0000-0000-0000--00.analog-stereo.monitor
    Description: Monitor of KT USB Audio Analog Stereo
    Driver: module-alsa-card.c
    Sample Specification: s16le 2ch 44100Hz
    Channel Map: front-left,front-right
    Owner Module: 85
    Mute: no
    Volume: front-left: 65536 / 100% / 0.00 dB,   front-right: 65536 / 100% / 0.00 dB
            balance 0.00
    Base Volume: 65536 / 100% / 0.00 dB
    Monitor of Sink: alsa_output.usb-KTMicro_KT_USB_Audio_2021-06-25-0000-0000-0000--00.analog-stereo
    Latency: 0 usec, configured 2000000 usec
    Flags: DECIBEL_VOLUME LATENCY
    Properties:
        device.description = "Monitor of KT USB Audio Analog Stereo"
        device.class = "monitor"
        alsa.card = "1"
        alsa.card_name = "KT USB Audio"
        alsa.long_card_name = "KTMicro KT USB Audio at usb-0000:03:00.3-3, full speed"
        alsa.driver_name = "snd_usb_audio"
        device.bus_path = "pci-0000:03:00.3-usb-0:3:1.0"
        sysfs.path = "/devices/pci0000:00/0000:00:08.1/0000:03:00.3/usb1/1-3/1-3:1.0/sound/card1"
        udev.id = "usb-KTMicro_KT_USB_Audio_2021-06-25-0000-0000-0000--00"
        device.bus = "usb"
        device.vendor.id = "12d1"
        device.vendor.name = "Huawei Technologies Co., Ltd."
        device.product.id = "0010"
        device.product.name = "KT USB Audio"
        device.serial = "KTMicro_KT_USB_Audio_2021-06-25-0000-0000-0000-"
        device.string = "1"
        module-udev-detect.discovered = "1"
        device.icon_name = "audio-card-usb"
    Formats:
        pcm

Source #65
    State: SUSPENDED
    Name: alsa_input.usb-KTMicro_KT_USB_Audio_2021-06-25-0000-0000-0000--00.mono-fallback
    Description: KT USB Audio Mono
    Driver: module-alsa-card.c
    Sample Specification: s16le 1ch 44100Hz
    Channel Map: mono
    Owner Module: 85
    Mute: no
    Volume: mono: 65536 / 100% / 0.00 dB
            balance 0.00
    Base Volume: 65536 / 100% / 0.00 dB
    Monitor of Sink: n/a
    Latency: 0 usec, configured 0 usec
    Flags: HARDWARE HW_MUTE_CTRL HW_VOLUME_CTRL DECIBEL_VOLUME LATENCY
    Properties:
        alsa.resolution_bits = "16"
        device.api = "alsa"
        device.class = "sound"
        alsa.class = "generic"
        alsa.subclass = "generic-mix"
        alsa.name = "USB Audio"
        alsa.id = "USB Audio"
        alsa.subdevice = "0"
        alsa.subdevice_name = "subdevice #0"
        alsa.device = "0"
        alsa.card = "1"
        alsa.card_name = "KT USB Audio"
        alsa.long_card_name = "KTMicro KT USB Audio at usb-0000:03:00.3-3, full speed"
        alsa.driver_name = "snd_usb_audio"
        device.bus_path = "pci-0000:03:00.3-usb-0:3:1.0"
        sysfs.path = "/devices/pci0000:00/0000:00:08.1/0000:03:00.3/usb1/1-3/1-3:1.0/sound/card1"
        udev.id = "usb-KTMicro_KT_USB_Audio_2021-06-25-0000-0000-0000--00"
        device.bus = "usb"
        device.vendor.id = "12d1"
        device.vendor.name = "Huawei Technologies Co., Ltd."
        device.product.id = "0010"
        device.product.name = "KT USB Audio"
        device.serial = "KTMicro_KT_USB_Audio_2021-06-25-0000-0000-0000-"
        device.string = "hw:1"
        device.buffering.buffer_size = "176400"
        device.buffering.fragment_size = "88200"
        device.access_mode = "mmap+timer"
        device.profile.name = "mono-fallback"
        device.profile.description = "Mono"
        device.description = "KT USB Audio Mono"
        module-udev-detect.discovered = "1"
        device.icon_name = "audio-card-usb"
    Ports:
        analog-input-mic: Microphone (type: Mic, priority: 8700, availability unknown)
    Active Port: analog-input-mic
    Formats:
        pcm

Source #68
    State: RUNNING
    Name: bluez_source.84_5F_04_75_B9_F6.a2dp_source
    Description: Juan's S20 FE
    Driver: module-bluez5-device.c
    Sample Specification: s16le 2ch 44100Hz
    Channel Map: front-left,front-right
    Owner Module: 23
    Mute: no
    Volume: front-left: 53151 /  81% / -5.46 dB,   front-right: 53151 /  81% / -5.46 dB
            balance 0.00
    Base Volume: 65536 / 100% / 0.00 dB
    Monitor of Sink: n/a
    Latency: 68928 usec, configured 68537 usec
    Flags: HARDWARE DECIBEL_VOLUME LATENCY
    Properties:
        bluetooth.protocol = "a2dp_source"
        bluetooth.codec = "sbc"
        device.description = "Juan's S20 FE"
        device.string = "84:5F:04:75:B9:F6"
        device.api = "bluez"
        device.class = "sound"
        device.bus = "bluetooth"
        device.form_factor = "phone"
        bluez.path = "/org/bluez/hci0/dev_84_5F_04_75_B9_F6"
        bluez.class = "0x5a020c"
        bluez.alias = "Juan's S20 FE"
        device.icon_name = "audio-card-bluetooth"
    Ports:
        phone-input: Phone (type: Phone, priority: 0, available)
    Active Port: phone-input
    Formats:
        pcm

    This is the error I get when no audio is playing on my phone:

ffmpeg version 4.4.2-0ubuntu0.22.04.1 Copyright (c) 2000-2021 the FFmpeg developers
  built with gcc 11 (Ubuntu 11.2.0-19ubuntu1)
  configuration: --prefix=/usr --extra-version=0ubuntu0.22.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-pocketsphinx --enable-librsvg --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 70.100 / 56. 70.100
  libavcodec     58.134.100 / 58.134.100
  libavformat    58. 76.100 / 58. 76.100
  libavdevice    58. 13.100 / 58. 13.100
  libavfilter     7.110.100 /  7.110.100
  libswscale      5.  9.100 /  5.  9.100
  libswresample   3.  9.100 /  3.  9.100
  libpostproc    55.  9.100 / 55.  9.100
Guessed Channel Layout for Input Stream #0.0 : mono
Input #0, pulse, from 'alsa_input.usb-KTMicro_KT_USB_Audio_2021-06-25-0000-0000-0000--00.mono-fallback':
  Duration: N/A, start: 1694557964.268337, bitrate: 705 kb/s
  Stream #0:0: Audio: pcm_s16le, 44100 Hz, mono, s16, 705 kb/s
bluez_source.84_5F_04_75_B9_F6.a2dp_source: Input/output error

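    As an aside, for the SUSPENDED case specifically, one thing that is sometimes done is to keep a small recording client attached to the source so that PulseAudio never idles it. The program below is only a hypothetical, untested sketch using the libpulse-simple API and the source name from the listing above; it holds a record stream open and throws the samples away. It cannot help with the harder part of the problem, the a2dp source disappearing entirely when the phone stops streaming.

// Hypothetical keep-alive reader (sketch only): hold a record stream open on
// the Bluetooth source so PulseAudio keeps it in the RUNNING state.
// Build (assumption): g++ keepalive.cpp -lpulse-simple -lpulse
#include <cstdio>
#include <cstdint>
#include <pulse/simple.h>
#include <pulse/error.h>

int main()
{
    pa_sample_spec ss;
    ss.format = PA_SAMPLE_S16LE;   // matches the "s16le 2ch 44100Hz" spec shown above
    ss.rate = 44100;
    ss.channels = 2;

    int error = 0;
    pa_simple* s = pa_simple_new(nullptr, "bt-keepalive", PA_STREAM_RECORD,
                                 "bluez_source.84_5F_04_75_B9_F6.a2dp_source",
                                 "keep source running", &ss, nullptr, nullptr, &error);
    if (!s) {
        std::fprintf(stderr, "pa_simple_new() failed: %s\n", pa_strerror(error));
        return 1;
    }

    uint8_t buf[4096];
    for (;;) {                                  // read and discard forever
        if (pa_simple_read(s, buf, sizeof(buf), &error) < 0) {
            std::fprintf(stderr, "pa_simple_read() failed: %s\n", pa_strerror(error));
            break;
        }
    }

    pa_simple_free(s);
    return 0;
}
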
  • lavc/speedhqdec: Add AV_CODEC_CAP_SLICE_THREADS

    13 May 2024, by Tomas Härdin
    lavc/speedhqdec: Add AV_CODEC_CAP_SLICE_THREADS
    

    Each field slice is assigned to one thread.
    Serial performance is unaffected.

    • [DH] libavcodec/speedhqdec.c