
Other articles (53)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player used was created specifically for MediaSPIP: it is fully customizable graphically to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

On other sites (8600)

  • libavfilter/scale2ref: Add constants for the primary input

    30 May 2017, by Kevin Mark

    Variables pertaining to the main video are now available when
    using the scale2ref filter. This allows, as an example, scaling a
    video with another as a reference point while maintaining the
    original aspect ratio of the primary/non-reference video.

    Consider the following graph: scale2ref=iw/6:-1 [main][ref]
    This will scale [main] to 1/6 the width of [ref] while maintaining
    the aspect ratio. This works well only when the AR of [ref] is
    equal to the AR of [main]. What the above filter really does is
    maintain the AR of [ref] when scaling [main]. So in all non-same-AR
    situations [main] will appear stretched or compressed to conform to
    the AR of the reference video. Without doing this calculation
    externally, there is no way to scale in reference to another input
    while maintaining AR in libavfilter.

    To make this possible, we introduce eight new constants to be used
    in the w and h expressions only in the scale2ref filter:

    * main_w/main_h: width/height of the main input video
    * main_a: aspect ratio of the main input video
    * main_sar: sample aspect ratio of the main input video
    * main_dar: display aspect ratio of the main input video
    * main_hsub/main_vsub: horiz/vert chroma subsample vals of main
    * mdar: a shorthand alias of main_dar

    Of course, not all of these constants are needed for maintaining the
    AR, but adding additional constants in line with what is available for
    in/out allows for other scaling possibilities I have not imagined.

    So to now scale a video to 1/6 the size of another video using the
    width and maintaining its own aspect ratio you can do this:

    scale2ref=iw/6:ow/mdar [main][ref]

    This is ideal for picture-in-picture configurations where you could
    have a square or 4:3 video overlaid on a corner of a larger 16:9
    feed all while keeping the scaled video in the corner at its correct
    aspect ratio and always the same size relative to the larger video.
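
    For illustration only (the file names and overlay placement below are
    my own, not part of this commit), a complete picture-in-picture
    command built on that expression could look like:

    ffmpeg -i small.mp4 -i big.mp4 \
      -filter_complex "[0:v][1:v]scale2ref=iw/6:ow/mdar[pip][base];[base][pip]overlay=W-w-10:H-h-10[out]" \
      -map "[out]" out.mp4

    Here [0:v] is scaled to 1/6 the width of [1:v] while keeping its own
    display aspect ratio, then overlaid 10 pixels from the bottom-right
    corner of the base video.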

    I've tried to re-use as much code as possible. I could not find a way
    to avoid duplicating the var_names array. The two arrays (the normal
    one and the scale2ref one) must now be kept in sync for everything
    to work, which does not seem ideal. For every variable introduced
    into or removed from the normal scale filter, one must be added to
    or removed from the scale2ref version. Suggestions on how to avoid
    var_names duplication are welcome.

    var_values has been increased to always be large enough for the
    additional scale2ref variables. I do not foresee this being a problem
    as the names variable will always be the correct size. From my
    understanding of av_expr_parse_and_eval it will stop processing
    variables when it runs out of names even though there may be
    additional (potentially uninitialized) entries in the values array.
    The ideal solution here would be using a variable-length array but
    that is unsupported in C90.
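
    As a standalone sketch of that behavior (my own example, not the
    filter's actual tables; the expression and array size are made up):
    av_expr_parse_and_eval() scans the NULL-terminated names array, so
    trailing slots in the values array are simply never read.

    extern "C" {
    #include <libavutil/eval.h>
    }
    #include <cstdio>

    int main()
    {
        // Two named constants; the NULL terminator ends the lookup.
        static const char *const names[] = { "main_w", "main_h", NULL };
        // Deliberately oversized: entries past the last name are ignored.
        double values[8] = { 1920, 1080 };
        double result = 0;

        if (av_expr_parse_and_eval(&result, "main_w/6", names, values,
                                   NULL, NULL, NULL, NULL, // no custom functions
                                   NULL, 0, NULL) < 0)     // opaque, log_offset, log_ctx
            return 1;
        std::printf("%f\n", result); // prints 320.000000
        return 0;
    }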

    This patch does not remove any functionality and is strictly a
    feature patch. There are no API changes. Behavior does not change for
    any previously valid inputs.

    The applicable documentation has also been updated.

    Signed-off-by: Kevin Mark <kmark937@gmail.com>
    Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

    • [DH] doc/filters.texi
    • [DH] libavfilter/scale.c
  • FFmpeg avcodec_open2 throws -22 ("Invalid Argument")

    14 April 2023, by stupidbutseeking

    I got stuck trying to write a simple video conversion using C++ and ffmpeg.


    When trying to convert a video using FFmpeg, calling avcodec_open2 fails with the code -22, which corresponds to AVERROR(EINVAL), an "Invalid argument" error.
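
    (For reference, the numeric code can be turned into readable text with libavutil's av_strerror(); the helper below is my own sketch, only av_strerror() itself is the real API:)

    extern "C" {
    #include <libavutil/error.h>
    }
    #include <iostream>

    // Hypothetical helper: print a readable message for an FFmpeg error code.
    static void printAvError(int errorCode)
    {
        char buf[AV_ERROR_MAX_STRING_SIZE] = { 0 };
        if (av_strerror(errorCode, buf, sizeof(buf)) == 0)
            std::cout << "FFmpeg error " << errorCode << ": " << buf << std::endl;
        else
            std::cout << "Unknown FFmpeg error " << errorCode << std::endl;
    }
    // printAvError(-22) prints "Invalid argument", i.e. AVERROR(EINVAL).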


    I can't figure out why it fails, nor what the invalid argument is. In the following snippet I create the output codec and pass its context the information from the source (code further down below).


    The check for the outputCodec works and does not report an error. As far as I know, the AVDictionary argument is optional. So I guess the reason for the error is the context.


    const AVCodec* outputCodec = avcodec_find_encoder_by_name(codecName.c_str());

    if (!outputCodec)
    {
        std::cout << "Target codec not found" << std::endl;
        return -1;
    }

    AVCodecContext* outputCodecContext = avcodec_alloc_context3(outputCodec);
    outputCodecContext->bit_rate = bitRate;
    outputCodecContext->width = inputCodecContext->width;
    outputCodecContext->height = inputCodecContext->height;
    outputCodecContext->pix_fmt = outputCodec->pix_fmts[0];
    outputCodecContext->time_base = inputCodecContext->time_base;

    int errorCode = avcodec_open2(outputCodecContext, outputCodec, NULL); // THIS RETURNS -22
    if (errorCode != 0)
    {
        std::cout << "Error opening the target codec" << std::endl;
        return -1;
    }


    Here is the code for getting the input file and context:


    std::string inputFilename = "input_video.mp4";
    std::string outputFilename = "output.avi";
    std::string codecName = "mpeg4";
    int bitRate = 400000;

    AVFormatContext* inputFormatContext = NULL;
    if (avformat_open_input(&inputFormatContext, inputFilename.c_str(), NULL, NULL) != 0)
    {
        std::cout << "Error opening the input file" << std::endl;
        return -1;
    }

    // [Do Video Stream Search]

    AVCodecParameters* inputCodecParameters = inputFormatContext->streams[videoStreamIndex]->codecpar;
    const AVCodec* inputCodec = avcodec_find_decoder(inputCodecParameters->codec_id);
    AVCodecContext* inputCodecContext = avcodec_alloc_context3(inputCodec);
    if (avcodec_parameters_to_context(inputCodecContext, inputCodecParameters) != 0)
    {
        std::cout << "Error setting up the input codec" << std::endl;
        return -1;
    }
    if (avcodec_open2(inputCodecContext, inputCodec, NULL) != 0)
    {
        std::cout << "Error opening the input codec" << std::endl;
        return -1;
    }


    The purpose was simply to get started with ffmpeg in my own C++ project.


    If it is of any help, I downloaded the ffmpeg libs from here. I used the GPL shared ones. The architecture is win x64. I referenced them through the project properties (additional libraries and so on).


    I tried to convert a .mp4 video to an .avi video with the "mpeg4" codec. I also tried other codecs like "libx264", but none worked.


    I searched for the problem on Stack Overflow but could not find the exact same problem. While its purpose is different, this post is about the same error code when calling avcodec_open2, but its solution does not work for me. I inspected the watch for outputCodecContext while running the code, and codec_id, codec_type and format are set. I use the time_base from the input file; according to my understanding, this should be equal to the source. So I cannot find out what I am missing.


    Thanks in advance, and sorry for my English.


    And for completeness, here is the whole method:


    int TestConvert()
    {
        std::string inputFilename = "input_video.mp4";
        std::string outputFilename = "output.avi";
        std::string codecName = "mpeg4";
        int bitRate = 400000;

        AVFormatContext* inputFormatContext = NULL;
        if (avformat_open_input(&inputFormatContext, inputFilename.c_str(), NULL, NULL) != 0)
        {
            std::cout << "Error opening the input file" << std::endl;
            return -1;
        }

        int videoStreamIndex = -1;
        for (unsigned int i = 0; i < inputFormatContext->nb_streams; i++)
        {
            if (inputFormatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
            {
                videoStreamIndex = i;
                break;
            }
        }

        AVCodecParameters* inputCodecParameters = inputFormatContext->streams[videoStreamIndex]->codecpar;
        const AVCodec* inputCodec = avcodec_find_decoder(inputCodecParameters->codec_id);
        AVCodecContext* inputCodecContext = avcodec_alloc_context3(inputCodec);
        if (avcodec_parameters_to_context(inputCodecContext, inputCodecParameters) != 0)
        {
            std::cout << "Error setting up the input codec" << std::endl;
            return -1;
        }
        if (avcodec_open2(inputCodecContext, inputCodec, NULL) != 0)
        {
            std::cout << "Error opening the input codec" << std::endl;
            return -1;
        }

        const AVCodec* outputCodec = avcodec_find_encoder_by_name(codecName.c_str());

        if (!outputCodec)
        {
            std::cout << "Target codec not found" << std::endl;
            return -1;
        }

        AVCodecContext* outputCodecContext = avcodec_alloc_context3(outputCodec);
        outputCodecContext->bit_rate = bitRate;
        outputCodecContext->width = inputCodecContext->width;
        outputCodecContext->height = inputCodecContext->height;
        outputCodecContext->pix_fmt = outputCodec->pix_fmts[0];
        outputCodecContext->time_base = inputCodecContext->time_base;

        int errorCode = avcodec_open2(outputCodecContext, outputCodec, NULL);
        if (errorCode != 0)
        {
            std::cout << "Error opening the target codec" << std::endl;
            return -1;
        }

        AVFormatContext* outputFormatContext = NULL;
        if (avformat_alloc_output_context2(&outputFormatContext, NULL, NULL, outputFilename.c_str()) != 0)
        {
            std::cout << "Error creating the output format" << std::endl;
            return -1;
        }

        AVStream* outputVideoStream = avformat_new_stream(outputFormatContext, outputCodec);
        if (outputVideoStream == NULL)
        {
            std::cout << "Error adding the video stream to the output format" << std::endl;
            return -1;
        }
        outputVideoStream->id = outputFormatContext->nb_streams - 1;
        AVCodecParameters* outputCodecParameters = outputVideoStream->codecpar;
        if (avcodec_parameters_from_context(outputCodecParameters, outputCodecContext) != 0)
        {
            std::cout << "Error setting up the output codec" << std::endl;
            return -1;
        }

        if (!(outputFormatContext->oformat->flags & AVFMT_NOFILE))
        {
            if (avio_open(&outputFormatContext->pb, outputFilename.c_str(), AVIO_FLAG_WRITE) != 0)
            {
                std::cout << "Error opening the output file" << std::endl;
                return -1;
            }
        }

        if (avformat_write_header(outputFormatContext, NULL) != 0)
        {
            std::cout << "Error writing the output format to the output file" << std::endl;
            return -1;
        }

        AVPacket packet;
        int response;
        AVFrame* frame = av_frame_alloc();
        AVFrame* outputFrame = av_frame_alloc();
        while (av_read_frame(inputFormatContext, &packet) == 0)
        {
            if (packet.stream_index == videoStreamIndex)
            {
                response = avcodec_send_packet(inputCodecContext, &packet);
                while (response >= 0)
                {
                    response = avcodec_receive_frame(inputCodecContext, frame);
                    if (response == AVERROR(EAGAIN) || response == AVERROR_EOF)
                    {
                        break;
                    }
                    else if (response < 0)
                    {
                        std::cout << "Error decoding the video packet" << std::endl;
                        return -1;
                    }

                    struct SwsContext* swsContext = sws_getContext(inputCodecContext->width, inputCodecContext->height, inputCodecContext->pix_fmt, outputCodecContext->width, outputCodecContext->height, outputCodecContext->pix_fmt, SWS_BILINEAR, NULL, NULL, NULL);
                    if (!swsContext)
                    {
                        std::cout << "Error creating the SwsContext" << std::endl;
                        return -1;
                    }
                    sws_scale(swsContext, frame->data, frame->linesize, 0, inputCodecContext->height, outputFrame->data, outputFrame->linesize);
                    sws_freeContext(swsContext);

                    outputFrame->pts = frame->pts;
                    outputFrame->pkt_dts = frame->pkt_dts;
                    //outputFrame->pkt_duration = frame->pkt_duration;
                    response = avcodec_send_frame(outputCodecContext, outputFrame);
                    while (response >= 0)
                    {
                        response = avcodec_receive_packet(outputCodecContext, &packet);
                        if (response == AVERROR(EAGAIN) || response == AVERROR_EOF)
                        {
                            break;
                        }
                        else if (response < 0)
                        {
                            std::cout << "Error encoding the output frame" << std::endl;
                            return -1;
                        }

                        packet.stream_index = outputVideoStream->id;
                        av_packet_rescale_ts(&packet, outputCodecContext->time_base, outputVideoStream->time_base);
                        if (av_interleaved_write_frame(outputFormatContext, &packet) != 0)
                        {
                            std::cout << "Error writing the output packet" << std::endl;
                            return -1;
                        }
                        av_packet_unref(&packet);
                    }
                }
            }
            av_packet_unref(&packet);
        }

        av_write_trailer(outputFormatContext);
        avcodec_free_context(&inputCodecContext);
        avcodec_free_context(&outputCodecContext);
        avformat_close_input(&inputFormatContext);
        avformat_free_context(inputFormatContext);
        avformat_free_context(outputFormatContext);
        av_frame_free(&frame);
        av_frame_free(&outputFrame);

        return 0;
    }


  • What's the most desirable way to capture system display and audio in the form of individual encoded audio and video packets in Go? [closed]

    11 January 2023, by Tiger Yang

    Question (read the context below first):


    For those of you familiar with the capabilities of Go: is there a better way to go about all this? Since ffmpeg is so ubiquitous, I'm sure it has been optimized to perfection, but what's the best way to capture system display and audio in the form of individual encoded audio and video packets in Go, so that they can then be sent via webtransport-go? I want it to prioritize efficiency and low latency, and ideally to capture and encode the framebuffer directly like ffmpeg does.


    Thanks! I have many other questions about this, but I think it's best to ask as I go.


    Context and what I've done so far:


    I'm writing remote desktop software for my personal use because of grievances with the current solutions out there. At the moment, it consists of a web app that uses the WebTransport API to send input datagrams and receive AV packets on two dedicated unidirectional streams, and the WebCodecs API to decode these packets. On the server side, I originally planned to use Python with the aioquic library as a WebTransport server. Upon connection and authentication, the server would start ffmpeg as a subprocess with this command:


    ffmpeg -init_hw_device d3d11va -filter_complex ddagrab=video_size=1920x1080:framerate=60 -vcodec hevc_nvenc -tune ll -preset p7 -spatial_aq 1 -temporal_aq 1 -forced-idr 1 -rc cbr -b:v 400K -no-scenecut 1 -g 216000 -f hevc -


    What I really appreciate about this is that it uses Windows' Desktop Duplication API to copy the framebuffer of my GPU and hand it directly to the on-die hardware encoder, with zero round trips to the CPU. I think it's about as efficient and elegant a solution as I can manage. It then writes the encoded stream to stdout, which Python can read and send to the client.


    As for the audio, there is another ffmpeg instance:


    ffmpeg -f dshow -channels 2 -sample_rate 48000 -sample_size 16 -audio_buffer_size 15 -i audio="RD Audio (High Definition Audio Device)" -acodec libopus -vbr on -application audio -mapping_family 0 -apply_phase_inv true -b:a 25K -fec false -packet_loss 0 -map 0 -f data -


    which listens to a physical loopback interface, which is literally just a short wire bridging the front-panel headphone and microphone jacks (I'm aware of the quality loss of converting to analog and back, but the audio is then crushed down to 25 kbps, so it's fine).


    Unfortunately, aioquic was not easy to work with IMO, and I found webtransport-go (https://github.com/adriancable/webtransport-go), which was a hell of a lot better in both simplicity and documentation. However, now I'm dealing with a whole new language, and I want to ask the question above.


    EDIT: Here's the code for my server so far:


    package main

    import (
        "bytes"
        "context"
        "fmt"
        "log"
        "net/http"
        "os/exec"
        "time"

        "github.com/adriancable/webtransport-go"
    )

    func warn(str string) {
        fmt.Printf("\n===== WARNING ===================================================================================================\n   %s\n=================================================================================================================\n", str)
    }

    func main() {

        password := []byte("abc")

        videoString := []string{
            "ffmpeg",
            "-init_hw_device", "d3d11va",
            "-filter_complex", "ddagrab=video_size=1920x1080:framerate=60",
            "-vcodec", "hevc_nvenc",
            "-tune", "ll",
            "-preset", "p7",
            "-spatial_aq", "1",
            "-temporal_aq", "1",
            "-forced-idr", "1",
            "-rc", "cbr",
            "-b:v", "500K",
            "-no-scenecut", "1",
            "-g", "216000",
            "-f", "hevc", "-",
        }

        audioString := []string{
            "ffmpeg",
            "-f", "dshow",
            "-channels", "2",
            "-sample_rate", "48000",
            "-sample_size", "16",
            "-audio_buffer_size", "15",
            "-i", "audio=RD Audio (High Definition Audio Device)",
            "-acodec", "libopus",
            "-mapping_family", "0",
            "-b:a", "25K",
            "-map", "0",
            "-f", "data", "-",
        }

        connected := false

        http.HandleFunc("/", func(writer http.ResponseWriter, request *http.Request) {
            session := request.Body.(*webtransport.Session)

            session.AcceptSession()
            fmt.Println("\nAccepted incoming WebTransport connection.")
            fmt.Println("Awaiting authentication...")

            authData, err := session.ReceiveMessage(session.Context()) // Waits here till first datagram
            if err != nil {                                            // if client closes connection before sending anything
                fmt.Println("\nConnection closed:", err)
                return
            }

            if len(authData) >= 2 && bytes.Equal(authData[2:], password) {
                if connected {
                    session.CloseSession()
                    warn("Client has authenticated, but a session is already taking place! Connection closed.")
                    return
                } else {
                    connected = true
                    fmt.Println("Client has authenticated!\n")
                }
            } else {
                session.CloseSession()
                warn("Client has failed authentication! Connection closed. (" + string(authData[2:]) + ")")
                return
            }

            videoStream, _ := session.OpenUniStreamSync(session.Context())

            videoCmd := exec.Command(videoString[0], videoString[1:]...)
            go func() {
                videoOut, _ := videoCmd.StdoutPipe()
                videoCmd.Start()

                buffer := make([]byte, 15000)
                for {
                    len, err := videoOut.Read(buffer)
                    if err != nil {
                        break
                    }
                    if len > 0 {
                        videoStream.Write(buffer[:len])
                    }
                }
            }()

            time.Sleep(50 * time.Millisecond)

            audioStream, err := session.OpenUniStreamSync(session.Context())

            audioCmd := exec.Command(audioString[0], audioString[1:]...)
            go func() {
                audioOut, _ := audioCmd.StdoutPipe()
                audioCmd.Start()

                buffer := make([]byte, 15000)
                for {
                    len, err := audioOut.Read(buffer)
                    if err != nil {
                        break
                    }
                    if len > 0 {
                        audioStream.Write(buffer[:len])
                    }
                }
            }()

            for {
                data, err := session.ReceiveMessage(session.Context())
                if err != nil {
                    videoCmd.Process.Kill()
                    audioCmd.Process.Kill()

                    connected = false

                    fmt.Println("\nConnection closed:", err)
                    break
                }

                if len(data) == 0 {

                } else if data[0] == byte(0) {
                    fmt.Printf("Received mouse datagram: %s\n", data)
                }
            }

        })

        server := &webtransport.Server{
            ListenAddr: ":1024",
            TLSCert:    webtransport.CertFile{Path: "SSL/fullchain.pem"},
            TLSKey:     webtransport.CertFile{Path: "SSL/privkey.pem"},
            QuicConfig: &webtransport.QuicConfig{
                KeepAlive:      false,
                MaxIdleTimeout: 3 * time.Second,
            },
        }

        fmt.Println("Launching WebTransport server at", server.ListenAddr)
        ctx, cancel := context.WithCancel(context.Background())
        if err := server.Run(ctx); err != nil {
            log.Fatal(err)
            cancel()
        }

    }
