Advanced search

Media (0)

Word: - Tags - /configuration

No media matching your criteria is available on the site.

Other articles (61)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; and the creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (7185)

  • I want to take any audio from a file and encode it as PCM_ALAW. My example is a .m4a file to a .wav file

    22 November 2023, by Clockman

    I have been working on this for a while now. While I am generally new to the ffmpeg library, I have studied it a bit. The challenge I have is that at the point of writing to the file I get the following exception.

    


    "Exception thrown at 0x00007FFACA8305B3 (avformat-60.dll) in FfmpegPractice.exe : 0xC0000005 : Access violation writing location 0x0000000000000000.". I understand this means am writing to an uninitialized buffer am unable to discover why this is happening. The exception call stack shows the following

    


    avformat-60.dll!avformat_write_header() C
avformat-60.dll!ff_write_chained()  C
avformat-60.dll!ff_write_chained()  C
avformat-60.dll!av_write_frame()    C
FfmpegPractice.exe!main() Line 215  C++


    


    Some things I have tried

    


    This code is part of a larger project built with CMake, but for some reason I could not step into the ffmpeg library while debugging. So I recompiled ffmpeg and ensured debugging was enabled so I could drill down to the root cause, but I still could not step into the ffmpeg library.

    


    I then created a minimal project using a Visual Studio C++ console project, and I still could not step into the code.

    


    I have read through many ffmpeg docs and whatever else I could find on the internet, and I still could not solve it.

    


    This is the code

    


#include <iostream>

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswresample/swresample.h>
#include <libavutil/opt.h>
#include <libavutil/audio_fifo.h>
}

using namespace std;

//in audio file
string filename{ "rapid_caller_test.m4a" };
AVFormatContext* pFormatCtx{};
AVCodecContext* pCodecCtx{};
AVStream* pStream{};

//out audio file
string outFilename{ "output.wav" };
AVFormatContext* pOutFormatCtx{ nullptr };
AVCodecContext* pOutCodecCtx{ nullptr };
AVIOContext* pOutIoContext{ nullptr };
const AVCodec* pOutCodec{ nullptr };
AVStream* pOutStream{ nullptr };
const int OUTPUT_CHANNELS = 1;
const int SAMPLE_RATE = 8000;
const int OUT_BIT_RATE = 64000;
uint8_t** convertedSamplesBuffer{ nullptr };
int64_t dstNmbrSamples{ 0 };
int dstLineSize{ 0 };
static int64_t pts{ 0 };

//conversion context;
SwrContext* swr{};

uint32_t i{ 0 };
int audiostream{ -1 };


void cleanUp()
{
  avcodec_free_context(&pOutCodecCtx);
  avio_closep(&(pOutFormatCtx)->pb);
  avformat_free_context(pOutFormatCtx);
  pOutFormatCtx = nullptr;
}

int main()
{

/*
* section to setup input file
*/
if (avformat_open_input(&pFormatCtx, filename.data(), nullptr, nullptr) != 0) {
  cout << "could not open file " << filename << endl;
  return -1;
}
if (avformat_find_stream_info(pFormatCtx, nullptr) < 0) {
  cout << "Could not retrieve stream information from file " << filename << endl;
  return -1;
}
av_dump_format(pFormatCtx, 0, filename.c_str(), 0);

for (i = 0; i < pFormatCtx->nb_streams; i++) {
  if (pFormatCtx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
    audiostream = i;
    break;
  }
}
if (audiostream == -1) {
  cout << "did not find audio stream" << endl;
  return -1;
}

pStream = pFormatCtx->streams[audiostream];
const AVCodec* pCodec{ avcodec_find_decoder(pStream->codecpar->codec_id) };
pCodecCtx = avcodec_alloc_context3(pCodec);
avcodec_parameters_to_context(pCodecCtx, pStream->codecpar);
if (avcodec_open2(pCodecCtx, pCodec, nullptr)) {
  cout << "could not open codec" << endl;
  return -1;
}

/*
* section to set up output file which is a G711 audio
*/
if (avio_open(&pOutIoContext, outFilename.data(), AVIO_FLAG_WRITE)) {
  cout << "could not open out put file" << endl;
  return -1;
}
if (!(pOutFormatCtx = avformat_alloc_context())) {
  cout << "could not create format conext" << endl;
  cleanUp();
  return -1;
}
pOutFormatCtx->pb = pOutIoContext;
if (!(pOutFormatCtx->oformat = av_guess_format(nullptr, outFilename.data(), nullptr))) {
  cout << "could not find output file format" << endl;
  cleanUp();
  return -1;
}
if (!(pOutFormatCtx->url = av_strdup(outFilename.data()))) {
  cout << "could not allocate file name" << endl;
  cleanUp();
  return -1;
}
if (!(pOutCodec = avcodec_find_encoder(AV_CODEC_ID_PCM_ALAW))) {
  cout << "codec not found" << endl;
  cleanUp();
  return -1;
}
if (!(pOutStream = avformat_new_stream(pOutFormatCtx, nullptr))) {
  cout << "could not create new stream" << endl;
  cleanUp();
  return -1;
}
if (!(pOutCodecCtx = avcodec_alloc_context3(pOutCodec))) {
  cout << "could not allocate codec context" << endl;
  return -1;
}
av_channel_layout_default(&pOutCodecCtx->ch_layout, OUTPUT_CHANNELS);
pOutCodecCtx->sample_rate = SAMPLE_RATE;
pOutCodecCtx->sample_fmt = pOutCodec->sample_fmts[0];
pOutCodecCtx->bit_rate = OUT_BIT_RATE;

//setting sample rate for the container
pOutStream->time_base.den = SAMPLE_RATE;
pOutStream->time_base.num = 1;
if (pOutFormatCtx->oformat->flags & AVFMT_GLOBALHEADER)
  pOutCodecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

if (avcodec_open2(pOutCodecCtx, pOutCodec, nullptr)) {
  cout << "could not open output codec" << endl;
  cleanUp();
  return -1;
}
if ((avcodec_parameters_from_context(pOutStream->codecpar, pOutCodecCtx)) < 0) {
  cout << "could not initialize stream parameters" << endl;
}

AVPacket* packet = av_packet_alloc();

swr = swr_alloc();
swr_alloc_set_opts2(&swr, &pOutCodecCtx->ch_layout, pOutCodecCtx->sample_fmt, pOutCodecCtx->sample_rate, &pCodecCtx->ch_layout, pCodecCtx->sample_fmt, pCodecCtx->sample_rate, 0, nullptr);
swr_init(swr);

int ret{};
int bSize{};
while (av_read_frame(pFormatCtx, packet) >= 0) {
  AVFrame* pFrame = av_frame_alloc();
  AVFrame* pOutFrame = av_frame_alloc();
  if (packet->stream_index == audiostream) {
    ret = avcodec_send_packet(pCodecCtx, packet);
    while (ret >= 0) {
    ret = avcodec_receive_frame(pCodecCtx, pFrame);
    if (ret == AVERROR(EAGAIN))
    continue;
    else if (ret == AVERROR_EOF)
    break;
    dstNmbrSamples = av_rescale_rnd(swr_get_delay(swr, pCodecCtx->sample_rate) + pFrame->nb_samples, pOutCodecCtx->sample_rate, pCodecCtx->sample_rate, AV_ROUND_UP);
    if ((av_samples_alloc_array_and_samples(&convertedSamplesBuffer, &dstLineSize, pOutCodecCtx->ch_layout.nb_channels, dstNmbrSamples, pOutCodecCtx->sample_fmt, 0)) < 0) {
    cout << "coult not allocate samples array and buffer" << endl;
    }
    int channel_samples_count{ 0 };
    channel_samples_count = swr_convert(swr, convertedSamplesBuffer, dstNmbrSamples, (const uint8_t**)pFrame->data, pFrame->nb_samples);
    bSize = av_samples_get_buffer_size(&dstLineSize, pOutCodecCtx->ch_layout.nb_channels, channel_samples_count, pOutCodecCtx->sample_fmt, 0);
    cout << "no of samples is " << channel_samples_count << " the buffer size " << bSize << endl;
    pOutFrame->nb_samples = channel_samples_count;
    av_channel_layout_copy(&pOutFrame->ch_layout, &pOutCodecCtx->ch_layout);
    pOutFrame->format = pOutCodecCtx->sample_fmt;
    pOutFrame->sample_rate = pOutCodecCtx->sample_rate;
    if ((av_frame_get_buffer(pOutFrame, 0)) < 0) {
    cout << "could not allocate output frame samples " << endl;
    av_frame_free(&pOutFrame);
  }

    //populate out frame buffer
    av_frame_make_writable(pOutFrame);
    for (int i{ 0 }; i < bSize; i++) {
    pOutFrame->data[0][i] = convertedSamplesBuffer[0][i];
    cout << pOutFrame->data[0][i];
   }
   if (pOutFrame) {
   pOutFrame->pts = pts;
   pts += pOutFrame->nb_samples;
  }
   int res = avcodec_send_frame(pOutCodecCtx, pOutFrame);
    if (res < 0) {
    cout << "error sending frame to encoder" << endl;
    cleanUp();
    return -1;
   }
   //int er = avformat_write_header(pOutFormatCtx,nullptr);
   AVPacket* pOutPacket = av_packet_alloc();
   pOutPacket->time_base.num = 1;
   pOutPacket->time_base.den = 8000;
   if (pOutPacket == nullptr) {
    cout << "unable to allocate packet" << endl;
  }
  while (res >= 0) {
   res = avcodec_receive_packet(pOutCodecCtx, pOutPacket);
   if (res == AVERROR(EAGAIN))
    continue;
   else if (ret == AVERROR_EOF)
    break;
   av_packet_rescale_ts(pOutPacket, pOutCodecCtx->time_base, pOutFormatCtx->streams[0]->time_base);
   //av_dump_format(pOutFormatCtx, 0, outFilename.c_str(), 1);
   if (av_write_frame(pOutFormatCtx, pOutPacket) < 0) {
    cout << "could not write frame" << endl;
    }
   }
  }
}
 av_frame_free(&pFrame);
 av_frame_free(&pOutFrame);
}
if (av_write_trailer(pOutFormatCtx) < 0) {
 cout << "could not write file trailer" << endl;
}
swr_free(&swr);
avcodec_free_context(&pOutCodecCtx);
av_packet_free(&packet);
}

    Error/Exception


    The exception is thrown when I call

    if (av_write_frame(pOutFormatCtx, pOutPacket) < 0) { cout << "could not write frame" << endl; }

    I also tried calling this line


    //int er = avformat_write_header(pOutFormatCtx,nullptr);

    to see if I would get an exception, but it did not throw any exception.


    I have spent weeks on this issue with no success. My goal is to take any audio from a file and be able to resample it if need be, and transcode it to PCM_ALAW. I will appreciate any help I can get.
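    For reference, libavformat expects the container header to be written once before any packet and the trailer once after the last one. Below is a minimal sketch of that call order, reusing the variable names from the code above; it is an illustration under those assumptions, not a verified fix for the crash.

// Sketch only: the usual libavformat muxing order, using the question's variable names.
// Assumes pOutFormatCtx, pOutCodecCtx and pOutPacket are configured as in the code above.
static int muxEncodedPackets(AVFormatContext* pOutFormatCtx,
                             AVCodecContext* pOutCodecCtx,
                             AVPacket* pOutPacket)
{
    // 1. Write the header exactly once, before the first av_write_frame() call.
    if (avformat_write_header(pOutFormatCtx, nullptr) < 0)
        return -1;

    // 2. Drain every packet the encoder has ready and hand it to the muxer.
    while (avcodec_receive_packet(pOutCodecCtx, pOutPacket) == 0) {
        if (av_write_frame(pOutFormatCtx, pOutPacket) < 0)
            return -1;
        av_packet_unref(pOutPacket);
    }

    // 3. Write the trailer exactly once, after the last packet has been written.
    return av_write_trailer(pOutFormatCtx);
}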


  • How to Stream RTP (IP camera) Into React App setup

    10 November 2024, by sharon2469

    I am trying to transfer a live broadcast from an IP camera, or any other broadcast coming from an RTP/RTSP source, to my React application. But it MUST be live.

    My setup at the moment is:

    IP Camera -> (RTP) -> FFmpeg -> (udp) -> Server(nodeJs) -> (WebRTC) -> React app


    In the current situation there is almost no delay, but there are some things here that I can't avoid and can't understand why, so here are my questions:

    1) First, is the setup even correct, and is this the only way to stream RTP video into a web app?

    2) Is it possible to avoid re-encoding the stream? The RTP transmission necessarily comes in as H.264, so I shouldn't really need to execute the following command:

return spawn('ffmpeg', [
    '-re',                              // Read input at its native frame rate. Important for live-streaming
    '-probesize', '32',                 // Set probing size to 32 bytes (32 is minimum)
    '-analyzeduration', '1000000',      // An input duration of 1 second
    '-c:v', 'h264',                     // Video codec of input video
    '-i', 'rtp://238.0.0.2:48888',      // Input stream URL
    '-map', '0:v?',                     // Select video from input stream
    '-c:v', 'libx264',                  // Video codec of output stream
    '-preset', 'ultrafast',             // Faster encoding for lower latency
    '-tune', 'zerolatency',             // Optimize for zero latency
    // '-s', '768x480',                 // Adjust the resolution (experiment with values)
    '-f', 'rtp', `rtp://127.0.0.1:${udpPort}` // Output stream URL
]);

    As you can see, in this command I re-encode to libx264. But if I give FFmpeg the parameter '-c:v', 'copy' instead of '-c:v', 'libx264', then FFmpeg throws an error saying that it doesn't know how to encode h264 and only knows libx264. Basically, I want to stop the re-encoding because there is really no need for it: the stream is already encoded as H.264. Are there any recommendations here?

    3) I thought about giving up FFmpeg completely, but the RTP packets arrive at a size of 1200+ bytes while WebRTC is limited to 1280 bytes. Is there a way to manage this without damaging the video, and is it worth entering that world? I guess the whole jitter-buffer story comes into play here.

    This is my server-side code (this is just test code):

import {
    MediaStreamTrack,
    randomPort,
    RTCPeerConnection,
    RTCRtpCodecParameters,
    RtpPacket,
} from 'werift'
import {Server} from "ws";
import {createSocket} from "dgram";
import {spawn} from "child_process";
import LoggerFactory from "./logger/loggerFactory";

//

const log = LoggerFactory.getLogger('ServerMedia')

// Websocket server -> WebRTC
const serverPort = 8888
const server = new Server({port: serverPort});
log.info(`Server Media start om port: ${serverPort}`);

// UDP server -> ffmpeg
const udpPort = 48888
const udp = createSocket("udp4");
// udp.bind(udpPort, () => {
//     udp.addMembership("238.0.0.2");
// })
udp.bind(udpPort)
log.info(`UDP port: ${udpPort}`)


const createFFmpegProcess = () => {
    log.info(`Start ffmpeg process`)
    return spawn('ffmpeg', [
        '-re',                              // Read input at its native frame rate. Important for live-streaming
        '-probesize', '32',                 // Set probing size to 32 bytes (32 is minimum)
        '-analyzeduration', '1000000',      // An input duration of 1 second
        '-c:v', 'h264',                     // Video codec of input video
        '-i', 'rtp://238.0.0.2:48888',      // Input stream URL
        '-map', '0:v?',                     // Select video from input stream
        '-c:v', 'libx264',                  // Video codec of output stream
        '-preset', 'ultrafast',             // Faster encoding for lower latency
        '-tune', 'zerolatency',             // Optimize for zero latency
        // '-s', '768x480',                 // Adjust the resolution (experiment with values)
        '-f', 'rtp', `rtp://127.0.0.1:${udpPort}` // Output stream URL
    ]);

}

let ffmpegProcess = createFFmpegProcess();


const attachFFmpegListeners = () => {
    // Capture standard output and print it
    ffmpegProcess.stdout.on('data', (data) => {
        log.info(`FFMPEG process stdout: ${data}`);
    });

    // Capture standard error and print it
    ffmpegProcess.stderr.on('data', (data) => {
        console.error(`ffmpeg stderr: ${data}`);
    });

    // Listen for the exit event
    ffmpegProcess.on('exit', (code, signal) => {
        if (code !== null) {
            log.info(`ffmpeg process exited with code ${code}`);
        } else if (signal !== null) {
            log.info(`ffmpeg process killed with signal ${signal}`);
        }
    });
};


attachFFmpegListeners();


server.on("connection", async (socket) => {
    const payloadType = 96; // It is a numerical value that is assigned to each codec in the SDP offer/answer exchange -> for H264
    // Create a peer connection with the codec parameters set in advance.
    const pc = new RTCPeerConnection({
        codecs: {
            audio: [],
            video: [
                new RTCRtpCodecParameters({
                    mimeType: "video/H264",
                    clockRate: 90000, // 90000 is the default value for H264
                    payloadType: payloadType,
                }),
            ],
        },
    });

    const track = new MediaStreamTrack({kind: "video"});


    udp.on("message", (data) => {
        console.log(data)
        const rtp = RtpPacket.deSerialize(data);
        rtp.header.payloadType = payloadType;
        track.writeRtp(rtp);
    });

    udp.on("error", (err) => {
        console.log(err)

    });

    udp.on("close", () => {
        console.log("close")
    });

    pc.addTransceiver(track, {direction: "sendonly"});

    await pc.setLocalDescription(await pc.createOffer());
    const sdp = JSON.stringify(pc.localDescription);
    socket.send(sdp);

    socket.on("message", (data: any) => {
        if (data.toString() === 'resetFFMPEG') {
            ffmpegProcess.kill('SIGINT');
            log.info(`FFMPEG process killed`)
            setTimeout(() => {
                ffmpegProcess = createFFmpegProcess();
                attachFFmpegListeners();
            }, 5000)
        } else {
            pc.setRemoteDescription(JSON.parse(data));
        }
    });
});

    And this is the frontend:

<script
        crossorigin
        src="https://unpkg.com/react@16/umd/react.development.js"
></script>
<script
        crossorigin
        src="https://unpkg.com/react-dom@16/umd/react-dom.development.js"
></script>
<script
        crossorigin
        src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.34/browser.min.js"
></script>
<script src="https://cdn.jsdelivr.net/npm/babel-regenerator-runtime@6.5.0/runtime.min.js"></script>

<script type="text/babel">
    let rtc;

    const App = () => {
        const [log, setLog] = React.useState([]);
        const videoRef = React.useRef();
        const socket = new WebSocket("ws://localhost:8888");
        const [peer, setPeer] = React.useState(null); // Add state to keep track of the peer connection

        React.useEffect(() => {
            (async () => {
                await new Promise((r) => (socket.onopen = r));
                console.log("open websocket");

                const handleOffer = async (offer) => {
                    console.log("new offer", offer.sdp);

                    const updatedPeer = new RTCPeerConnection({
                        iceServers: [],
                        sdpSemantics: "unified-plan",
                    });

                    updatedPeer.onicecandidate = ({ candidate }) => {
                        if (!candidate) {
                            const sdp = JSON.stringify(updatedPeer.localDescription);
                            console.log(sdp);
                            socket.send(sdp);
                        }
                    };

                    updatedPeer.oniceconnectionstatechange = () => {
                        console.log(
                            "oniceconnectionstatechange",
                            updatedPeer.iceConnectionState
                        );
                    };

                    updatedPeer.ontrack = (e) => {
                        console.log("ontrack", e);
                        videoRef.current.srcObject = e.streams[0];
                    };

                    await updatedPeer.setRemoteDescription(offer);
                    const answer = await updatedPeer.createAnswer();
                    await updatedPeer.setLocalDescription(answer);

                    setPeer(updatedPeer);
                };

                socket.onmessage = (ev) => {
                    const data = JSON.parse(ev.data);
                    if (data.type === "offer") {
                        handleOffer(data);
                    } else if (data.type === "resetFFMPEG") {
                        // Handle the resetFFMPEG message
                        console.log("FFmpeg reset requested");
                    }
                };
            })();
        }, []); // Added socket as a dependency to the useEffect hook

        const sendRequestToResetFFmpeg = () => {
            socket.send("resetFFMPEG");
        };

        return (
            <div>
                Video:
                <video ref={videoRef} autoPlay muted />
                <button onClick={() => sendRequestToResetFFmpeg()}>Reset FFMPEG</button>
            </div>
        );
    };

    ReactDOM.render(<App />, document.getElementById("app1"));
</script>

  • Render YUV frame using OpenTK [closed]

    20 May 2024, by dima2012 terminator

    my window

    I'm trying to render a YUV AVFrame that I get from a camera using OpenTK. I'm creating a rectangle and trying to apply a texture to it, but it doesn't work.

    Here is my window class


using OpenTK.Graphics.Egl;
using OpenTK.Graphics.OpenGL4;
using OpenTK.Windowing.Common;
using OpenTK.Windowing.Desktop;
using OpenTK.Windowing.GraphicsLibraryFramework;
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

namespace myFFmpeg
{
    public class CameraWindow : GameWindow
    {
        private int vertexBufferHandle;
        private int elementBufferHandle;
        private int vertexArrayHandle;
        private int frameNumber = 0;
        private int yTex, uTex, vTex;

        Shader shader;
        Texture texture;

        float[] vertices =
        {
            //Position         | Texture coordinates
             0.5f,  0.5f, 0.0f, 1.0f, 0.0f, // top right
             0.5f, -0.5f, 0.0f, 1.0f, 1.0f, // bottom right
            -0.5f, -0.5f, 0.0f, 0.0f, 1.0f, // bottom left
            -0.5f,  0.5f, 0.0f, 0.0f, 0.0f  // top left
        };


        private uint[] indices =
        {
            0, 1, 3,   // first triangle
            1, 2, 3    // second triangle
        };

        public CameraWindow(string title) : base(GameWindowSettings.Default, new NativeWindowSettings() { ClientSize = (1280, 720), Title = title }) { UpdateFrequency = 25; }

        protected override void OnUpdateFrame(FrameEventArgs e)
        {
            base.OnUpdateFrame(e);
        }

        protected override void OnLoad()
        {
            GL.ClearColor(0.5f, 0.3f, 0.3f, 1.0f);

            shader = new Shader(@"..\..\shader.vert", @"..\..\shader.frag");
            texture = new Texture();

            elementBufferHandle = GL.GenBuffer();
            GL.BindBuffer(BufferTarget.ElementArrayBuffer, elementBufferHandle);
            GL.BufferData(BufferTarget.ElementArrayBuffer, indices.Length * sizeof(uint), indices, BufferUsageHint.StaticDraw);

            vertexBufferHandle = GL.GenBuffer();
            GL.BindBuffer(BufferTarget.ArrayBuffer, vertexBufferHandle);
            GL.BufferData(BufferTarget.ArrayBuffer, vertices.Length * sizeof(float), vertices, BufferUsageHint.StaticDraw);

            GL.BindBuffer(BufferTarget.ArrayBuffer, 0);

            vertexArrayHandle = GL.GenVertexArray();
            GL.BindVertexArray(vertexArrayHandle);

            GL.BindBuffer(BufferTarget.ArrayBuffer, vertexBufferHandle);
            GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, false, 5 * sizeof(float), 0);
            GL.EnableVertexAttribArray(0);

            int vertexShader = GL.CreateShader(ShaderType.VertexShader);
            GL.ShaderSource(vertexShader, @"..\..\shader.vert");
            GL.CompileShader(vertexShader);

            int fragmentShader = GL.CreateShader(ShaderType.FragmentShader);
            GL.ShaderSource(fragmentShader, @"..\..\shader.frag");
            GL.CompileShader(fragmentShader);

            int shaderProgram = GL.CreateProgram();
            GL.AttachShader(shaderProgram, vertexShader);
            GL.AttachShader(shaderProgram, fragmentShader);
            GL.LinkProgram(shaderProgram);


            int vertexPosLocation = GL.GetAttribLocation(shaderProgram, "vertexPos");
            GL.EnableVertexAttribArray(vertexPosLocation);
            GL.VertexAttribPointer(vertexPosLocation, 2, VertexAttribPointerType.Float, false, 4 * sizeof(float), 0);

            int texCoordLocation = GL.GetAttribLocation(shaderProgram, "texCoord");
            GL.EnableVertexAttribArray(texCoordLocation);
            GL.VertexAttribPointer(texCoordLocation, 2, VertexAttribPointerType.Float, false, 4 * sizeof(float), 2 * sizeof(float));

            GL.UseProgram(shaderProgram);

            GL.ActiveTexture(TextureUnit.Texture0);
            GL.BindTexture(TextureTarget.Texture2D, yTex);
            GL.Uniform1(GL.GetUniformLocation(shaderProgram, "yTex"), 0);

            GL.ActiveTexture(TextureUnit.Texture1);
            GL.BindTexture(TextureTarget.Texture2D, uTex);
            GL.Uniform1(GL.GetUniformLocation(shaderProgram, "uTex"), 1);

            GL.ActiveTexture(TextureUnit.Texture2);
            GL.BindTexture(TextureTarget.Texture2D, vTex);
            GL.Uniform1(GL.GetUniformLocation(shaderProgram, "vTex"), 2);

            GL.BindVertexArray(0);
            //code

            base.OnLoad();
        }

        protected override void OnUnload()
        {
            GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
            GL.DeleteBuffer(vertexBufferHandle);
            GL.UseProgram(0);
            shader.Dispose();

            //code

            base.OnUnload();
        }

        protected override void OnRenderFrame(FrameEventArgs e)
        {

            GL.Clear(ClearBufferMask.ColorBufferBit);

            shader.Use();
            texture.Use(frameNumber++);

            GL.BindVertexArray(vertexArrayHandle);

            GL.DrawElements(PrimitiveType.Triangles, indices.Length, DrawElementsType.UnsignedInt, indices);

            Context.SwapBuffers();

            base.OnRenderFrame(e);
        }

        protected override void OnFramebufferResize(FramebufferResizeEventArgs e)
        {
            base.OnFramebufferResize(e);

            GL.Viewport(0, 0, e.Width, e.Height);
        }
    }
}

    And my texture class:

using System;
using OpenTK;
using OpenTK.Graphics.OpenGL4;
using SkiaSharp;
using FFmpeg;
using SkiaSharp.Internals;
using StbImageSharp;
using FFmpeg.AutoGen;
using System.Threading;

namespace myFFmpeg
{
    public class Texture
    {
        int Handle, yTex, uTex, vTex;

        Program program = new Program();

        public Texture()
        {
            Handle = GL.GenTexture();
        }


        public unsafe void Use(int frameNumber)
        {
            GL.BindTexture(TextureTarget.Texture2D, Handle);

            // Generate textures only once (outside the loop)
            if (yTex == 0)
            {
                GL.GenTextures(1, out yTex);
            }
            if (uTex == 0)
            {
                GL.GenTextures(1, out uTex);
            }
            if (vTex == 0)
            {
                GL.GenTextures(1, out vTex);
            }

            // Bind textures to specific units before rendering each frame
            GL.ActiveTexture(TextureUnit.Texture0);
            GL.BindTexture(TextureTarget.Texture2D, yTex);
            GL.ActiveTexture(TextureUnit.Texture1);
            GL.BindTexture(TextureTarget.Texture2D, uTex);
            GL.ActiveTexture(TextureUnit.Texture2);

            // Update textures with new frame data from FFmpeg
            AVFrame frame = program.getFrame();
            int width = frame.width;
            int height = frame.height;

            Console.BackgroundColor = ConsoleColor.White;
            Console.ForegroundColor = ConsoleColor.Black;
            Console.WriteLine((AVPixelFormat)frame.format);
            Console.BackgroundColor = ConsoleColor.Black;


            // Assuming YUV data is stored in separate planes (Y, U, V)
            GL.BindTexture(TextureTarget.Texture2D, yTex);
            GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Luminance, width, height, 0, PixelFormat.Luminance, PixelType.UnsignedByte, (IntPtr)frame.data[0]);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);

            GL.BindTexture(TextureTarget.Texture2D, uTex);
            GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Luminance, width / 2, height / 2, 0, PixelFormat.Luminance, PixelType.UnsignedByte, (IntPtr)frame.data[1]);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);

            GL.BindTexture(TextureTarget.Texture2D, vTex);
            GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Luminance, width / 2, height / 2, 0, PixelFormat.Luminance, PixelType.UnsignedByte, (IntPtr)frame.data[2]);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);

        }
    }
}

    And my shader class:

using OpenTK.Graphics.OpenGL4;
using System;
using System.IO;

namespace myFFmpeg
{
    public class Shader : IDisposable
    {
        public int Handle { get; private set; }

        public Shader(string vertexPath, string fragmentPath)
        {
            string vertexShaderSource = File.ReadAllText(vertexPath);
            string fragmentShaderSource = File.ReadAllText(fragmentPath);

            int vertexShader = GL.CreateShader(ShaderType.VertexShader);
            GL.ShaderSource(vertexShader, vertexShaderSource);
            GL.CompileShader(vertexShader);
            CheckShaderCompilation(vertexShader);

            int fragmentShader = GL.CreateShader(ShaderType.FragmentShader);
            GL.ShaderSource(fragmentShader, fragmentShaderSource);
            GL.CompileShader(fragmentShader);
            CheckShaderCompilation(fragmentShader);

            Handle = GL.CreateProgram();
            GL.AttachShader(Handle, vertexShader);
            GL.AttachShader(Handle, fragmentShader);
            GL.LinkProgram(Handle);
            CheckProgramLinking(Handle);

            GL.DetachShader(Handle, vertexShader);
            GL.DetachShader(Handle, fragmentShader);
            GL.DeleteShader(vertexShader);
            GL.DeleteShader(fragmentShader);
        }

        public void Use()
        {
            GL.UseProgram(Handle);
        }

        public int GetAttribLocation(string attribName)
        {
            return GL.GetAttribLocation(Handle, attribName);
        }

        public int GetUniformLocation(string uniformName)
        {
            return GL.GetUniformLocation(Handle, uniformName);
        }

        private void CheckShaderCompilation(int shader)
        {
            GL.GetShader(shader, ShaderParameter.CompileStatus, out int success);
            if (success == 0)
            {
                string infoLog = GL.GetShaderInfoLog(shader);
                throw new InvalidOperationException($"Shader compilation failed: {infoLog}");
            }
        }

        private void CheckProgramLinking(int program)
        {
            GL.GetProgram(program, GetProgramParameterName.LinkStatus, out int success);
            if (success == 0)
            {
                string infoLog = GL.GetProgramInfoLog(program);
                throw new InvalidOperationException($"Program linking failed: {infoLog}");
            }
        }

        public void Dispose()
        {
            GL.DeleteProgram(Handle);
        }
    }
}

    Vert shader


#version 330 core
layout(location = 0) in vec3 vertexPos;
layout(location = 1) in vec2 texCoord;

out vec2 TexCoord;

void main()
{
    gl_Position = vec4(vertexPos, 1.0);
    TexCoord = texCoord;
}

    Frag shader


#version 330 core
in vec2 TexCoord;
out vec4 color;

uniform sampler2D yTex;
uniform sampler2D uTex;
uniform sampler2D vTex;

void main()
{
  float y = texture(yTex, TexCoord).r;
  float u = texture(uTex, TexCoord).r - 0.5;
  float v = texture(vTex, TexCoord).r - 0.5;

  // YUV to RGB conversion (BT.709)
  float r = y + 1.5714 * v;
  float g = y - 0.6486 * u - 0.3918 * v;
  float b = y + 1.8556 * u;

  color = vec4(r, g, b, 1.0);
}
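    For comparison, the commonly cited full-range BT.709 YUV-to-RGB relation (with U and V already centred on zero, as in the shader above) is:

    R = Y + 1.5748 * V
    G = Y - 0.1873 * U - 0.4681 * V
    B = Y + 1.8556 * U

    The B coefficient in the shader matches this, while the R and especially the G coefficients differ, so those constants are worth double-checking against the pixel format the decoder actually delivers.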

    I can provide more code if needed.

    I tried changing shaders, changing textures, and getting the frame using ffmpeg.av_hwframe_transfer_data(_receivedFrame, _pFrame, 0);