
Other articles (90)

  • Customize by adding your logo, banner, or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Mediabox: opening images in the largest space available to the user

    8 February 2011

    Image display is constrained by the width allowed by the site's design (which depends on the theme in use), so images are shown at a reduced size. To take advantage of all the space available on the user's screen, a feature can be added that displays the image in a multimedia box overlaid on the rest of the content.
    To do this, the "Mediabox" plugin must be installed.
    Configuring the multimedia box
    As soon as (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

On other sites (15334)

  • Streaming RTP with ffmpeg and node.js to voip phone

    5 July 2023, by Nik Hendricks

    I am trying to implement SIP in Node.js. Here is the library I am working on:

    


    Upon receiving an INVITE request such as:

    


    
Received INVITE
INVITE sip:201@192.168.1.2:5060 SIP/2.0
Via: SIP/2.0/UDP 192.168.1.39:5062;branch=z9hG4bK1534941205
From: "Nik" <sip:nik@192.168.1.2>;tag=564148403
To: <sip:201@192.168.1.2>
Call-ID: 2068254636@192.168.1.39
CSeq: 2 INVITE
Contact: <sip:nik@192.168.1.39:5062>
Authorization: Digest username="Nik", realm="NRegistrar", nonce="1234abcd", uri="sip:201@192.168.1.2:5060", response="7fba16dafe3d60c270b774bd5bba524c", algorithm=MD5
Content-Type: application/sdp
Allow: INVITE, INFO, PRACK, ACK, BYE, CANCEL, OPTIONS, NOTIFY, REGISTER, SUBSCRIBE, REFER, PUBLISH, UPDATE, MESSAGE
Max-Forwards: 70
User-Agent: Yealink SIP-T42G 29.71.0.120
Supported: replaces
Allow-Events: talk,hold,conference,refer,check-sync
Content-Length: 306

v=0
o=- 20083 20083 IN IP4 192.168.1.39
s=SDP data
c=IN IP4 192.168.1.39
t=0 0
m=audio 11782 RTP/AVP 0 8 18 9 101
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
a=rtpmap:18 G729/8000
a=fmtp:18 annexb=no
a=rtpmap:9 G722/8000
a=fmtp:101 0-15
a=rtpmap:101 telephone-event/8000
a=ptime:20
a=sendrecv



    


    I can then parse the SDP into an object like this:

    


     
{
    "session":{
        "version":"0",
        "origin":"- 20084 20084 IN IP4 192.168.1.39",
        "sessionName":"SDP data"
    },
    "media":[
        {
            "media":"audio",
            "port":11784,
            "protocol":"RTP/AVP",
            "format":"0",
            "attributes":[
                "rtpmap:0 PCMU/8000",
                "rtpmap:8 PCMA/8000",
                "rtpmap:18 G729/8000",
                "fmtp:18 annexb=no",
                "rtpmap:9 G722/8000",
                "fmtp:101 0-15",
                "rtpmap:101 telephone-event/8000",
                "ptime:20",
                "sendrecv"
            ]
        }
    ]
}
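
    For illustration, a minimal parser that produces an object of this shape could look like the sketch below. This is not the actual SDPParser from my library; the function name and the exact output shape are assumptions for the sake of the example.

// Illustrative sketch only, NOT the library's SDPParser.
function parseSDP(raw) {
    const session = {};
    const media = [];
    for (const line of raw.split(/\r?\n/)) {
        if (line.length < 2 || line[1] !== '=') continue; // SDP lines are "x=value"
        const type = line[0];
        const value = line.slice(2);
        if (type === 'v') session.version = value;
        else if (type === 'o') session.origin = value;
        else if (type === 's') session.sessionName = value;
        else if (type === 'm') {
            const [kind, port, protocol, ...fmts] = value.split(' ');
            // the object above keeps only the first payload type in "format"
            media.push({ media: kind, port: Number(port), protocol, format: fmts[0], attributes: [] });
        }
        else if (type === 'a' && media.length > 0) {
            media[media.length - 1].attributes.push(value);
        }
    }
    return { session, media };
}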


    


    After sending the 100 and 180 responses with my library, I attempt to start an RTP stream with ffmpeg:

    


    var port = SDPParser.parse(res.message.body).media[0].port
var s = new Streamer('output.wav', '192.168.1.39', port)


    


    with the following Streamer class:

    


    const { spawn } = require('child_process');

class Streamer{
    constructor(inputFilePath, rtpAddress, rtpPort){
        this.inputFilePath = inputFilePath; // use the path we were given (was hardcoded to 'output.wav')
        this.rtpAddress = rtpAddress;
        this.rtpPort = rtpPort;
    }

    start(){
        return new Promise((resolve) => {
            const ffmpegCommand = `ffmpeg -re -i ${this.inputFilePath} -ar 8000 -f mulaw -f rtp rtp://${this.rtpAddress}:${this.rtpPort}`;
            const ffmpegProcess = spawn(ffmpegCommand, { shell: true });

            // ffmpeg prints the SDP it generates to stdout when the output is rtp://
            ffmpegProcess.stdout.on('data', (data) => {
                data = data.toString();
                // replace all instances of 127.0.0.1 with our local ip address
                data = data.replace(/127\.0\.0\.1/g, '192.168.1.3');

                resolve(data);
            });

            // ffmpeg writes its log output to stderr
            ffmpegProcess.stderr.on('data', (data) => {
                console.log(data.toString());
            });

            ffmpegProcess.on('close', (code) => {
                console.log('close');
                console.log(code.toString());
            });

            ffmpegProcess.on('error', (error) => {
                console.log(error.toString());
            });
        });
    }
}


    


    The start() function resolves with the SDP that ffmpeg generates. I am starting to think that ffmpeg can't generate proper SDP for VoIP calls.
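
    For reference, a variant of that command which forces G.711 µ-law, mono, with the static RTP payload type 0 that the phone offered (instead of the dynamic 97 ffmpeg assigned, apparently because the source was treated as two channels) might look like this. Untested sketch; only the -acodec, -ar, -ac, and -payload_type options differ from the command above:

const ffmpegCommand = `ffmpeg -re -i ${this.inputFilePath} ` +
    `-acodec pcm_mulaw -ar 8000 -ac 1 ` + // G.711 µ-law, 8 kHz, mono
    `-payload_type 0 ` +                  // static RTP payload type 0 (PCMU)
    `-f rtp rtp://${this.rtpAddress}:${this.rtpPort}`;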

    


    So when I create a 200 response with the following SDP:

    


    v=0
o=- 0 0 IN IP4 192.168.1.3
s=Impact Moderato
c=IN IP4 192.168.1.39
t=0 0
a=tool:libavformat 58.29.100
m=audio 12123 RTP/AVP 97
b=AS:128
a=rtpmap:97 PCMU/8000/2


    


    the other line never picks up. From my understanding, the first INVITE from the caller provides SDP that tells me where to send the RTP stream, along with the correct codecs and everything. I know that my wav file is currently PCMU, and I can listen to it with ffplay and the provided SDP. What is required to make the other line pick up, specifically a Yealink T42G?

    


    My full attempt looks like this:

    


    Client.on('INVITE', (res) => {
    console.log("Received INVITE")
    var d = Client.Dialog(res).then(dialog => {
        dialog.send(res.CreateResponse(100))
        dialog.send(res.CreateResponse(180))
        var port = SDPParser.parse(res.message.body).media[0].port

        var s = new Streamer('output.wav', '192.168.1.39', port)
        s.start().then(sdp => {
            console.log(sdp.split('SDP:')[1])
            var ok = res.CreateResponse(200)
            ok.body = sdp.split('SDP:')[1]
            dialog.send(ok)
        })

        dialog.on('BYE', (res) => {
            console.log("BYE")
            dialog.send(res.CreateResponse(200))
            dialog.kill()
        })
    })
})


    


    I have provided a link to my library at the top of this message. My current problem is in the examples/Client folder.

    


    I'm not sure what could be going wrong here. Maybe I'm not using the right format or codec for the VoIP phone, but I don't see what's wrong with the SDP, especially since I can listen to the SDP generated by ffmpeg when I stream RTP back to the same computer I use ffplay on. Any help is greatly appreciated.

    


    Update

    


    As a test, I decided to send the caller back SDP that was generated by a Yealink phone like itself, but with some modifications:

    


    v=0
o=- ${this.output_port} ${this.output_port} IN IP4 192.168.1.39
s=SDP data
c=IN IP4 192.168.1.39
t=0 0
m=audio ${this.output_port} RTP/AVP 0 8 18 9 101
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
a=rtpmap:18 G729/8000
a=fmtp:18 annexb=no
a=rtpmap:9 G722/8000
a=fmtp:101 0-15
a=rtpmap:101 telephone-event/8000
a=ptime:20
a=sendrecv


    


    With this, the phone that makes the call in the first place will fully answer, but there is still no audio stream. I notice that if I change the IP address or port to something wrong, the other phone will hear its own audio instead of just silence, so this leads me to believe I am headed in the right direction. Maybe the problem lies in not sending the right audio format for what I'm describing.

    


    Additionally, whenever I use ffmpeg to stream my audio over RTP, I notice that it sees the file format as pcm_alaw, 8000 Hz, mono, s16, 64 kb/s. My new SDP describes using both µ-law and A-law, but I'm not sure which one it is saying it prefers.

    


    v=0
o=- ${this.output_port} ${this.output_port} IN IP4 192.168.1.39
s=SDP data
c=IN IP4 192.168.1.39
t=0 0
m=audio ${this.output_port} RTP/AVP 0 101
a=rtpmap:0 PCMU/8000
a=fmtp:101 0-15
a=rtpmap:101 telephone-event/8000
a=ptime:0
a=sendrecv


    


    I have been able to simplify the SDP down to the version above. This will let the other phone actually pick up and not hear its own audio; it's just a completely dead air stream.
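
    One more untested idea for the dead air: some phones expect symmetric RTP, meaning audio arriving from the same port that the SDP answer advertises. ffmpeg's rtp protocol accepts a localrtpport URL option, so pinning ffmpeg's source port to the advertised one (this.output_port here is the same variable used in the SDP templates above) would look roughly like this sketch:

const ffmpegCommand = `ffmpeg -re -i ${this.inputFilePath} ` +
    `-acodec pcm_mulaw -ar 8000 -ac 1 -payload_type 0 ` +
    // send from the same local port we advertised in our SDP answer
    `-f rtp rtp://${this.rtpAddress}:${this.rtpPort}?localrtpport=${this.output_port}`;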

    


  • Creating buttons with Imagick

    9 June 2010, by Mikko Koppanen (Imagick, PHP stuff)

    A fellow called kakapo asked me to create a button with Imagick. He had an image of the button and a Photoshop tutorial but unfortunately the tutorial was in Chinese. My Chinese is a bit rusty so it will take a little longer to create that specific button ;)

    The button in this example is created after this tutorial http://xeonfx.com/tutorials/easy-button-tutorial/ (yes, I googled “easy button tutorial”). The code and the button it creates are both very simple but the effect looks really nice.

    Here we go with the code:

    <?php

    /* Create a new Imagick object */
    $im = new Imagick();

    /* Create empty canvas */
    $im->newImage( 200, 200, "white", "png" );

    /* Create the object used to draw */
    $draw = new ImagickDraw();

    /* Set the button color.
       Changing this value changes the color of the button */
    $draw->setFillColor( "#4096EE" );

    /* Create the outer circle */
    $draw->circle( 50, 50, 70, 70 );

    /* Create the smaller circle on the button */
    $draw->setFillColor( "white" );

    /* Semi-opaque fill */
    $draw->setFillAlpha( 0.2 );

    /* Draw the circle */
    $draw->circle( 50, 50, 68, 68 );

    /* Set the font */
    $draw->setFont( "./test1.ttf" );

    /* This is the alpha value used to annotate */
    $draw->setFillAlpha( 0.17 );

    /* Draw a curve on the button with 17% opaque fill */
    $draw->bezier( array(
              array( "x" => 10 , "y" => 25 ),
              array( "x" => 39, "y" => 49 ),
              array( "x" => 60, "y" => 55 ),
              array( "x" => 75, "y" => 70 ),
              array( "x" => 100, "y" => 70 ),
              array( "x" => 100, "y" => 10 ),
             ) );

    /* Render all pending operations on the image */
    $im->drawImage( $draw );

    /* Set fill to fully opaque */
    $draw->setFillAlpha( 1 );

    /* Set the font size to 30 */
    $draw->setFontSize( 30 );

    /* The text on the button is white */
    $draw->setFillColor( "white" );

    /* Annotate the text */
    $im->annotateImage( $draw, 38, 55, 0, "go" );

    /* Trim extra area out of the image */
    $im->trimImage( 0 );

    /* Output the image */
    header( "Content-Type: image/png" );
    echo $im;

    ?>

    And here are a few buttons I created by changing the fill color value:

    [Example button images: red, green, and blue variants]

  • Decoding MediaRecorder produced webm stream

    15 August 2019, by sgmg

    I am trying to decode a video stream from the browser using the ffmpeg API. The stream is produced by the webcam and recorded with MediaRecorder in webm format. What I ultimately need is a vector of OpenCV cv::Mat objects for further processing.

    I have written a C++ web server using the uWebSockets library. The video stream is sent via websocket from the browser to the server once per second. On the server, I append the received data to my custom buffer and decode it with the ffmpeg API.
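
    For context, the browser side is roughly the following sketch (illustrative only; the socket URL and mime type are placeholders, not my exact code):

// Capture the webcam, record as webm, and ship one chunk per second over a websocket.
const socket = new WebSocket('ws://localhost:9001/video'); // placeholder URL
navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
    const recorder = new MediaRecorder(stream, { mimeType: 'video/webm; codecs=vp8' });
    recorder.ondataavailable = (event) => {
        if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
            socket.send(event.data); // one webm chunk per timeslice
        }
    };
    recorder.start(1000); // emit a chunk every 1000 ms
});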

    If I just save the data to disk and later play it with a media player, it works fine. So whatever the browser sends is valid video.

    I do not think I correctly understand how the custom I/O should behave with network streaming, as nothing seems to be working.

    The custom buffer:

    struct Buffer
       {
           std::vector<uint8_t> data;
           int currentPos = 0;
       };

    The readAVBuffer method for the custom I/O:

    int MediaDecoder::readAVBuffer(void* opaque, uint8_t* buf, int buf_size)
    {
       MediaDecoder::Buffer* mbuf = (MediaDecoder::Buffer*)opaque;
       int count = 0;
       //copy up to buf_size bytes from our buffer, starting at the current position
       for(int i=0;i<buf_size;i++)
       {
           int index = i + mbuf->currentPos;
           if(index >= (int)mbuf->data.size())
           {
               break;
           }
           count++;
           buf[i] = mbuf->data.at(index);
       }
       if(count > 0) mbuf->currentPos+=count;

       std::cout << "read: "<<count<<", current pos: "<<mbuf->currentPos<<", buff size: "<<mbuf->data.size() << std::endl;
       if(count <= 0) return AVERROR(EAGAIN); //is this error that should be returned? It cannot be EOF since we're not done yet, most likely
       return count;
    }

    The big decode method, which is supposed to return whatever frames it can read:

    std::vector<cv::Mat> MediaDecoder::decode(const char* data, size_t length)
    {
       std::vector<cv::Mat> frames;
       //add data to the buffer
       for(size_t i=0;i<length;i++) buf.data.push_back(data[i]);
       //do not invoke the decoders until we have 1MB of data
       if(((buf.data.size() - buf.currentPos) < 1*1024*1024) && !initializedCodecs) return frames;

       std::cout << "decoding data length "<<buf.data.size()<<std::endl;
       if(!initializedCodecs) //initialize ffmpeg objects. Custom I/O, format, decoder, etc.
       {
           //these are just members of the class
           avioCtxPtr = std::unique_ptr<AVIOContext,avio_context_deleter>(
                       avio_alloc_context((uint8_t*)av_malloc(4096),4096,0,&buf,&readAVBuffer,nullptr,nullptr),
                       avio_context_deleter());
           if(!avioCtxPtr)
           {
               std::cerr << "Could not create IO buffer" << std::endl;
               return frames;
           }

           fmt_ctx = std::unique_ptr<AVFormatContext,avformat_context_deleter>(avformat_alloc_context(),
                                                                             avformat_context_deleter());
           fmt_ctx->pb = avioCtxPtr.get();
           fmt_ctx->flags |= AVFMT_FLAG_CUSTOM_IO ;
           //fmt_ctx->max_analyze_duration = 2 * AV_TIME_BASE; // read 2 seconds of data
           {
               AVFormatContext *fmtCtxRaw = fmt_ctx.get();
               if (avformat_open_input(&fmtCtxRaw, "", nullptr, nullptr) < 0) {
                   std::cerr << "Could not open movie" << std::endl;
                   return frames;
               }
           }
           if (avformat_find_stream_info(fmt_ctx.get(), nullptr) < 0) {
               std::cerr << "Could not find stream information" << std::endl;
               return frames;
           }
           if((video_stream_idx = av_find_best_stream(fmt_ctx.get(), AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0)) < 0)
           {
               std::cerr << "Could not find video stream" << std::endl;
               return frames;
           }
           AVStream *video_stream = fmt_ctx->streams[video_stream_idx];
           AVCodec *dec = avcodec_find_decoder(video_stream->codecpar->codec_id);

           video_dec_ctx = std::unique_ptr<AVCodecContext,avcodec_context_deleter> (avcodec_alloc_context3(dec),
                                                                                 avcodec_context_deleter());
           if (!video_dec_ctx)
           {
               std::cerr << "Failed to allocate the video codec context" << std::endl;
               return frames;
           }
           avcodec_parameters_to_context(video_dec_ctx.get(),video_stream->codecpar);
           video_dec_ctx->thread_count = 1;
          /* video_dec_ctx->max_b_frames = 0;
           video_dec_ctx->frame_skip_threshold = 10;*/

           AVDictionary *opts = nullptr;
           av_dict_set(&opts, "refcounted_frames", "1", 0);
           av_dict_set(&opts, "deadline", "1", 0);
           av_dict_set(&opts, "auto-alt-ref", "0", 0);
           av_dict_set(&opts, "lag-in-frames", "1", 0);
           av_dict_set(&opts, "rc_lookahead", "1", 0);
           av_dict_set(&opts, "drop_frame", "1", 0);
           av_dict_set(&opts, "error-resilient", "1", 0);

           int width = video_dec_ctx->width;
           videoHeight = video_dec_ctx->height;

           if(avcodec_open2(video_dec_ctx.get(), dec, &opts) < 0)
           {
               std::cerr << "Failed to open the video codec context" << std::endl;
               return frames;
           }

           AVPixelFormat  pFormat = AV_PIX_FMT_BGR24;
           img_convert_ctx = std::unique_ptr<SwsContext,swscontext_deleter>(sws_getContext(width, videoHeight,
                                            video_dec_ctx->pix_fmt,   width, videoHeight, pFormat,
                                            SWS_BICUBIC, nullptr, nullptr,nullptr),swscontext_deleter());

           frame = std::unique_ptr<AVFrame,avframe_deleter>(av_frame_alloc(),avframe_deleter());
           frameRGB = std::unique_ptr<AVFrame,avframe_deleter>(av_frame_alloc(),avframe_deleter());


           int numBytes = av_image_get_buffer_size(pFormat, width, videoHeight,32 /*https://stackoverflow.com/questions/35678041/what-is-linesize-alignment-meaning*/);
           std::unique_ptr<uint8_t,avbuffer_deleter> imageBuffer((uint8_t *) av_malloc(numBytes*sizeof(uint8_t)),avbuffer_deleter());
           av_image_fill_arrays(frameRGB->data,frameRGB->linesize,imageBuffer.get(),pFormat,width,videoHeight,32);
           frameRGB->width = width;
           frameRGB->height = videoHeight;

           initializedCodecs = true;
       }
       AVPacket pkt;
       av_init_packet(&pkt);
       pkt.data = nullptr;
       pkt.size = 0;

       int read_frame_return = 0;
       while ( (read_frame_return=av_read_frame(fmt_ctx.get(), &pkt)) >= 0)
       {
           readFrame(&frames,&pkt,video_dec_ctx.get(),frame.get(),img_convert_ctx.get(),
                     videoHeight,frameRGB.get());
           //if(cancelled) break;
       }
       avioCtxPtr->eof_reached = 0;
       avioCtxPtr->error = 0;


       //flush
      // readFrame(&frames,nullptr,video_dec_ctx.get(),frame.get(),
        //         img_convert_ctx.get(),videoHeight,frameRGB.get());

       avioCtxPtr->eof_reached = 0;
       avioCtxPtr->error = 0;

       if(frames.size() <= 0)
       {
           std::cout << "buffer pos: "<<buf.currentPos<<", buff size: "<<buf.data.size()<<std::endl;
       }
       return frames;
    }

    What I would expect to happen is a continuous extraction of cv::Mat frames as I feed it more and more data. What actually happens is that after the buffer is fully read, I see:

    [matroska,webm @ 0x507b450] Read error at pos. 1278266 (0x13813a)
    [matroska,webm @ 0x507b450] Seek to desired resync point failed. Seeking to earliest point available instead.

    And then no more bytes are read from the buffer, even if I later add more data to it.

    I am doing something terribly wrong here, and I don't understand what.