Tag: lev manovitch

Other articles (69)

  • Customizing by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • The farm’s regular Cron tasks

    1 December 2010

    Managing the farm involves running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of all the instances in the mutualised farm on a regular basis. Coupled with a system Cron on the farm’s central site, this makes it possible to generate regular visits to the various sites, so that the tasks of rarely visited sites are not too (...)
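
    As an illustration of the mechanism just described, such a system Cron entry could look like this (hypothetical URL; the excerpt does not give the actual endpoint):

    * * * * * curl -s http://central-site.example/ > /dev/null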

  • Authorizations overridden by plugins

    27 April 2010

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

On other sites (10529)

  • Fragmented MP4 - problem playing in browser

    12 June 2019, by PookyFan

    I’m trying to create a fragmented MP4 from raw H264 video data so I can play it in an internet browser’s player. My goal is to create a live streaming system, where a media server would send fragmented MP4 pieces to the browser. The server would buffer input data from a Raspberry Pi camera, which sends the video as H264 frames. It would then mux that video data and make it available to the client. The browser would play the media data (muxed by the server and sent e.g. through a websocket) using Media Source Extensions.

    For test purposes I wrote the following pieces of code (using many examples I found on the internet):

    A C++ application using avcodec that muxes raw H264 video to fragmented MP4 and saves it to a file:

    #define READBUFSIZE 4096
    #define IOBUFSIZE 4096
    #define ERRMSGSIZE 128

    #include <cstdint>
    #include <iostream>
    #include <fstream>
    #include <string>
    #include <vector>

    extern "C"
    {
       #include <libavformat/avformat.h>
       #include <libavutil/error.h>
       #include <libavutil/opt.h>
    }

    enum NalType : uint8_t
    {
       //NALs containing stream metadata
       SEQ_PARAM_SET = 0x7,
       PIC_PARAM_SET = 0x8
    };

    std::vector<uint8_t> outputData;

    int mediaMuxCallback(void *opaque, uint8_t *buf, int bufSize)
    {
       outputData.insert(outputData.end(), buf, buf + bufSize);
       return bufSize;
    }

    std::string getAvErrorString(int errNr)
    {
       char errMsg[ERRMSGSIZE];
       av_strerror(errNr, errMsg, ERRMSGSIZE);
       return std::string(errMsg);
    }

    int main(int argc, char **argv)
    {
       if(argc < 2)
       {
           std::cout << "Missing file name" << std::endl;
           return 1;
       }

       std::fstream file(argv[1], std::ios::in | std::ios::binary);
       if(!file.is_open())
       {
           std::cout << "Couldn't open file " << argv[1] << std::endl;
           return 2;
       }

       std::vector<uint8_t> inputMediaData;
       do
       {
           char buf[READBUFSIZE];
           file.read(buf, READBUFSIZE);

           int size = file.gcount();
           if(size > 0)
               inputMediaData.insert(inputMediaData.end(), buf, buf + size);
       } while(!file.eof());
       file.close();

       //Initialize avcodec
       av_register_all();
       uint8_t *ioBuffer;
       AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
       AVCodecContext *codecCtxt = avcodec_alloc_context3(codec);
       AVCodecParserContext *parserCtxt = av_parser_init(AV_CODEC_ID_H264);
       AVOutputFormat *outputFormat = av_guess_format("mp4", nullptr, nullptr);
       AVFormatContext *formatCtxt;
       AVIOContext *ioCtxt;
       AVStream *videoStream;

       int res = avformat_alloc_output_context2(&formatCtxt, outputFormat, nullptr, nullptr);
       if(res < 0)
       {
           std::cout << "Couldn't initialize format context; the error was: " << getAvErrorString(res) << std::endl;
           return 3;
       }

       if((videoStream = avformat_new_stream( formatCtxt, avcodec_find_encoder(formatCtxt->oformat->video_codec) )) == nullptr)
       {
           std::cout << "Couldn't initialize video stream" << std::endl;
           return 4;
       }
       else if(!codec)
       {
           std::cout << "Couldn't initialize codec" << std::endl;
           return 5;
       }
       else if(codecCtxt == nullptr)
       {
           std::cout << "Couldn't initialize codec context" << std::endl;
           return 6;
       }
       else if(parserCtxt == nullptr)
       {
           std::cout << "Couldn't initialize parser context" << std::endl;
           return 7;
       }
       else if((ioBuffer = (uint8_t*)av_malloc(IOBUFSIZE)) == nullptr)
       {
           std::cout << "Couldn't allocate I/O buffer" << std::endl;
           return 8;
       }
       else if((ioCtxt = avio_alloc_context(ioBuffer, IOBUFSIZE, 1, nullptr, nullptr, mediaMuxCallback, nullptr)) == nullptr)
       {
           std::cout << "Couldn't initialize I/O context" << std::endl;
           return 9;
       }

       //Set video stream data
       videoStream->id = formatCtxt->nb_streams - 1;
       videoStream->codec->width = 1280;
       videoStream->codec->height = 720;
       videoStream->time_base.den = 60; //FPS
       videoStream->time_base.num = 1;
       videoStream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
       formatCtxt->pb = ioCtxt;

       //Retrieve SPS and PPS for codec extdata
       const uint32_t synchMarker = 0x01000000;
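       // The Annex-B start code 00 00 00 01, read as a little-endian uint32_t
       // on x86/ARM, appears as 0x01000000 (hence the constant above).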
       unsigned int i = 0;
       int spsStart = -1, ppsStart = -1;
       uint16_t spsSize = 0, ppsSize = 0;
       while(spsSize == 0 || ppsSize == 0)
       {
           uint32_t *curr =  (uint32_t*)(inputMediaData.data() + i);
           if(*curr == synchMarker)
           {
               unsigned int currentNalStart = i;
               i += sizeof(uint32_t);
               uint8_t nalType = inputMediaData.data()[i] & 0x1F;
               if(nalType == SEQ_PARAM_SET)
                   spsStart = currentNalStart;
               else if(nalType == PIC_PARAM_SET)
                   ppsStart = currentNalStart;

               if(spsStart >= 0 && spsSize == 0 && spsStart != i)
                   spsSize = currentNalStart - spsStart;
               else if(ppsStart >= 0 && ppsSize == 0 && ppsStart != i)
                   ppsSize = currentNalStart - ppsStart;
           }
           ++i;
       }

       videoStream->codec->extradata = inputMediaData.data() + spsStart;
       videoStream->codec->extradata_size = ppsStart + ppsSize;

       //Write main header
       AVDictionary *options = nullptr;
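       // frag_custom lets the caller cut fragments manually via av_write_frame(ctx, nullptr);
       // empty_moov writes an initial, sample-less moov atom, as MSE players expect.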
       av_dict_set(&options, "movflags", "frag_custom+empty_moov", 0);
       res = avformat_write_header(formatCtxt, &options);
       if(res < 0)
       {
           std::cout << "Couldn't write container main header; the error was: " << getAvErrorString(res) << std::endl;
           return 10;
       }

       //Retrieve frames from input video and wrap them in container
       int currentInputIndex = 0;
       int framesInSecond = 0;
       while(currentInputIndex < inputMediaData.size())
       {
           uint8_t *frameBuffer;
           int frameSize;
           res = av_parser_parse2(parserCtxt, codecCtxt, &frameBuffer, &frameSize, inputMediaData.data() + currentInputIndex,
               inputMediaData.size() - currentInputIndex, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
           if(frameSize == 0) //No more frames while some data still remains (is that even possible?)
           {
               std::cout << "Some data left unparsed: " << std::to_string(inputMediaData.size() - currentInputIndex) << std::endl;
               break;
           }

           //Prepare packet with video frame to be dumped into container
           AVPacket packet;
           av_init_packet(&packet);
           packet.data = frameBuffer;
           packet.size = frameSize;
           packet.stream_index = videoStream->index;
           currentInputIndex += frameSize;

           //Write packet to the video stream
           res = av_write_frame(formatCtxt, &packet);
           if(res < 0)
           {
               std::cout << "Couldn't write packet with video frame; the error was: " << getAvErrorString(res) << std::endl;
               return 11;
           }

           if(++framesInSecond == 60) //We want 1 segment per second
           {
               framesInSecond = 0;
               res = av_write_frame(formatCtxt, nullptr); //Flush segment
           }
       }
       res = av_write_frame(formatCtxt, nullptr); //Flush if something has been left

       //Write media data in container to file
       file.open("my_mp4.mp4", std::ios::out | std::ios::binary);
       if(!file.is_open())
       {
           std::cout << "Couldn't open output file " << std::endl;
           return 12;
       }

       file.write((char*)outputData.data(), outputData.size());
       if(file.fail())
       {
           std::cout << "Couldn't write to file" << std::endl;
           return 13;
       }

       std::cout << "Media file muxed successfully" << std::endl;
       return 0;
    }

    (I hardcoded a few values, such as the video dimensions and framerate, but as I said, this is just test code.)


    A simple HTML webpage using MSE to play my fragmented MP4:

       


    <video width="1280" height="720" controls="controls">
    </video>

    <script>
    var vidElement = document.querySelector('video');

    if (window.MediaSource) {
      var mediaSource = new MediaSource();
      vidElement.src = URL.createObjectURL(mediaSource);
      mediaSource.addEventListener('sourceopen', sourceOpen);
    } else {
      console.log("The Media Source Extensions API is not supported.")
    }

    function sourceOpen(e) {
      URL.revokeObjectURL(vidElement.src);
      var mime = 'video/mp4; codecs="avc1.640028"';
      var mediaSource = e.target;
      var sourceBuffer = mediaSource.addSourceBuffer(mime);
      var videoUrl = 'my_mp4.mp4';
      fetch(videoUrl)
        .then(function(response) {
          return response.arrayBuffer();
        })
        .then(function(arrayBuffer) {
          sourceBuffer.addEventListener('updateend', function(e) {
            if (!sourceBuffer.updating && mediaSource.readyState === 'open') {
              mediaSource.endOfStream();
            }
          });
          sourceBuffer.appendBuffer(arrayBuffer);
        });
    }
    </script>

    The output MP4 file generated by my C++ application can be played, e.g. in MPC, but it doesn’t play in any web browser I tested it with. It also doesn’t have any duration (MPC keeps showing 00:00).

    To compare with the output MP4 file I got from my C++ application described above, I also used FFMPEG to create a fragmented MP4 file from the same source file with the raw H264 stream. I used the following command:

    ffmpeg -r 60 -i input.h264 -c:v copy -f mp4 -movflags empty_moov+default_base_moof+frag_keyframe test.mp4

    The file generated by FFMPEG plays correctly in every web browser I used for tests. It also has a correct duration (though it has a trailing atom, which wouldn’t be present in my live stream anyway; and since I need a live stream, it won’t have any fixed duration in the first place).

    The MP4 atoms of both files look very similar (they have an identical avcc section for sure). What’s interesting (though I’m not sure whether it matters) is that both files use a different NAL format than the input file: the RPi camera produces the video stream in Annex-B format, while the output MP4 files contain NALs in AVCC format... or at least that’s how it looks when I compare the mdat atoms with the input H264 data.
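
    As background for the Annex-B/AVCC comparison above, here is a minimal, self-contained C++ sketch of the two framings (byte values illustrative, not taken from the files discussed):

    #include <cstdint>
    #include <cstdio>

    int main()
    {
        // Annex-B: each NAL unit is preceded by a 00 00 00 01 start code.
        const uint8_t annexB[] = { 0x00, 0x00, 0x00, 0x01, 0x65, 0x88, 0x84 };

        // AVCC (as stored in an MP4 mdat): the start code is replaced by a
        // big-endian length prefix covering the NAL payload that follows.
        const uint8_t avcc[] = { 0x00, 0x00, 0x00, 0x03, 0x65, 0x88, 0x84 };

        // Decoding the AVCC length prefix:
        uint32_t nalLen = (uint32_t(avcc[0]) << 24) | (uint32_t(avcc[1]) << 16)
                        | (uint32_t(avcc[2]) << 8) | uint32_t(avcc[3]);
        std::printf("NAL payload length: %u bytes\n", nalLen); // 3

        // The NAL type sits in the low 5 bits of the first payload byte in
        // both framings (0x65 & 0x1F == 5, an IDR slice).
        std::printf("NAL type: %d\n", annexB[4] & 0x1F);
        return 0;
    }

    The mov/mp4 muxer performs this rewriting itself (start codes into length prefixes), which is consistent with what is observed in the mdat atoms here.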

    I assume there is some field (or a few fields) I need to set for avcodec to make it produce a video stream that browsers’ players can properly decode and play. But which field(s) do I need to set? Or does the problem lie somewhere else? I’ve run out of ideas.


    EDIT 1:
    As suggested, I investigated the binary content of both MP4 files (generated by my app and by the FFMPEG tool) with a hex editor. What I can confirm:

    • both files have an identical avcc section (they match perfectly and are in AVCC format; I analyzed them byte by byte and there’s no mistake about it)
    • both files have NALs in AVCC format (I looked closely at the mdat atoms and they don’t differ between the two MP4 files)

    So I guess there’s nothing wrong with the extradata creation in my code: avcodec takes care of it properly, even if I just feed it SPS and PPS NALs. It converts them by itself, so there’s no need for me to do it by hand. Still, my original problem remains.

    EDIT 2: I achieved partial success: the MP4 generated by my app now plays in Firefox. I added this line to the code (along with the rest of the stream initialization):

    videoStream->codec->time_base = videoStream->time_base;

    So now this section of my code looks like this:

    //Set video stream data
    videoStream->id = formatCtxt->nb_streams - 1;
    videoStream->codec->width = 1280;
    videoStream->codec->height = 720;
    videoStream->time_base.den = 60; //FPS
    videoStream->time_base.num = 1;
    videoStream->codec->time_base = videoStream->time_base;
    videoStream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
    formatCtxt->pb = ioCtxt;

  • Screen recording of a Windows Forms app in a virtual machine

    19 November 2018, by Mahesh Vemuri

    Hi, I am trying to stream my WinForms app window. Whenever I connect to my system using RDP, it streams the window fine. If I close or minimize the RDP session, ffmpeg is no longer able to stream the window. I have tried many other tools, but I see the same behaviour.

    Is there any way to record my window in a virtual machine on AWS?
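
    The post doesn’t show the capture command used; for context, window capture with ffmpeg on Windows is typically done with the gdigrab device, along these lines (window title hypothetical):

    ffmpeg -f gdigrab -framerate 30 -i title="My WinForms App" -c:v libx264 -preset ultrafast out.mp4

    Note that gdigrab captures through GDI, which needs an active desktop session; once the RDP session is disconnected there is no desktop to capture from, which may explain the behaviour described.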

  • FFMpeg UDP streaming - random split video effect

    3 October 2014, by Philmacs

    I’m trying to get a simple local preview of my webcam from an FFMpeg UDP stream using an embedded MPlayer window. I can view the live stream using MPlayer, but the image is unstable. I’m using the following FFMpeg command:

    ffmpeg -f dshow -video_size 640x480 -i video="Lenovo EasyCamera" -an -f rawvideo -pix_fmt yuyv422 -r 15 udp://127.0.0.1:1234

    And this is the MPlayer command :

    mplayer -demuxer rawvideo -rawvideo fps=15:w=640:h=480:format=yuy2 -nofs -noquiet -identify -idle -slave -nomouseinput -framedrop -wid 1051072

    Sometimes the stream image is OK, but intermittently the image tears randomly; this is how it looks (sorry, not enough rep for images in posts):

    http://imgur.com/sLC3FW0

    I have tried with FFPlay to see if it’s a problem with MPlayer, but I get the same result:

    ffplay -s 640x480 -pix_fmt yuyv422 -f rawvideo -i udp://127.0.0.1:1234

    http://imgur.com/06L42Cj

    This effect happens at random. If I stop and restart, the video might be OK, or it may look like the above. Using anything other than UDP and rawvideo adds a delay to the video stream, which I want to avoid.

    The FFMpeg streaming guide suggests methods for dealing with packet loss, but as far as I’m aware I don’t seem to be getting any.
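
    For reference, FFmpeg’s udp protocol accepts buffer-tuning options directly in the URL; one hedged thing to try on the receiving side (values illustrative):

    ffplay -s 640x480 -pix_fmt yuyv422 -f rawvideo -i "udp://127.0.0.1:1234?fifo_size=100000&overrun_nonfatal=1"

    Raw yuyv422 at 640x480 and 15 fps is roughly 9 MB/s (640 x 480 x 2 bytes x 15), so an undersized receive buffer can silently drop datagrams mid-frame, which could produce this kind of intra-frame tearing even without network loss.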

    I’m new to FFMpeg/Mplayer/video streaming and any help or thoughts greatly appreciated.