Advanced search

Media (1)

Keyword: - Tags -/bug

Other articles (86)

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (open office, microsoft office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • Automated installation script of MediaSPIP

    25 April 2011

    To overcome the difficulties caused mainly by the installation of server-side software dependencies, an "all-in-one" installation script written in Bash was created to simplify this step on a server running a compatible Linux distribution.
    To use it you must have SSH access to your server and a root account, which the script uses to install the dependencies. Contact your hosting provider if you do not have these.
    The documentation for using this installation script is available here.
    The code of this (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (6993)

  • Video Conferencing in HTML5: WebRTC via Socket.io

    http://mirror.linux.org.au/linux.conf.au/2013/mp4/Code_up_your_own_video_conference_in_HTML5.mp4
    5 February 2013, by silvia

    Six months ago I experimented with Web sockets for WebRTC and the early implementations of PeerConnection in Chrome. Last week I gave a presentation about WebRTC at Linux.conf.au, so it was time to update that codebase.

    I decided to use socket.io for the signalling, following Luc's idea, which made the server code even smaller and reduced it to a mere reflector:

     var app = require('http').createServer().listen(1337);
     var io = require('socket.io').listen(app);

     io.sockets.on('connection', function (socket) {
         socket.on('message', function (message) {
             socket.broadcast.emit('message', message);
         });
     });
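
    Assuming Node.js and the socket.io package of that era (the .listen(app) call is the old 0.9-style API), that is the whole server: every connected page simply has its 'message' events rebroadcast verbatim to every other connected page.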

    Then I turned to the client code. I was surprised to see the massive changes that PeerConnection has gone through. Check out my slide deck to see the different components that are now necessary to create a PeerConnection.

    I was particularly surprised to see the SDP object now fully exposed to JavaScript and thus the ability to manipulate it directly rather than through some API. This allows Web developers to manipulate the type of session that they are asking the browsers to set up. I can imagine, e.g., that if they have support for a video codec in JavaScript that the browser does not provide built-in, they can add that codec to the set of choices to be offered to the peer. While it is flexible, I am concerned that this might create more problems than it solves. I guess we'll have to wait and see.
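
    As a purely illustrative sketch (not code from the talk, and written against today's promise-based API rather than the 2013 one), the point is simply that the offer's SDP is a plain string the page can edit before handing it back to the browser:

     // Hypothetical illustration of direct SDP manipulation.
     var pc = new RTCPeerConnection({ iceServers: [] });
     pc.addTransceiver('video');            // make sure the offer contains an m=video section
     pc.createOffer().then(function (offer) {
         // offer.sdp is plain text; a page could, for example, rewrite the
         // m=video line here to change which codecs are offered to the peer.
         var mungedSdp = offer.sdp;         // edit the string as needed
         return pc.setLocalDescription({ type: 'offer', sdp: mungedSdp });
     });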

    I was also surprised by the need to use ICE, even though in my experiment I got away with an empty list of ICE servers – the ICE messages just got exchanged through the socket.io server. I am not sure whether this is a bug, but I was very happy about it because it meant I could run the whole demo on a completely separate network from the Internet.
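
    To make those pieces concrete, here is a minimal client-side sketch of the signalling flow. It is not the code from the talk: it uses the current promise-based RTCPeerConnection API, and the <video id="remote"> element, the start() helper, the message shapes and the use of the socket.io client script (providing io) are all assumptions; only the port (1337) and the relay behaviour come from the server code above.

     // Hypothetical client-side counterpart: all SDP and ICE messages are
     // relayed through the socket.io reflector as 'message' events.
     var socket = io.connect('http://localhost:1337');
     var pc = new RTCPeerConnection({ iceServers: [] });   // empty ICE server list, as in the experiment

     // Relay our ICE candidates to the other peer through the server.
     pc.onicecandidate = function (event) {
         if (event.candidate) {
             socket.emit('message', { type: 'candidate', candidate: event.candidate });
         }
     };

     // Show the remote stream in the assumed <video id="remote"> element.
     pc.ontrack = function (event) {
         document.getElementById('remote').srcObject = event.streams[0];
     };

     // Attach the local camera and microphone (should finish before start() is called).
     navigator.mediaDevices.getUserMedia({ video: true, audio: true })
         .then(function (stream) {
             stream.getTracks().forEach(function (track) { pc.addTrack(track, stream); });
         });

     // Handle offer/answer/candidate messages relayed by the reflector.
     socket.on('message', function (message) {
         if (message.type === 'offer') {
             pc.setRemoteDescription(message)
                 .then(function () { return pc.createAnswer(); })
                 .then(function (answer) { return pc.setLocalDescription(answer); })
                 .then(function () { socket.emit('message', pc.localDescription); });
         } else if (message.type === 'answer') {
             pc.setRemoteDescription(message);
         } else if (message.type === 'candidate') {
             pc.addIceCandidate(message.candidate);
         }
     });

     // Called on one side only to start the call.
     function start() {
         pc.createOffer()
             .then(function (offer) { return pc.setLocalDescription(offer); })
             .then(function () { socket.emit('message', pc.localDescription); });
     }

    One page calls start() to create the offer; the other answers when the offer arrives through the reflector.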

    The most exciting news since my talk is that Mozilla and Google have managed to get a PeerConnection working between Firefox and Chrome – this is the first cross-browser video conference call without a plugin! The code differences are minor.

    Since the specifications of the WebRTC API and the MediaStream API are now official Working Drafts at the W3C, I expect other browsers will follow. I am also looking forward to the possibilities of:

    The best places to learn about the latest possibilities of WebRTC are webrtc.org and the W3C WebRTC WG. code.google.com has open source code that continues to be updated to the latest released and interoperable features in browsers.

    The video of my talk is in the process of being published. There is an MP4 version on the Linux Australia mirror server, but I expect it will be published properly soon. I will update the blog post when that happens.

  • How to parse and decode an H264 file with libav/ffmpeg?

    12 August 2022, by isrepeat

    Following the official documentation, I am trying to decode my test.mp4 with AV_CODEC_ID_H264.

    Of course I can do this with av_read_frame(), but how do I do it with av_parser_parse2()?

    The problem occurs in avcodec_send_packet(...), inside decode_nal_units(...), at ff_h2645_packet_split(...) [h264dec.c].
    extern "C" {&#xA;#include <libavcodec></libavcodec>avcodec.h>&#xA;#include <libavformat></libavformat>avformat.h>&#xA;}&#xA;&#xA;//#define INBUF_SIZE 4096&#xA;#define INBUF_SIZE 256000&#xA;&#xA;void decode(AVCodecContext* dec_ctx, AVFrame* frame, AVPacket* pkt, const char* filename);&#xA;&#xA;int main(int argc, char** argv)&#xA;{&#xA;    const char* filename;&#xA;    const AVCodec* codec;&#xA;    AVFormatContext* formatCtx = NULL;&#xA;    AVCodecParserContext* parser;&#xA;    AVCodecContext* c = NULL;&#xA;    AVStream* videoStream = NULL;&#xA;    FILE* f;&#xA;    AVFrame* frame;&#xA;    uint8_t inbuf[INBUF_SIZE &#x2B; AV_INPUT_BUFFER_PADDING_SIZE];&#xA;    uint8_t* data;&#xA;    size_t   data_size;&#xA;    int ret;&#xA;    AVPacket* pkt;&#xA;&#xA;    filename = "D:\\test.mp4";&#xA;&#xA;    //if (avformat_open_input(&amp;formatCtx, filename, nullptr, nullptr) &lt; 0) {&#xA;    //    throw std::exception("Could not open source file");&#xA;    //}&#xA;&#xA;    //if (avformat_find_stream_info(formatCtx, nullptr) &lt; 0) {&#xA;    //    throw std::exception("Could not find stream information");&#xA;    //}&#xA;&#xA;    //videoStream = formatCtx->streams[0];&#xA;&#xA;&#xA;&#xA;    pkt = av_packet_alloc();&#xA;    if (!pkt)&#xA;        exit(1);&#xA;&#xA;    /* set end of buffer to 0 (this ensures that no overreading happens for damaged MPEG streams) */&#xA;    memset(inbuf &#x2B; INBUF_SIZE, 0, AV_INPUT_BUFFER_PADDING_SIZE);&#xA;&#xA;    /* find the MPEG-1 video decoder */&#xA;    //codec = avcodec_find_decoder(AV_CODEC_ID_MPEG1VIDEO);&#xA;    codec = avcodec_find_decoder(AV_CODEC_ID_H264);&#xA;    if (!codec) {&#xA;        fprintf(stderr, "Codec not found\n");&#xA;        exit(1);&#xA;    }&#xA;&#xA;    parser = av_parser_init(codec->id);&#xA;    if (!parser) {&#xA;        fprintf(stderr, "parser not found\n");&#xA;        exit(1);&#xA;    }&#xA;&#xA;    parser->flags = PARSER_FLAG_COMPLETE_FRAMES;&#xA;&#xA;    c = avcodec_alloc_context3(codec);&#xA;    if (!c) {&#xA;        fprintf(stderr, "Could not allocate video codec context\n");&#xA;        exit(1);&#xA;    }&#xA;&#xA;    /* For some codecs, such as msmpeg4 and mpeg4, width and height&#xA;       MUST be initialized there because this information is not&#xA;       available in the bitstream. 
*/&#xA;&#xA;    //avcodec_parameters_to_context(c, videoStream->codecpar);&#xA;&#xA;       /* open it */&#xA;    if (avcodec_open2(c, codec, NULL) &lt; 0) {&#xA;        fprintf(stderr, "Could not open codec\n");&#xA;        exit(1);&#xA;    }&#xA;&#xA;    f = fopen(filename, "rb");&#xA;    if (!f) {&#xA;        fprintf(stderr, "Could not open %s\n", filename);&#xA;        exit(1);&#xA;    }&#xA;&#xA;    frame = av_frame_alloc();&#xA;    if (!frame) {&#xA;        fprintf(stderr, "Could not allocate video frame\n");&#xA;        exit(1);&#xA;    }&#xA;&#xA;    // ---- Use parser to get packets ----&#xA;    while (!feof(f)) {&#xA;        /* read raw data from the input file */&#xA;        data_size = fread(inbuf, 1, INBUF_SIZE, f);&#xA;        if (!data_size)&#xA;            break;&#xA;&#xA;        /* use the parser to split the data into frames */&#xA;        data = inbuf;&#xA;        while (data_size > 0) {&#xA;            ret = av_parser_parse2(parser, c, &amp;pkt->data, &amp;pkt->size, data, data_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);&#xA;            if (ret &lt; 0) {&#xA;                fprintf(stderr, "Error while parsing\n");&#xA;                exit(1);&#xA;            }&#xA;            data &#x2B;= ret;&#xA;            data_size -= ret;&#xA;&#xA;            if (pkt->size)&#xA;                decode(c, frame, pkt, outfilename);&#xA;        }&#xA;    }&#xA;&#xA;    // ---- Use FormatContext to get packets ----&#xA;    //  while (av_read_frame(fmt_ctx, pkt) == 0)&#xA;    //  {&#xA;    //      if (pkt->stream_index == AVMEDIA_TYPE_VIDEO) {&#xA;    //          if (pkt->size > 0)&#xA;    //              decode(cdc_ctx, frame, pkt, fp_out);&#xA;    //      }&#xA;    //  }&#xA;&#xA;    /* flush the decoder */&#xA;    decode(c, frame, NULL, outfilename);&#xA;&#xA;    fclose(f);&#xA;&#xA;    av_parser_close(parser);&#xA;    avcodec_free_context(&amp;c);&#xA;    av_frame_free(&amp;frame);&#xA;    av_packet_free(&amp;pkt);&#xA;&#xA;    return 0;&#xA;}&#xA;&#xA;void decode(AVCodecContext* dec_ctx, AVFrame* frame, AVPacket* pkt, const char* filename)&#xA;{&#xA;    char buf[1024];&#xA;    int ret;&#xA;&#xA;    ret = avcodec_send_packet(dec_ctx, pkt);&#xA;    if (ret &lt; 0) {&#xA;        char buff[255]{ 0 };&#xA;        std::string strError = av_make_error_string(buff, 255, ret);&#xA;        fprintf(stderr, "Error sending a packet for decoding\n");&#xA;        exit(1);&#xA;    }&#xA;&#xA;    while (ret >= 0) {&#xA;        ret = avcodec_receive_frame(dec_ctx, frame);&#xA;        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)&#xA;            return;&#xA;        else if (ret &lt; 0) {&#xA;            fprintf(stderr, "Error during decoding\n");&#xA;            exit(1);&#xA;        }&#xA;&#xA;        printf("saving frame %3d\n", dec_ctx->frame_number);&#xA;        fflush(stdout);&#xA;        /* the picture is allocated by the decoder. no need to&#xA;           free it */&#xA;        // handle frame ...&#xA;    }&#xA;}&#xA;
