Advanced search

Media (1)

Keyword: - Tags -/école

Other articles (104)

  • Submit improvements and additional plugins

    10 April 2011

    If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know, and its integration into the official distribution will be considered.
    You can use the development mailing list to announce it or to ask for help with building the plugin. Since MediaSPIP is based on SPIP, it is also possible to use SPIP's SPIP-zone mailing list to (...)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all types.
    It creates "media" items, namely: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a so-called "media" article;

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010, by

    To work properly, the central/master site of the farm needs several plugins on top of those used by the channels: the Gestion de la mutualisation plugin; the inscription3 plugin, which handles registrations and requests to create a shared-hosting instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (9663)

  • Muxing with libav

    14 February 2014, by LordDoskias

    I have a program which is supposed to demux an input MPEG-TS, transcode the MPEG-2 video into H.264, and then mux the audio alongside the transcoded video. When I open the resulting muxed file with VLC I get neither audio nor video. Here is the relevant code.

    My main worker loop is as follows:

    void
    *writer_thread(void *thread_ctx) {

       struct transcoder_ctx_t *ctx = (struct transcoder_ctx_t *) thread_ctx;
       AVStream *video_stream = NULL, *audio_stream = NULL;
       AVFormatContext *output_context = init_output_context(ctx, &video_stream, &audio_stream);
       struct mux_state_t mux_state = {0};

       //from omxtx
       mux_state.pts_offset = av_rescale_q(ctx->input_context->start_time, AV_TIME_BASE_Q, output_context->streams[ctx->video_stream_index]->time_base);

       //write stream header if any
       avformat_write_header(output_context, NULL);

       //do not start doing anything until we get an encoded packet
       pthread_mutex_lock(&ctx->pipeline.video_encode.is_running_mutex);
       while (!ctx->pipeline.video_encode.is_running) {
           pthread_cond_wait(&ctx->pipeline.video_encode.is_running_cv, &ctx->pipeline.video_encode.is_running_mutex);
       }

       while (!ctx->pipeline.video_encode.eos || !ctx->processed_audio_queue->queue_finished) {
           //FIXME a memory barrier is required here so that we don't race
           //on above variables

           //fill a buffer with video data
           OERR(OMX_FillThisBuffer(ctx->pipeline.video_encode.h, omx_get_next_output_buffer(&ctx->pipeline.video_encode)));

           write_audio_frame(output_context, audio_stream, ctx); //write full audio frame
           //FIXME no guarantee that we have a full frame per packet?
           write_video_frame(output_context, video_stream, ctx, &mux_state); //write full video frame
           //encoded_video_queue is being filled by the previous command

       }

       av_write_trailer(output_context);

       //free all the resources
       avcodec_close(video_stream->codec);
       avcodec_close(audio_stream->codec);
       /* Free the streams. */
       for (int i = 0; i < output_context->nb_streams; i++) {
           av_freep(&output_context->streams[i]->codec);
           av_freep(&output_context->streams[i]);
       }

       if (!(output_context->oformat->flags & AVFMT_NOFILE)) {
           /* Close the output file. */
           avio_close(output_context->pb);
       }


       /* free the stream */
       av_free(output_context);
       free(mux_state.pps);
       free(mux_state.sps);
       return NULL;
    }
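
    One note on the header write above: avformat_write_header() returns a negative error code on failure, and a broken stream setup usually surfaces there first. A minimal checking sketch (av_strerror() is a standard libav call; the handling shown is only illustrative):

       //a bad stream setup usually shows up here first
       int header_ret = avformat_write_header(output_context, NULL);
       if (header_ret < 0) {
           char errbuf[128];
           av_strerror(header_ret, errbuf, sizeof(errbuf));
           fprintf(stderr, "[DEBUG] avformat_write_header failed: %s\n", errbuf);
       }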

    The code for initialising the libav output context is this:

    static
    AVFormatContext *
    init_output_context(const struct transcoder_ctx_t *ctx, AVStream **video_stream, AVStream **audio_stream) {
       AVFormatContext *oc;
       AVOutputFormat *fmt;
       AVStream *input_stream, *output_stream;
       AVCodec *c;
       AVCodecContext *cc;
       int audio_copied = 0; //copy just 1 stream

       fmt = av_guess_format("mpegts", NULL, NULL);
       if (!fmt) {
           fprintf(stderr, "[DEBUG] Error guessing format, dying\n");
           exit(199);
       }

       oc = avformat_alloc_context();
       if (!oc) {
           fprintf(stderr, "[DEBUG] Error allocating context, dying\n");
           exit(200);
       }

       oc->oformat = fmt;
       snprintf(oc->filename, sizeof(oc->filename), "%s", ctx->output_filename);
       oc->debug = 1;
       oc->start_time_realtime = ctx->input_context->start_time;
       oc->start_time = ctx->input_context->start_time;
       oc->duration = 0;
       oc->bit_rate = 0;

       for (int i = 0; i < ctx->input_context->nb_streams; i++) {
           input_stream = ctx->input_context->streams[i];
           output_stream = NULL;
           if (input_stream->index == ctx->video_stream_index) {
               //copy stuff from input video index
               c = avcodec_find_encoder(CODEC_ID_H264);
               output_stream = avformat_new_stream(oc, c);
               *video_stream = output_stream;
               cc = output_stream->codec;
               cc->width = input_stream->codec->width;
               cc->height = input_stream->codec->height;
               cc->codec_id = CODEC_ID_H264;
               cc->codec_type = AVMEDIA_TYPE_VIDEO;
               cc->bit_rate = ENCODED_BITRATE;
               cc->time_base = input_stream->codec->time_base;

               output_stream->avg_frame_rate = input_stream->avg_frame_rate;
               output_stream->r_frame_rate = input_stream->r_frame_rate;
               output_stream->start_time = AV_NOPTS_VALUE;

           } else if ((input_stream->codec->codec_type == AVMEDIA_TYPE_AUDIO) && !audio_copied)  {
               /* I care only about the first audio stream */
               c = avcodec_find_encoder(input_stream->codec->codec_id);
               output_stream = avformat_new_stream(oc, c);
               *audio_stream = output_stream;
               avcodec_copy_context(output_stream->codec, input_stream->codec);
               /* Apparently fixes a crash on .mkvs with attachments: */
               av_dict_copy(&output_stream->metadata, input_stream->metadata, 0);
               /* Reset the codec tag so as not to cause problems with output format */
               output_stream->codec->codec_tag = 0;
               audio_copied = 1;
           }
       }

       for (int i = 0; i < oc->nb_streams; i++) {
           if (oc->oformat->flags & AVFMT_GLOBALHEADER)
               oc->streams[i]->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
           if (oc->streams[i]->codec->sample_rate == 0)
               oc->streams[i]->codec->sample_rate = 48000; /* ish */
       }

       if (!(fmt->flags & AVFMT_NOFILE)) {
           fprintf(stderr, "[DEBUG] AVFMT_NOFILE set, allocating output container\n");
           if (avio_open(&oc->pb, ctx->output_filename, AVIO_FLAG_WRITE) < 0) {
               fprintf(stderr, "[DEBUG] error creating the output context\n");
               exit(1);
           }
       }

       return oc;
    }
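
    To sanity-check the context built above, av_dump_format() (a standard libav call) prints the streams, codecs and time bases so they can be compared against the input side. A one-line illustrative use right after this function returns:

       //the final argument selects output (1) rather than input (0)
       av_dump_format(oc, 0, ctx->output_filename, 1);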

    Finally, this is the code for writing audio:

    static
    void
    write_audio_frame(AVFormatContext *oc, AVStream *st, struct transcoder_ctx_t *ctx) {
       AVPacket pkt = {0}; // data and size must be 0;
       struct packet_t *source_audio;
       av_init_packet(&pkt);

       if (!(source_audio = packet_queue_get_next_item_asynch(ctx->processed_audio_queue))) {
           return;
       }

       pkt.stream_index = st->index;
       pkt.size = source_audio->data_length;
       pkt.data = source_audio->data;
       pkt.pts = source_audio->PTS;
       pkt.dts = source_audio->DTS;
       pkt.duration = source_audio->duration;
       pkt.destruct = avpacket_destruct;
       /* Write the compressed frame to the media file. */
       if (av_interleaved_write_frame(oc, &pkt) != 0) {
           fprintf(stderr, "[DEBUG] Error while writing audio frame\n");
       }

       packet_queue_free_packet(source_audio, 0);
    }
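
    Because the audio packets are copied 1:1, their PTS/DTS stay in the input stream's time base. With MPEG-TS on both ends the time base is typically 1/90000, so the conversion may be a no-op, but an explicit rescale would look like the following sketch (av_rescale_q() is the real libav call; in_st standing for the matching input AVStream is an assumption):

       //hypothetical: rescale copied timestamps into the output stream's time base
       pkt.pts = av_rescale_q(source_audio->PTS, in_st->time_base, st->time_base);
       pkt.dts = av_rescale_q(source_audio->DTS, in_st->time_base, st->time_base);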

    A resulting MPEG-4 file can be obtained here: http://87.120.131.41/dl/mpeg4.h264

    I have omitted the write_video_frame code since it is a lot more complicated and I might be doing something wrong there, as I'm doing timebase conversion etc. For audio, however, I'm doing a 1:1 copy. Each packet_t packet contains data from av_read_frame on the input MPEG-TS container. In the worst case I'd expect my audio to work even if my video doesn't, yet I cannot get either of them to work. The documentation seems rather vague on things like this – I've tried both the libav and ffmpeg IRC channels to no avail. Any information on how I can debug the issue would be greatly appreciated.
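
    For illustration, here is a simplified sketch of what a write_video_frame doing that timebase conversion could look like. It is not the omitted code: it assumes the encoded packets carry microsecond (AV_TIME_BASE_Q) timestamps, that ctx->encoded_video_queue exposes the same queue API as the audio path, and that the OMX encoder emits no B-frames; the SPS/PPS handling implied by mux_state is left out.

    static
    void
    write_video_frame(AVFormatContext *oc, AVStream *st, struct transcoder_ctx_t *ctx, struct mux_state_t *mux_state) {
       AVPacket pkt = {0}; // data and size must be 0;
       struct packet_t *source_video;
       av_init_packet(&pkt);

       //same asynchronous queue accessor as the audio path (assumption)
       if (!(source_video = packet_queue_get_next_item_asynch(ctx->encoded_video_queue))) {
           return;
       }

       pkt.stream_index = st->index;
       pkt.size = source_video->data_length;
       pkt.data = source_video->data;
       //rescale from microseconds (assumption) into the muxer time base,
       //then subtract the start-time offset computed in the worker loop
       pkt.pts = av_rescale_q(source_video->PTS, AV_TIME_BASE_Q, st->time_base) - mux_state->pts_offset;
       pkt.dts = pkt.pts; //assuming no B-frames from the OMX encoder
       pkt.destruct = avpacket_destruct;

       if (av_interleaved_write_frame(oc, &pkt) != 0) {
           fprintf(stderr, "[DEBUG] Error while writing video frame\n");
       }

       packet_queue_free_packet(source_video, 0);
    }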

  • Video Conferencing in HTML5 : WebRTC via Web Sockets

    14 June 2012, by silvia

    A bit over a week ago I gave a presentation at Web Directions Code 2012 in Melbourne. Maxine and John asked me to speak about something related to HTML5 video, so I went for the new shiny: WebRTC – real-time communication in the browser.

    Presentation slides

    I only had 20 min, so I had to make it tight. I wanted to show off video conferencing without special plugins in Google Chrome in just a few lines of code, as is the promise of WebRTC. To a large extent, I achieved this. But I made some interesting discoveries along the way. Demos are in the slide deck.

    UPDATE: Opera 12 has been released with WebRTC support.

    Housekeeping: if you want to replicate what I have done, you need to install Google Chrome 19+. Then make sure you go to chrome://flags and activate the MediaStream and PeerConnection experiments. Restart your browser and you can then experiment with this feature. Big warning up front: it's not production-ready, since there are still changes happening to the spec and there is no compatible implementation by another browser yet.

    Here is a brief summary of the steps involved in setting up video conferencing in your browser:

    1. Set up a video element each for the local and the remote video stream.
    2. Grab the local camera and stream it to the first video element.
    3. (*) Establish a connection to another person running the same Web page.
    4. Send the local camera stream on that peer connection.
    5. Accept the remote camera stream into the second video element.

    Now, the most difficult part of all of this – believe it or not – is the signalling required to build the peer connection (marked with (*)). Initially I wanted to run completely without a server and just enter the remote peer's IP address to establish the connection. This is, however, not functionality that the PeerConnection object provides [might this be something to add to the spec?].

    So you need a server known to both parties that can broker the handshake to set up the connection. All the examples I have seen, such as https://apprtc.appspot.com/, use a channel management server on Google's App Engine. I wanted it all working with HTML5 technology, so I decided to use a Web Socket server instead.

    I implemented my Web Socket server using node.js (code of the websocket server). The video conferencing demo is in the slide deck in an iframe – you can also use the stand-alone HTML page. Works like a treat.

    While it is still using Google's STUN server to get through NAT, the messaging for setting up the connection runs completely through the Web Socket server. The messages exchanged are plain SDP packets with a session ID; OFFER, ANSWER, and OK packets are exchanged for each streaming direction. You can see some of it in the image below:

    [Image: WebRTC demo]

    I’m not running a public WebSocket server, so you won’t be able to see this part of the presentation working. But the local loopback video should work.

    At the conference, it all went without a hitch (while the wireless played along). I believe you have to host the WebSocket server on the same machine as the Web page, otherwise it won’t work for security reasons.

    A whole new world of opportunities lies out there when we get the ability to set up video conferencing on every Web page – scary and exciting at the same time!