Other articles (32)

  • Emballe Médias: Uploading documents simply

    29 October 2010

    The emballe médias plugin was developed primarily for the mediaSPIP distribution, but it is also used in other related projects, such as géodiversité. Required and compatible plugins
    To work, this plugin requires other plugins to be installed: CFG, Saisies, SPIP Bonux, Diogène, swfupload, jqueryui
    Other plugins can be used alongside it to improve its capabilities: Ancres douces, Légendes, photo_infos, spipmotion (...)

  • The plugin: Podcasts.

    14 July 2010

    The problem of podcasting is once again a problem that reveals the state of standardization of data transport on the Internet.
    Two interesting formats exist: the one developed by Apple, strongly geared toward the use of iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "free", notably backed by Yahoo and the Miro software.
    File types supported in the feeds
    Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

  • Support for all types of media

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and other formats (open office, microsoft office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

On other sites (5608)

  • Can't get ffmpeg to stream webcam

    29 November 2017, by joey

    I have a Raspberry Pi with ffmpeg installed and a Microsoft HD-3000 webcam attached.
    I run the following command:
    ffserver -f /etc/ffserver.conf & ffmpeg -framerate 21 -re -f video4linux2 -i /dev/video0 -f alsa -i sysdefault:CARD=HD3000 http://localhost:8090/feed1.ffm

    and I get the following:

    /etc/ffserver.conf:164: Setting default value for video bit rate tolerance = 16000. Use NoDefaults to disable it.
    /etc/ffserver.conf:164: Setting default value for video rate control equation = tex^qComp. Use NoDefaults to disable it.
    /etc/ffserver.conf:164: Setting default value for video max rate = 6229744. Use NoDefaults to disable it.
    /etc/ffserver.conf:219: Setting default value for audio sample rate = 22050. Use NoDefaults to disable it.
    /etc/ffserver.conf:219: Setting default value for audio channel count = 1. Use NoDefaults to disable it.
    /etc/ffserver.conf:219: Setting default value for video bit rate tolerance = 64000. Use NoDefaults to disable it.
    /etc/ffserver.conf:219: Setting default value for video rate control equation = tex^qComp. Use NoDefaults to disable it.
    /etc/ffserver.conf:219: Setting default value for video max rate = 6369328. Use NoDefaults to disable it.
    bind(port 8090): Address already in use
    Wed Nov 29 13:17:49 2017 Could not start server
    [video4linux2,v4l2 @ 0x1a35630] The driver changed the time per frame from 1/21 to 1/10
    Input #0, video4linux2,v4l2, from '/dev/video0':
     Duration: N/A, start: 12116.079136, bitrate: 147456 kb/s
       Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 1280x720, 147456 kb/s, 10 fps, 10 tbr, 1000k tbn, 1000k tbc
    Guessed Channel Layout for Input Stream #1.0 : stereo
    Input #1, alsa, from 'sysdefault:CARD=HD3000':
     Duration: N/A, start: 1511961469.424072, bitrate: 1536 kb/s
       Stream #1:0: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s
    [tcp @ 0x1a44160] Connection to tcp://localhost:8090 failed (Connection refused), trying next address
    Wed Nov 29 13:17:49 2017 127.0.0.1 - - [GET] "/feed1.ffm HTTP/1.1" 200 4175
    [tcp @ 0x1a63ca0] Connection to tcp://localhost:8090 failed (Connection refused), trying next address
    [mpeg1video @ 0x1a6ecb0] bitrate tolerance 21333 too small for bitrate 64000, overriding
    [mpeg1video @ 0x1a6ecb0] MPEG-1/2 does not support 3/1 fps
    Stream mapping:
     Stream #1:0 -> #0:0 (pcm_s16le (native) -> mp2 (native))
     Stream #0:0 -> #0:1 (rawvideo (native) -> mpeg1video (native))
     Stream #1:0 -> #0:2 (pcm_s16le (native) -> wmav2 (native))
     Stream #0:0 -> #0:3 (rawvideo (native) -> msmpeg4v3 (msmpeg4))
    Error while opening encoder for output stream #0:1 - maybe incorrect parameters such as bit_rate, rate, width or height
    Wed Nov 29 13:17:49 2017 127.0.0.1 - - [POST] "/feed1.ffm HTTP/1.1" 200 0
    [2]+  Exit 1                  ffserver -f /etc/ffserver.conf

    I use the default /etc/ffserver.conf.
    I can’t seem to figure out what the problem is.
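
    The only lines that look like hard errors are bind(port 8090): Address already in use followed by Could not start server, so maybe a previous ffserver instance is still holding the port. A quick way to check before relaunching (a sketch, assuming standard Linux tools) would be:

      # show which process already owns TCP port 8090 (possibly a stale ffserver)
      sudo fuser -v 8090/tcp
      # if it is a leftover ffserver, stop it before starting the pipeline again
      sudo pkill ffserver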

  • Use deck.js as a remote presentation tool

    8 January 2014, by silvia

    deck.js is one of the new HTML5-based presentation tools. It’s simple to use, in particular for your basic, everyday presentation needs. You can also create more complex slides with animations etc. if you know your HTML and CSS.

    Yesterday at linux.conf.au (LCA), I gave a presentation using deck.js. But I didn’t give it from the lectern in the room in Perth where LCA is being held – instead I gave it from the comfort of my home office at the other end of the country.

    I used my laptop with in-built webcam and my Chrome browser to give this presentation. Beforehand, I had uploaded the presentation to a Web server and shared the link with the organiser of my speaker track, who was on site in Perth and had set up his laptop in the same fashion as myself. His screen was projecting the Chrome tab in which my slides were loaded and he had hooked up the audio output of his laptop to the room speaker system. His camera was pointed at the audience so I could see their reaction.

    I loaded a slide master URL:
    http://html5videoguide.net/presentations/lca_2014_webrtc/?master
    and the room loaded the URL without the query string:
    http://html5videoguide.net/presentations/lca_2014_webrtc/.

    Then I gave my talk exactly as I would if I was in the same room. Yes, it felt exactly as though I was there, including nervousness and audience feedback.

    How did we do that? WebRTC (Web Real-time Communication) to the rescue, of course!

    We used one of the modules of the rtc.io project called rtc-glue to add the video conferencing functionality and the slide navigation to deck.js. It was actually really, really simple!

    Here are the few things we added to deck.js to make it work:

    • Code added to index.html to make the video connection work:
      <meta name="rtc-signalhost" content="http://rtc.io/switchboard/">
      <meta name="rtc-room" content="lca2014">
      ...
      <video id="localV" rtc-capture="camera" muted></video>
      <video id="peerV" rtc-peer rtc-stream="localV"></video>
      ...
      <script src="glue.js"></script>
      <script>
      glue.config.iceServers = [{ url: 'stun:stun.l.google.com:19302' }];
      </script>

      The iceServers config is required to punch through firewalls – you may also need a TURN server. Note that you need a signalling server – in our case we used http://rtc.io/switchboard/, which runs the code from rtc-switchboard.
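
      For example, extending the iceServers config above with a TURN entry might look like this (the turn: URL and credentials are placeholders, not a real service):

        glue.config.iceServers = [
          { url: 'stun:stun.l.google.com:19302' },
          // placeholder TURN server, used when a direct connection cannot be punched through
          { url: 'turn:turn.example.org:3478', username: 'user', credential: 'secret' }
        ];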

    • Added the glue.js library to deck.js:

      Downloaded from https://raw.github.com/rtc-io/rtc-glue/master/dist/glue.js into the source directory of deck.js.

    • Code added to index.html to synchronize slide navigation:
      glue.events.once('connected', function(signaller) {
       if (location.search.slice(1) !== '') {
         $(document).bind('deck.change', function(evt, from, to) {
           signaller.send('/slide', {
             idx: to,
             sender: signaller.id
           });
         });
       }
       signaller.on('slide', function(data) {
         console.log('received notification to change to slide: ', data.idx);
         $.deck('go', data.idx);
       });
      });

      This simply registers a callback on the slide master end to send a slide position message to the room end, and a callback on the room end that initiates the slide navigation.

    And that’s it!

    You can find my slide deck on GitHub.

    Feel free to write your own slides in this manner – I would love to have more users of this approach. It should also be fairly simple to extend this to share pointer positions, so you can actually use the mouse pointer to point to things on your slides remotely. Would love to hear your experiences!

    Note that the slides are actually a talk about the rtc.io project, so if you want to find out more about these modules and what other things you can do, read the slide deck or watch the talk when it has been published by LCA.

    Many thanks to Damon Oehlman for his help in getting this working.

    BTW: somebody should really fix that print style sheet for deck.js – I’m only ever getting the one slide that is currently showing. ;-)

  • How do I extract the color matrix from an MP4 x264 stream in Media Foundation

    23 August 2016, by Jules

    I am playing a video (an MP4 containing an x264-encoded video stream) with a custom player using Media Foundation.

    When I convert the YUV information into RGB I need to account for the color matrix and range used at encode time.

    Some of my videos have this information; I can use MediaInfo.exe or FFmpeg to see that it is present.

    However, for those same videos, if I look at the relevant Media Foundation properties (Extended Color Information), the properties are not present in the files.
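
    For reference, this is roughly how I probe for those attributes (a sketch: pType stands for the video stream’s IMFMediaType, and each GetUINT32 call fails with MF_E_ATTRIBUTENOTFOUND when the attribute is absent):

      #include <cstdio>
      #include <mfapi.h>      // MF_MT_* attribute GUIDs
      #include <mfobjects.h>  // IMFMediaType

      // Probe the Extended Color Information attributes on the media type.
      void DumpColorInfo(IMFMediaType *pType)
      {
          UINT32 v = 0;
          if (SUCCEEDED(pType->GetUINT32(MF_MT_VIDEO_NOMINAL_RANGE, &v)))
              printf("nominal range: %u\n", v); // MFNominalRange_16_235 = limited, _0_255 = full
          if (SUCCEEDED(pType->GetUINT32(MF_MT_VIDEO_PRIMARIES, &v)))
              printf("primaries: %u\n", v);     // e.g. MFVideoPrimaries_BT709
          if (SUCCEEDED(pType->GetUINT32(MF_MT_TRANSFER_FUNCTION, &v)))
              printf("transfer: %u\n", v);      // e.g. MFVideoTransFunc_709
          if (SUCCEEDED(pType->GetUINT32(MF_MT_YUV_MATRIX, &v)))
              printf("matrix: %u\n", v);        // e.g. MFVideoTransferMatrix_BT709
      }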

    So, somehow I need to find a way to access the information.

    Media Foundation does provide access to MF_MT_MPEG4_SAMPLE_DESCRIPTION and MF_MT_MPEG_SEQUENCE_HEADER for the video stream, but I can’t find descriptions of what these contain.

    I noticed that the MF_MT_MPEG_SEQUENCE_HEADER is much longer for the videos with the information present, and this (MPEG Headers Quick Reference) seems to suggest the headers might contain the information I need.
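
    Both attributes are blobs, so presumably the first step is just pulling the bytes out to look at them; something like this sketch (pType as above):

      // Fetch the raw MF_MT_MPEG_SEQUENCE_HEADER blob for inspection;
      // MF_MT_MPEG4_SAMPLE_DESCRIPTION can be read the same way.
      UINT8  *seqHdr     = NULL;
      UINT32  seqHdrSize = 0;
      if (SUCCEEDED(pType->GetAllocatedBlob(MF_MT_MPEG_SEQUENCE_HEADER,
                                            &seqHdr, &seqHdrSize)))
      {
          for (UINT32 i = 0; i < seqHdrSize; ++i)
              printf("%02X ", seqHdr[i]);   // hex dump of the header bytes
          printf("\n");
          CoTaskMemFree(seqHdr);            // blob memory comes from CoTaskMemAlloc
      }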

    I’m looking for Color Range (limited/full), Color Primaries, Transfer Characteristics and Matrix Coefficients (BT.709 etc).

    I’d greatly appreciate any help finding this information from a Media Foundation video stream.

    Thanks

    Jules


    Update - Sequence Header

    The sequence header appears to be a subset of the MPEG4 sample description, though I can’t find anything that indicates specifically what either piece of data does or doesn’t contain.

    The sequence header appears to contain data structured as an H.264 byte stream, as described in the H.264 standards document, and it includes the VUI (Video Usability Information – Annex E of the document), which may then include the colour information I’m interested in.

    Given that it’s a byte stream I need to know where it starts and whether there’s some existing code I could use to decode it.

    In FFmpeg, in libavcodec/h264_ps.c, there is a function called ff_h264_decode_seq_parameter_set, which ends up calling decode_vui_parameters. It seems possible that seq_parameter_set maps to MF_MT_MPEG_SEQUENCE_HEADER, and it may be possible to use that code to decode the data.

    If anyone has any direct experience with decoding this data, it would be very useful.

    Thanks again


    Update - Related posts

    I found How to decode sprop-parameter-sets in a H264 SDP? and Possible Locations for Sequence/Picture Parameter Set(s) for H.264 Stream, which are fairly helpful.

    The sequence header would appear to be the sequence and/or picture parameter sets (SPS/PPS), and the parameters I want are in the VUI subset of the SPS.

    Plus, the post H.264 stream structure gives a high-level view of how the stream data is structured, and MF_MT_MPEG_SEQUENCE_HEADER appears to start with a NAL start code (0x00 0x00 0x01), so I’m guessing it holds the SPS (and PPS) NAL units.
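
    If that is right, scanning the blob for start codes should at least confirm which NAL units are present; a minimal sketch (assuming the blob really is start-code delimited) could be:

      #include <cstdio>
      #include <cstdint>
      #include <cstddef>

      // List NAL units in an Annex B (start-code delimited) buffer.
      // nal_unit_type is the low 5 bits of the byte after the 00 00 01 prefix:
      // type 7 is the SPS, whose VUI carries the colour description; type 8 is the PPS.
      void ListNalUnits(const uint8_t *data, size_t size)
      {
          for (size_t i = 0; i + 3 < size; ++i) {
              if (data[i] == 0x00 && data[i + 1] == 0x00 && data[i + 2] == 0x01) {
                  unsigned type = data[i + 3] & 0x1F;
                  printf("NAL at offset %zu: type %u%s\n", i, type,
                         type == 7 ? " (SPS)" : type == 8 ? " (PPS)" : "");
              }
          }
      }

    A real VUI parse on top of that still needs a bit reader and the Annex E logic (or FFmpeg’s decode_vui_parameters, as mentioned above).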