Advanced search

Media (91)

Other articles (51)

  • Changing your graphic theme

    22 February 2011

    The graphic theme does not affect the actual layout of the elements on the page; it only changes how the elements look.
    The placement can indeed appear to change, but that change is purely visual and does not affect the semantic structure of the page.
    Changing the graphic theme in use
    To change the graphic theme in use, the zen-garden plugin must be enabled on the site.
    Then simply go to the configuration area of the (...)

  • Authorizations overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (7246)

  • How to set up a virtual mic and pipe audio to it from node.js

    28 October 2018, by Niellles

    Summary of what I am trying to achieve:

    I’m currently doing some work on a Discord bot. I’m trying to join a voice channel, which is the easy part, and then use the combined audio of the speakers in that voice channel as input for a webpage in a web browser. It doesn’t really matter which browser it is as long as it can be controlled with Selenium.


    What I’ve tried/looked into so far

    My bot is so far written in Python using the discord.py API wrapper. Unfortunately, listening to audio, as opposed to sending it, is neither well implemented nor well documented in discord.py. This made me decide to switch to node.js (i.e. discord.js) for the voice-channel part of my bot.

    After switching to discord.js it was pretty easy to determine who is talking and create an audio stream (PCM stream) for that user. For the next part I thought I’d just pipe the audio stream to a virtual microphone and select that as the audio input in the browser. You can even drive FFmpeg from within node.js to get something that looks like this:

    const Discord = require("discord.js");
    // The chained .inputFormat()/.audioCodec()/.pipe() API below is fluent-ffmpeg's
    const ffmpeg = require("fluent-ffmpeg");

    const client = new Discord.Client();

    client.on('ready', () => {
      const voiceChannel = client.channels.get('SOME_CHANNEL_ID');
      voiceChannel.join()
        .then(conn => {
          console.log('Connected');

          const receiver = conn.createReceiver();

          conn.on('speaking', (user, speaking) => {
            if (speaking) {
              // Raw PCM stream for the user who just started speaking
              const audioStream = receiver.createPCMStream(user);

              // Resample/transcode the PCM and pipe it to the virtual microphone
              ffmpeg(audioStream)
                  .inputFormat('s32le')
                  .audioFrequency(16000)
                  .audioChannels(1)
                  .audioCodec('pcm_s16le')
                  .format('s16le')
                  .pipe(someVirtualMic); // <-- the part I can't figure out
            }
          });
        })
        .catch(console.log);
    });

    client.login('SOME_TOKEN');

    This last part, creating and streaming to a virtual microphone, has proven to be rather complicated. I’ve read a ton of SO posts and documentation on both the Advanced Linux Sound Architecture (ALSA) and the JACK Audio Connection Kit, but I simply can’t figure out how to set up a virtual microphone that will show up as a mic in my browser, or how to pipe audio to it.

    Any help or pointers to a solution would be greatly appreciated!


    Addendum

    For the past couple of days I’ve kept looking into this issue. I’ve now learned about ALSA loopback devices and feel that the solution must be there.

    I’ve pretty much followed a post that talks about loopback devices and aims to achieve the following:

    Simply imagine that you have a physical link between one OUT and one
    IN of the same device.

    I’ve set up the devices as described in the post, and two new audio devices now show up when selecting a microphone in Firefox. I’d expect one, but that may be because I don’t entirely understand loopback devices (yet).

    The loopback devices are created and I think they’re linked (if I understood the aforementioned article correctly). Assuming that’s the case, the only problem left to tackle is streaming the audio via FFmpeg from within node.js.

    (screenshot: the audio input devices listed in Firefox)
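
    One way to wire this up (a sketch, not from the original post; it assumes a Linux host with ALSA, the snd-aloop kernel module, and that the browser will accept the loopback capture device as a microphone):

    # Create a virtual "Loopback" sound card. Anything played into
    # hw:Loopback,0,0 can be recorded from hw:Loopback,1,0, which shows
    # up as a capture device (i.e. a microphone) in the browser.
    sudo modprobe snd-aloop

    # Feed PCM matching the stream produced above (16 kHz, mono,
    # signed 16-bit little-endian) from stdin into the playback side.
    ffmpeg -f s16le -ar 16000 -ac 1 -i pipe:0 -f alsa hw:Loopback,0,0

    From node.js, that second command could be spawned as a child process and the transcoded PCM piped into its stdin, in place of the someVirtualMic placeholder above.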

  • Need help: Can I get ffmpeg to burn in the source timecode of my file?

    6 November 2018, by Myles

    I have a .mov file that contains original source timecode metadata, but I can’t figure out a way to get ffmpeg to burn the original timecode into the picture.

    If I open the original file in QuickTime Player, we can see that it displays the true timecode on the far left:
    (screenshot: the original timecode shown in QuickTime Player)

    I can also see that ffprobe is able to see the metadata when I run the following:

    Command:

    ffprobe -i test.mov -show_streams

    Abbreviated result:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test.mov':
     Metadata:
       major_brand     : qt  
       minor_version   : 537199360
       compatible_brands: qt  
       creation_time   : 2018-11-05T14:20:51.000000Z
       timecode        : 09:59:53:00
     Duration: 00:16:37.64, start: 0.000000, bitrate: 1680 kb/s

    So I can see that ffprobe is able to determine the start timecode of the file from its metadata. The question is: how do I pass that information into an ffmpeg command so that the timecode seen by ffprobe is what gets used when I convert the file for timecode burn-in?

    An example of a standard burnt-in timecode command would be this:

    ffmpeg -i test.mov -vcodec libx264 -cmp 22 -vf
    "drawtext=fontfile=DroidSansMono.ttf : timecode='09\:59\:53\:00' : r=25 :
    x=(w-tw)/2 : y=h-(2*lh) : fontcolor=white : box=1 : boxcolor=0x00000099"
    -y test_bitc.mov

    The only problem there, though, is that I’ve had to put the timecode in manually. I want the command to use the existing timecode metadata as the timecode input value, so that the same command can be used on multiple files.

    Does anyone know how to do this?
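
    One possible approach (a sketch, not from the original post; it assumes a bash-like shell, that the timecode tag is exposed at the container level as in the ffprobe output above, and that 25 fps is the file’s real frame rate):

    # Read just the timecode tag from the container metadata.
    TC=$(ffprobe -v error -show_entries format_tags=timecode \
         -of default=noprint_wrappers=1:nokey=1 test.mov)

    # drawtext needs the ':' characters in the timecode escaped as '\:'.
    ESCAPED_TC=$(echo "$TC" | sed 's/:/\\:/g')

    ffmpeg -i test.mov -vcodec libx264 -cmp 22 \
      -vf "drawtext=fontfile=DroidSansMono.ttf:timecode='${ESCAPED_TC}':r=25:x=(w-tw)/2:y=h-(2*lh):fontcolor=white:box=1:boxcolor=0x00000099" \
      -y test_bitc.mov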

  • ffmpeg second text at end of first text [on hold]

    20 October 2018, by Prashant Godhani

    In FFmpeg, I want to add two pieces of text, with the second starting at the end of the first. I want to add one drawtext at the end of another drawtext, as in the image below. The problem is that I can’t find the width of the first text.

    (image: the desired layout of the three pieces of text)

    In this image there are three pieces of text.

    I HAVE TO WEAR A
    JACKET
    AND TIE TO WORK

    The first and third have the same color, but the second has a different one.

    I have read this question, but it uses subtitles and I don’t want to use those.
    Please suggest another way to do this, for example getting the width of the first text and then determining where the second should start.
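
    One workaround that avoids knowing the first text’s width (a sketch, not from the original post; the font path, size, colors and coordinates are placeholders): draw the combined line once in the second text’s color, then draw only the first text again on top, in its own color, at the same x/y. Because the font and size are identical, the overdrawn prefix lines up exactly, so only the trailing word keeps the second color (aside from faint anti-aliasing fringes):

    ffmpeg -i input.mp4 \
      -vf "drawtext=fontfile=DroidSansMono.ttf:text='I HAVE TO WEAR A JACKET':fontsize=48:fontcolor=yellow:x=100:y=100,drawtext=fontfile=DroidSansMono.ttf:text='I HAVE TO WEAR A':fontsize=48:fontcolor=white:x=100:y=100" \
      -c:a copy output.mp4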