Other articles (55)

  • (De)Activating features (plugins)

    18 February 2011, by

    To manage adding and removing extra features (plugins), MediaSPIP has used SVP since version 0.2.
    SVP makes it easy to activate plugins from the MediaSPIP configuration area.
    To access it, simply go to the configuration area and open the "Gestion des plugins" (plugin management) page.
    By default, MediaSPIP ships with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work seamlessly with each (...)

  • Installation in farm mode

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP sites while installing their functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP's usual private area is no longer used.
    As a first step, you must have installed the same files as the installation (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player used by MediaSPIP was created specifically for it and can easily be adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (11178)

  • Normalize audio, then reduce the volume in ffmpeg

    27 October 2014, by Steve Sheldon

    I have a question relating to ffmpeg. First, here is the scenario: I am working on a project where I need to have some audio with a presenter talking and then potentially some background music. I also have the requirement to normalize the audio. I would like to do this without presenting a bunch of options to the user.

    For normalization I use something similar to this post:

    How to normalize audio with ffmpeg.

    In short, I get a volume adjustment which I then apply with ffmpeg like this:

    ffmpeg -i <input /> -af "volume=xxxdB" <output>
    </output>

    So far so good. Now let's consider the backing track: it shouldn't be at the same volume as the presenter's voice, as that would be really distracting, so I want to lower it by some percentage. I can also do this with ffmpeg, for example like this (this would set the volume to 50%):

    ffmpeg -i <input /> -af "volume=0.5" <output>
    </output>

    Using these two commands back to back, I can get the desired result.

    My question has two parts:

    1. Is there a way to do this in one step?
    2. Is there any benefit to doing it in one step?

    Thanks for any help!
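
    For reference, a minimal sketch of a single-pass variant, assuming the voice and music are separate input files (the file names, the "xxxdB" placeholder and the use of amix are assumptions, not something the question specifies): ffmpeg can apply a volume filter to each input and mix them in one -filter_complex.

    # Normalize the voice track, attenuate the music track and mix both in one pass;
    # "xxxdB" stands for the gain obtained from the loudness analysis.
    ffmpeg -i voice.wav -i music.mp3 \
      -filter_complex "[0:a]volume=xxxdB[v];[1:a]volume=0.5[m];[v][m]amix=inputs=2:duration=first[out]" \
      -map "[out]" mixed.wav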

  • Native function in Vitamio

    1 August 2014, by hclee

    I am now looking at the code behind Vitamio (a media framework) because I want to know which API it uses to retrieve the buffer percentage/download rate, and how it interacts with the Android OS to retrieve other information about the streaming video.

    But I realized that it uses some native functions, which allow it to call code written in C/C++.

    I tried to investigate the C++ code, but I don't know where it is.
    I guessed it is stored inside res/raw/librarm.so.
    I unzipped the file, but all I can find is machine code, whereas what I want is the implementation of the native functions.
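
    As a side note, a minimal way to at least see which native entry points such a library exports, assuming it is an ordinary ELF shared object (readelf can read ELF files for any architecture; the library name below simply reuses the one mentioned above):

    # JNI implementations are usually exported as Java_<package>_<Class>_<method>;
    # if the library registers its natives dynamically via RegisterNatives,
    # they will not show up under Java_ names.
    readelf --dyn-syms librarm.so | grep Java_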

    For example, I want to know the implementation of the following function:

    public native int getVideoTrack(); // What is this function for? What does it mean by the track
    // number of a streaming video?

    or

    private static native boolean loadFFmpeg_native(String ffmpegPath);

    and when will this function be called:

    private static void postEventFromNative(Object mediaplayer_ref, int what, int arg1, int arg2, Object obj)

    Does anyone know where I can investigate the implementation of such native functions?
    It should be some C++ code, but I don't want machine code...

    I went to
    https://www.vitamio.org/en/2013/Tutorial_0509/13.html

    but it didn't have what I want.

    Thanks in advance!!!

  • Android HLS: Any way to edit the m3u8 file so that I know which segment is currently streaming/playing in my player using HLS

    1 August 2014, by hclee

    I am now working on a VOD project using HLS.

    I use the VideoViewBuffer in VitamioDemo to stream a video that is stored on my local server.
    The Vitamio library is awesome: I am able to stream the video and get the bit rate, the buffering percentage and some metadata.

    We use ffmpeg to convert the video into an m3u8 playlist and the corresponding ts files.
    But now our team wants to know which segment (which ts file) of the video HLS is currently streaming.
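
    The exact conversion command is not shown in the question; a typical ffmpeg invocation for producing the playlist and its ts segments, with assumed file names and segment duration, might look like this:

    # Copy the streams and cut them into ~10-second ts segments;
    # -hls_list_size 0 keeps every segment listed in the playlist.
    ffmpeg -i input.mp4 -codec copy -f hls -hls_time 10 -hls_list_size 0 stream.m3u8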

    That's a very important part of our project, but we are stuck at this point.

    I tried to use the MediaMetadata in Vitamio, but only the duration of the video is found.
    I am wondering if we can add some metadata to the m3u8 file so that we can retrieve the name of the current segment during streaming.
    For example, the original m3u8 is like this:
    #EXTINF:10.500000,
    stream00000.ts

    Is it possible for me to change it as follows?
    #EXTINF:10.500000, name of segment
    stream00000.ts

    But all I can get using MediaMetadataRetriever is null except for the duration.

    It seems that nobody has done this before, so I can't find any very useful information about this.

    Does anybody know how to implement this?
    Or should I use some packet sniffer to monitor the network traffic myself?
    Or would MediaScanner be helpful?
    Or do I need to use code in android.os?

    Thanks in advance!