Advanced search

Media (1)

Keyword: - Tags - /ogg

Other articles (94)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is immediately operational. No separate configuration step is therefore required.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (8948)

  • Use FFMPEG to combine different MP4s with srt into one file

    17 June 2021, by anonymous1a

    So...probably a very basic question for those of you familiar with FFMPEG (I'm really not). I know that you can combine multiple videos into one using FFMPEG, but what about when each video has its own srt file, saved separately in a 'subs' folder and NOT included in the video itself?

    Is it possible for FFMPEG to also combine the srt files into a single one (and recalculate the timestamps), and then merge this into the final, combined video? If so, what would the command be?

    For example, I have video1.mp4 and video2.mp4, with corresponding sub1.srt and sub2.srt. When video1.mp4 and video2.mp4 are merged, the timestamps in sub2.srt will of course be out of sync and need to be corrected by adding the duration of video1.mp4 to each timestamp (i.e., if video1 is 30 seconds long and the first subtitle in sub2.srt appears at the 2-second mark, then after the combination it should appear at the (30+2)=32-second mark, and so on).

    If it helps, all the files are mp4 and have the same dimensions (720p).
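
    A possible approach (a minimal sketch, not taken from the original thread): measure the duration of video1.mp4 with ffprobe, shift every timestamp in sub2.srt by that amount, append the result to sub1.srt, then concatenate the videos with ffmpeg's concat demuxer and mux the merged subtitles in. The file names follow the question; the helper functions, output names, and the Python wrapper itself are illustrative assumptions, not a confirmed recipe.

        # Hypothetical sketch: concatenate video1.mp4 + video2.mp4 and merge
        # their external srt files, shifting the second file's timestamps.
        import re
        import subprocess
        from datetime import timedelta

        def video_duration(path: str) -> float:
            """Duration in seconds, via ffprobe (ships with ffmpeg)."""
            out = subprocess.run(
                ["ffprobe", "-v", "error", "-show_entries", "format=duration",
                 "-of", "default=noprint_wrappers=1:nokey=1", path],
                capture_output=True, text=True, check=True)
            return float(out.stdout.strip())

        def shift_srt(text: str, offset_seconds: float) -> str:
            """Add offset_seconds to every HH:MM:SS,mmm timestamp."""
            def bump(match):
                h, m, s, ms = map(int, match.groups())
                t = timedelta(hours=h, minutes=m, seconds=s, milliseconds=ms)
                total_ms = int(t.total_seconds() * 1000 + offset_seconds * 1000)
                h2, rest = divmod(total_ms, 3_600_000)
                m2, rest = divmod(rest, 60_000)
                s2, ms2 = divmod(rest, 1_000)
                return f"{h2:02}:{m2:02}:{s2:02},{ms2:03}"
            return re.sub(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})", bump, text)

        # 1. Shift sub2.srt by the length of video1.mp4, append to sub1.srt.
        #    (Cue numbers will restart mid-file; most players ignore them.)
        offset = video_duration("video1.mp4")
        merged = (open("subs/sub1.srt").read().rstrip() + "\n\n"
                  + shift_srt(open("subs/sub2.srt").read(), offset))
        open("subs_merged.srt", "w").write(merged)

        # 2. Concatenate the videos (same codecs/dimensions assumed, as in
        #    the question) and mux the subtitles in as an MP4 text track.
        open("list.txt", "w").write("file 'video1.mp4'\nfile 'video2.mp4'\n")
        subprocess.run(
            ["ffmpeg", "-f", "concat", "-safe", "0", "-i", "list.txt",
             "-i", "subs_merged.srt", "-map", "0", "-map", "1",
             "-c", "copy", "-c:s", "mov_text", "combined.mp4"],
            check=True)

    Note that the concat demuxer only stream-copies cleanly when both files share codec parameters; if they differ, re-encoding (e.g. with the concat filter) would be needed instead.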

  • Merge commit 'd347a7b248d4ffdc278373fecf033b0ade030343'

    5 October 2013, by Michael Niedermayer
    Merge commit 'd347a7b248d4ffdc278373fecf033b0ade030343'
    

    * commit 'd347a7b248d4ffdc278373fecf033b0ade030343':
    ismindex: Use the individual stream duration instead of the global one

    Merged-by: Michael Niedermayer <michaelni@gmx.at>

    • [DH] tools/ismindex.c
  • How to interpret ndarray in a pyAV AudioFrame?

    30 January 2024, by Sachin Dole

    I want to process streaming audio (coming in from a person speaking on the far end of a WebRTC peer connection) to detect when the person is done talking. I have the audio track and access to individual frames, and I see that each frame can be converted to an ndarray using Frame.to_ndarray. I can also see values in the ndarray changing depending on what the person is saying, at what pitch, and at what volume. Now I want to detect silence on the stream. My question is: what is in the ndarray, and how can I make sense of the data?


        while True:
            try:
                frame: AudioFrame = await track.recv()
                frame_nd_array = frame.to_ndarray()
            except MediaStreamError:  # raised by aiortc when the track ends
                break


    Where can I learn what is in frame_nd_array?

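    A sketch of one way to read it (based on PyAV's documented behavior, with assumptions flagged): to_ndarray returns the frame's raw PCM samples, with dtype matching the sample format (e.g. int16 for 's16', float32 for 'flt') and shape (channels, samples) for planar formats or (1, samples * channels) for packed ones; frame.format.name, frame.sample_rate and frame.layout say how to interpret it. A crude silence check can then normalize the samples and compare their RMS energy against a threshold. is_silent and the 0.01 threshold below are hypothetical helpers, not part of PyAV:

        import numpy as np

        def is_silent(samples: np.ndarray, format_name: str,
                      threshold: float = 0.01) -> bool:
            """Rough check: normalize samples to [-1, 1], then compare RMS."""
            x = samples.astype(np.float32)
            if format_name.startswith("s16"):    # 16-bit signed integers
                x /= 32768.0
            # Planar or packed, flattening is fine for an energy estimate.
            rms = float(np.sqrt(np.mean(x.ravel() ** 2)))
            return rms < threshold

        # Inside the receive loop from the question:
        #     if is_silent(frame.to_ndarray(), frame.format.name):
        #         ...  # candidate silence; require several consecutive
        #              # silent frames before deciding the speaker stopped

    For robust end-of-speech detection, a dedicated voice-activity detector (for example the webrtcvad package) will usually beat a bare RMS threshold.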