
Media (91)

Other articles (62)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, as long as your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out.

  • Adding notes and captions to images

    7 February 2011, by

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, editing, and deleting notes. By default, only site administrators can add notes to images.
    Editing when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Contribute to a better visual interface

    13 avril 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (11769)

  • Use FFMPEG to combine different MP4s with srt into one file

    17 June 2021, by anonymous1a

    So... probably a very basic question for those of you familiar with FFMPEG (I'm really not). I know that you can combine multiple videos into one using FFMPEG, but what if each video has its own srt file, saved separately in a 'subs' folder and NOT included in the video itself?

    Is it possible for FFMPEG to also combine the srt files into a single one (recalculating the timestamps) and then merge this into the final, combined video? If so, what would the command be?

    For example, I have video1.mp4 and video2.mp4, with corresponding sub1.srt and sub2.srt. When video1.mp4 and video2.mp4 are merged, the timestamps in sub2.srt will of course be out of sync and need to be corrected by adding the duration of video1.mp4 to each timestamp (i.e., if video1 is 30 seconds long and the first subtitle in sub2.srt appears at the 2-second mark, then after the combination it should appear at the (30+2) = 32-second mark, and so on).

    If it helps, all the files are MP4 and have the same dimensions (720p).
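
    A minimal sketch of one way to do this (not from the original thread): a small Python script that reads the duration of video1.mp4 with ffprobe, shifts every timestamp in sub2.srt by that amount, concatenates the two subtitle files, then joins the videos with FFmpeg's concat demuxer and muxes the merged srt back in as a soft subtitle track. File names follow the question; ffmpeg and ffprobe are assumed to be on the PATH, and the lossless concat relies on the clips sharing codecs and dimensions, as stated.

        import re
        import subprocess
        from datetime import timedelta

        def probe_duration(path):
            # Ask ffprobe for the container duration, in seconds.
            out = subprocess.run(
                ["ffprobe", "-v", "error", "-show_entries", "format=duration",
                 "-of", "default=noprint_wrappers=1:nokey=1", path],
                capture_output=True, text=True, check=True)
            return float(out.stdout)

        def shift_srt(text, offset_seconds):
            # Add offset_seconds to every HH:MM:SS,mmm timestamp in an SRT file.
            def bump(match):
                h, m, s, ms = map(int, match.groups())
                t = timedelta(hours=h, minutes=m, seconds=s, milliseconds=ms)
                total_ms = int((t.total_seconds() + offset_seconds) * 1000)
                h, rem = divmod(total_ms, 3_600_000)
                m, rem = divmod(rem, 60_000)
                s, ms = divmod(rem, 1000)
                return f"{h:02}:{m:02}:{s:02},{ms:03}"
            return re.sub(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})", bump, text)

        # Shift sub2.srt by video1's duration and append it to sub1.srt.
        # (Cue numbers restart in the second half; most players ignore them.)
        offset = probe_duration("video1.mp4")  # e.g. 30.0 for a 30-second clip
        with open("subs/sub1.srt") as f1, open("subs/sub2.srt") as f2:
            merged = f1.read().rstrip() + "\n\n" + shift_srt(f2.read(), offset)
        with open("merged.srt", "w") as f:
            f.write(merged)

        # Concatenate the videos losslessly, then add the merged subtitles as
        # a soft mov_text track (the subtitle codec MP4 containers accept).
        with open("list.txt", "w") as f:
            f.write("file 'video1.mp4'\nfile 'video2.mp4'\n")
        subprocess.run(["ffmpeg", "-f", "concat", "-safe", "0", "-i", "list.txt",
                        "-c", "copy", "combined.mp4"], check=True)
        subprocess.run(["ffmpeg", "-i", "combined.mp4", "-i", "merged.srt",
                        "-c", "copy", "-c:s", "mov_text", "final.mp4"], check=True)

    Burning the subtitles into the picture instead would mean re-encoding with FFmpeg's subtitles filter; the soft-track route above leaves the video streams untouched.
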
  • Merge commit 'd347a7b248d4ffdc278373fecf033b0ade030343'

    5 October 2013, by Michael Niedermayer
    Merge commit 'd347a7b248d4ffdc278373fecf033b0ade030343'

    * commit 'd347a7b248d4ffdc278373fecf033b0ade030343':
    ismindex: Use the individual stream duration instead of the global one

    Merged-by: Michael Niedermayer <michaelni@gmx.at>

    • [DH] tools/ismindex.c
  • How to interpret the ndarray in a pyAV AudioFrame?

    30 January 2024, by Sachin Dole

    I want to process streaming audio (coming in from a person speaking on the remote end of a WebRTC peer connection) to detect when the person is done talking. I have the audio track and access to the individual frames. I see that each frame can be converted to an ndarray using Frame.to_ndarray, and I can also see the values in the ndarray changing depending on what the person is saying, at what pitch, at what volume, etc. Now I want to detect silence on the stream. My question is: what is in the ndarray, and how can I make sense of the data?

        while True:
            try:
                frame: AudioFrame = await track.recv()
                frame_nd_array = frame.to_ndarray()
            except MediaStreamError:  # aiortc raises this when the track ends
                break

    Where can I learn what is in frame_nd_array?

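
    A minimal sketch of how one might interpret the array (not from the original post): AudioFrame.to_ndarray() returns the raw samples, with dtype given by frame.format (int16 for "s16"); packed formats come back as one row of interleaved channels with shape (1, channels × samples), while planar formats give (channels, samples). With that layout, silence can be approximated by the RMS energy of each frame. The threshold below is a hypothetical starting point, not a standard value.

        import numpy as np
        from av import AudioFrame

        SILENCE_RMS = 500.0  # assumed threshold for int16 samples; tune empirically

        def frame_is_silent(frame: AudioFrame) -> bool:
            samples = frame.to_ndarray()
            if not frame.format.is_planar:
                # Packed data is one row of interleaved channels; reshape to
                # (channels, n_samples) so both layouts look the same.
                samples = samples.reshape(-1, len(frame.layout.channels)).T
            # Square in float32 so int16 values cannot overflow.
            rms = np.sqrt(np.mean(samples.astype(np.float32) ** 2))
            return rms < SILENCE_RMS

    In the receive loop above, calling frame_is_silent(frame) on each frame and treating some run of consecutive silent frames as end-of-utterance is one simple way to detect that the speaker has finished.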