
Other articles (106)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded to OGV and WebM (supported by HTML5) and to MP4 (supported by Flash).
    Audio files are encoded to Ogg (supported by HTML5) and to MP3 (supported by Flash).
    Where possible, text is analyzed to extract the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
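The conversion step described above can be sketched as a batch of ffmpeg invocations, one per rendition. The codec flags and output names below are illustrative assumptions, not MediaSPIP's actual encoding settings:

```python
def web_encode_commands(src):
    """Build one ffmpeg command line per web-friendly rendition.
    Codec choices are typical defaults, assumed here for illustration."""
    renditions = {
        "out.ogv":  ["-c:v", "libtheora", "-c:a", "libvorbis"],  # HTML5
        "out.webm": ["-c:v", "libvpx", "-c:a", "libvorbis"],     # HTML5
        "out.mp4":  ["-c:v", "libx264", "-c:a", "aac"],          # Flash fallback
    }
    # Each command reads the original upload and writes one rendition;
    # the original file itself is kept untouched, as the text notes.
    return [["ffmpeg", "-y", "-i", src] + flags + [out]
            for out, flags in renditions.items()]
```

Each resulting list can be handed to `subprocess.run` to perform the actual transcode.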

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents a few of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • A selection of projects using MediaSPIP

    29 April 2011, by

    The examples below are representative of specific uses of MediaSPIP in certain projects.
    Do you think you have built a "remarkable" site with MediaSPIP? Let us know here.
    MediaSPIP farm @ Infini
    The Infini association runs reception activities, an internet access point, training courses, innovative projects in the field of Information and Communication Technologies, and website hosting. In this respect it plays a unique role (...)

On other sites (5381)

  • How to detect silence at the end of an audio file?

    1 March 2017, by Ali Akber

    I am trying to detect silence at the end of an audio file.
    I have made some progress with the ffmpeg library. Here I used the silencedetect filter to list all the silences in an audio file.

    ffmpeg -i audio.wav -af silencedetect=n=-50dB:d=0.5 -f null - 2> /home/aliakber/log.txt

    Here is the output of the command:

    —With silence at the front and end of the audio file—

    [silencedetect @ 0x1043060] silence_start: 0.484979
    [silencedetect @ 0x1043060] silence_end: 1.36898 | silence_duration: 0.884
    [silencedetect @ 0x1043060] silence_start: 2.57298
    [silencedetect @ 0x1043060] silence_end: 3.48098 | silence_duration: 0.908
    [silencedetect @ 0x1043060] silence_start: 4.75698
    size=N/A time=00:00:05.56 bitrate=N/A

    —Without silence at the front and end of the audio file—

    [silencedetect @ 0x106fd60] silence_start: 0.353333
    [silencedetect @ 0x106fd60] silence_end: 1.25867 | silence_duration: 0.905333
    [silencedetect @ 0x106fd60] silence_start: 2.46533
    [silencedetect @ 0x106fd60] silence_end: 3.37067 | silence_duration: 0.905333
    size=N/A time=00:00:04.61 bitrate=N/A

    But I want something more flexible, so that I can parse the output and act on the result.
    I want the output to be something like true or false: if a certain period of silence exists at the end of the audio file it returns true, and false otherwise.

    Can someone suggest an easy way to achieve this?
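One lightweight way to get a boolean out of the log shown above (a sketch; `ends_in_silence` is a name invented here) is to count the `silence_start`/`silence_end` pairs: a trailing silence opens but is never closed before the stream ends, which is exactly the difference between the two logs:

```python
import re

def ends_in_silence(log_text):
    """Return True when the silencedetect log reports a silence_start
    with no matching silence_end, i.e. the file is still silent
    when the stream ends."""
    starts = re.findall(r"silence_start: ([-\d.]+)", log_text)
    ends = re.findall(r"silence_end: ([-\d.]+)", log_text)
    # A silence interval that never closes means the file ends in silence.
    return len(starts) > len(ends)
```

Feeding it the first log above (three starts, two ends) yields true; the second log (two of each) yields false.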

  • matplotlib ArtistAnimation returns a blank video

    28 March 2017, by Mpaull

    I'm trying to produce an animation of a networkx graph changing over time. I'm using the networkx draw utilities to create matplotlib figures of the graph, and matplotlib's ArtistAnimation module to create an animation from the artists networkx produces. I've made a minimal reproduction of what I'm doing here:

    import numpy as np
    import networkx as nx
    import matplotlib.animation as animation
    import matplotlib.pyplot as plt

    # Instantiate the graph model
    G = nx.Graph()
    G.add_edge(1, 2)

    # Keep track of highest node ID
    G.maxNode = 2

    fig = plt.figure()
    nx.draw(G)
    ims = []

    for timeStep in range(10):

       G.add_edge(G.maxNode,G.maxNode+1)
       G.maxNode += 1

       pos = nx.drawing.spring_layout(G)
       nodes = nx.drawing.draw_networkx_nodes(G, pos)
       lines = nx.drawing.draw_networkx_edges(G, pos)

       ims.append((nodes,lines,))
       plt.pause(.2)
       plt.cla()

    im_ani = animation.ArtistAnimation(fig, ims, interval=200, repeat_delay=3000, blit=True)
    im_ani.save('im.mp4', metadata={'artist':'Guido'})

    The process works fine while displaying the figures live; it produces exactly the animation I want. It even produces a looping animation in a figure at the end of the script, again what I want, which suggests that the animation process worked. However, when I open the "im.mp4" file saved to disk, it is a blank white image which runs for the expected length of time, never showing any of the graph images that were shown live.

    I’m using networkx version 1.11, and matplotlib version 2.0. I’m using ffmpeg for the animation, and am running on a Mac, OSX 10.12.3.

    What am I doing incorrectly?
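For context, ArtistAnimation can only save artists that are still attached to the figure when save() runs; a likely cause of a blank file with the pattern above is that plt.cla() removes the collected artists between frames. A minimal sketch of the working pattern without the networkx layer (an assumption for illustration, not the asker's exact fix):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt
import matplotlib.animation as animation

fig, ax = plt.subplots()
ims = []
for i in range(5):
    # Keep every frame's artists alive on the axes: no plt.cla()
    # between frames, or the saved animation comes out blank.
    line, = ax.plot(range(10), [x * (i + 1) for x in range(10)], "b-")
    ims.append([line])

ani = animation.ArtistAnimation(fig, ims, interval=200, blit=True)
# ani.save("im.mp4")  # requires ffmpeg on the PATH
```

All five lines remain on the axes, so the saved frames have content instead of a cleared canvas.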

  • How to cut a video into 2MB pieces using FFMpeg

    5 May 2017, by Krish

    Hi, I have a 20MB video and I need to split it into parts of 2MB each.
    Googling for this, I found the FFMpeg library for splitting videos.

    But this library splits videos based on a time range (for example 00:00:02 to 00:00:06; the video between those timestamps is extracted).

    My requirement is to cut the video every 2MB; that is my exact requirement, not a time limit.

    I searched a lot on Google but did not find a solution. Can someone help me, please?

    FFMpeg command I used for splitting:

    String cmd[] = new String[]{"-i", inputFileUrl, "-ss", "00:00:02", "-c", "copy", "-t", "00:00:06", outputFileUrl};
    executeBinaryCommand(fFmpeg, cmd);

    public void executeBinaryCommand(FFmpeg ffmpeg, String[] command) {

           try {

               if (ffmpeg != null) {

                   ffmpeg.execute(command,
                           new ExecuteBinaryResponseHandler() {

                               @Override
                               public void onFailure(String response) {
                                   System.out.println("failure====>" + response.toString());
                               }

                               @Override
                               public void onSuccess(String response) {
                                   System.out.println("response====>" + response.toString());
                               }

                               @Override
                               public void onProgress(String response) {
                                   System.out.println("on progress");
                               }

                               @Override
                               public void onStart() {
                                   System.out.println("start");
                               }

                               @Override
                               public void onFinish() {
                                   System.out.println("Finish");
                               }
                           });
               }
           } catch (FFmpegCommandAlreadyRunningException exception) {
               exception.printStackTrace();
           }
       }
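ffmpeg does not split by size when stream-copying, but two size-oriented routes exist: the `-fs` output option stops writing once a byte limit is reached, and, more simply, a 2MB span can be approximated as a duration from the file's average bitrate and fed to the `-ss`/`-t` arguments already used above. A sketch of the size-to-duration arithmetic (the helper name is invented here):

```python
def chunk_duration_seconds(total_bytes, total_seconds, target_bytes=2 * 1024 * 1024):
    """Convert a size budget into a time span using the file's average
    bitrate, since stream copy can only cut on time (and keyframes)."""
    avg_bytes_per_sec = total_bytes / total_seconds
    return target_bytes / avg_bytes_per_sec
```

For a 20MB, 100-second file the average is about 0.2MB/s, so each roughly 2MB chunk is about 10 seconds; that value can be passed as `-t` with successive `-ss` offsets.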