Other articles (55)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or later. If needed, contact your MédiaSpip administrator to find out.

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

On other sites (9150)

  • How can I speed up the generation of an MP4 using matplotlib’s Animation Writer?

    18 February 2019, by Victor 'Chris' Cabral

    I am using matplotlib to generate a graphical animation of some data. The data covers about 4 hours of collection time, so I expect the animation to be about 4 hours long. However, generating even a 60-second video takes approximately 15 minutes, which puts the estimated run time for the full 4-hour video at roughly 2.5 days. I assume I am doing something incredibly inefficient. How can I speed up the creation of an animation with matplotlib?

    create_graph.py

    import matplotlib
    matplotlib.use("Agg")  # select the non-interactive backend before pyplot is imported

    import matplotlib.pyplot as plt
    import matplotlib.animation as animation
    import pandas as pd
    import numpy as np

    frame = pd.read_csv("tmp/total.csv")
    min_time = frame.iloc[0]["time"]
    max_time = frame.iloc[-1]["time"]
    total_time = max_time - min_time

    hertz_rate = 50
    window_length = 5
    save_count = hertz_rate * 100

    def data_gen():
       current_index_of_matching_ts = 0
       t = data_gen.t
       cnt = 0
       while cnt < save_count:
           print("Done: {}%".format(cnt/save_count*100.0))
           predicted = cnt * (1.0/hertz_rate)
           while frame.iloc[current_index_of_matching_ts]["time"] - min_time <= predicted and current_index_of_matching_ts < len(frame) - 1:
               current_index_of_matching_ts = current_index_of_matching_ts + 1

           y1 = frame.iloc[current_index_of_matching_ts]["var1"]
           y2 = frame.iloc[current_index_of_matching_ts]["var2"]
           y3 = frame.iloc[current_index_of_matching_ts]["var3"]
           y4 = frame.iloc[current_index_of_matching_ts]["var4"]
           y5 = frame.iloc[current_index_of_matching_ts]["var5"]
           y6 = frame.iloc[current_index_of_matching_ts]["var6"]
           y7 = frame.iloc[current_index_of_matching_ts]["var7"]
           y8 = frame.iloc[current_index_of_matching_ts]["var8"]
           y9 = frame.iloc[current_index_of_matching_ts]["var9"]
           t = frame.iloc[current_index_of_matching_ts]["time"] - min_time
           # yield the row whose timestamp matches this frame's predicted time
           yield t, y1, y2, y3, y4, y5, y6, y7, y8, y9
           cnt+=1

    data_gen.t = 0

    # create a figure with nine stacked subplots
    fig, (ax1, ax2, ax3, ax4, ax5, ax6, ax7, ax8, ax9) = plt.subplots(9, 1, figsize=(7, 14))  # 700 x 1400 px at the default 100 dpi

    # initialize nine line objects (one per axis)
    line1, = ax1.plot([], [], lw=2, color='b')
    line2, = ax2.plot([], [], lw=2, color='b')
    line3, = ax3.plot([], [], lw=2, color='b')
    line4, = ax4.plot([], [], lw=2, color='g')
    line5, = ax5.plot([], [], lw=2, color='g')
    line6, = ax6.plot([], [], lw=2, color='g')
    line7, = ax7.plot([], [], lw=2, color='r')
    line8, = ax8.plot([], [], lw=2, color='r')
    line9, = ax9.plot([], [], lw=2, color='r')
    line = [line1, line2, line3, line4, line5, line6, line7, line8, line9]

    # the same axis initialization, applied to all nine axes
    for ax in [ax1, ax2, ax3, ax4, ax5, ax6, ax7, ax8,  ax9]:
       ax.set_ylim(-1.1, 1.1)
       ax.grid()

    # initialize the data arrays
    xdata, y1data, y2data, y3data, y4data, y5data, y6data, y7data, y8data, y9data = [], [], [], [], [], [], [], [], [], []

    my_gen = data_gen()
    for index in range(hertz_rate*window_length-1):
       t, y1, y2, y3, y4, y5, y6, y7, y8, y9 = next(my_gen)
       xdata.append(t)
       y1data.append(y1)
       y2data.append(y2)
       y3data.append(y3)
       y4data.append(y4)
       y5data.append(y5)
       y6data.append(y6)
       y7data.append(y7)
       y8data.append(y8)
       y9data.append(y9)


    def run(data):
       # update the data
       t, y1, y2, y3, y4, y5, y6, y7, y8, y9 = data
       xdata.append(t)
       y1data.append(y1)
       y2data.append(y2)
       y3data.append(y3)
       y4data.append(y4)
       y5data.append(y5)
       y6data.append(y6)
       y7data.append(y7)
       y8data.append(y8)
       y9data.append(y9)

       # slide the 5-second x-axis window on all nine axes
       for ax in [ax1, ax2, ax3, ax4, ax5, ax6, ax7, ax8, ax9]:
           ax.set_xlim(xdata[-1]-5.0, xdata[-1])

       # update the data of all nine line objects
       line[0].set_data(xdata, y1data)
       line[1].set_data(xdata, y2data)
       line[2].set_data(xdata, y3data)
       line[3].set_data(xdata, y4data)
       line[4].set_data(xdata, y5data)
       line[5].set_data(xdata, y6data)
       line[6].set_data(xdata, y7data)
       line[7].set_data(xdata, y8data)
       line[8].set_data(xdata, y9data)

       return line

    ani = animation.FuncAnimation(fig, run, my_gen, blit=True, interval=20, repeat=False, save_count=save_count)

    Writer = animation.writers['ffmpeg']
    writer = Writer(fps=hertz_rate, metadata=dict(artist='Me'), bitrate=1800)
    ani.save('lines.mp4', writer=writer)
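    One answer-style observation, offered as a sketch rather than a measured fix: much of the per-frame cost above is the repeated frame.iloc[...] row lookups and the linear scan for the matching timestamp inside data_gen(). Since the time column is already sorted, the frame index for every video frame can be computed up front with np.searchsorted — the times array below is a hypothetical stand-in for frame["time"] - min_time:

    ```python
    import numpy as np

    # hypothetical stand-in for the sorted frame["time"] - min_time column
    times = np.array([0.0, 0.3, 0.7, 1.2, 1.8, 2.5])

    hertz_rate = 2      # output frames per second
    save_count = 6      # total frames to render

    # target timestamp for every frame, computed in one shot
    predicted = np.arange(save_count) / hertz_rate

    # index of the first sample strictly after each target time (the inner
    # while-loop's stopping condition), clamped to the last row of the data
    idx = np.minimum(np.searchsorted(times, predicted, side="right"),
                     len(times) - 1)
    print(idx.tolist())  # → [1, 2, 3, 4, 5, 5]
    ```

    With the nine var columns also pulled into NumPy arrays once (e.g. frame[cols].to_numpy()), the generator reduces to indexing those arrays with idx, which removes pandas from the hot loop entirely.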
  • java Runtime.getRuntime().exec(ffmpeg)

    6 January 2016, by metzojack

    I’m using Java Runtime.getRuntime().exec(ffmpeg) to run an ffmpeg command that transcodes video from any format to mp4. When the command starts, ps -ef shows the ffmpeg process and top shows ffmpeg using most of the CPU (90%). When the input video is short (under 4 minutes), the command completes normally. But when transcoding a full-HD video that needs more than 10 minutes, something goes wrong about 5 minutes after ffmpeg starts: ps -ef still shows the ffmpeg process even after many hours, yet top shows it using no more than 1% of the CPU, and Runtime.getRuntime().exec(ffmpeg) never finishes.
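    A likely culprit, noted as a hypothesis rather than a diagnosis: Runtime.exec gives the child process pipes for stdout/stderr, and ffmpeg writes its progress log to stderr. If the parent never reads those streams, the OS pipe buffer fills and ffmpeg blocks forever on a write — which matches a process that stays alive at under 1% CPU. The deadlock and its fix can be sketched with Python's subprocess module (the Java equivalent is to drain both streams in separate threads, or use ProcessBuilder with redirectErrorStream/inheritIO):

    ```python
    import subprocess
    import sys

    # Child that floods stderr (a stand-in for ffmpeg's progress log) with far
    # more data than a typical 64 KiB pipe buffer can hold.
    child_code = (
        "import sys\n"
        "sys.stderr.write('x' * 200000)\n"
        "sys.stdout.write('done')\n"
    )

    proc = subprocess.Popen(
        [sys.executable, "-c", child_code],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )

    # communicate() drains both pipes concurrently, so the child can never
    # block on a full pipe buffer the way an undrained ffmpeg process does.
    # Calling proc.wait() here without reading the pipes would deadlock.
    out, err = proc.communicate()
    print(out.decode(), len(err))  # → done 200000
    ```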

  • Mute parts of Wave file using Python/FFMPEG/Pydub

    20 April 2020, by Adil Azeem

    I am new to Python, please bear with me. I have been able to get this far with the help of Google, Stack Overflow, and YouTube. I have a long (2-hour) *.wav file, and I want to mute certain parts of it. All of the [start], [stop] timestamps, in seconds, are in a "Timestamps.txt" file, like this:

    



    0001.000 0003.000
    0744.096 0747.096
    0749.003 0750.653
    0750.934 0753.170
    0753.210 0754.990
    0756.075 0759.075
    0760.096 0763.096
    0810.016 0811.016
    0815.849 0816.849


    



    So far I have been able to read the file and isolate each tuple; I printed the first one to check that the parsing works, and it does. My plan is to count the tuples (674 in this case), loop over them, muting the [start, stop] range of each in turn, and output a single file with those sections silenced. But I have no idea how to implement this with FFMPEG or any other Python utility such as pydub. Please help me.

    



    with open('Timestamps.txt') as f:
        data = [line.split() for line in f.readlines()]
    out = [(float(k), float(v)) for k, v in data]

    r = out[0]
    x = r[0]
    y = r[1]
    # print the first start/stop pair as a sanity check
    print(x)
    print(y)
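    For the muting itself, one stdlib-only sketch (no pydub or FFmpeg needed, assuming the file is plain PCM WAV): read the frames, zero out the byte range corresponding to each (start, stop) pair, and write the result back. mute_ranges is a hypothetical helper name, sketched here against 16-bit mono input:

    ```python
    import io
    import struct
    import wave

    def mute_ranges(wav_bytes, ranges):
        """Return a copy of a PCM WAV with each (start_s, stop_s) range silenced."""
        with wave.open(io.BytesIO(wav_bytes)) as w:
            params = w.getparams()
            frames = bytearray(w.readframes(w.getnframes()))
        width = params.sampwidth * params.nchannels   # bytes per frame
        for start, stop in ranges:
            a = int(start * params.framerate) * width
            b = int(stop * params.framerate) * width
            frames[a:b] = bytes(b - a)                # zeroed samples = silence
        out = io.BytesIO()
        with wave.open(out, "wb") as w:
            w.setparams(params)
            w.writeframes(bytes(frames))
        return out.getvalue()

    # build a 1-second mono 16-bit test file at 8 kHz (every sample = 1000)
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setparams((1, 2, 8000, 0, "NONE", "not compressed"))
        w.writeframes(struct.pack("<h", 1000) * 8000)

    muted = mute_ranges(buf.getvalue(), [(0.25, 0.5)])
    with wave.open(io.BytesIO(muted)) as w:
        data = w.readframes(w.getnframes())
    print(data[4000:4010])  # bytes inside the muted range are all zero
    ```

    The 674 pairs parsed from Timestamps.txt can be passed straight in as the ranges argument, so the whole file is processed in a single call rather than 674 separate passes.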