Media (91)

Other articles (20)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • XMP PHP

    13 May 2011, by

    According to Wikipedia, XMP means:
    Extensible Metadata Platform, or XMP, an XML-based metadata format used in PDF, photography, and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it manages a set of dynamic tags for use within the Semantic Web.
    XMP makes it possible to record, in the form of an XML document, information about a file: title, author, history (...)

  • Organizing by category

    17 May 2013, by

    In MédiaSPIP, a section (rubrique) has two names: category and section.
    The various documents stored in MédiaSPIP can be sorted into different categories. A category can be created by clicking "publish a category" in the publish menu at the top right (after logging in). A category can also be placed inside another category, which means you can build a tree of categories.
    The next time a document is published, the newly created category will be offered (...)

On other sites (5633)

  • Could anyone help me understand why moviepy is rendering at 2.5 it/s?

    23 December 2023, by tristan

    I'm writing a program that uses moviepy to make those weird Reddit thread videos with Minecraft parkour playing in the background (real original, I know), and everything is good except for rendering the video, which seems to consume a ton of memory and moves really, really slowly, at about 2.5 it/s. Could anyone help? Also, I'm a novice programmer with no bearing on what is conventional or proper, so sorry if my code is very bad.

from moviepy.video.fx.all import resize
from moviepy.video.tools.subtitles import SubtitlesClip
from moviepy.editor import (
    CompositeVideoClip,
    AudioFileClip,
    VideoFileClip,
    ImageClip,
    TextClip
)
import random
import moviepy.config as cfg
import librosa
from imagegenerator import draw_title
from audioeditor import concatenate_audios
import soundfile as sf
import numpy as np

# Constants
VIDEO_FADE_DURATION = 0.4
SPEED_FACTOR = 1.1
TEXT_WIDTH = 600
MINIMUM_FONT_SIZE = 60
FONT_COLOR = "white"
OUTLINE_COLOR = "black"
TITLE_ANIMATION_DURATION = 0.25
ANIMATION_DURATION = 0.2

# Configure imagemagick binary
cfg.change_settings(
    {
        "IMAGEMAGICK_BINARY": "magick/magick.exe"
    }
)

# Ease-out function
def ease_out(t):
    return 1 - (1 - t) ** 2

# Overlap audio files
def overlap_audio_files(audio_path1, audio_path2):
    # Load the first audio file
    audio1, sr1 = librosa.load(audio_path1, sr=None)

    # Load the second audio file
    audio2, sr2 = librosa.load(audio_path2, sr=None)

    # Ensure both audio files have the same sample rate
    if sr1 != sr2:
        raise ValueError("Sample rates of the two audio files must be the same.")

    # Calculate the duration of audio2
    audio2_duration = len(audio2)

    # Tile audio1 to match the duration of audio2
    audio1 = np.tile(audio1, int(np.ceil(audio2_duration / len(audio1))))

    # Trim audio1 to match the duration of audio2
    audio1 = audio1[:audio2_duration]

    # Combine the audio files by superimposing them
    combined_audio = audio1 + audio2

    # Save the combined audio to a new file
    output_path = "temp/ttsclips/combined_audio.wav"
    sf.write(output_path, combined_audio, sr1)

    return output_path

# Generator function for subtitles with centered alignment and outline
def centered_text_generator_white(txt):
    return TextClip(
        txt,
        font=r"fonts/Invisible-ExtraBold.otf",
        fontsize=86,
        color=FONT_COLOR,
        bg_color='transparent',  # Use a transparent background
        align='center',  # Center the text
        size=(1072, 1682),
        method='caption',  # Draw a caption instead of a title
    )

# Generator function for subtitles with centered alignment and blurred outline
def centered_text_generator_black_blurred_outline(txt, blur_factor=3):
    outline_clip = TextClip(
        txt,
        font=r"fonts/Invisible-ExtraBold.otf",
        fontsize=86,
        color=OUTLINE_COLOR,
        bg_color='transparent',  # Use a transparent background
        align='center',  # Center the text
        size=(1080, 1688),
        method='caption',  # Draw a caption instead of a title
    )

    # Blur the black text (outline)
    blurred_outline_clip = outline_clip.fx(resize, 1.0 / blur_factor)
    blurred_outline_clip = blurred_outline_clip.fx(resize, blur_factor)

    return blurred_outline_clip

# Compile video function
def compile_video(title_content, upvotes, comments, tone, subreddit, video_num):
    # Set the dimensions of the video (720x1280 in this case)
    height = 1280

    # Concatenate the audios
    concatenate_audios()

    concatenated_audio_path = r"temp/ttsclips/concatenated_audio.mp3"
    title_audio_path = r"temp/ttsclips/title.mp3"

    title_audio = AudioFileClip(title_audio_path)
    concatenated_audio = AudioFileClip(concatenated_audio_path)

    # Calculate for video duration
    title_duration = title_audio.duration
    duration = concatenated_audio.duration

    # Set background
    background_path = "saved_videos/newmcparkour.mp4"
    background = VideoFileClip(background_path)
    background_duration = background.duration
    random_start = random.uniform(0, background_duration - duration)
    background = background.subclip(random_start, random_start + duration)

    # Apply fade-out effect to both background clips
    background = background.crossfadeout(VIDEO_FADE_DURATION)

    # Generate the background image with rounded corners
    background_image_path = draw_title(title_content, upvotes, comments, subreddit)

    # Load the background image with rounded corners
    background_image = ImageClip(background_image_path)

    # Set the start of the animated title clip
    animated_background_clip = background_image.set_start(0)

    # Set the initial position of the text at the bottom of the screen
    initial_position = (90, height)

    # Calculate the final position of the text at the center of the screen
    final_position = [90, 630]

    # Animate the title clip to slide up over the course of the animation duration
    animated_background_clip = animated_background_clip.set_position(
        lambda t: (
            initial_position[0],
            initial_position[1]
            - (initial_position[1] - final_position[1])
            * ease_out(t / TITLE_ANIMATION_DURATION),
        )
    )

    # Set the duration of the animated title clip
    animated_background_clip = animated_background_clip.set_duration(
        TITLE_ANIMATION_DURATION
    )

    # Assign start times to title image
    stationary_background_clip = background_image.set_start(TITLE_ANIMATION_DURATION)

    # Assign positions to stationary title image
    stationary_background_clip = stationary_background_clip.set_position(final_position)

    # Assign durations to stationary title image
    stationary_background_clip = stationary_background_clip.set_duration(
        title_duration - TITLE_ANIMATION_DURATION
    )

    #  Select background music
    if tone == "normal":
        music_options = [
            "Anguish",
            "Garden",
            "Limerence",
            "Lost",
            "NoWayOut",
            "Summer",
            "Never",
            "Miss",
            "Touch",
            "Stellar"
        ]
    else:  # treat any non-"normal" tone as eerie so music_options is always bound
        music_options = [
            "Creepy",
            "Scary",
            "Spooky",
            "Space",
            "Suspense"
        ]
    background_music_choice = random.choice(music_options)
    background_music_path = f"music/eeriemusic/{background_music_choice}.mp3"

    # Create final audio by overlapping background music and concatenated audio
    final_audio = AudioFileClip(
        overlap_audio_files(background_music_path, concatenated_audio_path)
    )

    # Release the concatenated audio
    concatenated_audio.close()

    # Create subtitles clip using the centered_text_generator
    subtitles = SubtitlesClip("temp/ttsclips/content_speechmarks.srt",
                              centered_text_generator_white)
    subtitles_outline = SubtitlesClip("temp/ttsclips/content_speechmarks.srt",
                              centered_text_generator_black_blurred_outline)

    # Overlay subtitles on the blurred background
    final_clip = CompositeVideoClip(
        [background, animated_background_clip, stationary_background_clip, subtitles_outline, subtitles]
    )

    # Set the final video dimensions and export the video
    final_clip = final_clip.set_duration(duration)
    final_clip = final_clip.set_audio(final_audio)

    final_clip.write_videofile(
        f"temp/videos/{video_num}.mp4",
        codec="libx264",
        fps=60,
        bitrate="8000k",
        audio_codec="aac",
        audio_bitrate="192k",
        preset="ultrafast",
        threads=8
    )

    # Release the title audio (the concatenated audio was already closed above)
    title_audio.close()

    # Release the background video and image
    background.close()
    background_image.close()

    # Release the final audio
    final_audio.close()

    # Release the subtitle clips
    subtitles.close()
    subtitles_outline.close()

    # Release the final video clip
    final_clip.close()
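
A side note on overlap_audio_files above: adding two full-scale signals sample-by-sample can push the mix outside [-1, 1], which clips on export. A minimal sketch of a safer mix, with a hypothetical music_gain parameter that is not in the original code:

```python
import numpy as np

# Attenuate the music bed before summing, then renormalize only if the
# combined signal still exceeds full scale. The arrays stand in for the
# librosa-loaded audio in overlap_audio_files.

def mix_without_clipping(music, speech, music_gain=0.3):
    """Mix two float signals in [-1, 1] without exceeding full scale."""
    combined = music * music_gain + speech
    peak = np.max(np.abs(combined))
    if peak > 1.0:
        combined = combined / peak
    return combined

music = np.ones(4)           # full-scale music bed
speech = np.full(4, 0.9)     # loud speech track
mixed = mix_without_clipping(music, speech)
```

Attenuating the music bed before summing usually removes the need to renormalize at all.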

    I've tried turning down my settings, like setting the preset to "ultrafast" and dropping the bitrate, but nothing seems to work. The only thing I can think of now is that I'm doing something wrong with moviepy.
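
For a sense of scale, the reported iteration rate converts directly into an expected render time. A quick sketch, assuming one moviepy iteration corresponds to one rendered frame and using the fps=60 passed to write_videofile above:

```python
# Convert an observed moviepy iteration rate into an expected render time.
# Assumes one "it" is one rendered frame and fps=60 as in the call above.

def estimated_render_seconds(video_seconds, fps=60, its_per_second=2.5):
    """Total frames to encode divided by the observed rendering rate."""
    total_frames = video_seconds * fps
    return total_frames / its_per_second

# A 60-second video at 60 fps is 3600 frames; at 2.5 it/s that is
# 1440 seconds of rendering, i.e. 24 minutes.
print(estimated_render_seconds(60))  # 1440.0
```

Dropping fps from 60 to 30 halves the frame count outright, which for this kind of video is often the single biggest win.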

  • How can I speed up the generation of an MP4 using matplotlib's Animation Writer?

    18 February 2019, by Victor 'Chris' Cabral

    I am using matplotlib to generate a graphical animation of some data. The data covers about 4 hours of collection time, so I expect the animation to be about 4 hours long. However, generating a smaller 60-second video takes approximately 15 minutes, so the total estimated run time for generating the 4-hour video is 2.5 days. I assume I am doing something incredibly inefficient. How can I speed up the creation of an animation with matplotlib?
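
The 2.5-day figure is consistent with simple arithmetic on the frame counts, assuming the 50 Hz rate set as hertz_rate in the script below:

```python
# Back-of-the-envelope check of the render-time estimate.
fps = 50                                      # hertz_rate in the script
frames_rendered_per_minute = (60 * fps) / 15  # a 60 s clip took 15 minutes
total_frames = 4 * 60 * 60 * fps              # 4 hours of animation
render_minutes = total_frames / frames_rendered_per_minute
render_days = render_minutes / (60 * 24)
print(render_days)  # 2.5
```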

    create_graph.py

    import matplotlib
    matplotlib.use("Agg")  # select the non-interactive backend before pyplot is imported

    import matplotlib.pyplot as plt
    import matplotlib.animation as animation
    import pandas as pd
    import numpy as np

    frame = pd.read_csv("tmp/total.csv")
    min_time = frame.iloc[0]["time"]
    max_time = frame.iloc[-1]["time"]
    total_time = max_time - min_time

    hertz_rate = 50
    window_length = 5
    save_count = hertz_rate * 100

    def data_gen():
       current_index_of_matching_ts = 0
       t = data_gen.t
       cnt = 0
       while cnt < save_count:
           print("Done: {}%".format(cnt/save_count*100.0))
           predicted = cnt * (1.0/hertz_rate)
           while frame.iloc[current_index_of_matching_ts]["time"] - min_time <= predicted and current_index_of_matching_ts < len(frame) - 1:
               current_index_of_matching_ts = current_index_of_matching_ts + 1

           y1 = frame.iloc[current_index_of_matching_ts]["var1"]
           y2 = frame.iloc[current_index_of_matching_ts]["var2"]
           y3 = frame.iloc[current_index_of_matching_ts]["var3"]
           y4 = frame.iloc[current_index_of_matching_ts]["var4"]
           y5 = frame.iloc[current_index_of_matching_ts]["var5"]
           y6 = frame.iloc[current_index_of_matching_ts]["var6"]
           y7 = frame.iloc[current_index_of_matching_ts]["var7"]
           y8 = frame.iloc[current_index_of_matching_ts]["var8"]
           y9 = frame.iloc[current_index_of_matching_ts]["var9"]
           t = frame.iloc[current_index_of_matching_ts]["time"] - min_time
           # yield the timestamp and the nine variable values for this frame
           yield t, y1, y2, y3, y4, y5, y6, y7, y8, y9
           cnt+=1

    data_gen.t = 0

    # create a figure with nine stacked subplots
    fig, (ax1, ax2, ax3, ax4, ax5, ax6, ax7, ax8, ax9) = plt.subplots(9, 1, figsize=(7, 14)) # produces a video of 700 × 1400

    # initialize nine line objects (one per axes)
    line1, = ax1.plot([], [], lw=2, color='b')
    line2, = ax2.plot([], [], lw=2, color='b')
    line3, = ax3.plot([], [], lw=2, color='b')
    line4, = ax4.plot([], [], lw=2, color='g')
    line5, = ax5.plot([], [], lw=2, color='g')
    line6, = ax6.plot([], [], lw=2, color='g')
    line7, = ax7.plot([], [], lw=2, color='r')
    line8, = ax8.plot([], [], lw=2, color='r')
    line9, = ax9.plot([], [], lw=2, color='r')
    line = [line1, line2, line3, line4, line5, line6, line7, line8, line9]

    # the same axes initializations as before (now for all nine axes)
    for ax in [ax1, ax2, ax3, ax4, ax5, ax6, ax7, ax8,  ax9]:
       ax.set_ylim(-1.1, 1.1)
       ax.grid()

    # initialize the data arrays
    xdata, y1data, y2data, y3data, y4data, y5data, y6data, y7data, y8data, y9data = [], [], [], [], [], [], [], [], [], []

    my_gen = data_gen()
    for index in range(hertz_rate*window_length-1):
       t, y1, y2, y3, y4, y5, y6, y7, y8, y9 = next(my_gen)
       xdata.append(t)
       y1data.append(y1)
       y2data.append(y2)
       y3data.append(y3)
       y4data.append(y4)
       y5data.append(y5)
       y6data.append(y6)
       y7data.append(y7)
       y8data.append(y8)
       y9data.append(y9)


    def run(data):
       # update the data
       t, y1, y2, y3, y4, y5, y6, y7, y8, y9 = data
       xdata.append(t)
       y1data.append(y1)
       y2data.append(y2)
       y3data.append(y3)
       y4data.append(y4)
       y5data.append(y5)
       y6data.append(y6)
       y7data.append(y7)
       y8data.append(y8)
       y9data.append(y9)

       # axis limits checking. Same as before, just for both axes
       for ax in [ax1, ax2, ax3, ax4, ax5, ax6, ax7, ax8, ax9]:
           ax.set_xlim(xdata[-1]-5.0, xdata[-1])

       # update the data of both line objects
       line[0].set_data(xdata, y1data)
       line[1].set_data(xdata, y2data)
       line[2].set_data(xdata, y3data)
       line[3].set_data(xdata, y4data)
       line[4].set_data(xdata, y5data)
       line[5].set_data(xdata, y6data)
       line[6].set_data(xdata, y7data)
       line[7].set_data(xdata, y8data)
       line[8].set_data(xdata, y9data)

       return line

    ani = animation.FuncAnimation(fig, run, my_gen, blit=True, interval=20, repeat=False, save_count=save_count)

    Writer = animation.writers['ffmpeg']
    writer = Writer(fps=hertz_rate, metadata=dict(artist='Me'), bitrate=1800)
    ani.save('lines.mp4', writer=writer)
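
One candidate hot spot in the script above is data_gen(), which advances through the DataFrame with a per-frame while loop over frame.iloc. A sketch of a vectorized alternative, precomputing every frame's row index in a single np.searchsorted call; the synthetic DataFrame here only stands in for tmp/total.csv:

```python
import numpy as np
import pandas as pd

# Precompute, for each animation frame, the index of the first row whose
# timestamp is strictly past that frame's target time, matching the
# original while loop. Names (hertz_rate, save_count, "time") mirror the
# script; the DataFrame is synthetic, for illustration only.

hertz_rate = 50
save_count = 200

frame = pd.DataFrame({"time": np.linspace(0.0, 10.0, 501)})
min_time = frame.iloc[0]["time"]

# Target timestamp of each animation frame.
predicted = np.arange(save_count) * (1.0 / hertz_rate)

# One vectorized lookup instead of save_count linear scans.
times = frame["time"].to_numpy() - min_time
indices = np.searchsorted(times, predicted, side="right")
indices = np.clip(indices, 0, len(frame) - 1)
```

data_gen() can then read frame.iloc[indices[cnt]] directly with no inner loop. The dominant cost may still be drawing and encoding, so it is worth timing the save before and after.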

  • ffmpeg file read permission denied in application but not in debug

    25 November 2019, by Purgitoria

    My application has a function that takes captured images and uses an ffmpeg background worker to stitch them into a time-lapse video. The GUI has some simple options for video quality and for the source folder and output file. I had an older version of my application written in VB.NET that worked without issue, but I am rewriting it in C# as it supports additional capture and filter capability in the image processing, and I am having real trouble figuring out what is wrong with this function.

    I have tried relocating ffmpeg to different locations in case it was a permissions issue, but that had no effect. I also tried to put the function in a "try" with a message box to output any exceptions, but I got different errors that prevented me from compiling the code. When I run the application from within the VS 2015 debugger, the function works just fine: it creates a video from a collection of still images and writes a log. But when I build and install the application, it does not work at all, and I cannot see what is causing it to fail. In the ffmpeg options I used -report to output a log of what happens in the background worker; in debug it creates this log, but from the installed application it does not, so I presumed it was not even running ffmpeg and was going straight to the completed stage of the function.

    Function startConversion()

       CheckForIllegalCrossThreadCalls = False
       Dim quality As Integer = trbQuality.Value
       Dim input As String = tbFolderOpen.Text
       Dim output As String = tbFolderSave.Text
       Dim exepath As String = Application.StartupPath & "\bin\ffmpeg.exe"
       input = input & "\SCAImg_%1d.bmp"
       input = Chr(34) & input & Chr(34)
       output = Chr(34) & output & Chr(34)

       Dim sr As StreamReader
       Dim ffmpegOutput As String

       ' all parameters required to run the process
       proc.StartInfo.UseShellExecute = False
       proc.StartInfo.CreateNoWindow = True
       proc.StartInfo.RedirectStandardError = True
       proc.StartInfo.FileName = exepath
       proc.StartInfo.Arguments = "-framerate 25 -start_number 0 -pattern_type sequence -framerate 10 -i " & input & " -r 10 -c:v libx264 -preset slow -crf " & quality & " " & output
       proc.Start()

       lblInfo.Text = "Conversion in progress... Please wait..."
       sr = proc.StandardError 'standard error is used by ffmpeg
       btnMakeVideo.Enabled = False
       Do
           ffmpegOutput = sr.ReadLine
           tbProgress.Text = ffmpegOutput
       Loop Until proc.HasExited And (ffmpegOutput Is Nothing OrElse ffmpegOutput = "")

       tbProgress.Text = "Finished !"
       lblInfo.Text = "Completed!"
       MsgBox("Completed!", MsgBoxStyle.Exclamation)
       btnMakeVideo.Enabled = True
       Return 0

    End Function
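
The stderr-draining pattern in the function above (ffmpeg writes its progress to standard error, and the pipe must be read continuously so the buffer never fills and blocks the child) can be isolated and tested outside the GUI. A minimal Python sketch of the same pattern, using the Python interpreter itself as a stand-in for ffmpeg.exe:

```python
import subprocess
import sys

# Launch a child process, capture its stderr, and drain it line by line
# until the child closes the stream. sys.executable stands in for ffmpeg.

proc = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stderr.write('frame=1\\nframe=2\\n')"],
    stderr=subprocess.PIPE,
    text=True,
)

lines = []
for line in proc.stderr:   # iteration ends when the child closes stderr
    lines.append(line.rstrip())
proc.wait()
```

Running a stripped-down harness like this from the installed location (rather than the debugger) can separate "the loop is wrong" from "the environment denies access".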

    I checked the application folder and it does contain a sub-folder \bin with ffmpeg.exe inside, so I then used cmd to run an instance of the installed ffmpeg from the application folder, and it seemed to be throwing permissions errors:

    Failed to open report "ffmpeg-20191101-191452.log": Permission denied. Failed to set value '1' for option 'report': Permission denied. Error parsing global options: Permission denied

    This certainly seems like a permissions problem, but where, I am not sure. I did not run into this error when using VB.NET, so I am wondering where I am going wrong now. I thought perhaps it was just a write permission in the application folder, so I removed the -report and ran ffmpeg again using cmd from my application folder, and it then gave the error

    C:\Users\CEAstro\Pictures\AnytimeCap: Permission denied

    Am I missing something really obvious in my code, or is there something more fundamental wrong with my setup?

    I should also add that I tried running ffmpeg via cmd from a copy that was manually placed elsewhere (the same file), and that actually created the report, but again I got a permission denied when trying to read the input files. Considering it was the user's "My Pictures" folder, which should not have restrictions on read/write access, I am at a real loss.
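
One way to narrow this down is to probe each path the application touches and see exactly which access is missing, under the same account the installed application runs as. A minimal Python sketch (assuming Python is available on the machine; the paths are the ones from the question, so substitute your own):

```python
import os

def probe(path):
    """Report existence, read access, and write access for a path."""
    return {
        "exists": os.path.exists(path),
        "readable": os.access(path, os.R_OK),
        "writable": os.access(path, os.W_OK),
    }

# Paths from the question: the capture folder ffmpeg reads from and the
# application's bin sub-folder it runs from.
for p in [r"C:\Users\CEAstro\Pictures\AnytimeCap", r"bin"]:
    print(p, probe(p))
```

If the installed application runs under a different account, elevation level, or UAC file virtualization than the debugger, these checks can come out differently in the two contexts, which would match the behavior described.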