
Other articles (63)

  • MediaSPIP Core: Configuration

    9 November 2010, by

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin): a page for the general configuration of the template; a page for configuring the site's home page; a page for configuring the sections.
    It also provides an additional page, which only appears when certain plugins are enabled, for controlling their display and specific features (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MediaSPIP installation is at version 0.2 or later. If needed, contact the administrator of your MediaSPIP to find out.

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

On other sites (11306)

  • How do I properly save an animation involving circles with matplotlib.animation and ffmpeg?

    12 July 2020, by bghost

    I recently tried out matplotlib.animation, and it's a wonderful tool. I can now make and save basic animations (i.e. ones that only involve straight lines) without any issues. However, when I made an animation involving circles, even though the interactive display was perfect, the saved mp4 file wasn't really satisfying. In the mp4 file, the edges of the circles were blurred, and if the circles were made semi-transparent (i.e. with an alpha value < 1), they all suddenly became completely opaque after a couple of frames. I suspected my bitrate wasn't high enough, but I went up to 10000 kb/s (instead of 1800), and exactly the same phenomenon occurred.


    What can be done to solve these two issues (blurred edges + negated transparency) in the generated mp4 file?


    Here is a simple animation that illustrates what I just described:


    import numpy as np
    import matplotlib.pyplot as plt
    import matplotlib.animation as animation

    fig = plt.figure(figsize=(11, 7))
    ax = plt.axes(xlim=(-1.2, 1.2), ylim=(-0.7, 0.7))
    ax.set_aspect('equal')

    dict_circles = {}
    dict_circles['ring'] = plt.Circle((-0.5, 0), 0.5, color='b', lw=2, fill=False)
    dict_circles['disk'] = plt.Circle((-0.5, 0), 0.5, color='b', alpha=0.2)

    def init():
        for circle in dict_circles.values():
            ax.add_patch(circle)
        return dict_circles.values()

    nb_frames = 100
    X_center = np.linspace(-0.5, 0.5, nb_frames)

    def animate(frame):
        for circle in dict_circles.values():
            circle.center = (X_center[frame], 0)
            ax.add_patch(circle)
        return dict_circles.values()

    ani = animation.FuncAnimation(fig, animate, init_func=init, frames=nb_frames,
                                  blit=True, interval=10, repeat=False)
    plt.show()

    plt.rcParams['animation.ffmpeg_path'] = 'C:\\ffmpeg\\bin\\ffmpeg.exe'
    Writer = animation.writers['ffmpeg']
    writer_ref = Writer(fps=15, bitrate=1800)

    ani.save('Blue circle.mp4', writer=writer_ref)

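    Two things often help with this kind of problem (my suggestions, not part of the original post): render at a higher dpi when saving, since the saved frame size is figsize × dpi and soft circle edges are frequently just low resolution; and let the encoder target quality via CRF instead of a fixed bitrate. It may also be worth removing the ax.add_patch(circle) call from animate, since re-adding the same semi-transparent patch on every frame could plausibly compound its alpha. A sketch of the writer setup, with illustrative (untested) parameter values:

```python
# Assumed alternative writer setup; dpi and CRF values are illustrative guesses.
import matplotlib
matplotlib.use("Agg")  # off-screen rendering, no display needed
import matplotlib.animation as animation

writer = animation.FFMpegWriter(
    fps=15,
    codec="libx264",
    # CRF targets constant quality (lower = better); yuv420p keeps the
    # resulting mp4 playable in most players.
    extra_args=["-crf", "18", "-pix_fmt", "yuv420p"],
)

# ani.save("Blue circle.mp4", writer=writer, dpi=200)
# dpi=200 renders 2200x1400 pixels for an 11x7 inch figure.
```

    Note that mp4/yuv420p has no alpha channel anyway, so the saved file can only approximate transparency against whatever is drawn underneath.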

  • Lossless trim and crop of MJPEG video

    28 April 2021, by prouast

    I am working on a project where I need to trim and crop MJPEG videos without any re-encoding. I have working code that accomplishes this by exporting the relevant frames as JPEGs, cropping them individually, and then joining them back together into an MJPEG.


    However, this seems quite inefficient and slow. I am looking for pointers on how to improve this approach. For example, would it be possible to keep the JPEGs in memory?


    import ffmpeg
    import os
    import shutil
    import subprocess

    def lossless_trim_and_crop(path, output_path, start, end, x, y, width, height, fps):
      # Trim the video in time and export all individual jpegs with ffmpeg + mjpeg2jpeg
      jpeg_folder = os.path.splitext(output_path)[0]
      jpeg_path = os.path.join(jpeg_folder, "frame_%03d.jpg")
      stream = ffmpeg.input(path, ss=start/fps, t=(end-start)/fps)
      stream = ffmpeg.output(stream, jpeg_path, vcodec='copy', **{'bsf:v': 'mjpeg2jpeg'})
      stream.run(quiet=True)
      # Crop each individual jpeg with jpegtran
      for filename in os.listdir(jpeg_folder):
        filepath = os.path.join(jpeg_folder, filename)
        out_filepath = os.path.splitext(filepath)[0] + "_c.jpg"
        subprocess.call(
          "jpegtran -perfect -crop {}x{}+{}+{} -outfile {} {}".format(
            width, height, x, y, out_filepath, filepath), shell=True)
        os.remove(filepath)
      # Join the individual jpegs back together
      cropped_jpeg_path = os.path.join(jpeg_folder, "frame_%03d_c.jpg")
      stream = ffmpeg.input(cropped_jpeg_path, framerate=fps)
      stream = ffmpeg.output(stream, output_path, vcodec='copy')
      stream.run(quiet=True)
      # Delete jpeg directory
      shutil.rmtree(jpeg_folder)

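    On keeping the JPEGs in memory: ffmpeg can write the mjpeg2jpeg output to stdout (e.g. with `-f image2pipe pipe:1`) instead of to numbered files, and the concatenated bytes can then be split on the JPEG frame markers. A sketch of just the splitting step — the marker-based scan is my own assumption about the data, not something from the post, and it presumes plain baseline JPEGs with no embedded thumbnails:

```python
# Split a concatenated stream of JPEGs into individual frames in memory,
# relying on the SOI (0xFFD8) and EOI (0xFFD9) markers that delimit each image.
def split_jpegs(data: bytes) -> list:
    """Return each SOI..EOI delimited JPEG found in `data`."""
    frames = []
    start = data.find(b"\xff\xd8")
    while start != -1:
        end = data.find(b"\xff\xd9", start)
        if end == -1:
            break  # truncated trailing frame; discard it
        frames.append(data[start:end + 2])
        start = data.find(b"\xff\xd8", end + 2)
    return frames

# Toy demonstration with three fake "frames"
stream = b"\xff\xd8AAA\xff\xd9" * 3
print(len(split_jpegs(stream)))  # → 3
```

    Each frame could then be cropped in memory (e.g. via a library binding rather than shelling out to jpegtran) and piped back into ffmpeg, avoiding the temporary directory entirely.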

  • Why is one ffmpeg webm dash stream much larger than the others?

    5 January 2017, by ranvel

    Over the summer, I worked on putting together a script which took an x264 video/mp3 stream and broke it up into separate streams so that it would work via MSE-DASH. (Based heavily on the instructions on the webmproject.org website.) Those same scripts have since ceased to work, turning a 6 GB video into several 25 GB videos. I kept up with ffmpeg updates, so I don't know when it stopped working, but I am guessing it was due to the way their WebM DASH implementation was updated.

    I found a new method which works better, but it still has a major problem with one stream. I was hoping someone could explain how this encoding works so that I can understand the underlying cause.

    #!/bin/bash
    COMMON_OPTS="-map 0:0 -an -threads 11 -cpu-used 4 -cmp chroma"
    WEBM_OPTS="-f webm -c:v vp9 -keyint_min 50 -g 50 -dash 1"

    ffmpeg -i $1 -vn -acodec libvorbis -ab 128k audio.webm &
    ffmpeg -i $1 $COMMON_OPTS $WEBM_OPTS -b:v 500k -vf scale=1280:720 -y vid-500k.webm &
    ffmpeg -i $1 $COMMON_OPTS $WEBM_OPTS -b:v 700k -vf scale=1280:720 -y vid-700k.webm &
    ffmpeg -i $1 $COMMON_OPTS $WEBM_OPTS -b:v 1000k -vf scale=1280:720 -y vid-1000k.webm &
    ffmpeg -i $1 $COMMON_OPTS $WEBM_OPTS -b:v 1500k -vf scale=1280:720 -y vid-1500k.webm

    The transcode is not yet complete, but you can see where this is headed:

    -rw-r--r--  1 user  staff    87M Jan  4 23:27 audio.webm
    -rw-r--r--  1 user  staff    27M Jan  4 23:42 vid-1000k.webm
    -rw-r--r--  1 user  staff   285M Jan  4 23:42 vid-1500k.webm
    -rw-r--r--  1 user  staff    15M Jan  4 23:42 vid-500k.webm
    -rw-r--r--  1 user  staff    20M Jan  4 23:42 vid-700k.webm

    The 1500k variant is disproportionately larger than the other streams.
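    As a sanity check on these numbers, a stream that actually held its average bitrate target would grow at roughly bitrate × duration / 8. A quick illustration of that arithmetic (the one-hour duration is only an example, not from the post):

```python
def expected_size_mib(bitrate_kbps: float, duration_s: float) -> float:
    """Approximate file size in MiB for a stream holding its average bitrate."""
    # kilobits/s -> total bits -> bytes -> MiB
    return bitrate_kbps * 1000 * duration_s / 8 / (1024 * 1024)

# An hour-long stream that truly averaged 1500 kb/s would come to ~644 MiB.
print(round(expected_size_mib(1500, 3600)))  # → 644
```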

    The other problem is that when I use a shorter video, let's say eight or nine minutes, the above configuration runs as expected and everything is perfect. I don't know where the limit is, since each test costs a lot of processing power and time, but if the video is shorter than ten minutes it works, and if it's longer than an hour it produces massive files.
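    One hedged guess about the 1500k blow-up: in a single pass, libvpx treats -b:v as a loose target, and long, complex sources can overshoot it badly, whereas two-pass encoding constrains the average much more tightly. A sketch that only builds the two command lines (the option set mirrors the script above; file names are illustrative and the commands themselves are untested here):

```python
# Build the argv lists for a two-pass VP9 encode; pass 1 discards its output.
def vp9_two_pass(src, out, bitrate):
    common = ["-map", "0:0", "-an", "-c:v", "vp9", "-b:v", bitrate,
              "-vf", "scale=1280:720", "-keyint_min", "50", "-g", "50",
              "-f", "webm", "-dash", "1"]
    pass1 = ["ffmpeg", "-y", "-i", src] + common + ["-pass", "1", "/dev/null"]
    pass2 = ["ffmpeg", "-y", "-i", src] + common + ["-pass", "2", out]
    return pass1, pass2

p1, p2 = vp9_two_pass("input.mp4", "vid-1500k.webm", "1500k")
# e.g. subprocess.run(p1, check=True) followed by subprocess.run(p2, check=True)
```

    Adding explicit -minrate/-maxrate bounds around the target is another way to keep a single-pass encode from drifting this far.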