
Other articles (36)

  • Permissions overridden by plugins

    27 April 2010, by Mediaspip core

    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (3982)

  • Unable to install moviepy

    7 July 2022, by Jatin Shekara

    First I typed the following commands in the terminal to install the necessary packages:

    


    pip install moviepy
    pip install ffmpeg


    


    Then when I tried to run the following code, I got this:

    


    from moviepy.editor import *


    


    Error: RuntimeError: No ffmpeg exe could be found. Install ffmpeg on your system, or set the IMAGEIO_FFMPEG_EXE environment variable.

    


    To fix the error, I typed the following code above the previous line of code:

    


    import os
    os.environ["IMAGEIO_FFMPEG_EXE"] = "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/ffmpeg"
    from moviepy.editor import *


    


    This fixed the issue I was having earlier and I was able to import it. The location typed in the code was copied directly from the Location field in the output of pip show ffmpeg in the terminal.
    However, when I actually try to use the library, I get errors:

    


    import os
    os.environ["IMAGEIO_FFMPEG_EXE"] = "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/ffmpeg"
    from moviepy.editor import *

    clip = VideoFileClip("master_video.mp4")


    


    Error:
    ---------------------------------------------------------------------------
    PermissionError                           Traceback (most recent call last)
    Input In [2], in <cell line: 5>()
          2 os.environ["IMAGEIO_FFMPEG_EXE"] = "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/ffmpeg"
          3 from moviepy.editor import *
    ----> 5 clip = VideoFileClip("master_video.mp4")
          7 for x in range(0,10):
          8     print(randint(0, 2420))

    File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/moviepy/video/io/VideoFileClip.py:88, in VideoFileClip.__init__(self, filename, has_mask, audio, audio_buffersize, target_resolution, resize_algorithm, audio_fps, audio_nbytes, verbose, fps_source)
         86 # Make a reader
         87 pix_fmt = "rgba" if has_mask else "rgb24"
    ---> 88 self.reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt,
         89                                  target_resolution=target_resolution,
         90                                  resize_algo=resize_algorithm,
         91                                  fps_source=fps_source)
         93 # Make some of the reader's attributes accessible from the clip
         94 self.duration = self.reader.duration

    File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/moviepy/video/io/ffmpeg_reader.py:35, in FFMPEG_VideoReader.__init__(self, filename, print_infos, bufsize, pix_fmt, check_duration, target_resolution, resize_algo, fps_source)
         33 self.filename = filename
         34 self.proc = None
    ---> 35 infos = ffmpeg_parse_infos(filename, print_infos, check_duration,
         36                            fps_source)
         37 self.fps = infos['video_fps']
         38 self.size = infos['video_size']

    File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/moviepy/video/io/ffmpeg_reader.py:257, in ffmpeg_parse_infos(filename, print_infos, check_duration, fps_source)
        254 if os.name == "nt":
        255     popen_params["creationflags"] = 0x08000000
    --> 257 proc = sp.Popen(cmd, **popen_params)
        258 (output, error) = proc.communicate()
        259 infos = error.decode('utf8')

    File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py:969, in Popen.__init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, user, group, extra_groups, encoding, errors, text, umask, pipesize)
        965         if self.text_mode:
        966             self.stderr = io.TextIOWrapper(self.stderr,
        967                     encoding=encoding, errors=errors)
    --> 969     self._execute_child(args, executable, preexec_fn, close_fds,
        970                         pass_fds, cwd, env,
        971                         startupinfo, creationflags, shell,
        972                         p2cread, p2cwrite,
        973                         c2pread, c2pwrite,
        974                         errread, errwrite,
        975                         restore_signals,
        976                         gid, gids, uid, umask,
        977                         start_new_session)
        978 except:
        979     # Cleanup if the child failed starting.
        980     for f in filter(None, (self.stdin, self.stdout, self.stderr)):

    File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py:1845, in Popen._execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, restore_signals, gid, gids, uid, umask, start_new_session)
       1843     if errno_num != 0:
       1844         err_msg = os.strerror(errno_num)
    -> 1845     raise child_exception_type(errno_num, err_msg, err_filename)
       1846 raise child_exception_type(err_msg)

    PermissionError: [Errno 13] Permission denied: '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/ffmpeg'
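    One way to sanity-check what IMAGEIO_FFMPEG_EXE points at is to verify that the path is an existing, executable file rather than a package directory (the pip `ffmpeg` package installs a directory under site-packages, not the ffmpeg program itself). A minimal sketch — the shutil.which lookup assumes a system-wide ffmpeg may be on PATH:

    ```python
    import os
    import shutil

    def check_ffmpeg_path(path):
        """Return True only if `path` is an existing, executable file."""
        return os.path.isfile(path) and os.access(path, os.X_OK)

    # An ffmpeg binary found on PATH is usually a safer target for
    # IMAGEIO_FFMPEG_EXE than anything inside site-packages.
    found = shutil.which("ffmpeg")
    print("ffmpeg on PATH:", found)
    print("usable:", check_ffmpeg_path(found) if found else False)
    ```

    If the check fails for the path being assigned, the PermissionError above is the expected outcome.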


    Thank you so much in advance


  • ffmpeg delay in decoding h264

    19 May 2020, by Mateen Ulhaq

    NOTE: Still looking for an answer!


    I am taking raw RGB frames, encoding them to h264, then decoding them back to raw RGB frames.


    [RGB frame] ------ encoder ------> [h264 stream] ------ decoder ------> [RGB frame]
              ^               ^                    ^               ^
        encoder_write    encoder_read        decoder_write    decoder_read


    I would like to retrieve the decoded frames as soon as possible. However, it seems that there is always a one-frame delay no matter how long one waits.¹ In this example, I feed the encoder a frame every 2 seconds:


    $ python demo.py 2>/dev/null
    time=0 frames=1 encoder_write
    time=2 frames=2 encoder_write
    time=2 frames=1 decoder_read   <-- decoded output is delayed by extra frame
    time=4 frames=3 encoder_write
    time=4 frames=2 decoder_read
    time=6 frames=4 encoder_write
    time=6 frames=3 decoder_read
    ...


    What I want instead:


    $ python demo.py 2>/dev/null
    time=0 frames=1 encoder_write
    time=0 frames=1 decoder_read   <-- decode immediately after encode
    time=2 frames=2 encoder_write
    time=2 frames=2 decoder_read
    time=4 frames=3 encoder_write
    time=4 frames=3 decoder_read
    time=6 frames=4 encoder_write
    time=6 frames=4 decoder_read
    ...


    The encoder and decoder ffmpeg processes are run with the following arguments:


    encoder: ffmpeg -f rawvideo -pix_fmt rgb24 -s 224x224 -i pipe: \
                    -f h264 -tune zerolatency pipe:

    decoder: ffmpeg -probesize 32 -flags low_delay \
                    -f h264 -i pipe: \
                    -f rawvideo -pix_fmt rgb24 -s 224x224 pipe:


    Complete reproducible example below. No external video files needed. Just copy, paste, and run python demo.py 2>/dev/null!


    import subprocess
    from queue import Queue
    from threading import Thread
    from time import sleep, time
    import numpy as np

    WIDTH = 224
    HEIGHT = 224
    NUM_FRAMES = 256

    def t(epoch=time()):
        return int(time() - epoch)

    def make_frames(num_frames):
        x = np.arange(WIDTH, dtype=np.uint8)
        x = np.broadcast_to(x, (num_frames, HEIGHT, WIDTH))
        x = x[..., np.newaxis].repeat(3, axis=-1)
        x[..., 1] = x[:, :, ::-1, 1]
        scale = np.arange(1, len(x) + 1, dtype=np.uint8)
        scale = scale[:, np.newaxis, np.newaxis, np.newaxis]
        x *= scale
        return x

    def encoder_write(writer):
        """Feeds encoder frames to encode"""
        frames = make_frames(num_frames=NUM_FRAMES)
        for i, frame in enumerate(frames):
            writer.write(frame.tobytes())
            writer.flush()
            print(f"time={t()} frames={i + 1:<3} encoder_write")
            sleep(2)
        writer.close()

    def encoder_read(reader, queue):
        """Puts chunks of encoded bytes into queue"""
        while chunk := reader.read1():
            queue.put(chunk)
            # print(f"time={t()} chunk={len(chunk):<4} encoder_read")
        queue.put(None)

    def decoder_write(writer, queue):
        """Feeds decoder bytes to decode"""
        while chunk := queue.get():
            writer.write(chunk)
            writer.flush()
            # print(f"time={t()} chunk={len(chunk):<4} decoder_write")
        writer.close()

    def decoder_read(reader):
        """Retrieves decoded frames"""
        buffer = b""
        frame_len = HEIGHT * WIDTH * 3
        targets = make_frames(num_frames=NUM_FRAMES)
        i = 0
        while chunk := reader.read1():
            buffer += chunk
            while len(buffer) >= frame_len:
                frame = np.frombuffer(buffer[:frame_len], dtype=np.uint8)
                frame = frame.reshape(HEIGHT, WIDTH, 3)
                psnr = 10 * np.log10(255**2 / np.mean((frame - targets[i])**2))
                buffer = buffer[frame_len:]
                i += 1
                print(f"time={t()} frames={i:<3} decoder_read  psnr={psnr:.1f}")

    cmd = (
        "ffmpeg "
        "-f rawvideo -pix_fmt rgb24 -s 224x224 "
        "-i pipe: "
        "-f h264 "
        "-tune zerolatency "
        "pipe:"
    )
    encoder_process = subprocess.Popen(
        cmd.split(), stdin=subprocess.PIPE, stdout=subprocess.PIPE
    )

    cmd = (
        "ffmpeg "
        "-probesize 32 "
        "-flags low_delay "
        "-f h264 "
        "-i pipe: "
        "-f rawvideo -pix_fmt rgb24 -s 224x224 "
        "pipe:"
    )
    decoder_process = subprocess.Popen(
        cmd.split(), stdin=subprocess.PIPE, stdout=subprocess.PIPE
    )

    queue = Queue()

    threads = [
        Thread(target=encoder_write, args=(encoder_process.stdin,),),
        Thread(target=encoder_read, args=(encoder_process.stdout, queue),),
        Thread(target=decoder_write, args=(decoder_process.stdin, queue),),
        Thread(target=decoder_read, args=(decoder_process.stdout,),),
    ]

    for thread in threads:
        thread.start()


    ¹ I did some testing and it seems the decoder is waiting for the next frame's NAL header 00 00 00 01 41 88 (in hex) before it decodes the current frame. One would hope that the prefix 00 00 00 01 would be enough, but it also waits for the next two bytes!
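    That observation can be checked by splitting the encoder's raw output on Annex B start codes. A small sketch (plain Python, no ffmpeg required; the sample buffer is made up) that separates NAL units and reads the nal_unit_type from the first header byte:

    ```python
    def split_nal_units(stream: bytes):
        """Split an Annex B H.264 byte stream into NAL units,
        using the 4-byte start code 00 00 00 01 as a delimiter."""
        start_code = b"\x00\x00\x00\x01"
        parts = stream.split(start_code)
        # parts[0] is whatever precedes the first start code (usually empty).
        return [p for p in parts[1:] if p]

    # Illustrative buffer: an SPS-like unit followed by a slice-like unit.
    buf = b"\x00\x00\x00\x01\x67\x42" + b"\x00\x00\x00\x01\x41\x88\xff"
    units = split_nal_units(buf)
    print(len(units))          # -> 2
    print(units[1][0] & 0x1f)  # -> 1 (nal_unit_type 1 = non-IDR slice)
    ```

    Logging the units as they arrive from the encoder pipe makes it easy to see exactly which bytes the decoder has been given before each frame comes out.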


    ² Prior revision of question.


  • Colors messed up when converting an image sequence to video using ffmpeg

    10 February 2024, by Form

    I'm generating a video from a PNG sequence using ffmpeg but the resulting video has wrong colors compared to the source files. Getting the correct colors is important because we're using our video assets side-by-side with image assets and the colors must match perfectly (or at least, be as visually similar as possible so as not to be jarring).


    Our PNG input files are in the sRGB color profile.


    Here's the command we're running:


    ffmpeg -r 30 -f image2 -s 1920x1080 -i bg_analyse_%05d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p output5.mp4


    And here's a comparison of a source PNG (left) and the same frame in the video (right) :


    (screenshot: side-by-side comparison, not included here)


    From what we've gathered, H.264 does not support sRGB as a built-in color profile so I suppose ffmpeg must perform some kind of color conversion. However, the default ffmpeg settings seem to get the conversion wrong.


    How can I get ffmpeg to export a video whose colors are visually as close as possible to our PNG source files, using a color profile H.264 supports natively? I've tried various flags to specify input color profiles and more, but nothing has produced the expected results yet.
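    One commonly suggested direction (a sketch under the assumption that the target players expect BT.709; exact behavior depends on the ffmpeg build) is to make the RGB-to-YUV conversion explicit with the scale filter's out_color_matrix option and to tag the stream with -colorspace/-color_primaries/-color_trc so players interpret it correctly. Assembled in the subprocess style used elsewhere on this page, with the filenames from the question:

    ```python
    import subprocess

    def build_ffmpeg_cmd(pattern="bg_analyse_%05d.png", out="output5.mp4"):
        """Assemble an ffmpeg command that converts sRGB PNGs to H.264,
        explicitly converting to and tagging BT.709."""
        return [
            "ffmpeg",
            "-r", "30",
            "-i", pattern,
            # Perform the RGB -> YUV conversion with an explicit BT.709
            # matrix and limited (tv) range, instead of ffmpeg's default.
            "-vf", "scale=out_color_matrix=bt709:out_range=tv",
            # Tag the output so players treat it as BT.709.
            "-colorspace", "bt709",
            "-color_primaries", "bt709",
            "-color_trc", "bt709",
            "-vcodec", "libx264", "-crf", "25",
            "-pix_fmt", "yuv420p",
            out,
        ]

    cmd = build_ffmpeg_cmd()
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually invoke ffmpeg
    ```

    Whether this removes the lightness shift should be verified frame-by-frame against the source PNGs, since sRGB's transfer curve is only approximated by the BT.709 tagging.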


    I didn't see many mentions of color profiles when it comes to generating videos from PNG sequences using ffmpeg. That must be because most people aren't too picky about the output colors, or simply don't notice? When putting our source assets side by side with the video output in our app, however, the difference is clear.


    I already tried playing the video file in multiple players (QuickTime Player X, Chrome, etc.) to make sure it's not a display issue. The video looks exactly the same (lighter than the source PNGs) in all of them.


    Edit 1:


    In the end, the image and video will be displayed in Electron (Chromium), if that changes anything about how the video should be generated.


    Edit 2:


    We have an After Effects project from which the files are exported. We couldn't find any way to make it output correct colors, so we hoped that using ffmpeg with a sequence of PNGs (which AE exports correctly) would give us more control over the final colors. Open to ideas on how to manage colors properly in AE, too.
