
Advanced search
Media (29)
-
#7 Ambience
16 October 2011
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#3 The Safest Place
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#2 Typewriter Dance
15 October 2011
Updated: February 2013
Language: English
Type: Audio
Other articles (36)
-
Permissions overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page
-
Keeping control of your media in your hands
13 April 2011
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (3982)
-
Unable to install moviepy
7 July 2022, by Jatin Shekara
First, I typed the following commands in the terminal to install the necessary packages:


pip install moviepy
pip install ffmpeg



Then, when I tried to run the following code, I got this:


from moviepy.editor import *



Error: RuntimeError: No ffmpeg exe could be found. Install ffmpeg on your system, or set the IMAGEIO_FFMPEG_EXE environment variable.


To fix the error, I typed the following code above the previous line of code:


import os
os.environ["IMAGEIO_FFMPEG_EXE"] = "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/ffmpeg"
from moviepy.editor import *



This fixed the issue I was having earlier, and I was able to import moviepy. The path you see in the code was copied directly from the Location field in the output of pip show ffmpeg in the terminal.
However, when I actually try to use the library, I get errors:


import os
os.environ["IMAGEIO_FFMPEG_EXE"] = "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/ffmpeg"
from moviepy.editor import *

clip = VideoFileClip("master_video.mp4") 



Error: 
---------------------------------------------------------------------------
PermissionError Traceback (most recent call last)
Input In [2], in <cell line: 5>()
 2 os.environ["IMAGEIO_FFMPEG_EXE"] = "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/ffmpeg"
 3 from moviepy.editor import *
----> 5 clip = VideoFileClip("master_video.mp4") 
 7 for x in range(0,10):
 8 print(randint(0, 2420))

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/moviepy/video/io/VideoFileClip.py:88, in VideoFileClip.__init__(self, filename, has_mask, audio, audio_buffersize, target_resolution, resize_algorithm, audio_fps, audio_nbytes, verbose, fps_source)
 86 # Make a reader
 87 pix_fmt = "rgba" if has_mask else "rgb24"
---> 88 self.reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt,
 89 target_resolution=target_resolution,
 90 resize_algo=resize_algorithm,
 91 fps_source=fps_source)
 93 # Make some of the reader's attributes accessible from the clip
 94 self.duration = self.reader.duration

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/moviepy/video/io/ffmpeg_reader.py:35, in FFMPEG_VideoReader.__init__(self, filename, print_infos, bufsize, pix_fmt, check_duration, target_resolution, resize_algo, fps_source)
 33 self.filename = filename
 34 self.proc = None
---> 35 infos = ffmpeg_parse_infos(filename, print_infos, check_duration,
 36 fps_source)
 37 self.fps = infos['video_fps']
 38 self.size = infos['video_size']

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/moviepy/video/io/ffmpeg_reader.py:257, in ffmpeg_parse_infos(filename, print_infos, check_duration, fps_source)
 254 if os.name == "nt":
 255 popen_params["creationflags"] = 0x08000000
--> 257 proc = sp.Popen(cmd, **popen_params)
 258 (output, error) = proc.communicate()
 259 infos = error.decode('utf8')

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py:969, in Popen.__init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, user, group, extra_groups, encoding, errors, text, umask, pipesize)
 965 if self.text_mode:
 966 self.stderr = io.TextIOWrapper(self.stderr,
 967 encoding=encoding, errors=errors)
--> 969 self._execute_child(args, executable, preexec_fn, close_fds,
 970 pass_fds, cwd, env,
 971 startupinfo, creationflags, shell,
 972 p2cread, p2cwrite,
 973 c2pread, c2pwrite,
 974 errread, errwrite,
 975 restore_signals,
 976 gid, gids, uid, umask,
 977 start_new_session)
 978 except:
 979 # Cleanup if the child failed starting.
 980 for f in filter(None, (self.stdin, self.stdout, self.stderr)):

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py:1845, in Popen._execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, restore_signals, gid, gids, uid, umask, start_new_session)
 1843 if errno_num != 0:
 1844 err_msg = os.strerror(errno_num)
-> 1845 raise child_exception_type(errno_num, err_msg, err_filename)
 1846 raise child_exception_type(err_msg)

PermissionError: [Errno 13] Permission denied: '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/ffmpeg'


Thank you so much in advance
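
For context, a minimal sketch of an alternative setup (not from the original post, and assuming a system-wide ffmpeg binary is installed and on the PATH): IMAGEIO_FFMPEG_EXE is meant to point at an ffmpeg executable, not at the site-packages directory of the pip "ffmpeg" package, so one might locate a real binary first:


# Hedged sketch, not the poster's code: point IMAGEIO_FFMPEG_EXE at an actual
# ffmpeg executable (assumes one is installed system-wide) before importing moviepy.
import os
import shutil

ffmpeg_binary = shutil.which("ffmpeg")  # e.g. /usr/local/bin/ffmpeg, if installed
if ffmpeg_binary:
    os.environ["IMAGEIO_FFMPEG_EXE"] = ffmpeg_binary

from moviepy.editor import VideoFileClip

clip = VideoFileClip("master_video.mp4")  # file name taken from the question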


-
ffmpeg delay in decoding h264
19 May 2020, by Mateen Ulhaq
NOTE: Still looking for an answer!



I am taking raw RGB frames, encoding them to h264, then decoding them back to raw RGB frames.



[RGB frame] ------ encoder ------> [h264 stream] ------ decoder ------> [RGB frame]
     ^                                ^        ^                             ^
encoder_write                  encoder_read  decoder_write             decoder_read




I would like to retrieve the decoded frames as soon as possible. However, it seems that there is always a one-frame delay no matter how long one waits.¹ In this example, I feed the encoder a frame every 2 seconds:



$ python demo.py 2>/dev/null
time=0 frames=1 encoder_write
time=2 frames=2 encoder_write
time=2 frames=1 decoder_read <-- decoded output is delayed by extra frame
time=4 frames=3 encoder_write
time=4 frames=2 decoder_read
time=6 frames=4 encoder_write
time=6 frames=3 decoder_read
...




What I want instead:



$ python demo.py 2>/dev/null
time=0 frames=1 encoder_write
time=0 frames=1 decoder_read <-- decode immediately after encode
time=2 frames=2 encoder_write
time=2 frames=2 decoder_read
time=4 frames=3 encoder_write
time=4 frames=3 decoder_read
time=6 frames=4 encoder_write
time=6 frames=4 decoder_read
...




The encoder and decoder ffmpeg processes are run with the following arguments:



encoder: ffmpeg -f rawvideo -pix_fmt rgb24 -s 224x224 -i pipe: \
 -f h264 -tune zerolatency pipe:

decoder: ffmpeg -probesize 32 -flags low_delay \
 -f h264 -i pipe: \
 -f rawvideo -pix_fmt rgb24 -s 224x224 pipe:




Complete reproducible example below. No external video files needed. Just copy, paste, and run python demo.py 2>/dev/null!


import subprocess
from queue import Queue
from threading import Thread
from time import sleep, time
import numpy as np

WIDTH = 224
HEIGHT = 224
NUM_FRAMES = 256

def t(epoch=time()):
    return int(time() - epoch)

def make_frames(num_frames):
    x = np.arange(WIDTH, dtype=np.uint8)
    x = np.broadcast_to(x, (num_frames, HEIGHT, WIDTH))
    x = x[..., np.newaxis].repeat(3, axis=-1)
    x[..., 1] = x[:, :, ::-1, 1]
    scale = np.arange(1, len(x) + 1, dtype=np.uint8)
    scale = scale[:, np.newaxis, np.newaxis, np.newaxis]
    x *= scale
    return x

def encoder_write(writer):
    """Feeds encoder frames to encode"""
    frames = make_frames(num_frames=NUM_FRAMES)
    for i, frame in enumerate(frames):
        writer.write(frame.tobytes())
        writer.flush()
        print(f"time={t()} frames={i + 1:<3} encoder_write")
        sleep(2)
    writer.close()

def encoder_read(reader, queue):
    """Puts chunks of encoded bytes into queue"""
    while chunk := reader.read1():
        queue.put(chunk)
        # print(f"time={t()} chunk={len(chunk):<4} encoder_read")
    queue.put(None)

def decoder_write(writer, queue):
    """Feeds decoder bytes to decode"""
    while chunk := queue.get():
        writer.write(chunk)
        writer.flush()
        # print(f"time={t()} chunk={len(chunk):<4} decoder_write")
    writer.close()

def decoder_read(reader):
    """Retrieves decoded frames"""
    buffer = b""
    frame_len = HEIGHT * WIDTH * 3
    targets = make_frames(num_frames=NUM_FRAMES)
    i = 0
    while chunk := reader.read1():
        buffer += chunk
        while len(buffer) >= frame_len:
            frame = np.frombuffer(buffer[:frame_len], dtype=np.uint8)
            frame = frame.reshape(HEIGHT, WIDTH, 3)
            psnr = 10 * np.log10(255**2 / np.mean((frame - targets[i])**2))
            buffer = buffer[frame_len:]
            i += 1
            print(f"time={t()} frames={i:<3} decoder_read psnr={psnr:.1f}")

cmd = (
    "ffmpeg "
    "-f rawvideo -pix_fmt rgb24 -s 224x224 "
    "-i pipe: "
    "-f h264 "
    "-tune zerolatency "
    "pipe:"
)
encoder_process = subprocess.Popen(
    cmd.split(), stdin=subprocess.PIPE, stdout=subprocess.PIPE
)

cmd = (
    "ffmpeg "
    "-probesize 32 "
    "-flags low_delay "
    "-f h264 "
    "-i pipe: "
    "-f rawvideo -pix_fmt rgb24 -s 224x224 "
    "pipe:"
)
decoder_process = subprocess.Popen(
    cmd.split(), stdin=subprocess.PIPE, stdout=subprocess.PIPE
)

queue = Queue()

threads = [
    Thread(target=encoder_write, args=(encoder_process.stdin,)),
    Thread(target=encoder_read, args=(encoder_process.stdout, queue)),
    Thread(target=decoder_write, args=(decoder_process.stdin, queue)),
    Thread(target=decoder_read, args=(decoder_process.stdout,)),
]

for thread in threads:
    thread.start()






¹ I did some testing and it seems the decoder is waiting for the next frame's NAL header 00 00 00 01 41 88 (in hex) before it decodes the current frame. One would hope that the prefix 00 00 00 01 would be enough, but it also waits for the next two bytes!
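
As an illustration of the observation in the footnote (this helper is not part of the original script), one way to inspect where the encoder places Annex-B start codes in the bytes read from its stdout is to split the stream at 00 00 00 01 boundaries; a minimal sketch:


# Hypothetical helper: split an Annex-B H.264 byte stream at 00 00 00 01
# start codes so you can see which NAL units have arrived so far.
def split_nal_units(data):
    start_code = b"\x00\x00\x00\x01"
    positions = []
    idx = data.find(start_code)
    while idx != -1:
        positions.append(idx)
        idx = data.find(start_code, idx + 1)
    # Pair each start-code offset with the next one (or end of buffer).
    return [data[a:b] for a, b in zip(positions, positions[1:] + [len(data)])]

# Example: print the NAL header byte that follows each start code.
# units = split_nal_units(encoded_bytes)
# print([u[4] for u in units])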




-
Colors messed up when converting an image sequence to video using ffmpeg
10 February 2024, by Form
I'm generating a video from a PNG sequence using ffmpeg, but the resulting video has wrong colors compared to the source files. Getting the colors right matters because our video assets are used side by side with image assets, and the colors must match perfectly (or at least be visually close enough not to be jarring).


Our PNG input files are in the sRGB color profile.


Here's the command we're running:


ffmpeg -r 30 -f image2 -s 1920x1080 -i bg_analyse_%05d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p output5.mp4



And here's a comparison of a source PNG (left) and the same frame in the video (right):




From what we've gathered, H.264 does not support sRGB as a built-in color profile, so ffmpeg presumably has to perform some kind of color conversion. However, the default ffmpeg settings seem to get that conversion wrong.


How can I get ffmpeg to export a video whose colors are as visually close as possible to our PNG source files, using a color profile H.264 supports natively? I've tried various flags to specify input color profiles and more, but nothing has produced the expected results yet.


I haven't seen color profiles mentioned much in the context of generating videos from PNG sequences with ffmpeg, presumably because most people aren't too picky about output colors or simply don't notice. When we put our source assets side by side with the video output in our app, however, the difference is clear.


I already tried playing the video file in multiple players to make sure it's not a display issue (QuickTime Player X, Chrome, etc.). The video is exactly the same (lighter than the source PNGs) in all players.



Edit 1:


In the end, the image and video will be displayed in Electron (Chromium), in case that changes how the video should be generated.



Edit 2:


We have an After Effects project from which the files are exported. We couldn't find any way to make it output correct colors directly, so we hoped that running ffmpeg on a PNG sequence (which AE exports correctly) would give us more control over the final colors. Open to ideas on how to manage colors properly in AE, too.
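
As a hedged starting point rather than a confirmed fix: a common cause of washed-out output in this scenario is a mismatch between the matrix ffmpeg uses for the RGB-to-YUV conversion and the one players assume, so forcing BT.709 during scaling and tagging the stream accordingly is worth trying. The flags below are existing ffmpeg options, but whether they resolve this particular case is an assumption:


ffmpeg -r 30 -f image2 -s 1920x1080 -i bg_analyse_%05d.png \
  -vf "scale=out_color_matrix=bt709:out_range=tv" \
  -colorspace bt709 -color_primaries bt709 -color_trc bt709 \
  -vcodec libx264 -crf 25 -pix_fmt yuv420p output5.mp4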