
Other articles (60)
-
Adding notes and captions to images
7 February 2011
To add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying, and deleting notes. By default, only the site's administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...) -
Apache-specific configuration
4 February 2011
Specific modules
For the Apache configuration, it is advisable to enable certain modules that are not specific to MediaSPIP but that improve performance: mod_deflate and mod_headers, so that Apache compresses pages automatically (see this tutorial); and mod_expires, to manage the expiration of hits correctly (see this tutorial).
It is also advisable to add Apache support for the WebM mime-type, as described in this tutorial.
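
By way of illustration, a hedged sketch of what such a configuration could look like; the content types and cache lifetime below are assumptions, not values mandated by MediaSPIP:

# Enable the modules (Debian-style): a2enmod deflate headers expires
# Compress text responses automatically (mod_deflate)
AddOutputFilterByType DEFLATE text/html text/css application/javascript
# Send expiration headers so browsers cache static assets (mod_expires)
ExpiresActive On
ExpiresByType image/jpeg "access plus 1 month"
# Serve WebM files with the correct mime-type (mod_mime)
AddType video/webm .webm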
Creating a (...) -
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
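
For illustration, a minimal sketch of the kind of markup this approach implies; the filenames and the fallback player path are made-up placeholders, not MediaSPIP's actual output:

<video controls width="640" height="360">
  <source src="clip.webm" type="video/webm" />
  <source src="clip.mp4" type="video/mp4" />
  <!-- Flash fallback for browsers without HTML5 video support -->
  <object type="application/x-shockwave-flash" data="flowplayer.swf" width="640" height="360">
    <param name="movie" value="flowplayer.swf" />
  </object>
</video>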
On other sites (7747)
-
Speeding up video composing in moviepy
10 February 2024, by rawlung
I'm trying to resize videos similarly to what this API provides:
https://creatomate.com/docs/json/quick-start/blur-video-background
I accomplished the result more or less, but the problem is that it takes ages to render.
I'm a total beginner when it comes to video processing, and for the life of me I can't figure out how to speed it up. While rendering, Python only uses the CPU at about 20% utilization.


from moviepy.editor import VideoFileClip, concatenate_videoclips, CompositeVideoClip
import datetime
from skimage.filters import gaussian

def _blur(image):
 return gaussian(image.astype(float), sigma=25,preserve_range=True,channel_axis=-1)

def blurVideos(filenames):
 clips = [VideoFileClip(c) for c in filenames]
 overlay_clips = [VideoFileClip(c, has_mask=True) for c in filenames]
 overlay = concatenate_videoclips(overlay_clips, method="chain")
 output = concatenate_videoclips(clips, method="chain")
 print("Bluring video")
 blured_output = output.fl_image( _blur )
 print("Done")
 print("Resizing video")
 resized_output = blured_output.resize((1920,1080))
 print("Done")
 composited_output = CompositeVideoClip([resized_output.without_audio(),overlay.set_position("center","center")])
 composited_output.write_videofile(f"output/out_{datetime.datetime.today().strftime('%Y-%m-%d')}.mp4",fps=20,threads=16,codec="h264_nvenc",preset="fast")



I've tried GPU-accelerated codecs like h264_nvenc, and I've tried modifying the ffmpeg arguments under the hood of moviepy to use CUDA, also with no success.
What can I do to speed this up?
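
One plausible direction, sketched below: the fl_image callback pushes every frame through Python and skimage's gaussian, which is almost certainly the bottleneck, so the blur/scale/overlay composition can instead be expressed as a single ffmpeg filter graph. A minimal sketch using ffmpeg-python; the input filename, blur radius, and output size are assumptions, and boxblur stands in for the gaussian:

import ffmpeg

src = ffmpeg.input("input.mp4")
branches = src.video.filter_multi_output("split")  # the stream is consumed twice
background = (
    branches[0]
    .filter("scale", 1920, 1080)
    .filter("boxblur", luma_radius=25, luma_power=3)
)
# Center the untouched video on top of the blurred, resized background.
composite = background.overlay(branches[1], x="(W-w)/2", y="(H-h)/2")
ffmpeg.output(composite, src.audio, "out.mp4", vcodec="h264_nvenc").run()

With everything staying inside one ffmpeg process, no frame ever crosses into Python, which is usually the difference between 20% CPU use and a saturated encoder.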


-
perspective correction example
10 July 2022, by alessandro
I have some videos taken of a display, with the camera not perfectly oriented, so the result shows a strong trapezoidal effect.
I know that ffmpeg has a perspective filter (https://ffmpeg.org/ffmpeg-filters.html#perspective), but I'm too dumb to understand how it works from the docs, and I cannot find a single example.



Can somebody show me how it works?
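
Not an authoritative answer, but a minimal sketch of how the perspective filter is typically used for this case: with sense=source, the four coordinate pairs mark the corners of the display as they appear in the distorted input, and ffmpeg warps that quadrilateral onto the full output frame. Written with ffmpeg-python to match the other examples here; the filenames and pixel coordinates are made-up placeholders:

import ffmpeg

(
    ffmpeg
    .input("input.mp4")
    .filter(
        "perspective",
        x0=120, y0=80,     # top-left corner of the display in the source
        x1=1790, y1=60,    # top-right
        x2=90, y2=1010,    # bottom-left
        x3=1830, y3=1040,  # bottom-right
        sense="source",    # map these source points to the output corners
    )
    .output("corrected.mp4")
    .run()
)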


-
How to get webcam frames one by one but also compressed?
29 March, by Vorac
I need to grab frames from the webcam of a laptop, transmit them one by one, and have the receiving side stitch them into a video. I picked ffmpeg-python as my wrapper of choice, and the example from the docs works right away:

#!/usr/bin/env python

# In this file: reading frames one by one from the webcam.


import ffmpeg

width = 640
height = 480


reader = (
 ffmpeg
 .input('/dev/video0', s='{}x{}'.format(width, height))
 .output('pipe:', format='rawvideo', pix_fmt='yuv420p')
 .run_async(pipe_stdout=True)
)

# This is here only to test the reader.
writer = (
 ffmpeg
 .input('pipe:', format='rawvideo', pix_fmt='yuv420p', s='{}x{}'.format(width, height))
 .output('/tmp/test.mp4', format='h264', pix_fmt='yuv420p')
 .overwrite_output()
 .run_async(pipe_stdin=True)
)


while True:
 chunk = reader.stdout.read(width * height * 3 // 2)  # one yuv420p frame is 1.5 bytes/pixel; read() needs an int
 print(len(chunk))
 writer.stdin.write(chunk)



Now for the compression part.


My reading of the docs is that the input to the reader perhaps needs to be rawvideo, but nothing else does. I tried replacing rawvideo with h264 in my code, but that resulted in empty frames. I'm considering a third invocation looking like this, but is that really the correct approach?

encoder = (
 ffmpeg
 .input('pipe:', format='rawvideo', pix_fmt='yuv420p', s='{}x{}'.format(width, height))
 .output('pipe:', format='h264', pix_fmt='yuv420p')
 .run_async(pipe_stdin=True, pipe_stdout=True)
)
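
For what it's worth, a sketch of how such a pipeline could be wired up, assuming the reader and encoder above are running: raw frames go into the encoder's stdin, and the compressed bytes are drained from its stdout on a separate thread, since the H.264 output is a continuous byte stream rather than one chunk per frame. send() is a hypothetical transmit function:

import threading

FRAME_BYTES = width * height * 3 // 2  # one yuv420p frame

def drain_encoder(stream):
    # The encoder emits arbitrary-sized chunks, not frame-aligned ones,
    # so just forward whatever arrives.
    while True:
        data = stream.read(4096)
        if not data:
            break
        send(data)  # hypothetical: transmit the compressed bytes

threading.Thread(target=drain_encoder, args=(encoder.stdout,), daemon=True).start()

while True:
    frame = reader.stdout.read(FRAME_BYTES)
    if len(frame) < FRAME_BYTES:
        break
    encoder.stdin.write(frame)
encoder.stdin.close()

Draining stdout concurrently matters: if the encoder's output pipe fills up while the main loop only writes to stdin, both processes deadlock.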