
Other articles (72)
-
Participating in its translation
10 April 2011. You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
This is done through SPIP's translation interface, where all of MediaSPIP's language modules are available. Simply subscribe to the translators' mailing list to ask for more information.
At the moment, MediaSPIP is only available in French and (...)
-
Managing creation and editing rights for objects
8 February 2011. By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, in particular: writing content on the site, adjustable through the form template management; adding notes to articles; adding captions and annotations to images;
-
Supporting all media types
13 April 2011. Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (12038)
-
Speeding up video compositing in moviepy
10 February 2024, by rawlung. I'm trying to resize videos, similar to what this API provides:
https://creatomate.com/docs/json/quick-start/blur-video-background
I accomplished the result, more or less, but the problem is that it takes ages to render.
I'm a total beginner when it comes to video processing, and for the life of me I can't figure out how to speed it up. While rendering, Python only uses the CPU at about 20% utilization.


from moviepy.editor import VideoFileClip, concatenate_videoclips, CompositeVideoClip
import datetime
from skimage.filters import gaussian

def _blur(image):
    # Gaussian-blur a single frame; preserve_range keeps the 0-255 scale.
    return gaussian(image.astype(float), sigma=25, preserve_range=True, channel_axis=-1)

def blurVideos(filenames):
    clips = [VideoFileClip(c) for c in filenames]
    overlay_clips = [VideoFileClip(c, has_mask=True) for c in filenames]
    overlay = concatenate_videoclips(overlay_clips, method="chain")
    output = concatenate_videoclips(clips, method="chain")
    print("Blurring video")
    blurred_output = output.fl_image(_blur)  # runs _blur on every frame, in Python
    print("Done")
    print("Resizing video")
    resized_output = blurred_output.resize((1920, 1080))
    print("Done")
    # set_position takes the position as a single argument (a tuple here);
    # a second positional argument would be interpreted as the `relative` flag.
    composited_output = CompositeVideoClip([
        resized_output.without_audio(),
        overlay.set_position(("center", "center")),
    ])
    composited_output.write_videofile(
        f"output/out_{datetime.datetime.today().strftime('%Y-%m-%d')}.mp4",
        fps=20,
        threads=16,
        codec="h264_nvenc",
        preset="fast",
    )



I've tried GPU-accelerated codecs like h264_nvenc, and I've tried modifying the ffmpeg arguments under the hood of moviepy to use CUDA, also with no success.
What can I do to speed this up?
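
The usual bottleneck with fl_image is that every frame is decoded, handed to Python/NumPy for the Gaussian blur, then re-encoded, all single-threaded on the Python side. A minimal sketch of an alternative, assuming ffmpeg-python and an ffmpeg build with h264_nvenc (the filenames and blur radius are placeholders), keeps the whole blur/scale/overlay pipeline inside one native ffmpeg filter graph:

import ffmpeg

main = ffmpeg.input("input.mp4")
background = (
    ffmpeg.input("input.mp4")              # second reference to the same file
    .video
    .filter("scale", 1920, 1080)           # stretch to the target canvas
    .filter("boxblur", 20)                 # blur runs natively inside ffmpeg
)
foreground = main.video.filter("scale", -1, 1080)  # fit height, keep aspect
composite = ffmpeg.overlay(background, foreground, x="(W-w)/2", y="(H-h)/2")
(
    ffmpeg
    .output(composite, main.audio, "blurred.mp4", vcodec="h264_nvenc", preset="fast")
    .overwrite_output()
    .run()
)

Since scale, boxblur and overlay all execute as ffmpeg's own C filters, no raw frames ever cross the Python boundary, which is typically where the 20% CPU stall comes from.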


-
Perspective correction example
10 July 2022, by alessandro. I have some videos of a display, shot with the camera not perfectly oriented, so that the result shows a strong trapezoidal effect.
I know that ffmpeg has a perspective filter, https://ffmpeg.org/ffmpeg-filters.html#perspective, but I'm too dumb to understand from the docs how it works, and I cannot find a single example.



Can somebody show me how it works?
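
For what it's worth, a minimal sketch of the filter (driven through ffmpeg-python, to match the other snippets on this page): with the default sense=source, the four x/y pairs are the corners of the skewed display as seen in the input frame (top-left, top-right, bottom-left, bottom-right), which ffmpeg then maps onto the full output rectangle. The filenames and coordinates below are made-up placeholders; read the real corners off a still frame of the video.

import ffmpeg

(
    ffmpeg
    .input("display.mp4")
    .filter(
        "perspective",
        x0=120, y0=60,      # top-left corner of the screen in the source
        x1=1800, y1=30,     # top-right
        x2=100, y2=1040,    # bottom-left
        x3=1830, y3=1060,   # bottom-right
        interpolation="linear",
    )
    .output("corrected.mp4")
    .overwrite_output()
    .run()
)

The plain CLI equivalent would be -vf "perspective=x0=120:y0=60:x1=1800:y1=30:x2=100:y2=1040:x3=1830:y3=1060".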


-
How to get webcam frames one by one, but also compressed?
29 March, by Vorac. I need to grab frames from a laptop's webcam, transmit them one by one, and have the receiving side stitch them into a video. I picked ffmpeg-python as the wrapper of choice, and the example from the docs works right away:

#!/usr/bin/env python

# In this file: reading frames one by one from the webcam.


import ffmpeg

width = 640
height = 480


reader = (
    ffmpeg
    .input('/dev/video0', s='{}x{}'.format(width, height))
    .output('pipe:', format='rawvideo', pix_fmt='yuv420p')
    .run_async(pipe_stdout=True)
)

# This is here only to test the reader.
writer = (
    ffmpeg
    .input('pipe:', format='rawvideo', pix_fmt='yuv420p', s='{}x{}'.format(width, height))
    .output('/tmp/test.mp4', format='h264', pix_fmt='yuv420p')
    .overwrite_output()
    .run_async(pipe_stdin=True)
)


while True:
    # One yuv420p frame is width * height * 3 / 2 bytes (12 bits per pixel);
    # read() needs an int, so use integer arithmetic instead of * 1.5.
    chunk = reader.stdout.read(width * height * 3 // 2)
    print(len(chunk))
    writer.stdin.write(chunk)



Now for the compression part.


My reading of the docs is that the input to the reader perhaps needs to be rawvideo, but nothing else does. I tried replacing rawvideo with h264 in my code, but that resulted in empty frames. I'm considering a third invocation looking like this, but is that really the correct approach?

encoder = (
    ffmpeg
    .input('pipe:', format='rawvideo', pix_fmt='yuv420p', s='{}x{}'.format(width, height))
    .output('pipe:', format='h264', pix_fmt='yuv420p')
    .run_async(pipe_stdin=True, pipe_stdout=True)
)
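
If that encoder is the right idea, here is a sketch of how the three processes could be wired together (reader → encoder → network), assuming the reader and encoder above; transmit() is a hypothetical send function, and a real implementation would drain encoder.stdout from a separate thread to avoid pipe deadlocks:

frame_size = width * height * 3 // 2      # bytes per yuv420p frame

while True:
    raw = reader.stdout.read(frame_size)  # one uncompressed frame
    if len(raw) < frame_size:             # webcam stream ended
        encoder.stdin.close()
        break
    encoder.stdin.write(raw)
    # h264 over a pipe is a byte stream, not fixed-size frames: forward
    # whatever the encoder has produced so far.
    compressed = encoder.stdout.read1(65536)
    if compressed:
        transmit(compressed)              # hypothetical send function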