
Other articles (36)
-
Adding notes and captions to images
7 February 2011. To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
Editing when adding a media item
When adding a media item of type "image", a new button appears above the preview (...) -
The regular Cron tasks of the farm
1 December 2010. Managing the farm relies on several repetitive tasks, called Cron tasks, executed at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of all the instances in the mutualised farm on a regular basis. Combined with a system Cron on the central site of the farm, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...) -
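As an illustration of the scheme described above (the hostname and path are hypothetical, not taken from the article), the system Cron entry on the central site could look like this:

```
# Hypothetical crontab entry on the farm's central site: fetch the site
# every minute so that gestion_mutu_super_cron runs and, in turn,
# triggers the Cron of every mutualised instance.
* * * * * curl -s 'https://central.example.org/spip.php' > /dev/null
```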
Support for all media types
10 April 2011. Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and others (open office, microsoft office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
On other sites (12319)
-
FFmpeg dnn_processing with tensorflow backend : difficulties applying a filter on an image
9 April 2023, by ArnoBen. I am trying to perform video segmentation for background blurring, similar to Google Meet or Zoom, using FFmpeg, and I'm not very familiar with it.


Google's MediaPipe model is available as a tensorflow .pb file here (using download_hhxwww.sh).

I can load it in Python and it works as expected, though I do need to format the input frames: scaling to the model's input dimensions, adding a batch dimension, and dividing the pixel values by 255 to get a 0-1 range.


FFmpeg has a filter that can use tensorflow models thanks to dnn_processing, but I'm wondering about these preprocessing steps. I tried to read the dnn_backend_tf.c file in FFmpeg's GitHub repo, but C is not my forte. I'm guessing it adds a batch dimension somewhere, otherwise the model wouldn't run, but I'm not sure about the rest.
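The preprocessing steps described above amount to something like this in Python (a sketch; it assumes an HxWx3 uint8 RGB frame already resized to the model's input size, matching the scale=160:96 step in the ffmpeg command):

```python
import numpy as np

def preprocess(frame_rgb):
    # frame_rgb: HxWx3 uint8 array, already resized to the model's
    # input dimensions (e.g. 160x96).
    x = frame_rgb.astype(np.float32) / 255.0   # pixel range 0-255 -> 0-1
    return x[np.newaxis, ...]                  # add batch dim -> (1, H, W, 3)
```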


Here is my current command:


ffmpeg \
 -i $FILE -filter_complex \
 "[0:v]scale=160:96,format=rgb24,dnn_processing=dnn_backend=tensorflow:model=$MODEL:input=input_1:output=segment[masks];[masks]extractplanes=2[mask]" \
 -map "[mask]" output.png



- I'm already applying scaling to match the input dimensions.
- I wrote [masks]extractplanes=2[mask] because the model outputs an HxWx2 tensor (background mask and foreground mask) and I want to keep the foreground mask.
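In NumPy terms, what I want that extractplanes step to do is the following (a sketch assuming the batched 1xHxWx2 output layout described above, with channel 1 being the foreground):

```python
import numpy as np

def foreground_mask(segment):
    # segment: model output of shape (1, H, W, 2);
    # channel 0 = background, channel 1 = foreground.
    return segment[0, :, :, 1]   # HxW foreground probability map
```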

The result I get with this command is the following (input-output):




I'm not sure how to interpret the problems in this output. In Python I can easily get a nice grayscale output:




I'm trying to obtain something similar with FFmpeg.


Any suggestions or insights on obtaining a correct output with FFmpeg would be greatly appreciated.


PS: If I try to apply this to a video file, it hits a segmentation fault somewhere before producing any output, so I'm sticking to testing on an image for now.


-
configure : add support for mips32r5, p5600 cpu and msa
9 April 2015, by Shivraj Patil
Imagination Technologies has come up with MIPS Warrior Processor Cores.
More details can be found at:
http://www.imgtec.com/mips/warrior/pclass.asp
http://www.imgtec.com/mips/warrior/iclass.asp
This is a preparation patch to submit optimized code for MSA (MIPS-SIMD-Architecture).
This patch set adds support for the P5600 and I6400 CPUs. A MIPS 'generic' case is added, with the mips32r2 arch as default (fpu and dsp opt enabled).
Sample configurations for the new MSA architectures:
$ ./configure --enable-cross-compile --cross-prefix=<PATH> --arch=mips --target-os=linux --cpu=p5600
$ ./configure --enable-cross-compile --cross-prefix=<PATH> --arch=mips --target-os=linux --cpu=i6400
Signed-off-by: Shivraj Patil <shivraj.patil@imgtec.com>
Reviewed-by: Nedeljko Babic <Nedeljko.Babic@imgtec.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at> -
Has anyone used the speech-driven animation, and can you make it work?
16 August 2020, by hopw Jan. I'm talking about this repo. I installed all the dependencies but I can't make it work. Any help is highly appreciated (:


I'm running python 3.7.5.


This is my code :


import sda
import scipy.io.wavfile as wav
from PIL import Image

va = sda.VideoAnimator(gpu=0, model_path="crema")  # Instantiate the animator
fs, audio_clip = wav.read("example/audio.wav")
still_frame = Image.open("example/image.bmp")
vid, aud = va(frame, audio_clip, fs=fs)
va.save_video(vid, aud, "generated.mp4")



Sadly it doesn't seem to work and it gives me this error :


Warning (from warnings module):
 File "C:\Users\Alex\AppData\Local\Programs\Python\Python37\lib\site-packages\pydub\utils.py", line 170
 warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
Traceback (most recent call last):
 File "C:\Users\Alex\Desktop\test\test.py", line 8, in <module>
 vid, aud = va(frame, audio_clip, fs=fs)
NameError: name 'frame' is not defined


I've spent about two hours on this and can't get anywhere; I'm out of ideas.
If you take the time to help me, thank you from the bottom of my heart.
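For what it's worth, the traceback itself names the slip: the image is loaded into still_frame, but the call passes an undefined name frame, hence the NameError. The corrected call would be (paths and the sda API taken exactly from the snippet above, not independently verified):

```python
import sda
import scipy.io.wavfile as wav
from PIL import Image

va = sda.VideoAnimator(gpu=0, model_path="crema")  # Instantiate the animator
fs, audio_clip = wav.read("example/audio.wav")
still_frame = Image.open("example/image.bmp")
# Fix: pass the variable that was actually defined above.
vid, aud = va(still_frame, audio_clip, fs=fs)  # was: va(frame, ...) -> NameError
va.save_video(vid, aud, "generated.mp4")
```

Note that the separate pydub warning about ffmpeg/avconv is unrelated to the crash; it only means ffmpeg was not found on the PATH.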