
Other articles (77)
-
User profiles
12 April 2011, by
Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
The user can access profile editing from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)
-
Configuring language support
15 November 2010, by
Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administrer" (administration) section of the site.
From there, the navigation menu gives access to a "Gestion des langues" (language management) section where support for new languages can be enabled.
Each newly added language can still be disabled as long as no object has been created in that language; once one exists, the language is greyed out in the configuration and (...)
-
XMP PHP
13 May 2011, by
According to Wikipedia, XMP stands for:
Extensible Metadata Platform (XMP) is an XML-based metadata format used in PDF, photography, and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being based on XML, it handles a set of dynamic tags for use in the Semantic Web.
XMP makes it possible to record information about a file as an XML document: title, author, history (...)
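Since the XMP packet is plain XML embedded in the file, it can be dumped and inspected directly. As a minimal sketch (assuming the exiftool utility is installed and a hypothetical photo.jpg carrying XMP data):

exiftool -xmp -b photo.jpg > photo.xmp

The extracted photo.xmp is an ordinary XML document that can be opened in any text editor.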
On other sites (10797)
-
Mix original clip audio with audio of an overlay clip
28 October 2019, by Mr. Messy
I have a video clip on which I want to add commentary videos (someone talking in a bubble).
I have 3 commentary videos that I need to insert at specific times. The video rendering works well, but I can't seem to add the audio tracks.
I tried both amix and amerge, but I got the same issue. When I added "[0:1][2:1]amerge;" I get the following:
and the process freezes.
The full ffmpeg command is as follows:
ffmpeg -y -i story.mp4 \
-loop 1 -i mask.png \
-itsoffset 10 -i commentary1.mp4 \
-itsoffset 22 -i commentary2.mp4 \
-itsoffset 34 -i commentary3.mp4 \
-filter_complex "[0:v]scale=w=1/2*in_w:h=1/2*in_h[vid1],
[2:v]crop=w=480:h=480:x=0:y=120[vid2in],
[1:v]fifo[2af],[2af]alphaextract[alf2],[vid2in][alf2]alphamerge[vid2alf],
[vid2alf]format=yuva420p,fade=in:st=10:d=0.5:alpha=1,fade=out:st=22.7294:d=0.5:alpha=1[vid2fade],
[vid2fade]scale=w=-1:h=160[vid2],
[vid1][vid2]overlay=790:10:enable='between(t\,10,21)'[out2],
[3:v]crop=w=480:h=480:x=0:y=120[vid3in],
[1:v]fifo[3af],[3af]alphaextract[alf3],[vid3in][alf3]alphamerge[vid3alf],
[vid3alf]format=yuva420p,fade=in:st=22:d=0.5:alpha=1,fade=out:st=32.768733:d=0.5:alpha=1[vid3fade],
[vid3fade]scale=w=-1:h=160[vid3],
[out2][vid3]overlay=790:10:enable='between(t\,22,33)'[out3],
[4:v]crop=w=480:h=480:x=0:y=120[vid4in],
[1:v]fifo[4af],[4af]alphaextract[alf4],[vid4in][alf4]alphamerge[vid4alf],
[vid4alf]format=yuva420p,fade=in:st=34:d=0.5:alpha=1,fade=out:st=44.598189:d=0.5:alpha=1[vid4fade],
[vid4fade]scale=w=-1:h=160[vid4],
[out3][vid4]overlay=790:10:enable='between(t\,34,45)'[out4]" \
-map [out4] -pix_fmt yuv420p -c:v libx264 -crf 18 \
final_video.mp4
(mask.png is a circle on a transparent image that crops the video to a bubble)
Thank you for your help.
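One possible direction (a sketch, not from the original post): amerge concatenates channels instead of mixing them, and with -itsoffset the delayed audio streams can stall the filtergraph while it waits for samples. Delaying the commentary audio explicitly with adelay (milliseconds, one value per channel) and mixing everything with amix avoids both issues. Assuming each commentary file's audio is its first audio stream, the audio part might look like this (video chain omitted for brevity; in the full command the same [aout] label would be mapped next to [out4]):

ffmpeg -y -i story.mp4 -i commentary1.mp4 -i commentary2.mp4 -i commentary3.mp4 \
-filter_complex \
"[1:a]adelay=10000|10000[a1]; \
 [2:a]adelay=22000|22000[a2]; \
 [3:a]adelay=34000|34000[a3]; \
 [0:a][a1][a2][a3]amix=inputs=4:duration=first[aout]" \
-map 0:v -map "[aout]" -c:v copy -c:a aac mixed_test.mp4

Note that the input indexes differ from the original command because mask.png is not needed for this audio-only test.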
-
FFMPEG images to video with reverse sequence with other filters
4 July 2019, by Archimedes Trajano
Similar to this: ffmpeg - convert image sequence to video with reversed order
But I was wondering if I can create a video loop by specifying the image range and having the reverse order appended, in one command.
Ideally I'd like to combine it with this: Make an Alpha Mask video from PNG files
What I am doing now is generating the reverse using https://stackoverflow.com/a/43301451/242042 and combining the video files together.
However, I am thinking it would be similar to: Concat a video with itself, but in reverse, using ffmpeg
My current attempt assumes 60 images, which doubles -vframes to 120:
ffmpeg -y -framerate 20 -f image2 -i \
running_gear/%04d.png -start_number 0 -vframes 120 \
-filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [v]" \
-filter_complex alphaextract[a] \
-map 0:v -b:v 5M -crf 20 running_gear.webm \
-map [a] -b:v 5M -crf 20 running_gear-alpha.webm
Without the alpha masking I can get it working using:
ffmpeg -y -framerate 20 -f image2 -i running_gear/%04d.png \
-start_number 0 -vframes 120 \
-filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [v]" \
-map "[v]" -b:v 5M -crf 20 running_gear.webm
With just the alpha masking I can do:
ffmpeg -y -framerate 20 -f image2 -i running_gear/%04d.png \
-start_number 0 -vframes 120 \
-filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [vc];[vc]alphaextract[a]" \
-map [a] -b:v 5M -crf 20 alpha.webm
So I am trying to do it so the alpha mask is done at the same time.
Although my ultimate ideal would be to take the images, reverse them, get an alpha mask, and put the two side by side so the result can be used in Ren'Py.
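A sketch of one way the two graphs might be combined (not from the original thread): a filtergraph label can only be consumed once, so the concatenated stream has to be split before one copy feeds alphaextract. Assuming the same 60 input frames as above (the doubled output is then already 120 frames, so -vframes is unnecessary):

ffmpeg -y -framerate 20 -f image2 -start_number 0 -i running_gear/%04d.png \
-filter_complex "[0:v]reverse,fifo[r];[0:v][r]concat=n=2:v=1,split[v][vc];[vc]alphaextract[a]" \
-map "[v]" -b:v 5M -crf 20 running_gear.webm \
-map "[a]" -b:v 5M -crf 20 running_gear-alpha.webm

For the side-by-side Ren'Py variant, the [v] and [a] labels could instead be joined with hstack into a single output, after converting both branches to the same pixel format.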
-
Remove random background from video using ffmpeg or Python
20 April 2024, by Raheel Shahzad
I want to remove the background from a person's video using ffmpeg or Python. If I record a video at any place, I want to detect the person in the video and then remove everything except that person. I am not asking about a green or single-color background, as that can be done with chroma key, and that is not what I am looking for.

I've tried this approach (https://tryolabs.com/blog/2018/04/17/announcing-luminoth-0-1/), but it gives me a rectangular box as output. That is informative enough, since it narrows down the area to explore, but I still need to remove the rest of the background.
I've also tried grabcut (https://docs.opencv.org/4.1.0/d8/d83/tutorial_py_grabcut.html), but that needs user interaction, otherwise the result isn't good.
I've also tried to use ffmpeg and found this example (http://oioiiooixiii.blogspot.com/2016/09/ffmpeg-extract-foreground-moving.html), but it needs a still image, so I took a background picture before recording the video with the person; still, many things are required to take the difference between the background image and a video frame.

For the opencv approach, I've tried this:
import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt

img = cv.imread('pic.png')
# mask and the two model buffers required by grabCut
mask = np.zeros(img.shape[:2], np.uint8)
bgdModel = np.zeros((1, 65), np.float64)
fgdModel = np.zeros((1, 65), np.float64)
# rough bounding box around the person: (x, y, w, h)
rect = (39, 355, 1977, 2638)
cv.grabCut(img, mask, rect, bgdModel, fgdModel, 5, cv.GC_INIT_WITH_RECT)
# pixels labelled definite/probable background become 0, the rest 1
mask2 = np.where((mask == 2) | (mask == 0), 0, 1).astype('uint8')
img = img * mask2[:, :, np.newaxis]
plt.imshow(img), plt.colorbar(), plt.show()

But it also removes some parts of the person.
I also tried the ffmpeg way, but without a good result:

ffmpeg -report -y -i "img.jpg" -i "vid.mov" -filter_complex "[1:v]format=yuva444p,lut=c3=128[video2withAlpha],[0:v][video2withAlpha]blend=all_mode=difference[out]" -map "[out]" "output.mp4"

All I need is a person's image/video taken against any normal background, without user interaction such as area selection or anything like that. Luminoth has trained data, but it gives a box around the person rather than the exact person outline, so I cannot use it alone to remove the background. Any help or guidance on removing the background will be appreciated.
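A sketch of the difference-mask direction in ffmpeg (not from the original post, and only workable with a completely static camera): assuming a clean background plate bg.png shot from the same position and at the same resolution as vid.mov, the per-pixel difference can be thresholded into a mask and attached as an alpha channel. The threshold 24 is a guess to tune, and the approach is purely photometric rather than semantic, so shadows and lighting changes will leak through:

ffmpeg -i vid.mov -i bg.png -filter_complex \
"[0:v]split[base][ref]; \
 [ref][1:v]blend=all_mode=difference,format=gray,geq=lum='if(gt(lum(X,Y),24),255,0)'[mask]; \
 [base][mask]alphamerge[cut]" \
-map "[cut]" -c:v libvpx-vp9 -pix_fmt yuva420p person_only.webm

For true person segmentation without a background plate, a pretrained segmentation model is the usual route, since no pixel-difference trick can tell a person apart from other scene changes.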