
Media (91)
-
#3 The Safest Place
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#2 Typewriter Dance
15 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#1 The Wires
11 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
ED-ME-5 1-DVD
11 October 2011, by
Updated: October 2011
Language: English
Type: Audio
-
Revolution of Open-source and film making towards open film making
6 October 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (45)
-
Websites made with MediaSPIP
2 May 2011, by
This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011, by
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things):
- implementation costs to be shared between several different projects / individuals
- rapid deployment of multiple unique sites
- creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
-
Customising categories
21 June 2013, by
Category creation form
For those who know SPIP well, a category can be thought of as a rubrique (section).
For a document of type "category", the default fields are: Text
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of type "media", the fields not displayed by default are: Short description
It is also in this configuration section that you can specify the (...)
On other sites (9113)
-
FFmpeg: use audio volume as a param for filters
2 September 2016, by user3878395
Can someone help me with FFmpeg's filter_complex? I need to use the volume of an audio channel as a parameter for filters (logo size dependent on audio volume). I have this command, and I need it to change the size of [3:v] (logo.png) according to the audio volume from [2:a] (test.mp3):
bin/ffmpeg -loop 1 -f image2 -i img.jpg -i hp.png -ss 20 -t 4 -i test.mp3 -i logo.png -filter_complex "[2:a]showwaves=s=780x140:mode=p2p,format=rgba,colorkey=black[sw];[0:v]scale=1920:-1,crop=iw:1080[bg];[bg][sw]overlay=(main_w/2-overlay_w/2):(main_h/2-overlay_h/2+200),format=yuv420p[ff];[ff][1:v]overlay=(main_w/2-overlay_w/2+30):(main_h/2-overlay_h/2-50),format=yuv420p[ff2];[ff2][3:v]overlay=(main_w/2-overlay_w/2+30):(main_h/2-overlay_h/2-50),format=yuv420p" -shortest -y -c:a aac -vcodec libx264 -strict experimental -b:a 192k video.mp4
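FFmpeg's filter graph has no built-in way to feed an audio stream's measured volume into an overlay or scale expression, so a logo whose size follows the volume frame by frame generally needs an external controller (for instance the sendcmd or zmq filters driven by a script of your own). If a single size per clip is enough, a hedged two-pass workaround is to measure the volume with the volumedetect filter and substitute a factor computed from it into the logo's scale. A minimal sketch reusing test.mp3, img.jpg and logo.png from the question, where 0.8 stands in for a hypothetical value your script would derive from the reported mean_volume:
bin/ffmpeg -i test.mp3 -af volumedetect -f null -
bin/ffmpeg -loop 1 -f image2 -i img.jpg -i logo.png -filter_complex "[1:v]scale=iw*0.8:-1[logo];[0:v][logo]overlay=(main_w/2-overlay_w/2):(main_h/2-overlay_h/2),format=yuv420p" -t 4 -y sized_logo.mp4
The first pass prints mean_volume and max_volume to stderr; the second applies the factor you derive from them.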
-
FFmpeg: video of images in a loop?
4 February 2024, by Daniel
I am trying to make a video from two images repeating in a loop:
image1.png should be shown for 1.2 seconds,
then image2.png for 3 seconds.
The video should be 180 seconds long.
The PNG images have different resolutions: image1 is smaller, image2 is 1080*1920. The video should use image2's resolution, and image1 should be shown at its original size, not stretched.


ffmpeg -loop 1 -t 1.2 -i image1.png -loop 1 -t 3 -i image2.png -filter_complex "[0:v]scale=1920:1080:force_original_aspect_ratio=decrease[img1];[1:v][img1]overlay=eof_action=repeat[video]" -map "[video]" -t 180 -r 30 -y output.mp4


The output is 3 seconds long and shows only image1. Why?
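The command in the question never alternates the two images: the overlay filter just stacks one looped stream on top of the other, and the output stops when the shorter pair of inputs runs out, hence the 3-second result. One hedged way to get the alternation, assuming image1.png is no larger than 1080x1920 (pad fails otherwise): pad it to image2's canvas without scaling, concatenate the two segments, and repeat them with the loop filter (4.2 s at 25 fps is 105 frames):
ffmpeg -loop 1 -t 1.2 -i image1.png -loop 1 -t 3 -i image2.png -filter_complex "[0:v]pad=1080:1920:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=25[a];[1:v]setsar=1,fps=25[b];[a][b]concat=n=2:v=1:a=0,loop=loop=-1:size=105:start=0[video]" -map "[video]" -t 180 -r 30 -pix_fmt yuv420p -y output.mp4
The loop filter repeats the 105 buffered frames indefinitely, and -t 180 caps the output at the requested length.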


-
How to get the exact position in the video as in the image view?
16 February 2024, by Dhruvisha Joshi
I want to offer a photo-editing feature in my app, so I am letting the user add text to a photo and then converting it to a video using an FFmpeg command.


Here is my command, which adds the text and converts the photo to a video:

ffmpeg -loop 1 -i /var/mobile/Containers/Data/Application/88F535C3-A300-456C-97BB-1A9B83EAEE7B/Documents/Compress_Picture/input.jpg -filter_complex "[0]scale=1080:trunc(ow/a/2)*2[video0];[video0]drawtext=text='Dyjfyjyrjyfjyfkyfk':fontfile=/private/var/containers/Bundle/Application/DE5C8DAA-4D66-4345-834A-89F8AC19DF9B/Clear Status.app/avenyt.ttf:fontsize=66.55112651646448:fontcolor=#FFFFFF:x=349.92:y=930.051993067591" -c:v libx264 -t 5 -pix_fmt yuv420p -y /var/mobile/Containers/Data/Application/88F535C3-A300-456C-97BB-1A9B83EAEE7B/Documents/Compress_Picture/output0.mp4


Here is my Swift code that generates the command:


var filterComplex = ""
var inputs = ""
var audioIndex = ""

if currentPhotoTextDataArray.contains(where: { $0.isLocation }) {
    // At least one element has isLocation == true, so the location
    // icon becomes a second input and shifts the audio stream index.
    print("There's at least one element with isLocation == true")
    inputs = "-i \(inputPath) -i \(self.locImagePath)"
    audioIndex = "2"
} else {
    print("No elements have isLocation set to true")
    inputs = "-i \(inputPath)"
    audioIndex = "1"
}

for (index, textData) in currentPhotoTextDataArray.enumerated() {
    print("x: \(textData.xPosition), y: \(textData.yPosition)")
    // Map view coordinates and font size to the 1080x1920 output.
    let x = (textData.xPosition) * 1080 / self.photoViewWidth
    let y = (textData.yPosition) * 1920 / self.photoViewHeight

    let fontSizeForWidth = (textData.fontSize * 1080) / self.photoViewWidth
    let fontSizeForHeight = (textData.fontSize * 1920) / self.photoViewHeight
    print("fontSizeForWidth: \(fontSizeForWidth)")
    print("fontSizeForHeight: \(fontSizeForHeight)")

    let fontPath = textData.font.fontPath
    let fontColor = textData.fontColor.toHexOrASS(format: "hex")
    let backColor = textData.backColor?.toHexOrASS(format: "hex")
    print("fontPath: \(fontPath)")
    print("fontColor: \(fontColor)")

    let breakedText = self.addBreaks(in: textData.text, with: UIFont(name: textData.font.fontName, size: fontSizeForHeight) ?? UIFont(), forWidth: 1080, padding: Int(x))

    if textData.isLocation {
        print("Location is there.")

        // Measure the rendered text to size the background strip.
        let textFont = UIFont(name: textData.font.fontName, size: fontSizeForHeight)
        let attributes: [NSAttributedString.Key: Any] = [NSAttributedString.Key.font: textFont ?? UIFont()]
        let size = (textData.text as NSString).size(withAttributes: attributes)
        let textWidth = Int(size.width) + 130

        var endTimeLoc = 0.0
        if let audioData = self.audioDataArray.first(where: { $0.photoIndex == mainIndex }) {
            endTimeLoc = audioData.audioEndTime - audioData.audioStartTime
        } else {
            endTimeLoc = 5
        }

        // Dark strip behind the location label, then the 80x80 icon on top.
        let layerFilter = "color=color=black@.38:size=\(textWidth)x130[layer0];[video\(index)][layer0]overlay=enable='between(t,0,\(endTimeLoc))':x=\(x):y=(\(y)-(overlay_h/2))[layer1];"
        filterComplex += layerFilter
        let imageFilter = "[1:v]scale=80:80[image];[layer1][image]overlay=enable='between(t,0,\(endTimeLoc))':x=\(x)+10:y=(\(y)-(overlay_h/2))[v\(index)];"
        filterComplex += imageFilter

        if index == currentPhotoTextDataArray.count - 1 {
            // Last filter in the chain: no trailing output label.
            filterComplex += "[v\(index)]drawtext=text='\(breakedText)':fontfile=\(fontPath):fontsize=\(fontSizeForHeight):fontcolor=\(fontColor):x=(\(x)+100):y=(\(y)-(text_h/2))"
        } else {
            filterComplex += "[v\(index)]drawtext=text='\(breakedText)':fontfile=\(fontPath):fontsize=\(fontSizeForHeight):fontcolor=\(fontColor):x=(\(x)+100):y=(\(y)-(text_h/2))[video\(index + 1)];"
        }
    } else {
        let textBack = textData.backColor != nil ? ":box=1:boxcolor=\(backColor ?? "")@0.8:boxborderw=25" : ""

        if index == currentPhotoTextDataArray.count - 1 {
            filterComplex += "[video\(index)]drawtext=text='\(breakedText)':fontfile=\(fontPath):fontsize=\(fontSizeForHeight):fontcolor=\(fontColor):x=\(x):y=\(y)\(textBack)"
        } else {
            filterComplex += "[video\(index)]drawtext=text='\(breakedText)':fontfile=\(fontPath):fontsize=\(fontSizeForHeight):fontcolor=\(fontColor):x=\(x):y=\(y)\(textBack)[video\(index + 1)];"
        }
    }
}

if let audioData = self.audioDataArray.first(where: { $0.photoIndex == mainIndex }) {
    let audioSTime = self.getSTimeAudio(index: mainIndex, secondsPhoto: Int(audioData.audioStartTime))
    let audioETime = self.getETimeAudio(index: mainIndex, secondsPhoto: Int(audioData.audioEndTime))
    let duration = audioData.audioEndTime - audioData.audioStartTime

    command = "-loop 1 \(inputs) -ss \(audioSTime) -to \(audioETime) -i \"\(audioData.audioURL.path)\" -filter_complex \"[0]scale=1080:trunc(ow/a/2)*2[video0];\(filterComplex)[final_video]\" -map \"[final_video]\":v -map \(audioIndex):a -c:v libx264 -t \(duration) -pix_fmt yuv420p -y \(outputURL.path)"
} else {
    command = "-loop 1 \(inputs) -filter_complex \"[0]scale=1080:trunc(ow/a/2)*2[video0];\(filterComplex)\" -c:v libx264 -t 5 -pix_fmt yuv420p -y \(outputURL.path)"
}



The text in the generated video does not appear at the exact position where the user placed it. If anyone knows what is wrong, please help me with this.
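A likely cause, offered as a hedged guess since the view setup is not shown: the code above maps coordinates against the full view (photoViewWidth by photoViewHeight), but an aspect-fit photo usually occupies only part of its view, so every position picks up the letterboxing offset. A minimal Swift sketch of the mapping done against the rectangle the photo actually fills, using AVMakeRect; the function and parameter names are hypothetical stand-ins for the question's values:

import AVFoundation
import UIKit

// Maps a point from view coordinates to pixel coordinates in the
// output video, assuming the photo is displayed with .scaleAspectFit.
func videoPosition(for viewPoint: CGPoint,
                   imageSize: CGSize,   // the photo's pixel size
                   viewSize: CGSize,    // e.g. photoView.bounds.size
                   videoSize: CGSize) -> CGPoint {   // e.g. 1080 x 1920
    // The rectangle the aspect-fit photo actually occupies in the view.
    let fittedRect = AVMakeRect(aspectRatio: imageSize,
                                insideRect: CGRect(origin: .zero, size: viewSize))
    // Normalise the point inside that rectangle, then scale to video pixels.
    let relativeX = (viewPoint.x - fittedRect.minX) / fittedRect.width
    let relativeY = (viewPoint.y - fittedRect.minY) / fittedRect.height
    return CGPoint(x: relativeX * videoSize.width,
                   y: relativeY * videoSize.height)
}

Two further points worth checking: drawtext's x and y refer to the text's top-left corner rather than its centre, and the font size should be scaled by a single factor (videoSize.width divided by fittedRect.width) rather than separate width- and height-based factors, which disagree whenever the aspect ratios differ.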