
Media (91)
-
GetID3 - Additional buttons
9 April 2013
Updated: April 2013
Language: French
Type: Image
-
Core Media Video
4 April 2013
Updated: June 2013
Language: French
Type: Video
-
The Pirate Bay from Belgium
1 April 2013
Updated: April 2013
Language: French
Type: Image
-
Ogg detection bug
22 March 2013
Updated: April 2013
Language: French
Type: Video
-
Example of action buttons for a collaborative collection
27 February 2013
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013
Updated: February 2013
Language: English
Type: Image
Other articles (105)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out. -
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded as MP4, OGV and WebM for HTML5 playback, with MP4 also serving as the Flash fallback.
Audio files are encoded as MP3 and Ogg for HTML5 playback, with MP3 also serving as the Flash fallback.
Where possible, text is analyzed to extract the data needed by search engines, and is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
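For reference, conversions of this kind can be reproduced with a stock ffmpeg build. The commands below are only a rough sketch of the idea, not MediaSPIP's actual presets; the file names, bitrates and quality settings are assumptions.

ffmpeg -i source.mov -c:v libx264 -crf 22 -c:a aac -movflags +faststart output.mp4
ffmpeg -i source.mov -c:v libtheora -q:v 6 -c:a libvorbis output.ogv
ffmpeg -i source.mov -c:v libvpx -b:v 1.5M -c:a libvorbis output.webm
ffmpeg -i source.wav -c:a libmp3lame -q:a 3 output.mp3
ffmpeg -i source.wav -c:a libvorbis -q:a 5 output.ogg
-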
MediaSPIP Player: the controls
26 May 2010
Mouse controls for the player
In addition to clicking the visible buttons of the player interface, other actions can be performed with the mouse: Click: clicking on the video (or on the sound logo) starts or pauses playback, depending on its current state; Scroll wheel: when the mouse hovers over the area occupied by the media, the wheel no longer scrolls the page as usual but instead decreases or (...)
On other sites (11424)
-
How to get the exact position in the video as in the image view?
16 February 2024, by Dhruvisha Joshi
I want to offer a photo-editing feature in my app, so I let the user add text to a photo and then convert the photo to a video with an FFmpeg command.


Here is my command, which adds the text and converts the photo to a video.

ffmpeg -loop 1 -i /var/mobile/Containers/Data/Application/88F535C3-A300-456C-97BB-1A9B83EAEE7B/Documents/Compress_Picture/input.jpg -filter_complex "[0]scale=1080:trunc(ow/a/2)*2[video0];[video0]drawtext=text='Dyjfyjyrjyfjyfkyfk':fontfile=/private/var/containers/Bundle/Application/DE5C8DAA-4D66-4345-834A-89F8AC19DF9B/Clear Status.app/avenyt.ttf:fontsize=66.55112651646448:fontcolor=#FFFFFF:x=349.92:y=930.051993067591" -c:v libx264 -t 5 -pix_fmt yuv420p -y /var/mobile/Containers/Data/Application/88F535C3-A300-456C-97BB-1A9B83EAEE7B/Documents/Compress_Picture/output0.mp4


Here is my Swift code that generates the command.


var filterComplex = ""
var inputs = ""
var audioIndex = ""

if currentPhotoTextDataArray.contains(where: { $0.isLocation }) {
 // At least one element has isLocation set to true
 // Do something here
 print("There's at least one element with isLocation == true")
 inputs = "-i \(inputPath) -i \(self.locImagePath)"
 audioIndex = "2"
 
 } else {
 // No elements have isLocation == true
 print("No elements have isLocation set to true")
 inputs = "-i \(inputPath)"
 audioIndex = "1"
 }
 
 for (index, textData) in currentPhotoTextDataArray.enumerated() {
 print("x: \(textData.xPosition), y: \(textData.yPosition)")
 let x = (textData.xPosition) * 1080 / self.photoViewWidth
 let y = (textData.yPosition) * 1920 / self.photoViewHeight
 
 let fontSizeForWidth = (textData.fontSize * 1080) / self.photoViewWidth
 let fontSizeForHeight = (textData.fontSize * 1920) / self.photoViewHeight
 print("fontSizeForWidth: \(fontSizeForWidth)")
 print("fontSizeForHeight: \(fontSizeForHeight)")
 
 let fontPath = textData.font.fontPath
 let fontColor = textData.fontColor.toHexOrASS(format: "hex")
 let backColor = textData.backColor?.toHexOrASS(format: "hex")
 print("fontPath: \(fontPath)")
 print("fontColor: \(fontColor)")
 
 let breakedText = self.addBreaks(in: textData.text, with: UIFont(name: textData.font.fontName, size: fontSizeForHeight) ?? UIFont(), forWidth: 1080, padding: Int(x))
 
 if textData.isLocation {
 print("Location is there.")
 
 let textFont = UIFont(name: textData.font.fontName, size: fontSizeForHeight)
 let attributes: [NSAttributedString.Key: Any] = [NSAttributedString.Key.font: textFont ?? UIFont()]
 let size = (textData.text as NSString).size(withAttributes: attributes)
 let textWidth = Int(size.width) + 130
 
 var endTimeLoc = 0.0
 if let audioData = self.audioDataArray.first(where: { $0.photoIndex == mainIndex }) {
 let duration = audioData.audioEndTime - audioData.audioStartTime
 endTimeLoc = duration
 } else {
 endTimeLoc = 5
 }
 
 let layerFilter = "color=color=black@.38:size=\(textWidth)x130[layer0];[video\(index)][layer0]overlay=enable='between(t,0,\(endTimeLoc))':x=\(x):y=(\(y)-(overlay_h/2))[layer1];"
 filterComplex += layerFilter
 let imageFilter = "[1:v]scale=80:80[image];[layer1][image]overlay=enable='between(t,0,\(endTimeLoc))':x=\(x)+10:y=(\(y)-(overlay_h/2))[v\(index)];"
 filterComplex += imageFilter
 
 if index == currentPhotoTextDataArray.count - 1 {
 let textFilter = "[v\(index)]drawtext=text='\(breakedText)':fontfile=\(fontPath):fontsize=\(fontSizeForHeight):fontcolor=\(fontColor):x=(\(x)+100):y=(\(y)-(text_h/2))"
 filterComplex += textFilter
 } else {
 let textFilter = "[v\(index)]drawtext=text='\(breakedText)':fontfile=\(fontPath):fontsize=\(fontSizeForHeight):fontcolor=\(fontColor):x=(\(x)+100):y=(\(y)-(text_h/2))[video\(index + 1)];"
 filterComplex += textFilter
 }
 
 } else {
 
 let textBack = textData.backColor != nil ? ":box=1:boxcolor=\(backColor ?? "")@0.8:boxborderw=25" : ""
 
 if index == currentPhotoTextDataArray.count - 1 {
 let textFilter = "[video\(index)]drawtext=text='\(breakedText)':fontfile=\(fontPath):fontsize=\(fontSizeForHeight):fontcolor=\(fontColor):x=\(x):y=\(y)\(textBack)"
 filterComplex += textFilter
 } else {
 let textFilter = "[video\(index)]drawtext=text='\(breakedText)':fontfile=\(fontPath):fontsize=\(fontSizeForHeight):fontcolor=\(fontColor):x=\(x):y=\(y)\(textBack)[video\(index + 1)];"
 filterComplex += textFilter
 }
 }
 
 }
 
 if let audioData = self.audioDataArray.first(where: { $0.photoIndex == mainIndex }) {
 
 let audioSTime = self.getSTimeAudio(index: mainIndex, secondsPhoto: Int(audioData.audioStartTime))
 let audioETime = self.getETimeAudio(index: mainIndex, secondsPhoto: Int(audioData.audioEndTime))
 let duration = audioData.audioEndTime - audioData.audioStartTime
 
 command = "-loop 1 \(inputs) -ss \(audioSTime) -to \(audioETime) -i \"\(audioData.audioURL.path)\" -filter_complex \"[0]scale=1080:trunc(ow/a/2)*2[video0];\(filterComplex)[final_video]\" -map \"[final_video]\":v -map \(audioIndex):a -c:v libx264 -t \(duration) -pix_fmt yuv420p -y \(outputURL.path)"
 
 } else {
 command = "-loop 1 \(inputs) -filter_complex \"[0]scale=1080:trunc(ow/a/2)*2[video0];\(filterComplex)\" -c:v libx264 -t 5 -pix_fmt yuv420p -y \(outputURL.path)"
 }
 }



The text in the generated video does not appear at the exact position where the user placed it. If anyone knows what is wrong, please help me with this.
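One thing worth checking, offered only as a hedged suggestion: the filter graph scales the photo to a width of 1080 and a height derived from its aspect ratio (trunc(ow/a/2)*2), while the Swift code maps y with a fixed 1920/photoViewHeight factor. If the scaled frame is not exactly 1080x1920, or the editing view's aspect ratio differs from the output's, the two mappings disagree and the text drifts. One way to keep them in sync is to express the position as fractions of the scaled frame inside drawtext itself; in the sketch below the paths, font, fractions and font size are placeholders, not values from the question.

ffmpeg -loop 1 -i input.jpg -filter_complex "[0]scale=1080:trunc(ow/a/2)*2[v0];[v0]drawtext=text='Hello':fontfile=/path/to/font.ttf:fontsize=64:fontcolor=white:x=w*0.32:y=h*0.48" -c:v libx264 -t 5 -pix_fmt yuv420p -y output0.mp4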


-
FFmpeg video of images in a loop?
4 February 2024, by Daniel
I am trying to make a video from 2 images repeating in a loop:
image1.png should be shown for 1.2 seconds.
Then image2.png should be shown for 3 seconds.
The video should be 180 seconds long.
The PNG images have different resolutions: image1 is smaller and image2 is 1080x1920. The video should use the resolution of image2, and image1 should be shown at its original size, not stretched.


ffmpeg -loop 1 -t 1.2 -i image1.png -loop 1 -t 3 -i image2.png -filter_complex "[0:v]scale=1920:1080:force_original_aspect_ratio=decrease[img1];[1:v][img1]overlay=eof_action=repeat[video]" -map "[video]" -t 180 -r 30 -y output.mp4


The output is 3 seconds long and shows only image1. Why?
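Not a definitive answer, but one common pattern is to render a single 4.2-second segment first and then repeat that segment until the 180-second mark. The sketch below assumes image1 fits inside 1080x1920 (it is padded onto a centred 1080x1920 canvas rather than scaled) and uses an arbitrary 30 fps; the file names are placeholders.

ffmpeg -loop 1 -t 1.2 -framerate 30 -i image1.png -loop 1 -t 3 -framerate 30 -i image2.png -filter_complex "[0:v]pad=1080:1920:(ow-iw)/2:(oh-ih)/2,setsar=1[a];[1:v]setsar=1[b];[a][b]concat=n=2:v=1:a=0[v]" -map "[v]" -pix_fmt yuv420p -y segment.mp4
ffmpeg -stream_loop -1 -i segment.mp4 -t 180 -c copy -y output.mp4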


-
Use FFmpeg to crop a video to square format in Android
22 August 2016, by Cédric Portmann
I am currently working on an app that is supposed to crop a 16:9 video into a 1:1 video. However, I can't get the code working. If possible, the software should convert the input video as fast as possible. The resolution can vary between 480x480 and 720x720.
If I could choose the position of the crop frame, it would be perfect.
The error I get:
E/FFmpeg: Exception while trying to run: [Ljava.lang.String;@f0c91b8
java.io.IOException: Error running exec(). Command: [/data/user/0/com.android.grafika/files/ffmpeg, -i /storage/emulated/0/Alphacrypt1.mp4 -vf crop=640:256:0:400 -threads 5 -preset ultrafast -strict -2 /storage/emulated/0/YourCroppedMovie.mp4] Working Directory: null Environment: null
at java.lang.ProcessManager.exec(ProcessManager.java:215)
at java.lang.Runtime.exec(Runtime.java:174)
at java.lang.Runtime.exec(Runtime.java:129)
at com.github.hiteshsondhi88.libffmpeg.ShellCommand.run(ShellCommand.java:10)
at com.github.hiteshsondhi88.libffmpeg.FFmpegExecuteAsyncTask.doInBackground(FFmpegExecuteAsyncTask.java:38)
at com.github.hiteshsondhi88.libffmpeg.FFmpegExecuteAsyncTask.doInBackground(FFmpegExecuteAsyncTask.java:10)
at android.os.AsyncTask$2.call(AsyncTask.java:295)
at java.util.concurrent.FutureTask.run(FutureTask.java:237)
at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:234)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1113)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:588)
at java.lang.Thread.run(Thread.java:818)
Caused by: java.io.IOException: No such file or directory
at java.lang.ProcessManager.exec(Native Method)
at java.lang.ProcessManager.exec(ProcessManager.java:213)
at java.lang.Runtime.exec(Runtime.java:174)
at java.lang.Runtime.exec(Runtime.java:129)
at com.github.hiteshsondhi88.libffmpeg.ShellCommand.run(ShellCommand.java:10)
at com.github.hiteshsondhi88.libffmpeg.FFmpegExecuteAsyncTask.doInBackground(FFmpegExecuteAsyncTask.java:38)
at com.github.hiteshsondhi88.libffmpeg.FFmpegExecuteAsyncTask.doInBackground(FFmpegExecuteAsyncTask.java:10)
at android.os.AsyncTask$2.call(AsyncTask.java:295)
at java.util.concurrent.FutureTask.run(FutureTask.java:237)
at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:234)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1113)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:588)
at java.lang.Thread.run(Thread.java:818)
The code I am using:
final String[] cmd = new String[]{"-i /storage/emulated/0/Alphacrypt1.mp4 -vf crop=640:256:0:400 -threads 5 -preset ultrafast -strict -2 /storage/emulated/0/YourCroppedMovie.mp4"};
try {
    final FFmpeg ffmpeg = FFmpeg.getInstance(this);
    ffmpeg.execute(cmd, new FFmpegExecuteResponseHandler() {
        @Override
        public void onSuccess(String message) {
            Toast.makeText(getApplicationContext(), "Successfully converted!",
                    Toast.LENGTH_LONG).show();
        }

        @Override
        public void onProgress(String message) {
        }

        @Override
        public void onFailure(String message) {
            Toast.makeText(getApplicationContext(), "Fail!" + message,
                    Toast.LENGTH_LONG).show();
        }

        @Override
        public void onStart() {
            Toast.makeText(getApplicationContext(), "Started!",
                    Toast.LENGTH_LONG).show();
        }

        @Override
        public void onFinish() {
            Toast.makeText(getApplicationContext(), "Stopped!",
                    Toast.LENGTH_LONG).show();
        }
    });
} catch (FFmpegCommandAlreadyRunningException e) {
}

Thank you for your help.
SOLUTION:
- I did not add the "Load Binary" part, which is necessary to run the FFmpeg library (http://writingminds.github.io/ffmpeg-android-java/).
- The command needs to be split using .split(" "), as printfmyname already pointed out.
- For now I use -vf crop=1080:1080:0:0 to crop the video to a square (without messing up the ratio).