
Other articles (52)
-
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. To help us fix it, please provide the following information: the browser you are using, including the exact version; as precise a description of the problem as possible; if possible, the steps that led to the problem; and a link to the site/page in question.
If you think you have fixed the bug, fill in a ticket and attach a corrective patch to it.
You may also (...) -
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors can edit their information on the authors page -
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.
On other sites (15075)
-
Is there a way to horizontally flip video captured from the Flutter front camera?
5 May 2024, by JoyJoy
Basically, I'm trying to flip the video horizontally after capturing it from the Flutter front camera. I start recording, stop recording, flip the video, and pass it to another page. I'm fairly new and would appreciate any assistance, as my code isn't working.


I've tried doing so using the new ffmpeg_kit_flutter package:


Future<void> flipVideo(String inputPath, String outputPath) async {
  final ffmpegCommand = "-i $inputPath -vf hflip $outputPath";
  final session = FFmpegKit.executeAsync(ffmpegCommand);
  await session.then((session) async {
    final returnCode = await session.getReturnCode();
    if (ReturnCode.isSuccess(returnCode)) {
      print('Video flipping successful');
    } else {
      print('Video flipping failed: ${session.getAllLogs()}');
    }
  });
}

void stopVideoRecording() async {
  XFile videopath = await cameraController.stopVideoRecording();

  try {
    final Directory appDocDir = await getApplicationDocumentsDirectory();
    final String outputDirectory = appDocDir.path;
    final String timeStamp = DateTime.now().millisecondsSinceEpoch.toString();
    final String outputPath = '$outputDirectory/flipped_video_$timeStamp.mp4';

    await flipVideo(videopath.path, outputPath);

    // Once completed,
    Navigator.push(
        context,
        MaterialPageRoute(
            builder: (builder) => VideoViewPage(
                  path: File(outputPath),
                  fromFrontCamera: iscamerafront,
                  flash: flash,
                )));
    print('Video flipping completed');
  } catch (e) {
    print('Error flipping video: $e');
  }
}
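For reference, a minimal sketch of just the flip step, assuming the same ffmpeg_kit_flutter package used above (flipHorizontally is a hypothetical helper name, not part of the original post). FFmpegKit.execute does not resolve until the command has finished, so a single await is enough and the executeAsync/then combination is not needed:

import 'package:ffmpeg_kit_flutter/ffmpeg_kit.dart';
import 'package:ffmpeg_kit_flutter/return_code.dart';

Future<bool> flipHorizontally(String inputPath, String outputPath) async {
  // hflip mirrors each frame horizontally; the audio stream is copied as-is.
  final session = await FFmpegKit.execute(
      '-i "$inputPath" -vf hflip -c:a copy "$outputPath"');
  final returnCode = await session.getReturnCode();
  return ReturnCode.isSuccess(returnCode);
}

Navigation to VideoViewPage would then only happen once this returns true, so the next page never receives a half-written file.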


-
In Flutter, how to get image pixel dimensions
12 January 2024, by Pianone
My code here:


var response = await Dio().get(
    url,
    options: Options(responseType: ResponseType.bytes)
);
Uint8List? srcImage = Uint8List.fromList(response.data);
Uint8List? watermark = await captureWaterMark();
Image i = Image.memory(srcImage!);
// How can I get the pixel size of this image (Image i), such as 1920*1080,
// or just its width/height in pixels?
// ...tell me how to do it...
srcImage = await addWaterMarkByFfmpegCommand(srcImage, watermark);
final result = await ImageGallerySaver.saveImage(
  srcImage!, name: name,
);



I need to get the picture's pixel dimensions so that I can use them in the ffmpeg command. The function adds a watermark to srcImage, but because their pixel ratios are too different, the watermark does not adapt properly.


I tried to get the dimensions from ffmpeg... but I failed:


/// addWaterMark by using an ffmpeg command
Future<Uint8List?> addWaterMarkByFfmpegCommand(Uint8List srcImg, Uint8List watermark) async {
  try {
    final Directory tempDir = await Directory.systemTemp.createTemp();
    final File image1File = File('${tempDir.path}/srcImg.jpg');
    await image1File.writeAsBytes(srcImg);
    final File image2File = File('${tempDir.path}/watermark.png');
    await image2File.writeAsBytes(watermark);

    final String outputFilePath = '${tempDir.path}/output.jpg';
    // Once I have the srcImage dimensions, the iw*1 scale factor in the
    // command below will be replaced so the watermark is adapted to the source.
    final String command =
        '-i ${image1File.path} -i ${image2File.path} -filter_complex "[1:v]scale=iw*1:-1[v1];[0:v][v1]overlay=10:10" -frames:v 1 $outputFilePath';
    await FFmpegKit.execute(command);

    final File outputFile = File(outputFilePath);
    final Uint8List outputBytes = await outputFile.readAsBytes();
    return outputBytes;
  } catch (e) {
    print('Error executing ffmpeg command: $e');
  }
  return null;
}
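A minimal sketch of one way to read the pixel dimensions on the Dart side, assuming the downloaded bytes are a decodable image (JPEG/PNG); getImagePixelSize is a hypothetical helper name. decodeImageFromList from package:flutter/painting.dart decodes the bytes into a dart:ui Image whose width and height are in pixels:

import 'dart:typed_data';
import 'dart:ui' as ui;

import 'package:flutter/painting.dart';

Future<Size> getImagePixelSize(Uint8List bytes) async {
  // Decode the raw bytes; this works for the srcImage downloaded with Dio above.
  final ui.Image decoded = await decodeImageFromList(bytes);
  // width/height are pixel dimensions, e.g. 1920 x 1080.
  return Size(decoded.width.toDouble(), decoded.height.toDouble());
}

With the source width known, the hard-coded iw*1 factor in the scale filter above could be replaced with an absolute width derived from it, for example scale=${srcWidth ~/ 4}:-1 to make the watermark a quarter of the source width (the ratio here is an arbitrary example).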



PS: I am new to Flutter and ffmpeg, please help me. I'd appreciate it, thanks a lot.


-
How to Convert 16:9 Video to 9:16 Ratio While Ensuring Speaker Presence in Frame?
28 April 2024, by shreesha
I have tried many times to figure out the problem with detecting the face, and the result is also not smooth enough compared to other tools out there.


So basically I am using Python and YOLO in this project, but I want the person who is talking to be the ROI (region of interest).


Here is the code:


from ultralytics import YOLO
from ultralytics.engine.results import Results
from moviepy.editor import VideoFileClip, concatenate_videoclips
from moviepy.video.fx.crop import crop

# Load the YOLOv8 model
model = YOLO("yolov8n.pt")

# Load the input video
clip = VideoFileClip("short_test.mp4")

tracked_clips = []

for frame_no, frame in enumerate(clip.iter_frames()):
    # Process the frame
    results: list[Results] = model(frame)

    # Get the bounding box of the main object
    if results[0].boxes:
        objects = results[0].boxes
        # Take the detection with the highest confidence as the main one
        main_obj = max(objects, key=lambda x: x.conf)

        x1, y1, x2, y2 = [int(val) for val in main_obj.xyxy[0].tolist()]

        # Calculate the crop region based on the object's position and the target aspect ratio
        w, h = clip.size
        new_w = int(h * 9 / 16)
        new_h = h

        # Centre of the bounding box (midpoint, not the box width/height)
        x_center = (x1 + x2) // 2
        y_center = (y1 + y2) // 2

        # Adjust x_center and y_center if they would cause the crop region to exceed the bounds
        if x_center + (new_w / 2) > w:
            x_center -= x_center + (new_w / 2) - w
        elif x_center - (new_w / 2) < 0:
            x_center += abs(x_center - (new_w / 2))

        if y_center + (new_h / 2) > h:
            y_center -= y_center + (new_h / 2) - h
        elif y_center - (new_h / 2) < 0:
            y_center += abs(y_center - (new_h / 2))

        # Create a subclip for the current frame
        start_time = frame_no / clip.fps
        end_time = (frame_no + 1) / clip.fps
        subclip = clip.subclip(start_time, end_time)

        # Apply cropping using MoviePy
        cropped_clip = crop(
            subclip, x_center=x_center, y_center=y_center, width=new_w, height=new_h
        )

        tracked_clips.append(cropped_clip)

reframed_clip = concatenate_videoclips(tracked_clips, method="compose")
reframed_clip.write_videofile("output_video.mp4")
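One way to address the jitter mentioned above is to smooth the per-frame crop centres before cropping, for example with an exponential moving average. A minimal sketch, assuming `centers` is a list of the (x_center, y_center) values computed from the YOLO boxes in the loop above (smooth_centers is a hypothetical helper, not part of the original post):

def smooth_centers(centers, alpha=0.2):
    """Exponentially smooth a list of (x, y) crop centres; smaller alpha = smoother."""
    smoothed = []
    prev_x, prev_y = centers[0]
    for x, y in centers:
        prev_x = alpha * x + (1 - alpha) * prev_x
        prev_y = alpha * y + (1 - alpha) * prev_y
        smoothed.append((prev_x, prev_y))
    return smoothed

Collecting the centres in a first pass, smoothing them, and then building the cropped subclips in a second pass keeps the 9:16 window from snapping to every new detection.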



So basically I want to fix the face detection with ROI detection, so that it can detect the face, keep that face and the body in the frame, and make sure that the person who is speaking is brought into the frame.