
Other articles (73)
-
The user profile
12 April 2011, by
Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...) -
Configuring language support
15 November 2010, by
Accessing the configuration and adding supported languages
To enable support for new languages, you need to go to the "Administrer" section of the site.
From there, the navigation menu gives access to a "Gestion des langues" section where support for new languages can be enabled.
Each newly added language can still be disabled as long as no object has been created in that language. Once that happens, the language becomes greyed out in the configuration and (...) -
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed Médiaspip is at version 0.2 or later. If necessary, contact the administrator of your MédiaSpip to find out.
On other sites (6850)
-
ffmpeg converting video to images while video file is being written
20 December 2018, by user3398227
Hopefully an easy question for an ffmpeg expert!
I'm currently converting a large (6GB+) MPEG video into an image sequence, which works well with the ffmpeg command below:
ffmpeg -i "input.mpeg" -vf fps=fps=2 -f image2 -qscale 1 -s 1026x768 "output%6d.jpg"
However, I currently have to wait for the file to finish being written to disk before I kick off ffmpeg, and that takes a good hour or so. What I've noticed is that ffmpeg can start reading the file while it is still being written; the only snag is that it reaches the end of the file and stops before writing has finished...
The question is: is there a way for ffmpeg to convert to an image sequence at the same pace the video is being written (and not exit)? Or to know to wait for the next frame from the source? (Unfortunately the input doesn't support streaming; I only get a network drive and a file to work with.) I thought I read somewhere that ffmpeg can process at the video's frame rate, but I can't find that option for love or money in the documentation!
Thanks!
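As far as I know there is no documented ffmpeg option that makes a plain file input wait for more data. A commonly used workaround (a sketch under the assumption that the writer appends to the file sequentially) is to follow the growing file with tail and feed ffmpeg from a pipe:

```shell
# Sketch, not a documented ffmpeg feature: tail -c +1 -f outputs the whole
# file and then keeps following it, so ffmpeg blocks on the pipe instead of
# hitting end-of-file, and keeps converting as new data arrives.
# Stop tail once the writer has finished.
tail -c +1 -f "input.mpeg" | ffmpeg -i - -vf fps=fps=2 -f image2 -qscale 1 -s 1026x768 "output%6d.jpg"
```

This works here because an MPEG program stream can be demuxed from a non-seekable pipe; container formats that need an index at the end of the file (e.g. MP4) would not survive this treatment.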
-
Thumbnails from S3 Videos using FFMPEG - "No such file or directory : '/bin/ffmpeg'"
28 juin 2022, par NicoI am trying to generate thumbnails from videos in an S3 bucket every x frames by following this documentation : https://aws.amazon.com/blogs/media/processing-user-generated-content-using-aws-lambda-and-ffmpeg/


I am at the point where I'm testing the Lambda code provided in the documentation, but I receive this error in the CloudWatch logs:

Here is the portion of the Lambda code associated with this error:

Any help is appreciated. Thanks!
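The error usually means the Lambda runtime cannot find the ffmpeg binary at the hard-coded path. With the layer approach from the linked AWS blog post, layer contents are mounted under /opt, so the binary typically ends up at /opt/bin/ffmpeg rather than /bin/ffmpeg. The exact path depends on how the layer zip was built, so treat the candidate paths below as assumptions; a small helper that resolves the binary before calling it avoids the opaque ENOENT crash:

```python
import os
import shutil

# Candidate locations for the ffmpeg binary. /opt/bin/ffmpeg is where a
# Lambda layer zip containing a top-level bin/ directory is mounted
# (assumption: adjust to match your layer archive's layout).
FFMPEG_CANDIDATES = ("/opt/bin/ffmpeg", "/opt/ffmpeg", "/usr/local/bin/ffmpeg")

def find_ffmpeg(candidates=FFMPEG_CANDIDATES):
    """Return the first existing executable ffmpeg path, else whatever is on PATH."""
    for path in candidates:
        if os.path.isfile(path) and os.access(path, os.X_OK):
            return path
    return shutil.which("ffmpeg")  # None if ffmpeg is nowhere to be found
```

In the handler you would then fail with a clear message (`if find_ffmpeg() is None: raise RuntimeError("ffmpeg not found; check the layer layout")`) instead of crashing inside `subprocess`.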


-
javacv and moviepy comparison for video generation
15 September 2024, by Vikram
I am trying to generate a video from images, where each image has some overlaid text and PNG icons. I am using the javacv library for this.
The final output video looks pixelated; I don't understand why, since I have no video-processing background and am a beginner at this.
I know that the video bitrate and the choice of encoder are important factors in video quality, and that there are many more.


For comparison I am providing the two outputs, one generated with javacv and the other with the moviepy library.

Please watch them full screen, since the problem I am talking about only shows up there: you will see the pixels dancing in the javacv-generated video, while the Python output looks stable.


https://imgur.com/a/aowNnKg - javacv generated


https://imgur.com/a/eiLXrbk - Moviepy generated


I am using the same encoder in both implementations:

Encoder: libx264
Bitrate: 800 kbps for javacv, 500 kbps for moviepy
Frame rate: 24 fps for both
Output video size: 7 MB (javacv), 5 MB (moviepy)

The output generated by javacv is larger than the moviepy-generated video.


Here is my Java configuration for FFmpegFrameRecorder:


FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(this.outputPath,
        this.screensizeX, this.screensizeY);
if (this.videoCodecName != null && "libx264".equals(this.videoCodecName)) {
    recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
}
recorder.setFormat("mp4");
// The avutil constant is AV_PIX_FMT_YUV420P (AV_PIX_FMT_YUV420 does not exist)
recorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P);
recorder.setVideoBitrate(800000);
recorder.setImageWidth(this.screensizeX);
recorder.setImageHeight(this.screensizeY); // height was missing alongside the width
recorder.setFrameRate(24);




And here is the Python configuration for writing the video file:


Full_final_clip.write_videofile(
 f"{video_folder_path}/{FILE_ID}_INTERMEDIATE.mp4",
 codec="libx264",
 audio_codec="aac",
 temp_audiofile=f"{FILE_ID}_INTER_temp-audio.m4a",
 remove_temp=True,
 fps=24,
 )




As you can see, I am not specifying a bitrate in Python, but I checked that the bitrate of the final output is around 500 kbps, which is lower than what I specified in Java, yet the Java-generated video looks worse.


I have also tried setting a CRF value, but it seems to have no effect when used.
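One possible explanation, based on how libx264's rate control works rather than on the poster's full code: when ffmpeg is given a target bitrate, it switches to bitrate-based rate control and any CRF value is ignored. With javacv that would mean dropping the setVideoBitrate(800000) call and passing the CRF as a codec option instead, for example:

```java
// Sketch: CRF rate control with the recorder from the question (javacv's
// FFmpegFrameRecorder). Assumption: setVideoBitrate is NOT called, since a
// target bitrate makes libx264 ignore the CRF.
recorder.setVideoOption("crf", "18");      // lower CRF = higher quality, larger file
recorder.setVideoOption("preset", "slow"); // slower presets compress more efficiently
```

With CRF the encoder spends bits where the zoom animation needs them instead of holding a fixed average rate, which is closer to what moviepy's defaults produce.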


Increasing the bitrate improves quality somewhat, but at the cost of file size, and the output still looks pixelated.


Can someone please point out what might be the issue, and how Python generates better-quality video when both libraries use FFmpeg under the hood?


Edit 1: I am also adding the code used to create the zoom animation over consecutive frames, as I read somewhere that this might be the cause of the pixel jitter. Please take a look and let me know if there is anything we can improve to remove the jitter.


private Mat applyZoomEffect(Mat frame, int currentFrame, long effectFrames, int width, int height, String mode, String position, double speed) {
    long totalFrames = effectFrames;
    double i = currentFrame;
    if ("out".equals(mode)) {
        i = totalFrames - i; // reverse the progress for a zoom-out
    }
    // Zoom factor grows linearly from 1.0 to 1.0 + 0.1 * speed over the effect.
    double zoom = 1 + (i * ((0.1 * speed) / totalFrames));

    // Translation that keeps the chosen anchor point fixed while scaling.
    double x = 0, y = 0;
    switch (position.toLowerCase()) {
        case "center":
            // (The original also computed originalCenterX - originalCenterX * zoom,
            // which is the same value; the duplicate assignments were dead code.)
            x = (width - (width * zoom)) / 2.0;
            y = (height - (height * zoom)) / 2.0;
            break;
    }

    // Build the 2x3 affine matrix [[zoom, 0, x], [0, zoom, y]].
    double[] flatData = flattenArray(new double[][] {{zoom, 0, x}, {0, zoom, y}});
    DoublePointer doublePointer = new DoublePointer(flatData);
    Mat mat = new Mat(2, 3, org.bytedeco.opencv.global.opencv_core.CV_64F); // CV_64F = double
    mat.data().put(doublePointer);

    Mat transformedFrame = new Mat();
    opencv_imgproc.warpAffine(frame, transformedFrame, mat, new Size(frame.cols(), frame.rows()),
            opencv_imgproc.INTER_LANCZOS4, 0, new Scalar(0, 0, 0, 0));
    return transformedFrame;
}
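One likely contributor to the dancing pixels (an observation about the resampling, not a confirmed diagnosis): the zoom moves the crop offset by a non-integer amount every frame, so each frame is Lanczos-resampled at a different sub-pixel phase, which shows up as shimmer on fine detail and is expensive for the encoder to track. A quick plain-Java check of the per-frame step, using the same formulas as applyZoomEffect; the parameter values (1280 px wide, a 48-frame effect, speed 1.0) are assumptions for illustration:

```java
// Plain-JDK check (no OpenCV needed) of how far the crop offset moves
// between consecutive frames with the zoom formula from the question.
public class ZoomStepCheck {

    // Same center-offset computation as the "center" case in applyZoomEffect.
    static double offsetAt(int frame, int width, long totalFrames, double speed) {
        double zoom = 1 + (frame * ((0.1 * speed) / totalFrames));
        return (width - (width * zoom)) / 2.0;
    }

    public static void main(String[] args) {
        int width = 1280;
        long totalFrames = 48; // e.g. a 2-second effect at 24 fps (assumed)
        double speed = 1.0;
        for (int f = 1; f <= 3; f++) {
            double step = offsetAt(f, width, totalFrames, speed)
                        - offsetAt(f - 1, width, totalFrames, speed);
            // Each frame shifts the image by -1.333... px: the fractional part
            // of the offset changes every frame, so every frame lands on a
            // different sub-pixel phase of the Lanczos kernel.
            System.out.printf("frame %d: step %.4f px%n", f, step);
        }
    }
}
```

A general mitigation (not specific to javacv) is to apply the zoom on a higher-resolution source and downscale once at the end, so the sub-pixel phase errors are shrunk along with the image.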