
Other articles (68)
-
User profiles
12 April 2011 — Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
The user can reach profile editing from their author page; a "Modifier votre profil" link in the navigation is (...) -
Configuring language support
15 November 2010 — Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administrer" section of the site.
From there, in the navigation menu, you can reach a "Gestion des langues" section that lets you enable support for new languages.
Each newly added language can still be disabled as long as no object has been created in that language; once one has, it becomes greyed out in the configuration and (...) -
Automatic backup of SPIP channels
1 April 2010 — When setting up an open platform, it is important for hosts to have fairly regular backups in order to cope with any problem that might arise.
This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (the documents, the elements (...)
On other sites (5916)
-
javacv and moviepy comparison for video generation
15 September 2024, by Vikram — I am trying to generate a video from images, where each image has some overlaid text and PNG icons. I am using the javacv library for this.
The final output video looks pixelated. I don't understand why, since I have no video-processing background; I am a beginner at this.
I know that the video bitrate and the choice of encoder are important factors for video quality, and that there are many other factors too.


I am providing two outputs for comparison: one generated with javacv and the other with the moviepy library.


Please watch them in full screen, since the problem I am describing only shows up there: you will see the pixels dancing in the javacv-generated video, while the Python output looks stable.


https://imgur.com/a/aowNnKg - javacv generated


https://imgur.com/a/eiLXrbk - Moviepy generated


I am using the same encoder in both implementations.


Encoder - libx264 (both)
Bitrate - 800 kbps for javacv, 500 kbps for moviepy
Frame rate - 24 fps for both
Output video size - 7 MB (javacv), 5 MB (moviepy)





The output generated by javacv is larger than the moviepy-generated video.


Here is my Java configuration for FFmpegFrameRecorder:


FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(this.outputPath,
        this.screensizeX, this.screensizeY);
if (this.videoCodecName != null && "libx264".equals(this.videoCodecName)) {
    recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
}
recorder.setFormat("mp4");
// planar 4:2:0 pixel format expected by libx264
recorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P);
recorder.setVideoBitrate(800000);          // 800 kbps
recorder.setImageWidth(this.screensizeX);  // height is already set via the constructor
recorder.setFrameRate(24);
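
For context, the recorder is driven roughly like this (a simplified sketch with illustrative names; my real code builds each frame from an image with the text and icon overlays):

 Java2DFrameConverter converter = new Java2DFrameConverter();
 recorder.start();
 for (BufferedImage image : renderedImages) {    // one pre-rendered image per output frame
     recorder.record(converter.convert(image));  // BufferedImage -> Frame, then encode
 }
 recorder.stop();
 recorder.release();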




And here is the Python configuration for writing the video file:


Full_final_clip.write_videofile(
    f"{video_folder_path}/{FILE_ID}_INTERMEDIATE.mp4",
    codec="libx264",
    audio_codec="aac",
    temp_audiofile=f"{FILE_ID}_INTER_temp-audio.m4a",
    remove_temp=True,
    fps=24,
)




As you can see, I am not specifying a bitrate in Python, but I checked that the bitrate of the final output is around 500 kbps, which is lower than what I specified in Java; yet the Java-generated video quality seems poorer.


I have also tried setting a CRF value, but it seems to have no impact when used.


Increasing the bitrate improves quality somewhat, but at the cost of file size, and the output still looks pixelated.
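
For reference, one way to hand a CRF value to libx264 through FFmpegFrameRecorder is as a codec option (this is only a sketch of what I mean, not my exact code, and I am not sure how it interacts with the fixed bitrate set above):

 recorder.setVideoOption("crf", "23");      // lower CRF = better quality, larger file (example value)
 recorder.setVideoOption("preset", "slow"); // slower x264 preset = better compression at the same quality
 // presumably the fixed bitrate should not be set at the same time as CRF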


Can someone please point out what the issue might be, and how Python produces better-quality video, when both libraries use FFmpeg as the backend?


Edit 1: I am also adding the code used to create the zoom animation across consecutive frames, as I read somewhere that this might be the cause of the pixel jitter. Please have a look and let me know whether anything can be improved to remove the jittering.


private Mat applyZoomEffect(Mat frame, int currentFrame, long effectFrames, int width, int height, String mode, String position, double speed) {
    long totalFrames = effectFrames;
    double i = currentFrame;
    if ("out".equals(mode)) {
        // reverse the progression so the zoom shrinks instead of grows
        i = totalFrames - i;
    }
    // zoom factor grows linearly from 1.0 up to 1.0 + 0.1 * speed over the effect duration
    double zoom = 1 + (i * ((0.1 * speed) / totalFrames));

    // translation that keeps the chosen anchor point fixed while scaling
    double x = 0, y = 0;
    switch (position.toLowerCase()) {
        case "center":
            // keep the image centre in place: shift by half the size difference
            x = (width - (width * zoom)) / 2.0;
            y = (height - (height * zoom)) / 2.0;
            break;
    }

    // 2x3 affine matrix [ zoom 0 x ; 0 zoom y ]
    double[][] rowData = {{zoom, 0, x}, {0, zoom, y}};
    double[] flatData = flattenArray(rowData);

    // Create a DoublePointer from the flattened array
    DoublePointer doublePointer = new DoublePointer(flatData);

    // Create a 2x3 Mat of doubles and copy the data into it
    Mat mat = new Mat(2, 3, org.bytedeco.opencv.global.opencv_core.CV_64F); // CV_64F is for double type
    mat.data().put(doublePointer);

    Mat transformedFrame = new Mat();
    opencv_imgproc.warpAffine(frame, transformedFrame, mat, new Size(frame.cols(), frame.rows()),
            opencv_imgproc.INTER_LANCZOS4, 0, new Scalar(0, 0, 0, 0));
    return transformedFrame;
}
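
One idea I came across for reducing the shimmer (an untested sketch, not part of my current code) is to apply the zoom at a higher resolution and then downscale, so that sub-pixel motion gets averaged out:

 // Untested sketch: supersample the zoom to reduce per-frame aliasing ("pixel dancing")
 Mat big = new Mat();
 opencv_imgproc.resize(frame, big, new Size(frame.cols() * 2, frame.rows() * 2),
         0, 0, opencv_imgproc.INTER_CUBIC);             // work at 2x resolution
 Mat zoomedBig = applyZoomEffect(big, currentFrame, effectFrames,
         width * 2, height * 2, mode, position, speed); // same effect on the larger frame
 Mat result = new Mat();
 opencv_imgproc.resize(zoomedBig, result, new Size(frame.cols(), frame.rows()),
         0, 0, opencv_imgproc.INTER_AREA);              // downscale back, smoothing the jitter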



-
YUV Raw frames to video stream
30 September 2014, by Ahmed Nassar — I'm trying to stream raw YUV frames, held in an array generated by a C++ program, to video using FFmpeg. Can anyone point me in the right direction?
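(For reference, the kind of ffmpeg invocation I have been looking at is piping the raw frames into ffmpeg on stdin, something like the line below; the resolution, pixel format and frame rate are only placeholders for my data:)

 ffmpeg -f rawvideo -pixel_format yuv420p -video_size 1280x720 -framerate 25 -i - -c:v libx264 output.mp4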
-
How to draw a waveform from an RTSP audio using ffmpeg and Python
9 March 2023, by S Andrew — I have a Hikvision camera. Using ffmpeg, I can extract the audio from it and save it to a wav file with the code below:

import os
os.system("ffmpeg -i rtsp://admin:password@192.168.0.27:554/Streaming/Channels/101/ -q:a 0 -map a -t 10 file.wav")



It creates the file.wav file, and when played I can hear the audio recorded from the camera. Now I plan to draw the waveform of this audio. For this I have the code below:

os.system("ffmpeg -i rtsp://admin:password@192.168.0.27:554/Streaming/Channels/101/ -filter_complex showwavespic -frames:v 1 output.png")



and below is the output I get after pressing q


[q] command received. Exiting.

Finishing stream 0:0 without any data written to it.
Output #0, image2, to 'output.png':
 Metadata:
 title : Media Presentation
 encoder : Lavf59.26.100
 Stream #0:0: Video: png, rgba, 600x240 [SAR 1:1 DAR 5:2], q=2-31, 200 kb/s, 1 fps, 1 tbn
 Metadata:
 encoder : Lavc59.36.100 png
frame= 0 fps=0.0 q=0.0 Lsize=N/A time=00:00:00.00 bitrate=N/A speed= 0x 
video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Output file is empty, nothing was encoded (check -ss / -t / -frames parameters if used)



and no file is generated. I tried the above code with an mp3 file and it generated output.png with the waveform. How can I resolve the issue?
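
One thing I am considering (not verified yet) is that showwavespic only produces its picture once the input ends, and an RTSP stream never ends on its own, so bounding the duration the same way as for the wav extraction might help:

os.system("ffmpeg -i rtsp://admin:password@192.168.0.27:554/Streaming/Channels/101/ -t 10 -filter_complex showwavespic -frames:v 1 output.png")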