
Media (91)
-
Géodiversité
9 September 2011
Updated: August 2018
Language: French
Type: Text
-
USGS Real-time Earthquakes
8 September 2011
Updated: September 2011
Language: French
Type: Text
-
SWFUpload Process
6 September 2011
Updated: September 2011
Language: French
Type: Text
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
-
Podcasting Legal guide
16 May 2011
Updated: May 2011
Language: English
Type: Text
-
Creative Commons informational flyer
16 May 2011
Updated: July 2013
Language: English
Type: Text
Other articles (93)
-
Using it, talking about it, critiquing it
10 April 2011
The first thing to do is to talk about it, either directly with the people involved in its development, or with those around you to convince new people to use it.
The larger the community, the faster the project will evolve ...
A discussion list is available for any exchange between users.
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...)
-
Authorizations overridden by plugins
27 April 2010
MediaSPIP core:
autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page
On other sites (13653)
-
What is the least CPU-intensive format to pass high-resolution frames from ffmpeg to OpenCV? [closed]
3 October 2024, by Doctico
I'm developing an application to process a high-resolution (2560x1440) RTSP stream from an IP camera using OpenCV.


What I've Tried

- OpenCV's VideoCapture:
  - Performance was poor, even with CAP_PROP_FFMPEG (a minimal sketch of this approach follows the list).
- FFmpeg with MJPEG:
  - Decoded the stream as MJPEG and created OpenCV Mats from the image2pipe JPEG buffer.
  - Resulted in lower CPU usage for OpenCV but higher CPU usage for FFmpeg.
- Current approach:
  - Output raw video in YUV420p format from FFmpeg.
  - Construct OpenCV Mats from each frame buffer.
  - Achieves low FFmpeg CPU usage and moderately high OpenCV CPU usage.

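For reference, here is a minimal sketch of the VideoCapture baseline from the first item, assuming OpenCV's FFmpeg backend is selected explicitly via cv2.CAP_FFMPEG; the URL handling and per-frame work are placeholders, not code from the question:

import cv2

def stream_with_videocapture(rtsp_url):
    # Open the RTSP stream through OpenCV's FFmpeg capture backend
    cap = cv2.VideoCapture(rtsp_url, cv2.CAP_FFMPEG)
    if not cap.isOpened():
        raise RuntimeError(f"Could not open stream: {rtsp_url}")

    while True:
        ok, frame = cap.read()  # frame arrives already decoded to BGR
        if not ok:
            break
        # Placeholder for per-frame processing
        print(frame.shape)

    cap.release()

In this variant all decoding and colour conversion happen inside OpenCV's capture backend; the question reports that this performed poorly, which motivated the pipe-based approaches below.
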
Current Implementation


import subprocess
import cv2
import numpy as np

def stream_rtsp(rtsp_url):
    # FFmpeg command to stream RTSP and output raw frames to a pipe
    ffmpeg_command = [
        'ffmpeg',
        '-hwaccel', 'auto',
        '-i', rtsp_url,
        '-pix_fmt', 'yuv420p',  # Use YUV420p format
        '-vcodec', 'rawvideo',
        '-an',                  # Disable audio
        '-sn',                  # Disable subtitles
        '-f', 'rawvideo',
        '-'                     # Output to pipe
    ]

    # Start FFmpeg process
    process = subprocess.Popen(ffmpeg_command, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

    # Frame dimensions
    width, height = 2560, 1440
    frame_size = width * height * 3 // 2  # YUV420p uses 1.5 bytes per pixel

    while True:
        # Read one raw frame from FFmpeg's stdout
        raw_frame = process.stdout.read(frame_size)
        if len(raw_frame) < frame_size:  # end of stream or truncated read
            break

        # Reinterpret the buffer as a planar YUV420 image and convert to BGR
        yuv = np.frombuffer(raw_frame, np.uint8).reshape((height * 3 // 2, width))
        frame = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR_I420)

        processFrame(frame)

    # Clean up
    process.terminate()
    cv2.destroyAllWindows()


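A hedged usage sketch for the function above, with a placeholder processFrame (the question does not show the real per-frame processing, and the camera URL here is hypothetical):

def processFrame(frame):
    # Placeholder: stand-in for the application's actual per-frame processing
    print(frame.shape)

if __name__ == "__main__":
    # Hypothetical RTSP endpoint; replace with the real camera URL
    stream_rtsp("rtsp://192.168.1.10:554/stream1")
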

Question


Are there any other ways to improve performance when processing high-resolution frames from an RTSP stream?


-
Output resolution substantially below input resolution [closed]
29 August 2024, by bobford
I have written an Android app that overlays a watermark PNG using FFmpeg 6.0; it works fine on 1k and 4k videos. In both cases the output resolution substantially deteriorates, albeit consistently with the reduction in file size. In both cases, the original width and height in pixels are retained.


The ffmpeg command is:


String[] array = new String[] {"-i ", inputFile, " -i ", watermarkFile, " -filter_complex ", overlayPosition, " -codec:a copy ", outputFile};
String delimiter = "";
String command = String.join(delimiter, array);



I would like to retain the original resolution, or come as close to it as possible, even at the cost of a larger file size.
It would seem there are default parameters that I am unaware of, and I have absolutely no idea how to find them, even after extensive searching. Thank you for your help!


-
FFMPEG in Android Kotlin - processed video should have specific resolution
31 May 2024, by Utsav
I'm recording video from both the front and back cameras, which gives me a PIP video and a horizontally stacked video. I then need to merge the two. The problem is that merging requires both videos (PIP and stacked) to have the same resolution and aspect ratio, which is not the case. So the FFmpeg commands executed in code to generate these two videos need to be modified so that the resolution and aspect ratio match.


//app -> build.gradle
implementation "com.writingminds:FFmpegAndroid:0.3.2"



private fun connectFfmPeg() {
    val overlayX = 10
    val overlayY = 10
    val overlayWidth = 200
    val overlayHeight = 350

    outputFile1 = createVideoPath().absolutePath
    outputFile2 = createVideoPath().absolutePath

    // Command to generate the PIP video
    val cmd1 = arrayOf(
        "-y",
        "-i",
        videoPath1,
        "-i",
        videoPath2,
        "-filter_complex",
        "[1:v]scale=$overlayWidth:$overlayHeight [pip]; [0:v][pip] overlay=$overlayX:$overlayY",
        "-preset",
        "ultrafast",
        outputFile1
    )

    // Command to generate the horizontally stacked video
    val cmd2 = arrayOf(
        "-y",
        "-i",
        videoPath1,
        "-i",
        videoPath2,
        "-filter_complex",
        "hstack",
        "-preset",
        "ultrafast",
        outputFile2
    )

    val ffmpeg = FFmpeg.getInstance(this)
    // Both commands are executed here.
    // The execution code is OK and omitted for brevity.
}



Here is mergeVideos(), which is executed last.

private fun mergeVideos(ffmpeg: FFmpeg) {
    // Sample command:
    /*
    ffmpeg -y -i output_a.mp4 -i output_b.mp4 \
      -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[outv][outa]" \
      -map "[outv]" -map "[outa]" -preset "ultrafast" output.mp4
    */
    finalOutputFile = createVideoPath().absolutePath

    val cmd = arrayOf(
        "-y",
        "-i",
        outputFile1,
        "-i",
        outputFile2,
        "-filter_complex",
        "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[outv][outa]",
        "-map", "[outv]",
        "-map", "[outa]",
        "-preset", "ultrafast",
        finalOutputFile
    )
    // Execution code omitted for brevity
}



Error: Upon execution of mergeVideos(), neither the progress nor the failure callback is called. Logcat stays where it is, and the app does not crash either.

Possible solution:
Once the generated PIP and horizontally stacked videos were in my device's local storage, I moved them to my laptop and tried out some FFmpeg commands at the prompt to process them, and it works on the command line:


//First two commands can't be executed in Kotlin code
//This is the main problem
ffmpeg -i v1.mp4 -vf "scale=640:640,setdar=1:1" output_a.mp4
ffmpeg -i v2.mp4 -vf "scale=640:640,setdar=1:1" output_b.mp4
ffmpeg -y -i output_a.mp4 -i output_b.mp4 -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[outv][outa]" -map "[outv]" -map "[outa]" -preset "ultrafast" output.mp4
//Merge is successful via command prompt



Please suggest a solution.