
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (88)
-
The plugin: Mutualisation management
2 March 2010, by
The mutualisation management plugin makes it possible to manage the various MediaSPIP channels from a master site. Its purpose is to provide a pure SPIP solution to replace the old one.
Basic installation
Install the SPIP files on the server.
Then add the "mutualisation" plugin at the root of the site, as described here.
Customize the central mes_options.php file as you wish. As an example, here is the one used by the mediaspip.net platform:
<?php (...) -
The plugin: Podcasts
14 July 2010, by
The podcasting problem is, once again, one that highlights the state of standardization of data transport on the Internet.
Two interesting formats exist: the one developed by Apple, strongly geared towards iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "open" and is notably backed by Yahoo and the Miro software.
File types supported in the feeds
Apple's format only allows the following types in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...) -
Customizable form
21 June 2013, by
This page presents the fields available in the media publication form and indicates the additional fields that can be added. Media creation form
For a media-type document, the fields offered by default are: Text; Enable/Disable the forum (the comment prompt can be disabled for each article); License; Add/remove authors; Tags
This form can be modified under:
Administration > Form template configuration. (...)
On other sites (8925)
-
What is the least CPU-intensive format to pass high-resolution frames from ffmpeg to OpenCV? [closed]
3 October 2024, by Doctico
I'm developing an application to process a high-resolution (2560x1440) RTSP stream from an IP camera using OpenCV.


What I've Tried


- OpenCV's VideoCapture:
  - Performance was poor, even with CAP_PROP_FFMPEG.
- FFmpeg with MJPEG (see the sketch after this list):
  - Decoded the stream as MJPEG and created OpenCV Mats from the image2pipe JPEG buffer.
  - Resulted in lower CPU usage for OpenCV but higher for FFmpeg.
- Current approach:
  - Output raw video in YUV420p format from FFmpeg.
  - Construct OpenCV Mats from each frame buffer.
  - Achieves low FFmpeg CPU usage and moderately high OpenCV CPU usage.
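
For reference, here is a minimal sketch of the MJPEG-over-pipe variant described above, assuming the same RTSP source and the caller-supplied processFrame() handler used in the implementation below; the exact FFmpeg flags and the JPEG-splitting logic are illustrative, not the original code:

import subprocess
import cv2
import numpy as np

def stream_mjpeg(rtsp_url):
    # The VideoCapture approach above, for comparison, would simply be:
    #   cap = cv2.VideoCapture(rtsp_url, cv2.CAP_FFMPEG)
    # Here, FFmpeg transcodes the stream to MJPEG and writes JPEGs to stdout
    ffmpeg_command = [
        'ffmpeg',
        '-i', rtsp_url,
        '-an', '-sn',        # disable audio and subtitles
        '-c:v', 'mjpeg',
        '-f', 'image2pipe',
        '-'                  # output to pipe
    ]
    process = subprocess.Popen(ffmpeg_command, stdout=subprocess.PIPE,
                               stderr=subprocess.DEVNULL)

    buffer = b''
    while True:
        chunk = process.stdout.read(4096)
        if not chunk:
            break
        buffer += chunk
        # Each JPEG frame starts with 0xFFD8 and ends with 0xFFD9
        start = buffer.find(b'\xff\xd8')
        end = buffer.find(b'\xff\xd9', start + 2)
        if start != -1 and end != -1:
            jpeg = buffer[start:end + 2]
            buffer = buffer[end + 2:]
            frame = cv2.imdecode(np.frombuffer(jpeg, np.uint8), cv2.IMREAD_COLOR)
            if frame is not None:
                processFrame(frame)  # caller's own processing, as below

    process.terminate()
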
Current Implementation


import subprocess
import cv2
import numpy as np

def stream_rtsp(rtsp_url):
    # FFmpeg command to stream RTSP and output to pipe
    ffmpeg_command = [
        'ffmpeg',
        '-hwaccel', 'auto',
        '-i', rtsp_url,
        '-pix_fmt', 'yuv420p',  # Use YUV420p format
        '-vcodec', 'rawvideo',
        '-an',                  # Disable audio
        '-sn',                  # Disable subtitles
        '-f', 'rawvideo',
        '-'                     # Output to pipe
    ]

    # Start FFmpeg process
    process = subprocess.Popen(ffmpeg_command, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

    # Frame dimensions
    width, height = 2560, 1440
    frame_size = width * height * 3 // 2  # YUV420p uses 1.5 bytes per pixel

    while True:
        # Read one raw video frame from FFmpeg output
        raw_frame = process.stdout.read(frame_size)
        if len(raw_frame) < frame_size:  # end of stream or incomplete frame
            break

        yuv = np.frombuffer(raw_frame, np.uint8).reshape((height * 3 // 2, width))
        frame = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR_I420)

        processFrame(frame)

    # Clean up
    process.terminate()
    cv2.destroyAllWindows()



Question


Are there any other ways to improve performance when processing high-resolution frames from an RTSP stream?


-
Output resolution substantially below input resolution [closed]
29 August 2024, by bobford
I have written an Android app that overlays a watermark PNG using FFmpeg 6.0, and it works fine on 1k and 4k videos. In both cases, however, the output resolution deteriorates substantially, albeit consistent with the reduction in file size. In both cases, the original width-height pixel sizes are retained.


The ffmpeg command is:


String[] array = new String[] {"-i ", inputFile, " -i ", watermarkFile, " -filter_complex ", overlayPosition, " -codec:a copy ", outputFile};
String delimiter = "";
String command = String.join(delimiter, array);



I would like to retain the original resolution, or get as close to it as possible, even if that means a larger file size.
It would seem there are default parameters I am unaware of, and I have absolutely no idea how to find them, even after extensive searching. Thank you for your help!


-
FFMPEG in Android Kotlin - processed video should have specific resolution
31 May 2024, by Utsav
I'm recording video from both the front and back cameras, which gives me a PIP video and a horizontally stacked video. I then need to merge the two videos. The problem is that merging requires both videos (PIP and stacked) to have the same resolution and aspect ratio, which is not the case. So the FFmpeg commands executed in code to generate these two videos need to be modified to make the resolution and aspect ratio the same.


//app -> build.gradle
implementation "com.writingminds:FFmpegAndroid:0.3.2"



private fun connectFfmPeg() {
    val overlayX = 10
    val overlayY = 10
    val overlayWidth = 200
    val overlayHeight = 350

    outputFile1 = createVideoPath().absolutePath
    outputFile2 = createVideoPath().absolutePath

    // Command to generate the PIP video
    val cmd1 = arrayOf(
        "-y",
        "-i", videoPath1,
        "-i", videoPath2,
        "-filter_complex",
        "[1:v]scale=$overlayWidth:$overlayHeight [pip]; [0:v][pip] overlay=$overlayX:$overlayY",
        "-preset", "ultrafast",
        outputFile1
    )

    // Command to generate the horizontal stack video
    val cmd2 = arrayOf(
        "-y",
        "-i", videoPath1,
        "-i", videoPath2,
        "-filter_complex",
        "hstack",
        "-preset", "ultrafast",
        outputFile2
    )

    val ffmpeg = FFmpeg.getInstance(this)
    // Both commands are executed
    // Following execution code is OK
    // Omitted for brevity
}



Here is mergeVideos(), which is executed last.

private fun mergeVideos(ffmpeg: FFmpeg) {
    // Sample command:
    /*
    ffmpeg -y -i output_a.mp4 -i output_b.mp4 \
      -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[outv][outa]" \
      -map "[outv]" -map "[outa]" -preset "ultrafast" output.mp4
    */
    finalOutputFile = createVideoPath().absolutePath

    val cmd = arrayOf(
        "-y",
        "-i", outputFile1,
        "-i", outputFile2,
        "-filter_complex",
        "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[outv][outa]",
        "-map", "[outv]",
        "-map", "[outa]",
        "-preset", "ultrafast",
        finalOutputFile
    )
    // Execution code omitted for brevity
}



Error: Upon execution of mergeVideos(), neither the progress nor the failure callback is invoked. The Logcat stays where it is and the app does not crash either.

Possible solution:
Once I had the generated PIP and horizontally stacked videos in my device's local storage, I moved them to my laptop and tried out some FFmpeg commands at the prompt to process them, and it works on the command line:


//First two commands can't be executed in Kotlin code
//This is the main problem
ffmpeg -i v1.mp4 -vf "scale=640:640,setdar=1:1" output_a.mp4
ffmpeg -i v2.mp4 -vf "scale=640:640,setdar=1:1" output_b.mp4
ffmpeg -y -i output_a.mp4 -i output_b.mp4 -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[outv][outa]" -map "[outv]" -map "[outa]" -preset "ultrafast" output.mp4
//Merge is successful via command prompt
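
A rough, untested sketch of how the same scale/setdar normalization might look with the in-app arrayOf(...) command style, run on outputFile1/outputFile2 before mergeVideos(); the 640x640 size comes from the commands above, and normalizedFile1/normalizedFile2 are hypothetical paths (e.g. from createVideoPath()):

// Untested sketch: normalize both intermediate videos in-app before merging,
// mirroring the command-line filters above. Sizes and output paths are placeholders.
val normalizeCmd1 = arrayOf(
    "-y",
    "-i", outputFile1,
    "-vf", "scale=640:640,setdar=1:1",
    "-preset", "ultrafast",
    normalizedFile1  // hypothetical path, e.g. createVideoPath().absolutePath
)
val normalizeCmd2 = arrayOf(
    "-y",
    "-i", outputFile2,
    "-vf", "scale=640:640,setdar=1:1",
    "-preset", "ultrafast",
    normalizedFile2  // hypothetical path, e.g. createVideoPath().absolutePath
)
// mergeVideos() would then concat normalizedFile1 and normalizedFile2
// instead of outputFile1 and outputFile2.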



Please suggest a solution.