
Media (1)
-
The Pirate Bay from Belgium
1 April 2013, by
Updated: April 2013
Language: French
Type: Image
Other articles (88)
-
Improving the base version
13 September 2013. A nicer multiple-select widget
The Chosen plugin improves the ergonomics of multiple-select fields. Compare the two images below.
To use it, enable the Chosen plugin (Site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...) -
What is Emballe médias for?
4 February 2011. This plugin manages sites that publish documents of any type.
It creates "media" items, where a "media" is a SPIP article created automatically when a document is uploaded, whether audio, video, image, or text; only one document can be attached to a given "media" article; -
The plugin: managing pooled sites
2 March 2010. The pooling-management plugin controls the various mediaspip channels from a master site. Its goal is to provide a pure SPIP solution to replace the old setup.
Basic installation
Install the SPIP files on the server.
Then add the "mutualisation" plugin at the root of the site as described here.
Customize the central mes_options.php file as you wish. As an example, here is the one from the mediaspip.net platform:
<?php (...)
On other sites (7584)
-
How to create a video from SVG files using a pipe with ffmpeg?
5 July 2023, by Max Chr. I want to create a video from SVG graphics stored in a database. My plan to achieve that is the following:

- connect to the database
- build an ffmpeg command that takes a pipe as input
- spawn the ffmpeg child process
- wait for the output of the process

Then, in another thread, for each SVG file:

- download the SVG from the database into a byte buffer
- write the byte buffer to stdin of the ffmpeg child process

When running my code I hit a problem while piping the SVG files to ffmpeg.
Another option would be to download all the SVG files to a temp directory and then run ffmpeg on them, but I want to avoid that.
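The steps above can be sketched in Python. This is a minimal sketch, not a tested solution: `fetch_svgs_from_db` is a hypothetical placeholder for the real database query, and whether the `svg` decoder can determine frame dimensions from a pipe is exactly the open question.

```python
import subprocess
import threading

def build_ffmpeg_cmd(out_path: str, fps: int = 25) -> list[str]:
    # Read a stream of SVG images from stdin and encode them to H.264.
    return [
        "ffmpeg", "-f", "image2pipe", "-framerate", str(fps),
        "-c:v", "svg", "-i", "-",
        "-c:v", "libx264", "-pix_fmt", "yuv420p", "-y", out_path,
    ]

def feed_svgs(proc: subprocess.Popen, svg_blobs) -> None:
    # Writer thread: push each SVG byte buffer into ffmpeg's stdin,
    # then close stdin so ffmpeg sees end-of-stream.
    for blob in svg_blobs:
        proc.stdin.write(blob)
    proc.stdin.close()

def fetch_svgs_from_db():
    # Hypothetical placeholder: should yield each SVG file as a bytes object.
    raise NotImplementedError

if __name__ == "__main__":
    proc = subprocess.Popen(build_ffmpeg_cmd("out.mp4"), stdin=subprocess.PIPE)
    t = threading.Thread(target=feed_svgs, args=(proc, fetch_svgs_from_db()))
    t.start()
    t.join()
    proc.wait()
```

Closing stdin in the writer thread matters: without it, ffmpeg keeps waiting for more input and never finalizes the output file.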


I used
ffmpeg -f image2pipe -c:v svg -i - -c:v libx264 -y Downloads/out.mp4
but then ffmpeg gives me the following error:

[image2pipe @ 0x562ebd74c300] Could not find codec parameters for stream 0 (Video: svg (librsvg), none): unspecified size
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options



I found out that there is an
svg_pipe
format in ffmpeg, so I tried that, without success: same error.

Do you have any ideas on how to make this work?
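Not an authoritative fix, but one workaround that still avoids temp files is to rasterize each SVG to PNG in memory and pipe PNGs instead: PNG frames embed their own dimensions, so the image2pipe demuxer can probe the stream parameters that it could not get from raw SVG data. A sketch assuming the third-party cairosvg package (an assumption, not part of ffmpeg):

```python
def svg_to_png(svg_bytes: bytes, width: int = 1280, height: int = 720) -> bytes:
    # Rasterize an SVG in memory. cairosvg is a third-party package
    # (pip install cairosvg); imported lazily so the rest of the module
    # works without it.
    import cairosvg
    return cairosvg.svg2png(bytestring=svg_bytes,
                            output_width=width, output_height=height)

def png_pipe_cmd(out_path: str, fps: int = 25) -> list[str]:
    # PNG frames carry a size header, so image2pipe can determine the
    # stream parameters that the svg decoder reported as "unspecified".
    return [
        "ffmpeg", "-f", "image2pipe", "-framerate", str(fps),
        "-c:v", "png", "-i", "-",
        "-c:v", "libx264", "-pix_fmt", "yuv420p", "-y", out_path,
    ]
```

The writer thread would then call `svg_to_png` on each byte buffer before writing it to ffmpeg's stdin.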


-
can't pipe in numpy arrays (images) to ffmpeg subprocess in python
21 June 2023, by Kevin. I'm trying to capture a webcam video stream using OpenCV, pipe raw frames into an ffmpeg subprocess, apply a 3D .cube LUT, bring the LUT-applied frames back into OpenCV, and display them using cv2.imshow.


This is my code :


import cv2
import subprocess as sp
import numpy as np

lut_cmd = [
    'ffmpeg', '-f', 'rawvideo', '-pixel_format', 'bgr24', '-s', '1280x720', '-framerate', '30', '-i', '-', '-an', '-vf',
    'lut3d=file=lut/luts/lut.cube', '-f', 'rawvideo', 'pipe:1'
]

lut_process = sp.Popen(lut_cmd, stdin=sp.PIPE, stdout=sp.PIPE)

width = 1280
height = 720

video_capture = cv2.VideoCapture(0)

while True:
    ret, frame = video_capture.read()

    if not ret:
        break

    # Write raw video frame to input stream of ffmpeg sub-process.
    lut_process.stdin.write(frame.tobytes())
    lut_process.stdin.flush()
    print("flushed")

    # Read the processed frame from the ffmpeg subprocess
    raw_frame = lut_process.stdout.read(width * height * 3)
    print("read")
    frame = np.frombuffer(raw_frame, dtype=np.uint8).reshape(height, width, 3)

    cv2.imshow('Video', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

lut_process.terminate()
video_capture.release()

cv2.destroyAllWindows()




The code gets stuck at the part that reads from ffmpeg:

raw_frame = lut_process.stdout.read(width * height * 3)


This is what I get when I run the code:


flushed
Input #0, rawvideo, from 'fd:':
 Duration: N/A, start: 0.000000, bitrate: 663552 kb/s
 Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24, 1280x720, 663552 kb/s, 30 tbr, 30 tbn
Stream mapping:
 Stream #0:0 -> #0:0 (rawvideo (native) -> rawvideo (native))
Output #0, rawvideo, to 'pipe:1':
 Metadata:
 encoder : Lavf60.3.100
 Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24(progressive), 1280x720, q=2-31, 663552 kb/s, 30 fps, 30 tbn
 Metadata:
 encoder : Lavc60.3.100 rawvideo
frame= 0 fps=0.0 q=0.0 size= 0kB time=-577014:32:22.77 bitrate= -0.0kbits/s speed=N/A 



"read" never gets printed. ffmpeg is stuck at 0fps. cv2.imshow doesn't show up.


I tried changing
lut_process.stdin.write(frame.tobytes())
to
lut_process.stdin.write(frame.tostring())
, but the result was the same.

I tried adding a 3-second pause before the first write to ffmpeg, thinking maybe ffmpeg was not ready to process frames, but the result was the same.


I'm sure that my webcam is working, and I know its video stream is 1280x720 at 30 fps.


I was successful at:
displaying the webcam stream using just OpenCV,
setting FFmpeg's input directly to my webcam, reading the result with stdout.read, and displaying it using OpenCV.

I have no idea what I should try next.


I am using macOS 12.6, OpenCV 4.7.0, ffmpeg 6.0, Python 3.10.11, and Visual Studio Code.


Any help would be greatly appreciated.

