
Media (1)
-
1 000 000 (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
Other articles (112)
-
Customizing by adding your logo, banner or background image
5 September 2013, by
Some themes take into account three customization elements: adding a logo; adding a banner; adding a background image.
-
Websites made with MediaSPIP
2 May 2011, by
This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011, by
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (15305)
-
Open cv.VideoCapture(index) - ffmpeg list camera names - How to match ?
15 November 2024, by Chris P
def fetch_camera_input_settings(self):
    try:
        self.database_functions = database_functions

        self.camera_input_device_name = database_functions.read_setting("camera_input_device_name")["value"]
        self.camera_input_device_number = int(self.database_functions.read_setting("camera_input_device_number")["value"])

        self.camera_input_devices = [[0, -1, "Καμία συσκευή κάμερας"]]
        self.available_cameras = [{"device_index": -1, "device_name": "Καμία συσκευή κάμερας"}]

        # FFmpeg command to list DirectShow video capture devices on Windows
        cmd = ["ffmpeg", "-list_devices", "true", "-f", "dshow", "-i", "dummy"]
        result = subprocess.run(cmd, stderr=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
        output = result.stderr  # FFmpeg sends the device listing to stderr

        # Regular expression capturing the names of the video devices
        device_pattern = re.compile(r'\[dshow @ .+?\] "(.*?)" \(video\)')
        cameras = device_pattern.findall(output)
        counter = 0
        for camera in cameras:
            counter += 1
            self.camera_input_devices.append([counter, counter - 1, camera])
            self.available_cameras.append({"device_index": counter - 1, "device_name": camera})

        self.to_emitter.send({"type": "available_devices", "devices": self.camera_input_devices, "device_index": self.camera_input_device_number})
    except Exception:
        error_message = traceback.format_exc()
        self.to_emitter.send({"type": "error", "error_message": error_message})




How can I match the camera device names from ffmpeg's output with cv2.VideoCapture, which expects a camera index as input?
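A minimal sketch of one way to line the two up, assuming that ffmpeg's DirectShow listing and OpenCV's CAP_DSHOW backend enumerate the cameras in the same order (neither tool guarantees this, so treat it as a heuristic); the helper names are hypothetical:

import re
import subprocess

import cv2

def list_dshow_camera_names():
    # Ask ffmpeg to enumerate DirectShow video devices.
    cmd = ["ffmpeg", "-list_devices", "true", "-f", "dshow", "-i", "dummy"]
    result = subprocess.run(cmd, stderr=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
    # ffmpeg prints the listing on stderr, one line per device
    return re.findall(r'\[dshow @ .+?\] "(.*?)" \(video\)', result.stderr)

def open_camera_by_name(name):
    # Assumption: ffmpeg's listing order matches the index order of OpenCV's DirectShow backend.
    names = list_dshow_camera_names()
    if name not in names:
        raise ValueError(f"camera {name!r} not found among {names}")
    index = names.index(name)
    # CAP_DSHOW keeps OpenCV on the same backend that ffmpeg enumerated
    return cv2.VideoCapture(index, cv2.CAP_DSHOW)

Opening the capture with cv2.CAP_DSHOW keeps OpenCV on the same DirectShow backend that ffmpeg queried, which is what makes the positional match plausible.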


-
Low FPS output using ffmpeg and a raspi camera
23 November 2019, by Newe
I am building a surveillance camera for a school project, based on a Raspberry Pi and an infrared Raspberry Pi camera.
I am capturing the camera's video stream and outputting it as an HLS stream directly from ffmpeg. However, the resulting video has a really low frame rate (5 fps at most).
Strangely, raspivid manages to output a 60 fps 720p stream without any problem, but when put through ffmpeg for streaming, the video is cropped in half and I cannot get it to show up entirely.
Here is the ffmpeg command I use:
#!/bin/bash
ffmpeg -v verbose \
-re \
-i /dev/video0 \
-c:v libx264 \
-an \
-f hls \
-g 10 \
-sc_threshold 0 \
-hls_time 1 \
-hls_list_size 4 \
-hls_delete_threshold 1 \
-hls_flags delete_segments \
-hls_start_number_source datetime \
-hls_wrap 15 \
-preset superfast \
-start_number 1 \
/home/pi/serv/assets/stream.m3u8
And the resulting log output (notice the fps).
Here is the command using raspivid that I tested, based on a blog post I read:
raspivid -n \
-t 0 \
-w 960 \
-h 540 \
-fps 25 \
-o - | ffmpeg \
-v verbose \
-i - \
-vcodec copy \
-an \
-f hls \
-g 10 \
-sc_threshold 0 \
-hls_time 1 \
-hls_list_size 4 \
-hls_delete_threshold 1 \
-hls_flags delete_segments \
-hls_start_number_source datetime \
-hls_wrap 15 \
-preset superfast \
-start_number 1 \
/home/pi/serv/assets/stream.m3u8
I am not an ffmpeg expert and am open to any suggestions that would help improve the stream's quality and stability :)
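One detail that often matters in this situation is telling ffmpeg explicitly how to read the V4L2 device: without input options it negotiates a default pixel format and size whose frame rate can be low, and -re is unnecessary for a live device. Below is a hedged sketch, in Python for consistency with the rest of this page, of launching the same HLS pipeline with explicit -f v4l2, -input_format, -framerate and -video_size options placed before -i; the mjpeg input format and the 960x540 / 25 fps values are assumptions to check against ffmpeg -f v4l2 -list_formats all -i /dev/video0.

import subprocess

def start_hls_stream(device="/dev/video0", out="/home/pi/serv/assets/stream.m3u8"):
    cmd = [
        "ffmpeg", "-v", "verbose",
        # Input options must come before -i to apply to the V4L2 capture device
        "-f", "v4l2",
        "-input_format", "mjpeg",   # assumption: the camera offers MJPEG at full rate
        "-framerate", "25",
        "-video_size", "960x540",
        "-i", device,               # no -re: a live device already delivers frames in real time
        "-c:v", "libx264", "-preset", "superfast",
        "-an",
        "-f", "hls",
        "-g", "10",
        "-sc_threshold", "0",
        "-hls_time", "1",
        "-hls_list_size", "4",
        "-hls_flags", "delete_segments",
        out,
    ]
    return subprocess.Popen(cmd)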
-
How to decode in C a stream from this noname almost-UVC grayscale camera
18 January 2019, by scriptfoo
Edit: I found the cause. The stream always begins with something which is not a JPEG; only after it does a normal MJPEG stream follow. Interestingly, not all of the small examples of using V4L2/MJPEG decoders can divide what the camera produces properly into frames. Something called capturev4l2.c is a rare example of doing it properly. Possibly there is some detail which decides whether the camera's bugginess is worked around or not.
I have a noname almost-UVC-compliant camera (it fails several compatibility tests). This is a relatively cheap global shutter camera, and thus I would like to use it instead of something properly documented. It outputs what is reported (and properly played) by mplayer as:
Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family
libavcodec version 57.107.100 (external)
Selected video codec: [ffmjpeg] vfm: ffmpeg (FFmpeg MJPEG)
ffprobe shows the following:
[mjpeg @ 0x55c086dcc080] Format mjpeg detected only with low score of 25, misdetection possible!
Input #0, mjpeg, from '/home/sc/Desktop/a.raw':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 640x480, 25 tbr, 1200k tbn, 25 tbc
But as opposed to mplayer, it is unable to play it.
I tried decode_jpeg_raw from mjpegtools; it complains about the header, which seems to change with each captured stream, so it does not look like an unwrapped stream of JPEG images.
I thus tried 0_hello_world.c from libavcodec/libavformat, but it stops at avformat_open_input() with the error "Invalid data found when processing input". A 100-frame sample file is sitting here a.raw. Do you have any idea how to determine a method of decoding it in C into a plain bitmap?
The file is grayscale, does not begin with a constant value, and guvcview and mplayer are the only players I know which can decode it without artifacts...
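Since the capture apparently is an MJPEG byte stream preceded by a non-JPEG preamble, one workable approach is to scan for the JPEG start-of-image (0xFFD8) and end-of-image (0xFFD9) markers and cut the stream into individual frames before handing them to a decoder. Below is a minimal sketch of that idea in Python; the same splitting logic translates directly to C. cv2.imdecode merely stands in for the decoder here, and note that some MJPEG cameras omit Huffman tables from each frame, in which case a stricter JPEG decoder may still refuse the frames even after correct splitting.

import numpy as np
import cv2

SOI = b"\xff\xd8"  # JPEG start-of-image marker
EOI = b"\xff\xd9"  # JPEG end-of-image marker

def iter_jpeg_frames(path):
    # Skip the non-JPEG preamble, then yield the byte ranges between SOI and EOI markers.
    data = open(path, "rb").read()
    start = data.find(SOI)
    while start != -1:
        end = data.find(EOI, start + 2)
        if end == -1:
            break  # truncated final frame
        yield data[start:end + 2]
        start = data.find(SOI, end + 2)

for i, frame in enumerate(iter_jpeg_frames("a.raw")):
    img = cv2.imdecode(np.frombuffer(frame, np.uint8), cv2.IMREAD_GRAYSCALE)
    print(i, None if img is None else img.shape)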