
Media (1)
-
Richard Stallman and free software
19 October 2011
Updated: May 2013
Language: French
Type: Text
Other articles (63)
-
MediaSPIP Core: Configuration
9 November 2010. By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin to work): a page for the general configuration of the template; a page for configuring the site's home page; a page for configuring the sections.
It also provides an additional page, which only appears when certain plugins are enabled, for controlling their display and specific features (...)
-
MediaSPIP v0.2
21 June 2013. MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, announced here.
The zip file provided here contains only the MediaSPIP sources in standalone form.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, further modifications are also required (...)
-
MediaSPIP version 0.1 Beta
16 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources in standalone form.
For a working installation, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, further modifications are also required (...)
On other sites (11718)
-
Efficiently Fetching the Latest Frame from a Live Stream using OpenCV in Python
10 July 2023, by Nicolantonio De Bari
Problem:


I have a FastAPI server that connects to a live video feed using cv2.VideoCapture. The server then uses OpenCV to process the frames for object detection and sends the results to a WebSocket. Here's the relevant part of my code:


class VideoProcessor:
    # ...

    def update_frame(self, url):
        logger.info("update_frame STARTED")
        cap = cv2.VideoCapture(url)
        while self.capture_flag:
            ret, frame = cap.read()
            if ret:
                frame = cv2.resize(frame, (1280, 720))
                self.current_frame = frame
            else:
                logger.warning("Failed to read frame, retrying connection...")
                cap.release()
                time.sleep(1)
                cap = cv2.VideoCapture(url)

    async def start_model(self, url, websocket: WebSocket):
        # ...
        threading.Thread(target=self.update_frame, args=(url,), daemon=True).start()
        while self.capture_flag:
            if self.current_frame is not None:
                frame = cv2.resize(self.current_frame, (1280, 720))
                bbx = await self.process_image(frame)
                await websocket.send_text(json.dumps(bbx))
            await asyncio.sleep(0.1)



Currently, I'm using a separate thread (update_frame) to continuously fetch frames from the live feed and keep the most recent one in self.current_frame.


The issue with this method is that it uses multi-threading and continuously reads frames in the background, which is quite CPU-intensive. The cv2.VideoCapture.read() function fetches the oldest frame in the buffer, and OpenCV does not provide a direct way to fetch the latest frame.


Goal


I want to optimize this process by eliminating the need for a separate thread and fetching the latest frame directly when I need it. I want to ensure that each time I process a frame in start_model, I'm processing the most recent frame from the live feed.


I have considered a method of continuously calling cap.read() in a tight loop to "clear" the buffer, but I understand that this is inefficient and can lead to high CPU usage.
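Short of a true "give me the newest frame" API, OpenCV does offer two partial mitigations that can be sketched without the background thread: shrinking the capture buffer (honored only by some backends, e.g. FFmpeg/GStreamer) and skipping stale frames with grab(), which advances the stream without the decoding cost of read(). A rough sketch, where read_latest_frame and open_capture are hypothetical helper names, not part of the original code:

```python
def read_latest_frame(cap, max_skip=5):
    """Drop up to max_skip buffered frames with cheap grab() calls,
    then decode only the most recent one with retrieve().
    Assumes cap exposes the cv2.VideoCapture grab/retrieve interface."""
    for _ in range(max_skip):
        if not cap.grab():        # grab() advances the stream without decoding
            break
    ok, frame = cap.retrieve()    # decode only the last grabbed frame
    return frame if ok else None


def open_capture(url):
    """Open a capture with the smallest internal buffer the backend allows."""
    import cv2  # assumed available; kept local to this sketch
    cap = cv2.VideoCapture(url)
    # Backend-dependent: not every backend honors CAP_PROP_BUFFERSIZE
    cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
    return cap
```

Note that grab() still blocks until a frame is available on a live stream, so max_skip bounds how far the helper tries to catch up rather than guaranteeing the absolute latest frame.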


Attempt:


What I then tried was to use ffmpeg with subprocess to get the latest frame, but I don't understand how to actually obtain the latest frame this way.

async def start_model(self, url, websocket: WebSocket):
    try:
        logger.info("Model Started")
        self.capture_flag = True

        command = ["ffmpeg", "-i", url, "-f", "image2pipe", "-pix_fmt", "bgr24", "-vcodec", "rawvideo", "-"]
        pipe = subprocess.Popen(command, stdout=subprocess.PIPE, bufsize=10**8)

        while self.capture_flag:
            raw_image = pipe.stdout.read(1280 * 720 * 3)  # one 720p BGR frame
            if len(raw_image) < 1280 * 720 * 3:  # EOF or partial read
                logger.warning("Failed to read frame, retrying connection...")
                await asyncio.sleep(1)  # delay for 1 second before retrying
                continue

            # np.fromstring is deprecated; frombuffer reads the bytes without copying
            frame = np.frombuffer(raw_image, dtype=np.uint8).reshape((720, 1280, 3))
            self.current_frame = frame
            bbx = await self.process_image(frame)
            await websocket.send_text(json.dumps(bbx))
            await asyncio.sleep(0.1)

        pipe.terminate()

    except WebSocketDisconnect:
        logger.info("WebSocket disconnected during model operation")
        self.capture_flag = False  # stop the model when the WebSocket disconnects



Question


Is there a more efficient way to fetch the latest frame from a live stream using OpenCV in Python? Can I modify my current setup to get the newest frame without having to read all the frames in a separate thread? Or is there another library that I could use?


I know a similar question has been asked, but not related to video streaming.
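One pattern that keeps a reader thread but removes most of its cost is a one-slot queue whose writer replaces any stale entry, so the consumer always receives the newest frame and old frames are dropped instead of processed. A sketch under the assumption of a single producer thread; offer_latest and reader are hypothetical names, not from the code above:

```python
import queue
import threading


def offer_latest(q, item):
    """Put item into a 1-slot queue, replacing any stale entry.
    Assumes a single producer thread (otherwise the second
    put_nowait could race with another writer)."""
    try:
        q.put_nowait(item)
    except queue.Full:
        try:
            q.get_nowait()        # discard the stale frame
        except queue.Empty:
            pass                  # consumer beat us to it
        q.put_nowait(item)


def reader(cap, q, stop_event):
    """Background reader: cap is assumed to be a cv2.VideoCapture-like
    object; the loop exits when stop_event is set."""
    while not stop_event.is_set():
        ok, frame = cap.read()
        if ok:
            offer_latest(q, frame)
```

The consumer then calls `q.get()` (or `get_nowait()`) and is guaranteed to see at most one buffered frame, which bounds the staleness without a tight busy loop.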


-
OpenCV is able to read the stream but VLC is not
25 April 2023, by Ahmet Çavdar
I'm trying to stream my webcam frames to a UDP address. Here is my sender code:


cmd = ['ffmpeg', '-y', '-f', 'rawvideo', '-pixel_format', 'bgr24', '-video_size', f'{width}x{height}',
       '-i', '-', '-c:v', 'mpeg4', '-preset', 'ultrafast', '-tune', 'zerolatency', '-b:v', '1.5M',
       '-f', 'mpegts', f'udp://@{ip_address}:{port}']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE)
camera = cv2.VideoCapture(0)
while True:
    ret, frame = camera.read()
    if not ret:  # check before using the frame, or imshow gets None
        break
    cv2.imshow("Sender", frame)
    p.stdin.write(frame.tobytes())
    p.stdin.flush()
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
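One thing worth checking (an assumption, not a confirmed fix): -preset and -tune are x264 options, so they do not apply to the mpeg4 encoder, and VLC generally copes better with an H.264 stream inside MPEG-TS. A hedged variant of the sender command, where width, height, ip_address and port are placeholders:

```python
import subprocess

# Placeholder values; substitute the real capture size and destination
width, height, ip_address, port = 640, 480, "127.0.0.1", 5000

cmd = [
    'ffmpeg', '-y',
    '-f', 'rawvideo', '-pixel_format', 'bgr24',
    '-video_size', f'{width}x{height}', '-i', '-',
    # libx264 is the encoder -preset/-tune actually belong to
    '-c:v', 'libx264', '-preset', 'ultrafast', '-tune', 'zerolatency',
    '-b:v', '1.5M',
    '-f', 'mpegts', f'udp://{ip_address}:{port}',
]


def start_sender():
    """Open the encoder; raw BGR frames are then fed via proc.stdin."""
    return subprocess.Popen(cmd, stdin=subprocess.PIPE)
```

It is also worth verifying the stream independently with `ffplay udp://...` before blaming VLC, since ffplay uses the same demuxers as the sender.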



This Python code streams successfully. I can read the stream with this receiver code:


q = queue.Queue()

def receive():
    cap = cv2.VideoCapture('udp://@xxx.x.xxx.xxx:5000')
    ret, frame = cap.read()
    q.put(frame)
    while ret:
        ret, frame = cap.read()
        q.put(frame)

def display():
    while True:
        if not q.empty():
            frame = q.get()
            cv2.imshow('Receiver', frame)
        k = cv2.waitKey(1) & 0xff
        if k == 27:  # press 'ESC' to quit
            break

tr = threading.Thread(target=receive, daemon=True)
td = threading.Thread(target=display)
tr.start()
td.start()
td.join()



But I cannot watch the stream in VLC. I go to Media -> Open Network Stream -> udp://@xxx.x.xxx.xxx:5000 to watch it. After a few seconds, the timer at the bottom left of VLC starts to increase, but no frames appear on the screen, just the VLC icon.


I checked the firewall rules and opened all ports to UDP connections. I am using my own IP address to send frames and watch them.
I also tried other video codecs such as h264, hevc, mpeg4, and rawvideo.
Additionally, I tried to watch the stream with Windows Media Player, but it didn't work either.


What should I do to fix this issue?


-
Threading with OpenCV and FFmpeg
3 February 2023, by share2020 uis
I'm working on a project that receives several CCTV streams and performs some processing with OpenCV. Then I want to serve those streams back over the rtmp/rtsp protocols.


I can use OpenCV with threading in Python to perform my processing, keeping every fourth frame from each stream sequentially.
Is there any way to use this Python library together with FFmpeg to send each stream to the corresponding rtmp/rtsp endpoint?


class LoadStreams:  # multiple IP or RTSP cameras
    def __init__(self, sources='streams.txt', img_size=(1290, 720)):
        self.mode = 'images'
        self.img_size = img_size

        if os.path.isfile(sources):
            with open(sources, 'r') as f:
                sources = [x.strip() for x in f.read().splitlines() if len(x.strip())]
        else:
            sources = [sources]

        n = len(sources)
        self.imgs = [None] * n
        self.sources = sources
        for i, s in enumerate(sources):
            # int() is safer than eval() for numeric device indices
            cap = cv2.VideoCapture(int(s) if s.isnumeric() else s)
            assert cap.isOpened(), 'Failed to open %s' % s
            w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
            h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
            self.fps = cap.get(cv2.CAP_PROP_FPS) % 100
            _, self.imgs[i] = cap.read()  # guarantee first frame
            thread = Thread(target=self.update, args=(i, cap), daemon=True)
            print(' success (%gx%g at %.2f FPS).' % (w, h, self.fps))
            thread.start()

    def update(self, index, cap):
        n = 0
        while cap.isOpened():
            n += 1
            # _, self.imgs[index] = cap.read()
            cap.grab()
            if n == 4:  # decode every 4th frame
                _, self.imgs[index] = cap.retrieve()
                n = 0
            time.sleep(0.01)  # wait time

    def __iter__(self):
        self.count = -1
        return self

    def __next__(self):
        self.count += 1
        img0 = self.imgs.copy()
        if cv2.waitKey(1) == ord('q'):  # q to quit
            cv2.destroyAllWindows()
            raise StopIteration
        return img0





In short: I want FFmpeg to take the processed frames coming from the A_in stream and publish them to the A_out URL, and likewise take the frames from B_in and publish them to B_out.
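One way to sketch this A_in -> A_out mapping is to open one FFmpeg subprocess per output URL and write each processed frame to its stdin. This is a sketch, not a tested pipeline: make_publish_cmd and open_publishers are hypothetical names, and the exact flags would need tuning for a real RTMP/RTSP server.

```python
import subprocess


def make_publish_cmd(out_url, width, height, fps=25):
    """Build an ffmpeg command that reads raw BGR frames from stdin
    and publishes them to an RTMP (flv) or RTSP output URL."""
    fmt = 'rtsp' if out_url.startswith('rtsp://') else 'flv'
    return [
        'ffmpeg', '-y',
        '-f', 'rawvideo', '-pixel_format', 'bgr24',
        '-video_size', f'{width}x{height}', '-framerate', str(fps),
        '-i', '-',
        '-c:v', 'libx264', '-preset', 'ultrafast', '-tune', 'zerolatency',
        '-f', fmt, out_url,
    ]


def open_publishers(out_urls, width, height):
    """One encoder process per stream; each processed frame is then sent
    with proc.stdin.write(frame.tobytes()) for the matching stream."""
    return [subprocess.Popen(make_publish_cmd(u, width, height),
                             stdin=subprocess.PIPE)
            for u in out_urls]
```

In the LoadStreams loop above, the i-th processed frame would be written to the i-th publisher's stdin, keeping the A_in/A_out and B_in/B_out pairing by list position.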