
On other sites (6262)
-
Efficiently Fetching the Latest Frame from a Live Stream using OpenCV in Python
10 July 2023, by Nicolantonio De Bari

Problem:


I have a FastAPI server that connects to a live video feed using cv2.VideoCapture. The server then uses OpenCV to process the frames for object detection and sends the results to a WebSocket. Here's the relevant part of my code:


class VideoProcessor:
    # ...

    def update_frame(self, url):
        logger.info("update_frame STARTED")
        cap = cv2.VideoCapture(url)
        while self.capture_flag:
            ret, frame = cap.read()
            if ret:
                frame = cv2.resize(frame, (1280, 720))
                self.current_frame = frame
            else:
                logger.warning("Failed to read frame, retrying connection...")
                cap.release()
                time.sleep(1)
                cap = cv2.VideoCapture(url)

    async def start_model(self, url, websocket: WebSocket):
        # ...
        threading.Thread(target=self.update_frame, args=(url,), daemon=True).start()
        while self.capture_flag:
            if self.current_frame is not None:
                frame = cv2.resize(self.current_frame, (1280, 720))
                bbx = await self.process_image(frame)
                await websocket.send_text(json.dumps(bbx))
            await asyncio.sleep(0.1)



Currently, I'm using a separate thread (update_frame) to continuously fetch frames from the live feed and keep the most recent one in self.current_frame.


The issue with this method is that it uses multi-threading and continuously reads frames in the background, which is quite CPU-intensive. The cv2.VideoCapture.read() function fetches the oldest frame in the buffer, and OpenCV does not provide a direct way to fetch the latest frame.


Goal


I want to optimize this process by eliminating the need for a separate thread and fetching the latest frame directly when I need it. I want to ensure that each time I process a frame in start_model, I'm processing the most recent frame from the live feed.


I have considered a method of continuously calling cap.read() in a tight loop to "clear" the buffer, but I understand that this is inefficient and can lead to high CPU usage.
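For reference, a workaround that is often suggested for this situation (not from the question itself, and backend support varies): drain the buffer with grab(), which is cheaper than read() because it skips decoding, and only decode the newest frame with retrieve(). A sketch; `latest_frame` and the drain count are made-up names:

```python
def latest_frame(cap, drain=4):
    """Drain up to `drain` buffered frames with grab() (cheaper than read(),
    since grab() skips decoding), then decode only the newest one."""
    got_any = False
    for _ in range(drain):
        if not cap.grab():
            break
        got_any = True
    if not got_any:
        return None
    ok, frame = cap.retrieve()
    return frame if ok else None


# Usage with OpenCV (hypothetical URL; CAP_PROP_BUFFERSIZE is honored by
# some capture backends and silently ignored by others):
#   cap = cv2.VideoCapture("rtsp://camera/stream")
#   cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
#   frame = latest_frame(cap)
```

This still costs some CPU per poll, but only one decode per processed frame instead of one per captured frame.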


Attempt:


What I then tried was to use ffmpeg via subprocess to get the latest frame, but I don't yet understand how to read only the most recent frame from the pipe.

async def start_model(self, url, websocket: WebSocket):
    try:
        logger.info("Model Started")
        self.capture_flag = True

        command = ["ffmpeg", "-i", url, "-f", "image2pipe", "-pix_fmt", "bgr24", "-vcodec", "rawvideo", "-"]
        pipe = subprocess.Popen(command, stdout=subprocess.PIPE, bufsize=10**8)

        frame_size = 1280 * 720 * 3  # bytes in one 720p BGR frame
        while self.capture_flag:
            raw_image = pipe.stdout.read(frame_size)
            if len(raw_image) < frame_size:  # EOF or short read
                logger.warning("Failed to read frame, retrying connection...")
                await asyncio.sleep(1)  # delay for 1 second before retrying
                continue

            # np.fromstring is deprecated; frombuffer returns a read-only view,
            # so copy before any in-place processing
            frame = np.frombuffer(raw_image, dtype=np.uint8).reshape((720, 1280, 3)).copy()
            self.current_frame = frame
            bbx = await self.process_image(frame)
            await websocket.send_text(json.dumps(bbx))
            await asyncio.sleep(0.1)

        pipe.terminate()

    except WebSocketDisconnect:
        logger.info("WebSocket disconnected during model operation")
        self.capture_flag = False  # stop the model operation when the WebSocket disconnects


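One pitfall with the pipe-based attempt, regardless of which frame is kept: `pipe.stdout.read(n)` may legally return fewer than `n` bytes, so a full frame has to be assembled in a loop. A small helper as a sketch (`read_exact` is a made-up name):

```python
def read_exact(stream, n):
    """Read exactly n bytes from a pipe-like stream, looping over short
    reads. Returns None if EOF arrives before n bytes do."""
    buf = bytearray()
    while len(buf) < n:
        chunk = stream.read(n - len(buf))
        if not chunk:  # EOF
            return None
        buf.extend(chunk)
    return bytes(buf)


# Usage with the ffmpeg pipe above:
#   raw = read_exact(pipe.stdout, 1280 * 720 * 3)
```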

Question


Is there a more efficient way to fetch the latest frame from a live stream using OpenCV in Python? Can I modify my current setup to get the newest frame without having to read all the frames in a separate thread?
Or is there another library that I could use?


I know a similar question has been asked, but not related to video streaming.


-
Cannot stream video from VLC docker
4 April 2023, by Snake Eyes

I have this Dockerfile:


FROM fedora:34

ARG VLC_UID="1000"
ARG VLC_GID="1000"

ENV HOME="/data"


RUN groupadd -g "${VLC_GID}" vlc && \
 useradd -m -d /data -s /bin/sh -u "${VLC_UID}" -g "${VLC_GID}" vlc && \
 dnf upgrade -y && \
 rpm -ivh "https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-34.noarch.rpm" && \
 dnf upgrade -y && \
 dnf install -y vlc && \
 dnf install -y libaacs libbdplus && \
 dnf install -y libbluray-bdj && \
 dnf clean all

USER "vlc"

WORKDIR "/data"

VOLUME ["/data"]

ENTRYPOINT ["/usr/bin/cvlc"]



And then run:


docker run -d -v "d:\path":/data -p 8787:8787 myrepo/myvlc:v1 file:///data/Sample.mkv --sout '#transcode {vcodec=h264,acodec=mp3,samplerate=44100}:std{access=http,mux=ffmpeg{mux=flv},dst=0.0.0.0:8787/stream.flv}'



I get this error:


2023-04-04 12:19:11 [000055933c090060] vlcpulse audio output error: PulseAudio server connection failure: Connection refused
2023-04-04 12:19:11 [000055933c09d680] dbus interface error: Failed to connect to the D-Bus session daemon: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11
2023-04-04 12:19:11 [000055933c09d680] main interface error: no suitable interface module
2023-04-04 12:19:11 [000055933bf26ad0] main libvlc error: interface "dbus,none" initialization failed
2023-04-04 12:19:11 [000055933c08d7a0] main interface error: no suitable interface module
2023-04-04 12:19:11 [000055933bf26ad0] main libvlc error: interface "globalhotkeys,none" initialization failed
2023-04-04 12:19:11 [000055933c08d7a0] dummy interface: using the dummy interface module...
2023-04-04 12:19:11 [00007f3b84001250] stream_out_standard stream out error: no mux specified or found by extension
2023-04-04 12:19:11 [00007f3b84000f30] main stream output error: stream chain failed for `standard{mux="",access="",dst="'#transcode"}'
2023-04-04 12:19:11 [00007f3b90000c80] main input error: cannot start stream output instance, aborting
2023-04-04 12:19:11 [00007f3b7c001990] stream_out_standard stream out error: no mux specified or found by extension
2023-04-04 12:19:11 [00007f3b7c001690] main stream output error: stream chain failed for `standard{mux="",access="",dst="'#transcode"}'
2023-04-04 12:19:11 [00007f3b90000c80] main input error: cannot start stream output instance, aborting



Note that I'm using cvlc, and I can't stream that mkv file.


I also tried
--sout '#transcode{scodec=none}:http{mux=ffmpeg{mux=flv},dst=:8787/}'
but got the same errors.

How can I solve this?
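Not an answer from the post, but worth noting: the log fragment `dst="'#transcode"` suggests the shell never delivered the `--sout` chain to VLC as a single argument. On Windows, cmd.exe and PowerShell do not strip single quotes, and the space after `#transcode` splits the chain in two. A hedged sketch of a corrected invocation (same image tag and paths assumed), using double quotes and no internal spaces:

```shell
# Build the sout chain as ONE shell word: double quotes (cmd.exe passes
# single quotes through literally) and no space after "#transcode".
SOUT="#transcode{vcodec=h264,acodec=mp3,samplerate=44100}:std{access=http,mux=ffmpeg{mux=flv},dst=0.0.0.0:8787/stream.flv}"

# Echoed here as a sketch; drop the leading `echo` to actually run it.
echo docker run -d -v "d:\path":/data -p 8787:8787 myrepo/myvlc:v1 \
  file:///data/Sample.mkv --sout "$SOUT"
```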


-
Threading with OpenCV and FFMPEG
3 February 2023, by share2020 uis

I'm working on a project which gets several CCTV streams and performs some processing with OpenCV. Then I want to send those streams back out with rtmp/rtsp protocols.


I can use OpenCV with threading in Python and perform my processing, returning every 4th frame from each stream sequentially.
Is there any way to use this Python library and FFMPEG to send each stream to a corresponding rtmp/rtsp output with FFMPEG?


class LoadStreams:  # multiple IP or RTSP cameras
    def __init__(self, sources='streams.txt', img_size=(1290, 720)):
        self.mode = 'images'
        self.img_size = img_size

        if os.path.isfile(sources):
            with open(sources, 'r') as f:
                sources = [x.strip() for x in f.read().splitlines() if len(x.strip())]
        else:
            sources = [sources]

        n = len(sources)
        self.imgs = [None] * n
        self.sources = sources
        for i, s in enumerate(sources):
            cap = cv2.VideoCapture(int(s) if s.isnumeric() else s)  # int(), not eval()
            assert cap.isOpened(), 'Failed to open %s' % s
            w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
            h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
            self.fps = cap.get(cv2.CAP_PROP_FPS) % 100
            _, self.imgs[i] = cap.read()  # guarantee first frame
            thread = Thread(target=self.update, args=(i, cap), daemon=True)
            print(' success (%gx%g at %.2f FPS).' % (w, h, self.fps))
            thread.start()

    def update(self, index, cap):
        n = 0
        while cap.isOpened():
            n += 1
            # _, self.imgs[index] = cap.read()
            cap.grab()
            if n == 4:  # read every 4th frame
                _, self.imgs[index] = cap.retrieve()
                n = 0
            time.sleep(0.01)  # wait time

    def __iter__(self):
        self.count = -1
        return self

    def __next__(self):
        self.count += 1
        img0 = self.imgs.copy()
        if cv2.waitKey(1) == ord('q'):  # q to quit
            cv2.destroyAllWindows()
            raise StopIteration
        return img0  # latest frame from each stream





The goal is to use ffmpeg to take n frames from the A_in stream and publish them to the A_out URL, and likewise take n frames from the B_in stream and publish them to the B_out URL.
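A common pattern for the output half (a sketch, not from the question; the URLs, sizes, and function names are hypothetical): start one ffmpeg process per output that reads raw BGR frames on stdin and publishes them over rtmp (which uses the flv muxer), then write each processed frame's bytes into that process:

```python
import subprocess


def build_rtmp_command(out_url, width, height, fps):
    """ffmpeg arguments that read raw BGR frames from stdin and publish
    them to an rtmp URL."""
    return [
        "ffmpeg", "-y",
        "-f", "rawvideo", "-pix_fmt", "bgr24",
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "-",                        # frames arrive on stdin
        "-c:v", "libx264", "-preset", "veryfast",
        "-f", "flv", out_url,             # rtmp output uses the flv muxer
    ]


def open_rtmp_writer(out_url, width, height, fps):
    """One ffmpeg process per output stream; write frame.tobytes() to stdin."""
    return subprocess.Popen(build_rtmp_command(out_url, width, height, fps),
                            stdin=subprocess.PIPE)


# Usage (hypothetical URL): after processing a frame with OpenCV,
#   writer = open_rtmp_writer("rtmp://server/live/A_out", 1290, 720, 25)
#   writer.stdin.write(processed_frame.tobytes())
```

For several streams, keep one writer per output URL (e.g. in a dict keyed by stream index) and feed each from the corresponding `update` thread.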