
Other articles (35)

  • Customize by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Showcase changes to your MédiaSPIP, or news about your projects, using the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorial content.
    You can customize the form used to create a news item.
    News creation form: for a document of type "news", the default fields are: publication date (customize the publication date) (...)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page showcases some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

On other sites (5752)

  • Efficiently Fetching the Latest Frame from a Live Stream using OpenCV in Python

    10 July 2023, by Nicolantonio De Bari

    Problem:

    


    I have a FastAPI server that connects to a live video feed using cv2.VideoCapture. The server then uses OpenCV to process the frames for object detection and sends the results to a WebSocket. Here's the relevant part of my code:

    


    class VideoProcessor:
        # ...

        def update_frame(self, url):
            logger.info("update_frame STARTED")
            cap = cv2.VideoCapture(url)
            while self.capture_flag:
                ret, frame = cap.read()
                if ret:
                    frame = cv2.resize(frame, (1280, 720))
                    self.current_frame = frame
                else:
                    logger.warning("Failed to read frame, retrying connection...")
                    cap.release()
                    time.sleep(1)
                    cap = cv2.VideoCapture(url)

        async def start_model(self, url, websocket: WebSocket):
            # ...
            threading.Thread(target=self.update_frame, args=(url,), daemon=True).start()
            while self.capture_flag:
                if self.current_frame is not None:
                    frame = cv2.resize(self.current_frame, (1280, 720))
                    bbx = await self.process_image(frame)
                    await websocket.send_text(json.dumps(bbx))
                    await asyncio.sleep(0.1)


    


    Currently, I'm using a separate thread (update_frame) to continuously fetch frames from the live feed and keep the most recent one in self.current_frame.

    


    The issue with this method is that it uses multi-threading and continuously reads frames in the background, which is quite CPU-intensive. The cv2.VideoCapture.read() function fetches the oldest frame in the buffer, and OpenCV does not provide a direct way to fetch the latest frame.

    


    Goal

    


    I want to optimize this process by eliminating the need for a separate thread and fetching the latest frame directly when I need it. I want to ensure that each time I process a frame in start_model, I'm processing the most recent frame from the live feed.

    


    I have considered a method of continuously calling cap.read() in a tight loop to "clear" the buffer, but I understand that this is inefficient and can lead to high CPU usage.
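    A middle ground between the background thread and the tight read() loop is sketched below. It is not documented OpenCV behavior on every backend, only a heuristic: grab() is much cheaper than read() because it skips decoding, and a grab() that returns almost instantly was served from the buffer, while one that had to wait arrived fresh from the network. `latest_frame` is a hypothetical helper written for this sketch, not an OpenCV API; `cv2.CAP_PROP_BUFFERSIZE` does exist but is honored only by some capture backends.

    ```python
    import time

    def latest_frame(cap, frame_interval=1 / 30.0):
        """Drain buffered frames from a cv2.VideoCapture-like object and
        return (ok, frame) for the freshest one.

        Heuristic: a grab() that returns far faster than the stream's frame
        interval came from the buffer; one that had to wait is live.
        """
        if not cap.grab():            # block until at least one frame exists
            return False, None
        while True:
            t0 = time.monotonic()
            if not cap.grab():        # stream ended: keep the last good grab
                break
            if time.monotonic() - t0 >= frame_interval / 2:
                break                 # this grab waited on the wire: it is fresh
        return cap.retrieve()         # decode only the frame we actually keep

    # Usage sketch (replacing the update_frame thread):
    # cap = cv2.VideoCapture(url)
    # cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)   # honored only by some backends
    # ok, frame = latest_frame(cap)
    ```

    Because retrieve() is called only once per processed frame, the decode cost scales with the processing rate rather than the stream's frame rate.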

    


    Attempt:

    


    What I then tried was using ffmpeg and subprocess to get the latest frame, but I don't understand how to actually obtain the latest frame this way.

    


    async def start_model(self, url, websocket: WebSocket):
        try:
            logger.info("Model Started")
            self.capture_flag = True

            command = ["ffmpeg", "-i", url, "-f", "image2pipe", "-pix_fmt", "bgr24", "-vcodec", "rawvideo", "-"]
            pipe = subprocess.Popen(command, stdout=subprocess.PIPE, bufsize=10**8)

            while self.capture_flag:
                raw_image = pipe.stdout.read(1280*720*3)  # read one 720p frame (BGR)
                if raw_image == b'':
                    logger.warning("Failed to read frame, retrying connection...")
                    await asyncio.sleep(1)  # delay for 1 second before retrying
                    continue

                frame = np.frombuffer(raw_image, dtype=np.uint8)  # np.fromstring is deprecated
                frame = frame.reshape((720, 1280, 3))
                if frame is not None:
                    self.current_frame = frame
                    frame = cv2.resize(self.current_frame, (1280, 720))
                    bbx = await self.process_image(frame)
                    await websocket.send_text(json.dumps(bbx))
                    await asyncio.sleep(0.1)

            pipe.terminate()

        except WebSocketDisconnect:
            logger.info("WebSocket disconnected during model operation")
            self.capture_flag = False  # stop the model operation when the WebSocket disconnects


    


    Question

    


    Is there a more efficient way to fetch the latest frame from a live stream using OpenCV in Python? Can I modify my current setup to get the newest frame without having to read all the frames in a separate thread?
Or is there another library that I could use?

    


    I know a similar question has been asked, but not related to video streaming.
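    Independently of the latest-frame question, one pitfall in the ffmpeg/subprocess attempt above: pipe.stdout.read(n) may return fewer than n bytes (a short read), after which every subsequent frame boundary drifts. A small helper, sketched here, loops until a whole frame has been collected:

    ```python
    def read_exact(stream, n):
        """Read exactly n bytes from a file-like object; b'' means EOF hit first."""
        chunks, remaining = [], n
        while remaining > 0:
            chunk = stream.read(remaining)
            if not chunk:
                return b""          # stream ended mid-frame: discard the partial
            chunks.append(chunk)
            remaining -= len(chunk)
        return b"".join(chunks)

    # In the loop above, instead of pipe.stdout.read(1280*720*3):
    # raw_image = read_exact(pipe.stdout, 1280 * 720 * 3)
    ```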

    


  • How can I get the live stream from my GoPro HERO 5 in a C# WPF app?

    8 April 2022, by Kanoto

    I'm trying to livestream with my GoPro and show the feed in a WPF application.

    


var inputArgs = "-framerate 20 -f rawvideo -pix_fmt rgb32 -video_size 1920x1080 -i udp://10.5.5.100:8554";
var outputArgs = "-vcodec libx264 -crf 23 -pix_fmt yuv420p -preset ultrafast -r 20 out.mp4";

var process = new Process
{
    StartInfo =
    {
        FileName = "ffmpeg.exe",
        Arguments = $"{inputArgs} {outputArgs}",
        UseShellExecute = false,
        CreateNoWindow = false,
        RedirectStandardInput = true
    }
};


    


    But I can't make it work, even when using this command in cmd/PowerShell:

    


    ffplay -fflags nobuffer -f:v mpegts -probesize 8192 -i udp://10.5.5.100:8554 -f mpegts -vcodec copy udp://localhost:10000


    


    But for some reason, the Python script at https://github.com/KonradIT/GoProStream/blob/master/GoProStream.py works, and I can't explain why: I copy/pasted the same command from it, yet run on its own it doesn't work.
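    One plausible difference (an educated guess, not verified against this camera's firmware): GoPro HERO cameras stop streaming unless something pings them periodically, and the GoProStream.py script linked above sends a keep-alive UDP datagram in a loop, which a bare ffplay invocation does not. A minimal sketch of that keep-alive; the payload, interval, and the 10.5.5.9 camera address are taken from that script and should all be treated as assumptions that may vary across models and firmware:

    ```python
    import socket
    import time

    # Values borrowed from GoProStream.py; may differ per model/firmware.
    KEEPALIVE_PAYLOAD = b"_GPHD_:0:0:2:0.000000\n"
    KEEPALIVE_PERIOD = 2.5            # seconds between pings
    GOPRO_ADDR = ("10.5.5.9", 8554)   # the camera itself, not the receive socket

    def send_keepalive(addr=GOPRO_ADDR, payload=KEEPALIVE_PAYLOAD):
        """Send one keep-alive datagram so the camera keeps the stream open."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.sendto(payload, addr)

    def keepalive_loop(addr=GOPRO_ADDR, period=KEEPALIVE_PERIOD):
        """Ping the camera forever; run in a background thread."""
        while True:
            send_keepalive(addr)
            time.sleep(period)
    ```

    Started in a daemon thread (threading.Thread(target=keepalive_loop, daemon=True).start()) before launching the ffmpeg Process, this mirrors what the Python script does alongside its own ffplay command.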

    


  • How to extract good-quality frames in pixel format from a live MPEG-TS stream?

    2 February 2024, by Aven

    I use the command below to stream my ts file for my program:

    


    ffmpeg -re -i independent_frame.ts -f mpegts udp://127.0.0.1:10002


    


    Below is the code of my program. I'm trying to extract each frame in pixel format from the stream using FFmpeg, but the quality of the extracted frames is not as good as it should be (I know OpenCV can do this, but for other reasons I have to use FFmpeg). I tried adding -qscale:v 2 to the FFmpeg command, but it seems to change nothing. Does anyone know how to improve the frame quality in this case? Thanks!

    


    import subprocess

import cv2
import numpy as np

stream = 'udp://127.0.0.1:10002'
H, W = 720, 1280

command = [ 'ffmpeg',
            '-i', stream,
            '-pix_fmt', 'bgr24', # bgr24 to match OpenCV's channel order
            '-qscale:v', '2',
            '-f', 'rawvideo',
            'pipe:' ]

# Execute FFmpeg as sub-process with stdout as a pipe
process = subprocess.Popen(command, stdout=subprocess.PIPE, bufsize=10**8)#10**8

# Load individual frames in a loop
nb_img = H*W*3

# Read decoded video frames from the PIPE until no more frames to read
num = 0
while True:
    # Read decoded video frame (in raw video format) from stdout process.
    buffer = process.stdout.read(W*H*3)

    # Break the loop if buffer length is not W*H*3 (when FFmpeg streaming ends).
    if len(buffer) != W*H*3:
        break

    img = np.frombuffer(buffer, np.uint8).reshape(H, W, 3)
    num +=1

    if num == 435:
        cv2.imwrite('435.png', img)
        cv2.imshow('img', img)  # Show the image for testing
        cv2.waitKey(0)
        cv2.destroyAllWindows()
        break

process.stdout.close()
process.wait()
cv2.destroyAllWindows()
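    A note on the command above, based on how rawvideo output works rather than on testing this exact pipeline: -qscale:v sets an encoder quantizer, but -f rawvideo output is never quantized, so the flag is a no-op here; the bgr24 frames are bit-exact copies of whatever the decoder produced. Any visible quality loss is therefore more likely caused by dropped UDP packets corrupting the MPEG-TS decode. FFmpeg's UDP protocol accepts URL options such as fifo_size and overrun_nonfatal that enlarge the receive buffer; a sketch of the adjusted invocation (the fifo_size value is an arbitrary example):

    ```python
    # Larger receive FIFO to reduce TS packet loss; fifo_size and
    # overrun_nonfatal are documented FFmpeg UDP protocol URL options.
    stream = "udp://127.0.0.1:10002?fifo_size=1000000&overrun_nonfatal=1"
    H, W = 720, 1280

    command = [
        "ffmpeg",
        "-i", stream,
        "-pix_fmt", "bgr24",   # bgr24 to match OpenCV's channel order
        "-f", "rawvideo",      # raw frames: no encoder, so no -qscale:v
        "pipe:",
    ]

    frame_bytes = H * W * 3    # bytes per decoded 720p BGR frame
    ```

    If frames are still damaged, comparing against a direct file read (-i independent_frame.ts, no UDP hop) isolates whether the transport is at fault.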