Advanced search

Media (0)


No media matching your criteria is available on this site.

Other articles (112)

  • Customise by adding your own logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (15150)

  • Connect a remote IP camera as a WebRTC client

    5 April 2017, by idosh

    I have 2 cameras:

    • An internal webcam embedded in my laptop.
    • A remote IP camera connected to my laptop over Wi-Fi (it transmits raw H.264 data over TCP, with no container). I’m receiving the stream using node.js.

    My goal is to create a WebRTC network and connect the remote camera as another client.

    I’m trying to figure out possible solutions:

    • My naive thinking was that I would stream the remote camera payload to the browser. But as I came to understand, the browser can’t handle the stream without a container. Fair enough. But I don’t understand why it does handle the video stream that arrives from my internal camera (via the navigator.getUserMedia() function). What’s the difference between the two streams? Why can’t I mimic the stream from the remote camera as the input?
    • To bypass this problem I thought about creating a virtual camera using ManyCam (or a ManyCam-like app). To accomplish that I need to convert my TCP stream into an RTP stream (in order to feed ManyCam). Though I did see some info on the ffmpeg command line, I couldn’t find anything in its node.js API package "fluent-ffmpeg". Is it possible to do it using fluent-ffmpeg, or only with the command-line tool? Would it require another RTP server in the middle, such as this one? (See the sketch after this list.)
    • The third option I read about is using node.js as a WebRTC client. I saw it was implemented in "simple-peer". I tried it out using its socket.io integration (socket.io-p2p). Unfortunately I couldn’t get it to work: when I try to create a socket/peer on the server, it throws errors, as it expects options that are only available on the client side (like window, location, etc.). Am I doing something wrong? Maybe there is a more suitable framework for this?
    • The fourth option is to use a streaming server in the middle, such as Kurento. From my understanding it receives RTP as input and transmits it as a WebRTC client. It feels like the most heavyweight option, but maybe it’s not so bad (I have to admit that I haven’t investigated it yet).
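
    For reference, here is a minimal sketch of the remux step described in the second option, written in Python with subprocess (matching the other snippets on this page) rather than fluent-ffmpeg; the TCP address and RTP destination are placeholders, not values from the question. ffmpeg can read raw H.264 over TCP and re-packetize it as RTP without transcoding; fluent-ffmpeg should be able to express the same flags, but the command itself is the portable part:

import subprocess

# Sketch: remux a raw H.264 TCP stream into RTP without re-encoding.
# The TCP address and the RTP destination below are placeholders.
cmd = [
    "ffmpeg",
    "-f", "h264",                     # input is raw H.264, no container
    "-i", "tcp://192.168.1.50:5000",  # placeholder camera address
    "-c:v", "copy",                   # remux only, no transcode
    "-f", "rtp",
    "rtp://127.0.0.1:5004",           # placeholder RTP destination
]
proc = subprocess.Popen(cmd)
proc.wait()

    Note that ffmpeg prints an SDP description for the RTP output; whatever consumes the RTP stream (ManyCam or an intermediate server) typically needs that SDP to interpret the packets.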

    Any thoughts?

    Thanks!

  • OpenCV reading from live camera creates a short video that moves quickly

    17 November 2022, by user19019404

    I am reading in a live video stream from a CCTV camera. One camera is set to 5 fps, another to 25 fps and another to 30 fps. Irrespective of the FPS a camera is set to, I can record for 5 minutes but end up with a 30-second clip in which everyone is running around the scene.

    My code is the 'typical' read-a-video-and-write-a-video code that you would find online (the code below is simplified for readability):

import cv2

# Placeholder: the live RTSP address of the camera.
video = cv2.VideoCapture("rtsp://<camera address>")

if not video.isOpened():
    print("Error reading video file")
else:
    # VideoWriter needs integer frame dimensions.
    frame_width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
    frame_height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
    frame_fps = video.get(cv2.CAP_PROP_FPS)
    size = (frame_width, frame_height)
    result = cv2.VideoWriter('filename.avi',
                             cv2.VideoWriter_fourcc(*'MJPG'),
                             frame_fps, size)

    while True:
        ret, frame = video.read()
        if ret:
            result.write(frame)
            cv2.imshow('Frame', frame)
            # Stop recording when 's' is pressed.
            if cv2.waitKey(1) & 0xFF == ord('s'):
                break
        else:
            # Stream ended or a frame could not be read.
            break

    video.release()
    result.release()
    cv2.destroyAllWindows()
    print("The video was successfully saved with new fps")


    I have tried reading the FPS from the live camera and using the same FPS in the video writer, but the result is still a video that lasts a fraction of the real time, with people zooming around the scene. So watching a smooth 5-minute feed results in a 20-second recording with everyone zooming around.

    Is this something I need to fix when writing the video, or do I need a second pass with ffmpeg to readjust the video?

    Much appreciated

    Update: I corrected the code above. When printing the number of frames read and the number of frames written, the counts are the same, showing that every frame that is read is being written (so I am not losing frames along the way and writing half as many frames).
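
    One likely cause, offered as an assumption rather than a diagnosis: IP cameras often deliver fewer frames per second than CAP_PROP_FPS reports, so a file written at the nominal rate plays back sped up. A minimal sketch of one way to compensate (the RTSP URL is a placeholder): measure the rate at which frames actually arrive during a short warm-up, then open the writer with that measured rate.

import time

import cv2

# Placeholder camera address.
video = cv2.VideoCapture("rtsp://<camera address>")

# Measure the real arrival rate over a short warm-up instead of
# trusting CAP_PROP_FPS.
warmup_frames = 100
frames = 0
size = None
start = time.time()
while frames < warmup_frames:
    ret, frame = video.read()
    if not ret:
        break
    frames += 1
    size = (frame.shape[1], frame.shape[0])
elapsed = time.time() - start

if frames and elapsed > 0:
    measured_fps = frames / elapsed
    result = cv2.VideoWriter('filename.avi',
                             cv2.VideoWriter_fourcc(*'MJPG'),
                             measured_fps, size)

    A 5-minute capture written at the measured rate should then play back in roughly 5 minutes, regardless of what the camera claims its FPS is.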

  • OpenCV cv2.VideoCapture(index) - ffmpeg list camera names - how to match?

    15 November 2024, by Chris P

import re
import subprocess
import traceback

def fetch_camera_input_settings(self):
    try:
        self.database_functions = database_functions

        self.camera_input_device_name = database_functions.read_setting("camera_input_device_name")["value"]
        self.camera_input_device_number = int(self.database_functions.read_setting("camera_input_device_number")["value"])

        # "Καμία συσκευή κάμερας" means "No camera device"
        self.camera_input_devices = [[0, -1, "Καμία συσκευή κάμερας"]]
        self.available_cameras = [{"device_index": -1, "device_name": "Καμία συσκευή κάμερας"}]

        # FFmpeg command to list video capture devices on Windows
        cmd = ["ffmpeg", "-list_devices", "true", "-f", "dshow", "-i", "dummy"]
        result = subprocess.run(cmd, stderr=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
        output = result.stderr  # FFmpeg sends the device listing to stderr

        # Regular expression that captures video devices only
        device_pattern = re.compile(r'\[dshow @ .+?\] "(.*?)" \(video\)')
        cameras = device_pattern.findall(output)
        counter = 0
        for camera in cameras:
            counter += 1
            self.camera_input_devices.append([counter, counter - 1, camera])
            self.available_cameras.append({"device_index": counter - 1, "device_name": camera})

        self.to_emitter.send({"type": "available_devices", "devices": self.camera_input_devices, "device_index": self.camera_input_device_number})
    except Exception:
        error_message = traceback.format_exc()
        self.to_emitter.send({"type": "error", "error_message": error_message})

    How do I match the camera device names from ffmpeg's output with cv2.VideoCapture, which wants a camera index as input?
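
    One commonly used approach, offered as a hedged sketch rather than a guaranteed mapping: both ffmpeg's -f dshow listing and OpenCV's CAP_DSHOW backend enumerate DirectShow devices, and in practice they usually return them in the same order, so the position of a name in the ffmpeg output can be used as the cv2.VideoCapture index. The helper below is hypothetical and assumes that ordering holds; verify by actually grabbing a frame.

import cv2

def open_camera_by_name(cameras, wanted_name):
    # 'cameras' is the list of names parsed from ffmpeg's stderr above.
    # Assumption: ffmpeg's dshow order matches OpenCV's CAP_DSHOW indices.
    for index, name in enumerate(cameras):
        if name == wanted_name:
            cap = cv2.VideoCapture(index, cv2.CAP_DSHOW)
            if cap.isOpened() and cap.read()[0]:
                return cap
            if cap.isOpened():
                cap.release()
    return None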