
Other articles (13)


  • List of compatible distributions

    26 April 2011

    The table below is the list of Linux distributions compatible with the automated installation script of MediaSPIP.

        Distribution name    Version name             Version number
        Debian               Squeeze                  6.x.x
        Debian               Wheezy                   7.x.x
        Debian               Jessie                   8.x.x
        Ubuntu               The Precise Pangolin     12.04 LTS
        Ubuntu               The Trusty Tahr          14.04

    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • Possible deployments

    31 January 2010

    Two types of deployment can be considered, depending on two aspects: the chosen installation method (standalone or farm), and the expected number of daily encodings and the expected traffic.
    Encoding video is a heavy process that consumes a great deal of system resources (CPU and RAM), so all of this must be taken into account. The system can therefore only run on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

On other sites (4214)

  • How to set RTSP transport mode to TCP using Xuggler

    1 November 2013, by Anurag Joshi

    I am working with a Sanyo VCC HD2300P IP camera and am trying to capture frames from its live stream. I have another camera that provides an RTSP stream, and the Xuggler code works fantastically on that one. However, on the Sanyo camera I get the error below:

    could not open stream: rtsp://admin:admin@192.168.0.3:554/VideoInput/1/h264/1

    I can open this stream in both iSpy and VLC, and using those two programs I can also capture a frame from it.
    The URL for the other camera that works well is rtsp://admin:123456@192.168.0.246/mpeg4cif, so on the face of it I don't see a reason why this one should not work.

    Any help will be highly appreciated.

    EDIT: I searched and experimented a lot. Earlier I was using iSpy's built-in support for Sanyo cameras, with which the camera stream was visible. However, when I tried to provide the URL for the H.264 stream directly, the stream would not work unless I changed the RTSP mode to TCP. This gave me a hint that I probably need to call the IContainer.open method in a way that uses RTSP over TCP.
    Googling turned up some possible solutions, such as

    using rtsp://192.168.0.3:554/VideoInput/1/h264/1?tcp

    which is said to force the stream to open over TCP. However, nothing has changed for me; I still get the same error.

    EDIT2: So essentially it boils down to: how do I set the RTSP transport mode to TCP? By default, RTSP uses UDP as its transport.

    EDIT3: I am still searching for a solution. It looks like in Xuggler 1.19 all these features were exposed directly on the IContainer, IStreamCoder and IVideoSampler objects through the setProperty method, which implements the IConfigurable interface. However,

    setProperty("rtsp_transport", "tcp")

    always returns a negative value, indicating that the setProperty call failed. This is because calling getPropertyNames() on each of the IContainer, IStreamCoder and IVideoSampler objects returns a list of properties that doesn't contain one named rtsp_transport.
    At the same time, the FFmpeg protocols documentation bundled with Xuggler (https://github.com/artclarke/xuggle-xuggler/blob/master/captive/ffmpeg/csrc/doc/protocols.texi) suggests that it is indeed possible to set rtsp_transport to tcp. However, I am unable to work out how.
    Please help!
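    Independently of Xuggler, it can help to first confirm that the camera accepts RTSP over TCP at all. A minimal sketch using the ffmpeg CLI, whose -rtsp_transport tcp option forces TCP, assuming ffmpeg is on the PATH (the helper names here are illustrative, not part of any library):

```python
import subprocess

def build_probe_command(url, seconds=10):
    """Build an ffmpeg command that opens an RTSP stream over TCP,
    reads a few seconds, and discards the decoded output."""
    return [
        "ffmpeg",
        "-rtsp_transport", "tcp",  # force TCP instead of the default UDP
        "-i", url,
        "-t", str(seconds),        # stop after a few seconds
        "-f", "null", "-",         # decode and discard; we only test connectivity
    ]

def probe(url):
    # Returns True if ffmpeg could open and read the stream.
    result = subprocess.run(build_probe_command(url),
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0
```

    If probe("rtsp://admin:admin@192.168.0.3:554/VideoInput/1/h264/1") succeeds while the UDP default fails, that would confirm the problem is the transport mode rather than the URL.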

  • Video Manipulation with ffmpeg: Troubleshooting Conversion Issues

    26 January 2024, by Barno

    I want to manipulate my video using ffmpeg. I retrieve the video from S3 with the following function:

const { GetObjectCommand } = require('@aws-sdk/client-s3');
// extractS3InfoFromUrl and configureS3Client are helper functions defined elsewhere

async function getImageBufferFromS3(imageUrl) {
    const { bucketName, objectKey } = extractS3InfoFromUrl(imageUrl);
    const s3Client = configureS3Client();

    const getObjectCommand = new GetObjectCommand({
        Bucket: bucketName,
        Key: objectKey
    });

    const data = await s3Client.send(getObjectCommand);
    const imageBuffer = await streamToBuffer(data.Body);
    return imageBuffer;
}

async function streamToBuffer(stream) {
    return new Promise((resolve, reject) => {
        const chunks = [];
        stream.on('data', (chunk) => chunks.push(chunk));
        stream.on('error', reject);
        stream.on('end', () => resolve(Buffer.concat(chunks)));
    });
}
    Now, I want to use ffmpeg to add text to it. First, I'd like to obtain the "clean" video:

const stream = require('stream');
const ffmpeg = require('fluent-ffmpeg');

module.exports.createVideoWithTextAndBackground = async (videoBuffer, customText = null) => {
  try {
    if (!customText) {
      return videoBuffer;
    }

    const fontPath = __dirname + '/../public/fonts/Satoshi-Medium.ttf';

    try {
      return await new Promise((resolve, reject) => {
        const input = new stream.PassThrough();
        input.end(videoBuffer);

        const output = new stream.Writable();
        const chunks = [];

        output._write = (chunk, encoding, next) => {
          chunks.push(chunk);
          next();
        };

        output.on('finish', () => {
          resolve(Buffer.concat(chunks));
        });

        output.on('error', (err) => {
          reject(err);
        });

        ffmpeg()
          .input(input)
          .inputFormat('mp4')
          .toFormat('mp4')
          .pipe(output);
      });
    } catch (error) {
      console.error(error);
      throw error;
    }
  } catch (error) {
    console.error(error);
    throw error;
  }
};
    However, I encountered the following error:

    Error: ffmpeg exited with code 183: frame=    0 fps=0.0 q=0.0 Lsize=       0kB time=N/A bitrate=N/A speed=N/A
    Conversion failed!

    I don't face any issues when I don't use ffmpeg. I even ran ffmpeg -i from my console to create a video with text, confirming that ffmpeg works on my machine.
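    One plausible cause, offered here as a hypothesis rather than a confirmed diagnosis: when ffmpeg writes MP4 to a pipe instead of a seekable file, the default MP4 muxer fails because it must seek back to write the moov atom at the end. Fragmented MP4, enabled with -movflags frag_keyframe+empty_moov, avoids the seek. The helper below only builds the equivalent CLI argument list; in fluent-ffmpeg the same flags would typically be passed via .outputOptions('-movflags', 'frag_keyframe+empty_moov'):

```python
def build_pipe_safe_mp4_args(input_path):
    """ffmpeg arguments that write MP4 to stdout.  The default MP4 muxer
    seeks back to write the moov atom, which fails on a non-seekable pipe;
    fragmented MP4 writes self-contained fragments and needs no seeking."""
    return [
        "ffmpeg",
        "-i", input_path,
        "-movflags", "frag_keyframe+empty_moov",  # fragmented MP4, pipe-friendly
        "-f", "mp4",
        "pipe:1",                                 # write the result to stdout
    ]
```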

  • Live stream gets delayed while processing frames in OpenCV + Python

    18 March 2021, by Himanshu sharma

    I capture and process an IP camera RTSP stream with OpenCV 4.4.0.46 on Ubuntu.
    Unfortunately the processing takes quite a lot of time, roughly 0.2 s per frame, and the stream quickly falls behind.
    Each video file is supposed to cover 5 minutes, but because of this delay it ends up containing only 3-4 minutes of footage.

    Can we process faster to overcome the delay?
    I have two IP cameras that report two different FPS values (camera 1 reports 18000 and camera 2 reports 20 fps).

    I am running this code on different Ubuntu PCs:
    • Python 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0] on linux
    • Django==3.1.2
    • Ubuntu 18.04 and 20.04
    • opencv-contrib-python==4.4.0.46
    • opencv-python==4.4.0.46

import time
import datetime

import cv2

# username, password, ip and output_stream are defined elsewhere
input_stream = 'rtsp://' + username + ':' + password + '@' + ip + '/user=' + username + '_password=' + password + '_channel=0channel_number_stream=0.sdp'
# input_stream ---> rtsp://admin:Admin123@192.168.1.208/user=admin_password=Admin123_channel=0channel_number_stream=0.sdp
# input_stream ---> rtsp://Admin:@192.168.1.209/user=Admin_password=_channel=0channel_number_stream=0.sdp

vs = cv2.VideoCapture(input_stream)
fps_rate = int(vs.get(cv2.CAP_PROP_FPS))  # camera 1 reports 18000, camera 2 reports 20 fps

video_file_name = 0
writer = None
start_time = time.time()
while True:
    ret, frame = vs.read()
    time.sleep(0.2)  # <= simulate processing time (mask detection, face detection and more)

    ### Start of writing a video to disk
    minutes = 5  # save a file for 5 minutes only, then start another file
    seconds = 60
    seconds_to_save_video = int(minutes) * int(seconds)

    # if we are supposed to be writing a video to disk, initialize the writer
    if time.time() - start_time >= seconds_to_save_video or video_file_name == 0:
        H, W, C = frame.shape  # H = height, W = width, C = channels

        print('time.time() -->', time.time(), 'video_file_name -->', video_file_name)
        start_time = time.time()

        video_file_name = str(time.mktime(datetime.datetime.now().timetuple())).replace('.0', '')
        output_save_directory = output_stream + str(int(video_file_name)) + '.mp4'

        fourcc = cv2.VideoWriter_fourcc(*'avc1')
        writer = cv2.VideoWriter(output_save_directory, fourcc, 20.0, (W, H), True)

    # check to see if we should write the frame to disk
    if writer is not None:
        try:
            writer.write(frame)
        except Exception as e:
            print('Error in writing video output ---> ', e)
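
    Since cv2.VideoCapture keeps buffering frames while the 0.2 s processing step runs, one common way to keep a live stream real-time is to read frames on a background thread and let the processing loop always take the newest frame, dropping the backlog. A minimal sketch of that pattern with a generic frame source (with OpenCV the source would be vs.read; the class name is illustrative); note that with dropped frames the saved file covers wall-clock time rather than every captured frame:

```python
import threading

class LatestFrameReader:
    """Reads frames on a background thread and keeps only the newest one,
    so slow processing drops stale frames instead of falling behind."""

    def __init__(self, read_frame):
        self._read_frame = read_frame      # callable returning (ok, frame)
        self._lock = threading.Lock()
        self._latest = None
        self._running = True
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while self._running:
            ok, frame = self._read_frame()
            if not ok:
                self._running = False
                break
            with self._lock:
                self._latest = frame       # overwrite: older frames are dropped

    def latest(self):
        with self._lock:
            return self._latest

    def stop(self):
        self._running = False
        self._thread.join()
```

    Usage would be reader = LatestFrameReader(vs.read), then call reader.latest() inside the processing loop instead of vs.read().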