Other articles (109)

  • Enhancing it visually

    10 April 2011

    MediaSPIP is based on a system of themes and skeletons (squelettes). Skeletons define where information is placed on the page, shaping a specific use of the platform, while themes define the overall graphic design.
    Anyone can contribute a new graphic theme or a skeleton and make it available to the community.

  • Adding user-specific information and other author-related behavior changes

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to change certain user-related behaviors (see its documentation for more details).
    It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and in MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and in MP3 (supported by Flash).
    Where possible, text documents are analyzed to extract the data needed for indexing by the search engine, and are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

On other sites (11917)

  • Do not memcpy raw video frames when using null muxer

    29 November 2011, by Mans Rullgard

    Do not memcpy raw video frames when using null muxer

  • Stream video from ffmpeg and capture with OpenCV

    10 December 2014, by chembrad

    I have a video stream coming in over RTP to ffmpeg and I want to pipe it to my OpenCV tools for live stream processing. The RTP link is working, because I am able to send the incoming data to a file and play it (or play it via ffplay). My OpenCV implementation is functional as well, because I am able to capture video from a file and also from a webcam.

    The problem is the streaming to OpenCV. I have heard that this may be done using a named pipe. First I could stream the ffmpeg output to the pipe and then have OpenCV open this pipe and begin processing.

    What I've tried:

    I make a named pipe in my Cygwin bash with:

       $ mkfifo stream_pipe

    Next I use my ffmpeg command to pull the stream from RTP and send it to the pipe:

       $ ffmpeg -f avi -i rtp://xxx.xxx.xxx.xxx:1234 -f avi -y out.avi > stream_pipe

    I am not sure this is the right way to send the stream to the named pipe, but it seems to accept the command and work, because ffmpeg's output shows me bitrates, fps, and so on.

    Next I use the named pipe in my OpenCV capture function:

       $ ./cvcap.exe stream_pipe

    where the code for cvcap.cpp boils down to this:

       cv::VideoCapture *pIns = new cv::VideoCapture(argv[1]);

    The program seems to hang when it reaches this one line, so I am wondering if this is the right way of going about it. I have never used named pipes before and I am not sure this is the correct usage. In addition, I don't know whether I need to handle the named pipe differently in OpenCV, or change the code around to accept this kind of input. Like I said, my code already accepts files and camera inputs; I am just hung up on a stream coming in. I have only heard that named pipes can be used with OpenCV; I haven't seen any actual code or commands!
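
    For reference, here is a minimal, self-contained sketch of what such a capture program could look like, with error checks added so a failed open can be told apart from a genuine hang. The read loop and window display are illustrative assumptions, not code from the original post:

       // cvcap.cpp (sketch): open whatever source is passed on the command line
       // (file, URL or named pipe) and display frames until the stream ends.
       #include <opencv2/opencv.hpp>
       #include <iostream>

       int main(int argc, char **argv)
       {
           if (argc < 2) {
               std::cerr << "usage: cvcap <file|pipe|url>" << std::endl;
               return 1;
           }

           // Opening a FIFO may block here until a writer (e.g. ffmpeg) connects.
           cv::VideoCapture cap(argv[1]);
           if (!cap.isOpened()) {
               std::cerr << "failed to open " << argv[1] << std::endl;
               return 1;
           }

           cv::Mat frame;
           while (cap.read(frame)) {          // returns false at end of stream
               cv::imshow("stream", frame);
               if (cv::waitKey(1) == 27)      // press ESC to stop
                   break;
           }
           return 0;
       }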

    Any help or insights are appreciated!

    UPDATE:

    I believe named pipes may not be working the way I intended. As seen in this Cygwin forum post:

    The problem is that Cygwin’s implementation of fifos is very buggy. I wouldn’t recommend using fifos for anything but the simplest of applications.

    I may need to find another way to do this. I have tried to pipe the ffmpeg output into a normal file and then have OpenCV read it at the same time. This works to some extent, but I imagine it can be dangerous to read from and write to a file concurrently; who knows what would happen!

  • Which FFmpeg codec should be used for video streams with a single-byte pixel format?

    2 December 2011, by Gearoid Murphy

    I've got a black-and-white video stream coming off a FireWire astronomy camera, and I'd like to use FFmpeg to compress the video stream, but it will not accept single-byte pixel formats for the MPEG1VIDEO codec. I've been trying random codecs for the last hour without much success; could anyone give me some sage advice on how to achieve my goal? :) thx
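
    One way to approach this is to ask libavcodec directly which encoders advertise the single-byte grayscale format AV_PIX_FMT_GRAY8; MPEG-1 video only accepts YUV 4:2:0 input, which is why it rejects 8-bit gray frames. The sketch below assumes a reasonably recent FFmpeg and uses FFV1 purely as an example of an encoder that does accept GRAY8; the resolution and frame rate are placeholder assumptions, and this is an illustration rather than code from the original post:

       // Probe an encoder's supported pixel formats for 8-bit grayscale (GRAY8),
       // then open it with GRAY8 input. FFV1 is used here only as an example;
       // the same check works for any codec ID.
       extern "C" {
       #include <libavcodec/avcodec.h>
       }
       #include <cstdio>

       int main()
       {
           const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_FFV1);
           if (!codec) {
               std::fprintf(stderr, "encoder not found\n");
               return 1;
           }

           // pix_fmts lists the input formats the encoder accepts,
           // terminated by AV_PIX_FMT_NONE.
           bool gray_ok = false;
           for (const enum AVPixelFormat *p = codec->pix_fmts; p && *p != AV_PIX_FMT_NONE; ++p)
               if (*p == AV_PIX_FMT_GRAY8)
                   gray_ok = true;

           if (!gray_ok) {
               std::fprintf(stderr, "%s does not accept GRAY8 input\n", codec->name);
               return 1;
           }

           // Open the encoder with single-byte grayscale frames
           // (640x480 at 25 fps are assumed values).
           AVCodecContext *ctx = avcodec_alloc_context3(codec);
           ctx->width     = 640;
           ctx->height    = 480;
           ctx->time_base = AVRational{1, 25};
           ctx->pix_fmt   = AV_PIX_FMT_GRAY8;

           if (avcodec_open2(ctx, codec, nullptr) < 0) {
               std::fprintf(stderr, "could not open %s with GRAY8\n", codec->name);
               return 1;
           }
           std::printf("%s opened with GRAY8 input\n", codec->name);
           avcodec_free_context(&ctx);
           return 0;
       }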