Media (91)

Other articles (104)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Adding user-specific information and other changes to author-related behavior

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviors (see its documentation for more information).
    It is also possible to add fields to authors by installing the champs extras 2 and Interface pour champs extras plugins.

  • Emballe Médias: a simple way to upload documents

    29 October 2010

    The Emballe Médias plugin was developed primarily for the MediaSPIP distribution, but it is also used in other related projects, such as Géodiversité.
    Required and compatible plugins
    For this plugin to work, the following plugins must be installed: CFG, Saisies, SPIP Bonux, Diogène, swfupload, jqueryui.
    Other plugins can be used alongside it to extend its capabilities: Ancres douces, Légendes, photo_infos, spipmotion (...)

On other sites (9593)

  • Serving live video streams with Spring Boot?

    29 October 2019, by ank

    Not sure if my question is correct/clear, but basically I need help getting started building an HTTP video stream from different ffmpeg outputs with Spring Boot. I'm trying to build an NVR application. I plan to use ffmpeg to read IP camera streams (over the LAN), produce an output, and let users view these streams live through the web application (possibly over the Internet) or over plain HTTP, built with Spring Boot. I would also like to give users the ability to add more IP camera streams from within the web application and have the application automatically run ffmpeg to read the stream and write the output, then make the live stream available for viewing in the application or over HTTP.

    For the ffmpeg commands, I plan to use the ffmpeg-cli-wrapper library. For the live streaming from the application itself or over HTTP, are there any libraries I can use?
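
    A possible starting point for the ffmpeg side, using the ffmpeg-cli-wrapper library the question already names: the sketch below turns one camera stream into an HLS playlist plus segments that any HTTP server, Spring Boot included, can serve as static files. The ffmpeg path, camera URL, and output directory are placeholders, not details from the question.

        import net.bramp.ffmpeg.FFmpeg;
        import net.bramp.ffmpeg.FFmpegExecutor;
        import net.bramp.ffmpeg.builder.FFmpegBuilder;

        public class CameraToHls {
            public static void main(String[] args) throws Exception {
                FFmpeg ffmpeg = new FFmpeg("/usr/bin/ffmpeg"); // placeholder path

                // Read one (hypothetical) RTSP camera and keep a short rolling
                // HLS window on disk for live viewing.
                FFmpegBuilder builder = new FFmpegBuilder()
                        .setInput("rtsp://192.168.1.10/stream1")
                        .addOutput("/var/streams/cam1/index.m3u8")
                        .setFormat("hls")
                        .addExtraArgs("-hls_time", "2",
                                      "-hls_list_size", "5",
                                      "-hls_flags", "delete_segments")
                        .done();

                // run() blocks until the stream ends, so an NVR would submit
                // one job per camera to a background executor instead.
                new FFmpegExecutor(ffmpeg).createJob(builder).run();
            }
        }

    Letting users add a camera at runtime then reduces to building another FFmpegBuilder and submitting another job.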

  • Need help getting started building an HTTP video stream from different ffmpeg outputs with Spring Boot?

    26 October 2019, by ank

    Not sure if my question is correct/clear, but basically I'm trying to build an NVR application. I plan to use ffmpeg to read IP camera streams (over the LAN), produce an output, and let users view these streams live through the web application (possibly over the Internet) or over plain HTTP, built with Spring Boot. I would also like to give users the ability to add more IP camera streams from within the web application and have the application automatically run ffmpeg to read the stream and write the output, then make the live stream available for viewing in the application or over HTTP.

    For the ffmpeg commands, I plan to use the ffmpeg-cli-wrapper library. For the live streaming from the application itself or over HTTP, are there any libraries I can use?
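
    On the serving side, Spring Boot may need no extra streaming library at all: a controller can start ffmpeg and pipe its stdout straight into the HTTP response via StreamingResponseBody. A minimal sketch under that assumption; the camera lookup and URL are hypothetical, and the MPEG-TS output ("-c copy", no re-encoding) plays in clients like VLC rather than in a bare browser video tag.

        import java.io.InputStream;

        import org.springframework.http.MediaType;
        import org.springframework.http.ResponseEntity;
        import org.springframework.web.bind.annotation.GetMapping;
        import org.springframework.web.bind.annotation.PathVariable;
        import org.springframework.web.bind.annotation.RestController;
        import org.springframework.web.servlet.mvc.method.annotation.StreamingResponseBody;

        @RestController
        public class LiveStreamController {

            // Hypothetical lookup from a camera id to its RTSP URL.
            private String rtspUrlFor(String cameraId) {
                return "rtsp://192.168.1.10/" + cameraId;
            }

            @GetMapping("/streams/{cameraId}.ts")
            public ResponseEntity<StreamingResponseBody> live(@PathVariable String cameraId)
                    throws Exception {
                // Re-mux the camera stream to MPEG-TS on stdout without re-encoding.
                Process ffmpeg = new ProcessBuilder(
                        "ffmpeg", "-rtsp_transport", "tcp",
                        "-i", rtspUrlFor(cameraId),
                        "-c", "copy", "-f", "mpegts", "pipe:1").start();

                StreamingResponseBody body = out -> {
                    try (InputStream in = ffmpeg.getInputStream()) {
                        in.transferTo(out); // copy ffmpeg stdout to the response
                    } finally {
                        ffmpeg.destroy();   // stop ffmpeg when the client disconnects
                    }
                };
                return ResponseEntity.ok()
                        .contentType(MediaType.parseMediaType("video/mp2t"))
                        .body(body);
            }
        }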

  • libavcodec initialization to achieve real-time playback with frame dropping when necessary

    20 October 2019, by Blake Senftner

    I have a C++ computer vision application linking with the ffmpeg libraries that provides frames from video streams to analysis routines. The idea is that one can provide a moderately generic video stream identifier, and that video source will be decompressed and passed frame by frame to an analysis routine (which runs the user's analysis functions). The "moderately generic video identifier" covers three generic video stream types: paths to video files on disk, IP video streams (cameras or video streaming services), and USB webcam pins with the desired format and rate.

    My current video player is as generic as possible: video only, ignoring audio and other streams. It has a switch case for retrieving a stream's frame rate based on the stream's source and codec, which is used to estimate the delay between decompressing frames. I've had many issues trying to get reliable timestamps from the streams, so I am currently ignoring pts and dts. I know ignoring pts/dts is bad for variable frame rate streams; I plan to special-case those later. The player currently checks whether the last decompressed frame is more than 2 frames late (assuming a constant frame rate), and if so "drops the frame", that is, does not pass it to the user's analysis routine.

    Essentially, the video player's logic determines when to skip frames (not pass them to the time-consuming analysis routine) so that the analysis is fed video frames as close to real time as possible.

    I am looking for examples or discussions of how one can initialize and/or maintain the AVFormatContext, AVStream, and AVCodecContext using (presumably, but not limited to) AVDictionary options, such that whatever frame dropping is necessary to maintain real time happens at the level of the libav libraries rather than in my video player. If achieving this requires separate AVDictionaries (or more) for each stream type and codec, then so be it. I am interested in understanding the pros and cons of both approaches: dropping frames at the player level or at the libav level.

    (When some analysis requires every frame, the existing player implementation with frame dropping disabled is fine. I suspect that if I can get frame dropping to occur at the libav level, I'll also save the packet-to-frame decompression time, reducing processing more than my current version does.)
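
    For reference, the player-level lateness test described above fits in a few lines; the constant frame rate assumption and the two-frame threshold come from the question, every name is a placeholder:

        /** Sketch of the player-level drop test described in the question. */
        final class FrameDropPolicy {
            private final double fps;       // assumed-constant frame rate
            private final long startNanos;  // wall-clock time of the first frame

            FrameDropPolicy(double fps, long startNanos) {
                this.fps = fps;
                this.startNanos = startNanos;
            }

            /** True when frameIndex lags the wall clock by more than two frame periods. */
            boolean shouldDrop(long frameIndex, long nowNanos) {
                double expectedIndex = (nowNanos - startNanos) * fps / 1e9;
                return expectedIndex - frameIndex > 2.0;
            }
        }

    On the libav side: as far as I know, the libraries never drop frames against the wall clock on their own (ffplay's framedrop logic lives in the player, not in libavcodec), but part of the work can be pushed into the demuxer and decoder through dictionary options. The sketch below uses JavaCV's FFmpegFrameGrabber only to keep these examples in one language; its setOption() entries go into the AVDictionary handed to avformat_open_input and its setVideoOption() entries into the video AVCodecContext, so the same keys apply unchanged in a C++ player.

        import org.bytedeco.javacv.FFmpegFrameGrabber;
        import org.bytedeco.javacv.Frame;

        public class DecoderSideSkipping {
            public static void main(String[] args) throws Exception {
                // Placeholder URL; files and webcams take the same options.
                FFmpegFrameGrabber grabber =
                        new FFmpegFrameGrabber("rtsp://192.168.1.10/stream1");

                // AVFormatContext options (AVDictionary for avformat_open_input):
                grabber.setOption("rtsp_transport", "tcp"); // TCP instead of lossy UDP
                grabber.setOption("fflags", "nobuffer");    // AVFMT_FLAG_NOBUFFER: less demuxer buffering

                // AVCodecContext option: the decoder itself skips non-reference
                // frames (AVDISCARD_NONREF), saving their decode time entirely.
                grabber.setVideoOption("skip_frame", "noref");

                grabber.start();
                Frame frame;
                while ((frame = grabber.grabImage()) != null) { // video frames only
                    // hand the frame to the analysis routine here
                }
                grabber.stop();
            }
        }

    The trade-off matches the question's framing: skip_frame discards by frame class (non-reference, B-frames via "bidir", or everything but key frames via "nokey") rather than by clock, so a player-level check like the one above is still needed for hard real-time pacing; the gain is that skipped frames never incur the packet-to-frame decompression cost, which is exactly the saving the closing paragraph anticipates.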