
Other articles (57)
- Depositing media and themes via FTP
31 May 2013
The MediaSPIP tool also handles media transferred via FTP. If you prefer to upload this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
From the start you will find the following directories in your FTP space: config/: the site's configuration directory; IMG/: media already processed and online on the site; local/: the website's cache directory; themes/: custom themes or stylesheets; tmp/: working directory (...)
- Submitting improvements and additional plugins
10 April 2011
If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the official distribution will be considered.
You can use the development mailing list to announce it or to ask for help with writing the plugin. Since MediaSPIP is based on SPIP, you can also use SPIP's SPIP-zone mailing list to (...)
- Encoding and transformation into formats readable on the web
10 April 2011
MediaSPIP transforms and re-encodes uploaded documents so that they are readable on the web and automatically usable without any intervention by the content creator.
Videos are automatically encoded into the formats supported by HTML5: MP4, Ogv and WebM. The "MP4" version is also used for the fallback Flash player needed by older browsers.
Audio documents are likewise re-encoded into the two formats usable with HTML5: MP3 and Ogg. The "MP3" version (...)
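The excerpt does not give MediaSPIP's actual encoder settings; as an illustrative sketch only, producing those HTML5 renditions with ffmpeg could look something like this (the codec and quality choices below are assumptions, not MediaSPIP defaults):

    # one H.264/MP4, one VP8/WebM and one Theora/Ogv rendition of the same source
    ffmpeg -i source.mov -c:v libx264 -crf 23 -c:a aac out.mp4
    ffmpeg -i source.mov -c:v libvpx -crf 10 -b:v 1M -c:a libvorbis out.webm
    ffmpeg -i source.mov -c:v libtheora -q:v 6 -c:a libvorbis out.ogv
    # MP3 and Ogg Vorbis renditions of an audio document
    ffmpeg -i source.wav -c:a libmp3lame -q:a 2 out.mp3
    ffmpeg -i source.wav -c:a libvorbis -q:a 4 out.ogg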
On other sites (10144)
- How do I send a mediaStream from the electron renderer process to a background ffmpeg process?
26 July 2020, by Samamoma_Vadakopa
Goal (to avoid the XY problem):


I'm building a small linux desktop application using webRTC, electron, and create-react-app. The application should receive a mediaStream via a webRTC peer connection, display the stream to the user, create a virtual webcam device, and send the stream to the virtual webcam so it can be selected as the input on most major videoconferencing platforms.


Problem:


The individual parts all work: receiving the stream (webRTC), creating the webcam device (v4l2loopback), creating a child process of ffmpeg from within electron, passing the video stream to the ffmpeg process, streaming the video to the virtual device using ffmpeg, and selecting the virtual device and seeing the video stream in a videoconference meeting.
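For context, the ffmpeg-to-v4l2loopback part of that pipeline (raw frames piped in on stdin and written out to the virtual webcam) typically looks something like the sketch below; the device path, resolution and pixel format are placeholders, not details taken from the question:

    # raw frames arrive on stdin and are written to the v4l2loopback device
    ffmpeg -f rawvideo -pix_fmt yuv420p -video_size 1280x720 -framerate 30 -i - \
           -f v4l2 /dev/video10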


But I'm currently stuck on tying the parts together.
The problem is, the mediaStream object is available inside electron's renderer process (as state in a deeply nested react component, FWIW). As far as I can tell, I can only create a node.js child process of ffmpeg from within electron's main process. That implies that I need to get the mediaStream from the renderer to the main process. To communicate between processes, electron uses an IPC system. Unfortunately, it seems that IPC doesn't support sending a complex object like a video stream.


What I've tried:


- starting ffmpeg child process (using child_process.spawn) from within renderer process throws 'fs.fileexistssync' error. Browsing SO indicates that only the main process can start these background processes.
- creating separate webRTC connection between renderer and main to re-stream the video. I'm using IPC to facilitate the connection, but offer/answer descriptions aren't reaching the other peer over IPC - my guess is this is due to the same limitations on IPC as before.
My next step is to create a separate node server on app startup which ingests the incoming RTC stream and rebroadcasts it to the app's renderer process, as well as to a background ffmpeg process.


Before I try that, though, does anyone have suggestions for approaches I should consider? (this is my first SO question, so any advice on how to improve it is appreciated).


- Multiple live video outputs advice. Live stream/Record/Preview, FFMPEG, Windows, Decklink [closed]
18 September 2024, by stroltz
I am looking for advice on how best to achieve multiple live video outputs.


The live source is a Decklink card on Windows. (We have an ffmpeg build working to access the card.) We want 4 outputs:


- We want to run a preview window (low quality would be preferred) just so the user can see the video is working.
- We want to be able to live stream - single bit rate, RTMP. (goes up to a CDN)
- Independent from the streaming we want to be able to stop and start recording to file. Ideally using CRF. So a separate encode - but maybe we use the RTMP encode, not sure, and do 1 x encode only.
- We also want to save a separate audio file. Stops and starts at the same time as the video file above (if required we could do this as a post process on the video file we make above)
We want to keep CPU use down to a reasonable level (so no high-end hardware).


We have had a suggestion for this with ffmpeg:


Input >> ffmpeg


- split input to main and monitoring;
- scale monitoring stream to lower resolution
- encode both streams
- provide both outputs to local streaming server

ffmpeg >> local streaming server

- use API to start and stop recordings (or web console, if you do it manually)
- provide streams to CDN or/and provide access to your streams for end users
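As a rough sketch of that first ffmpeg stage only (not a tested command; the Decklink device name, bitrates, presets and the rtmp:// URLs of the local streaming server are placeholders), the split/scale/dual-encode step could look something like this on Windows:

    ffmpeg -f decklink -i "DeckLink Mini Recorder" ^
      -filter_complex "[0:v]split=2[main][mon];[mon]scale=640:-2[monout]" ^
      -map "[main]" -map 0:a -c:v libx264 -preset veryfast -b:v 4M -c:a aac -f flv rtmp://localhost/live/main ^
      -map "[monout]" -map 0:a -c:v libx264 -preset ultrafast -b:v 500k -c:a aac -f flv rtmp://localhost/live/preview

Both outputs would then go to whatever local streaming server is chosen, which handles recording start/stop and redistribution.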

recorded files >> another ffmpeg (controlled by some script that gets the RECORDING COMPLETED event to start the ffmpeg process)

- extract audio from recorded file
- save audio into file
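A rough sketch of that second stage, assuming the recording is an MP4 with AAC audio (file names are placeholders): the first command copies the audio track out without re-encoding, the second re-encodes it to MP3 if a different audio format is wanted.

    ffmpeg -i recording.mp4 -vn -c:a copy recording-audio.m4a
    ffmpeg -i recording.mp4 -vn -c:a libmp3lame -q:a 2 recording-audio.mp3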

Which sounds possible, but if we go that route, which local streaming server would work best (open source, API...)?


Or we are open to other ideas as to the best way.


https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs shows lots of ways, but I don't think you get to control the individual outputs independently.


- lavu/frame: Add Dolby Vision metadata side data type
3 January 2022, by Niklas Haas
lavu/frame: Add Dolby Vision metadata side data type
In order to be able to extend this struct later (as the Dolby Vision RPU
evolves), all of the 'container' structs are considered extensible, and
the individual constituent fields must instead be accessed via offsets.
The precedent for this style of access is set in
<libavutil/detection_bbox.h>.
Signed-off-by: Niklas Haas <git@haasn.dev>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>