
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011
Updated: July 2013
Language: English
Type: Text
Other articles (51)
-
Adding user-specific information and other author-related behaviour changes
12 April 2011 — The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you modify certain user-related behaviours (refer to its documentation for more information).
It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras. -
Publishing on MédiaSpip
13 June 2013 — Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out. -
Use, discuss, criticize
13 April 2011 — Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
On other sites (7768)
-
Is it possible to create a video from a live stream of bitmaps?
30 March 2015, by user3088260 — I have an application that gets a single frame from the webcam roughly every 200 ms as a bitmap. I have used AForge.Video.FFMPEG.VideoFileWriter to create a video from a few of them, but the problem is that the stream must be closed for the bitmaps to be written to file, and I want to keep adding bitmaps to the same file for days, like a CCTV camera. Is it possible to transcode live video from bitmaps using AForge? -
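Since AForge's VideoFileWriter only finalizes the file on close, a common workaround is to hand frames to a single long-running ffmpeg process over stdin instead, so the recording can stay open indefinitely. A minimal sketch in Python (the same idea carries over to C# via Process with RedirectStandardInput); ffmpeg_args and start_recorder are illustrative names, not AForge API:

```python
import subprocess

def ffmpeg_args(width, height, fps, out_path):
    # Raw BGR frames arrive on stdin; ffmpeg encodes and appends to the
    # output for as long as the pipe stays open (days, like a CCTV feed).
    return [
        "ffmpeg", "-y",
        "-f", "rawvideo", "-pix_fmt", "bgr24",
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "-",                 # read frames from stdin
        "-c:v", "libx264", "-preset", "veryfast",
        out_path,
    ]

def start_recorder(width=640, height=480, fps=5, out_path="cctv.mp4"):
    # Keep the returned process around; write each frame's raw bytes to
    # proc.stdin as it arrives, and only close stdin when done for good.
    return subprocess.Popen(ffmpeg_args(width, height, fps, out_path),
                            stdin=subprocess.PIPE)
```

At ~1 frame per 200 ms the nominal rate is 5 fps, which is why the sketch defaults to that; ffmpeg duplicates or drops frames if the actual arrival rate drifts.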
Independent MP4 fragments for live stream
20 April 2020, by J Doe — My goal is to stream an H264 live-generated infinite feed (webcam) to the browser.

Specifically, I want to stream a Raspberry Pi camera to the browser.

(Disclaimer: I'm streaming an H264 file right now; it's to be replaced with the Pi webcam.)

So, I have an H264 stream which I pipe to ffmpeg to mux into MP4 fragments (movflags is set to frag_keyframe+empty_moov+default_base_moof), and then I send the fragments over a websocket (on the server I extract the fragments and send each one as a message containing moof+mdat; I didn't figure out how to do it with HTTP progressive download, because the stream is live-generated), and then they're played in the browser using the MediaSource API.
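The moof+mdat extraction step described above can be sketched as a minimal MP4 box splitter (Python; function names are illustrative, not the asker's code). Every top-level box in fragmented MP4 starts with a 4-byte big-endian size followed by a 4-byte type, so fragments can be cut out of the byte stream directly:

```python
import struct

def iter_boxes(data):
    # Yield (type, start, end) for each complete top-level MP4 box in
    # `data`. A box header is a 4-byte big-endian size + 4-byte type.
    off = 0
    while off + 8 <= len(data):
        size = struct.unpack(">I", data[off:off + 4])[0]
        btype = data[off + 4:off + 8].decode("ascii")
        if size < 8 or off + size > len(data):
            break  # incomplete box: wait for more bytes from ffmpeg
        yield btype, off, off + size
        off += size

def split_fragments(data):
    # Group boxes into websocket messages: the init segment (ftyp+moov)
    # first, then one message per moof+mdat pair.
    messages, current = [], b""
    for btype, start, end in iter_boxes(data):
        current += data[start:end]
        if btype in ("moov", "mdat"):   # closes init segment or a fragment
            messages.append(current)
            current = b""
    return messages
```

In a live server this would run incrementally over the ffmpeg stdout buffer rather than over one complete byte string, but the box-boundary logic is the same.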



It works, but there's one problem: the video doesn't play unless it has ALL the fragments.

I tried sending the initialization fragment and then only newly generated fragments, but it errors out and doesn't play.

How can I make it so that someone can join in the middle of the livestream, without having been there from the very start of the recording (or, in my testing, from the middle of the file stream)?

In other words, how do I make the fragments independent of each other?
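The usual server-side shape for late joiners is to cache the init segment and replay it to each new client before live fragments; with frag_keyframe every fragment begins on a keyframe, and default_base_moof makes offsets relative to each moof, so decoding can start at any fragment after the init. A hedged sketch (class and method names are illustrative; a real player may also need the tfdt timestamp of the first delivered fragment handled on the MediaSource side):

```python
class FragmentRelay:
    # Minimal relay: remember the init segment (ftyp+moov) and replay it
    # to every late joiner before forwarding live moof+mdat fragments.
    def __init__(self):
        self.init_segment = None
        self.clients = []

    def on_fragment(self, data, is_init=False):
        if is_init:
            self.init_segment = data  # cache once, replay per client
            return
        for send in self.clients:
            send(data)

    def add_client(self, send):
        # New viewer: init segment first, then whatever arrives next.
        if self.init_segment is not None:
            send(self.init_segment)
        self.clients.append(send)
```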



P.S. Any extra info will help; I'm new to this.


-
Live streaming and simultaneous local/server video saving with Insta360/Theta 360 camera [closed]
13 August 2023, by Fornow — I'm currently working on a project that involves live streaming video from a 360 camera, specifically the Insta360 and Theta models, while also saving the streamed video either locally or on a remote server. I'm relatively new to both live streaming and working with 360 cameras, so I'm seeking guidance on the best approach to achieve this.


My primary goals are as follows:

- Live streaming: I want to be able to stream the real-time video captured by the 360 camera to a web platform or application, allowing users to experience the immersive 360 content as it happens.
- Simultaneous video saving: In addition to live streaming, I also need to save the streamed video, either locally on the device running the streaming process or on a remote server. The saved video should ideally retain its 360 nature and high-quality resolution.

I've been researching various technologies and frameworks like WebRTC for live streaming, but I'm unsure about the compatibility and best practices when dealing specifically with 360 cameras like Insta360 and Theta. Additionally, I'm uncertain about the most efficient way to save the streamed video while maintaining its immersive properties.


If anyone has experience with live streaming from 360 cameras and simultaneously saving the content, could you please provide insights into the following:


- Recommended libraries, SDKs, or frameworks for live streaming 360 video from Insta360 or Theta cameras.
- Tips for ensuring the streamed video retains its 360 attributes and high quality.
- Best practices for saving the streamed video either locally or on a remote server while the live stream is ongoing.
Any code examples, tutorials, or step-by-step guides would be greatly appreciated. Thank you in advance for your help!
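For the "stream and save at the same time" goal, one common starting point is ffmpeg's tee muxer, which encodes once and duplicates the result to several outputs; since the 360 feed is just an equirectangular video frame, both copies keep the full 360 image. A hedged sketch that builds such a command (the URLs, paths, and helper name are placeholders, not from the question):

```python
def tee_command(input_url, rtmp_url, record_path):
    # One encode, two outputs: ffmpeg's tee muxer sends the encoded
    # stream to an RTMP endpoint (live) and to a local MP4 (recording).
    tee_spec = f"[f=flv]{rtmp_url}|[f=mp4]{record_path}"
    return [
        "ffmpeg", "-i", input_url,
        "-c:v", "libx264", "-preset", "veryfast",
        "-c:a", "aac",
        "-map", "0:v", "-map", "0:a",
        "-f", "tee", tee_spec,
    ]
```

Whether the camera exposes its feed as RTSP, USB/UVC, or a vendor SDK stream varies between Insta360 and Theta models, so the input side here is only an assumption.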


-