
Media (1)
-
Collections - Quick creation form
19 February 2013, by
Updated: February 2013
Language: French
Type: Image
Other articles (53)
-
Permissions overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors can edit their own information on the authors page
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your Médiaspip installation is version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.
-
Support for all media types
10 April 2011
Unlike many modern document-sharing software packages and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and other formats (open office, microsoft office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
On other sites (10509)
-
How to make MediaRecorder API individual chunks playable by themselves
10 August 2024, by tedGuy
I'm trying to send individual chunks to the server instead of sending all of the chunks at once. That way, FFmpeg on my Rails server can convert each chunk to HLS and upload it to S3 so the video can be streamed almost instantly. However, I've run into an issue: MediaRecorder only produces a self-contained, playable chunk for the first segment; the chunks after that are not playable on their own and have to be concatenated before they will play.
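For reference, here is a rough sketch (the names are mine, not from the question) of the timeslice behaviour described above: only the first Blob that MediaRecorder emits is a self-contained WebM file, while later Blobs lack the container header and only play after being appended to the earlier ones.

async function recordWithTimeslice(uploadChunk: (chunk: Blob) => void) {
  // Hypothetical capture source; the question actually combines separate screen and audio streams.
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm;codecs=vp8" });

  recorder.ondataavailable = (event) => {
    // Only the first event.data is playable on its own; later Blobs must be
    // concatenated after the previous ones before they can be played.
    if (event.data.size > 0) uploadChunk(event.data); // uploadChunk is a placeholder callback
  };

  recorder.start(3000); // request a dataavailable event roughly every 3 seconds
  return recorder;
}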


To avoid this, I've taken a different approach: I start a new MediaRecorder every 3 seconds so that every chunk is playable on its own. However, this approach has its own problem: the video glitches slightly because of the delay when I stop one MediaRecorder and start the next. Is there a simpler way to achieve this? This is my current code; please help!


const startVideoRecording = async (
  screenStream: MediaStream,
  audioStream: MediaStream
) => {
  setStartingRecording(true);

  try {
    const res = await getVideoId();
    videoId.current = res;
  } catch (error) {
    console.log(error);
    return;
  }

  const outputStream = new MediaStream();
  outputStream.addTrack(screenStream.getVideoTracks()[0]);
  outputStream.addTrack(audioStream.getAudioTracks()[0]); // Add audio track

  const mimeTypes = [
    "video/webm;codecs=h264",
    "video/webm;codecs=vp9",
    "video/webm;codecs=vp8",
    "video/webm",
    "video/mp4",
  ];

  let selectedMimeType = "";
  for (const mimeType of mimeTypes) {
    if (MediaRecorder.isTypeSupported(mimeType)) {
      selectedMimeType = mimeType;
      break;
    }
  }

  if (!selectedMimeType) {
    console.error("No supported mime type found");
    return;
  }

  const videoRecorderOptions = {
    mimeType: selectedMimeType,
  };

  let chunkIndex = 0;

  const startNewRecording = () => {
    // Stop the current recorder if it's running
    if (
      videoRecorderRef.current &&
      videoRecorderRef.current.state === "recording"
    ) {
      videoRecorderRef.current.stop();
    }

    // Create a new MediaRecorder instance
    const newVideoRecorder = new MediaRecorder(
      outputStream,
      videoRecorderOptions
    );
    videoRecorderRef.current = newVideoRecorder;

    newVideoRecorder.ondataavailable = async (event) => {
      if (event.data.size > 0) {
        chunkIndex++;
        totalSegments.current++;
        const segmentIndex = totalSegments.current;
        console.log(event.data);
        handleUpload({ segmentIndex, chunk: event.data });
      }
    };

    // Start recording; the interval below restarts the recorder every 3 seconds
    newVideoRecorder.start();
  };

  // Start the first recording
  startNewRecording();

  // Set up an interval to restart the recording every 3 seconds
  recordingIntervalIdRef.current = setInterval(() => {
    startNewRecording();
  }, 3000);

  setIsRecording(true);
  setStartingRecording(false);
};
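As context for the server-side step the question mentions (converting each uploaded chunk to HLS with FFmpeg before pushing it to S3), here is a minimal, purely illustrative sketch. It is written in TypeScript/Node rather than the Rails stack the question actually uses, and the paths and option values are assumptions.

import { spawn } from "node:child_process";

// Transcode one uploaded WebM chunk into an HLS playlist plus segments.
function convertChunkToHls(inputPath: string, outputDir: string): void {
  const ffmpeg = spawn(
    "ffmpeg",
    [
      "-i", inputPath,               // e.g. "uploads/segment-1.webm" (hypothetical path)
      "-c:v", "libx264",             // HLS MPEG-TS segments expect H.264 video...
      "-c:a", "aac",                 // ...and AAC audio, so the WebM chunk is transcoded
      "-f", "hls",
      "-hls_time", "3",              // target segment duration, matching the 3-second chunks
      "-hls_playlist_type", "event", // keep every segment listed as the stream grows
      `${outputDir}/index.m3u8`,
    ],
    { stdio: "inherit" }
  );

  ffmpeg.on("close", (code) => {
    if (code !== 0) console.error(`ffmpeg exited with code ${code}`);
  });
}

Whether this yields an instantly streamable result still depends on each uploaded chunk being a valid standalone file, which is exactly the problem described above.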



-
Capture multiple individual streams with ALSA/FFmpeg
3 November 2017, by user3170450
There are similar questions out there, but I couldn't map them to my use case. As I am new to ALSA, I would like to explain my use case first.
I am opening my application in Google Chrome and I need to capture the audio being played in that Chrome window. I did this successfully by capturing the speaker output with the following command:
ffmpeg -f pulse -i alsa_output.pci-0000_00_1b.0.analog-stereo.monitor -ac 1 -ar 16000 test.wav
But the real problem is that I will be opening multiple windows at the same time, and I need to capture each audio stream separately. There is something like a virtual audio device, but I don't know how to configure and use one for my use case.
Please guide me in the right direction.
Just as a note, my system has only one physical sound card, which I am already capturing for one window.
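Since the command above already records through the pulse input, one common reading of the "virtual audio device" idea is a PulseAudio null sink per window: create the sink, move that window's playback stream onto it, then record the sink's monitor source. The sketch below is only an illustration, written in TypeScript/Node to match the rest of this page; the sink name and the sink-input index are placeholders you would look up with pactl list short sink-inputs.

import { execFileSync, spawn } from "node:child_process";

const sinkName = "chrome_window_1"; // hypothetical name for the per-window virtual sink

// 1. Create a virtual (null) sink for this window's audio.
execFileSync("pactl", ["load-module", "module-null-sink", `sink_name=${sinkName}`]);

// 2. Move the Chrome window's playback stream onto that sink.
//    "42" is a placeholder index taken from `pactl list short sink-inputs`.
execFileSync("pactl", ["move-sink-input", "42", sinkName]);

// 3. Record only that sink's monitor source, mirroring the command in the question.
spawn(
  "ffmpeg",
  ["-f", "pulse", "-i", `${sinkName}.monitor`, "-ac", "1", "-ar", "16000", "window1.wav"],
  { stdio: "inherit" }
);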
-
Fragmented MP4: TrackFragHeader TFHD must have TrackID, is this in the spec?
27 July 2021, by Penquin
I'm building a fragmented MP4 muxer and noticed that the track ID is repeated inside the TFHD.
If the video does not have this undocumented track ID, it simply will not play.


Here's an example of a muxer adding it:
https://github.com/edgeware/mp4ff/blob/bb9320744777dc97f18034c8aed45a9bcdbaa995/mp4/tfhd.go#L154


I was relying on the open spec provided by Microsoft:
https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-sstr/513ea48c-9a57-4792-a32a-fb6202ce2a58


Is this an addition to the spec? Is the spec provided by Microsoft wrong?
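For context, ISO/IEC 14496-12 defines track_ID as the first, mandatory field of the TrackFragmentHeaderBox, immediately after the version/flags word; the fields that follow are optional and gated by tf_flags. The snippet below is a minimal illustrative serializer of that layout, not tied to any particular muxer.

// Sketch of the `tfhd` box layout from ISO/IEC 14496-12 (TrackFragmentHeaderBox):
// size | "tfhd" | version + tf_flags | track_ID | optional fields selected by tf_flags.
// Here tf_flags = 0, so no optional fields are written.
function buildTfhd(trackId: number): Uint8Array {
  const size = 16; // 4 (size) + 4 (type) + 4 (version/flags) + 4 (track_ID)
  const buf = new Uint8Array(size);
  const view = new DataView(buf.buffer);

  view.setUint32(0, size);              // box size
  buf.set([0x74, 0x66, 0x68, 0x64], 4); // box type "tfhd"
  view.setUint32(8, 0x00000000);        // version = 0, tf_flags = 0
  view.setUint32(12, trackId);          // track_ID, the mandatory field the question asks about
  return buf;
}

For example, buildTfhd(1) produces a 16-byte box whose last four bytes are 00 00 00 01, i.e. track_ID = 1.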