
Other articles (82)
-
Customizing by adding your logo, banner or background image
5 September 2013. Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
The SPIPmotion queue
28 November 2010. A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table contains the following fields (a rough sketch of a row is given below): id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will automatically be attached; objet, the type of object to which (...)
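As an aside (this sketch is not part of the article, and the column types are assumptions), a row of this table could be modelled roughly as follows:

// Rough shape of one row of spip_spipmotion_attentes, based only on the
// fields named in the excerpt above; the types are assumptions, and the
// real table contains further columns not listed here.
interface SpipmotionAttente {
  id_spipmotion_attente: number; // unique numeric id of the task to process
  id_document: number;           // numeric id of the original document to encode
  id_objet: number;              // id of the object the encoded document is attached to
  objet: string;                 // type of that object
}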
-
Videos
21 April 2011. Like "audio" documents, Mediaspip displays videos whenever possible using the HTML5 <video> tag.
One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name no names) and that each browser natively supports only certain video formats (a quick support check is sketched below).
Its main advantage, on the other hand, is that native video support in browsers makes it possible to do without Flash and (...)
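As an aside (not part of the original article), a short TypeScript sketch of how a page can probe which formats the current browser's <video> element claims to support, using the standard canPlayType method:

// Probe native <video> format support in the current browser.
// canPlayType returns "probably", "maybe" or "" (no support).
const probe = document.createElement("video");

const formats: Record<string, string> = {
  "WebM (VP8/Vorbis)": 'video/webm; codecs="vp8, vorbis"',
  "MP4 (H.264/AAC)": 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"',
  "Ogg (Theora/Vorbis)": 'video/ogg; codecs="theora, vorbis"',
};

for (const [label, mimeType] of Object.entries(formats)) {
  console.log(`${label}: ${probe.canPlayType(mimeType) || "not supported"}`);
}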
On other sites (6876)
-
How to make MediaRecorder API individual chunks playable by themselves
10 August 2024, by tedGuy. I'm trying to send individual chunks to the server instead of sending the whole recording at once. That way, FFmpeg on my Rails server can convert each chunk to HLS and upload it to S3 so the video can be streamed almost instantly. However, I've run into an issue: the MediaRecorder only produces a playable chunk for the first segment; after that, the chunks are not playable on their own and have to be concatenated before they will play.


To avoid this, I've taken a different approach: I start a new MediaRecorder every 3 seconds so that every chunk I get is playable on its own. However, this approach has its own issue: the video glitches slightly because of the delay when I stop one MediaRecorder and start the next. Is there an easier way to achieve this? This is my current code, please help!


const startVideoRecording = async (
  screenStream: MediaStream,
  audioStream: MediaStream
) => {
  setStartingRecording(true);

  // Fetch an id for this recording before anything else.
  try {
    const res = await getVideoId();
    videoId.current = res;
  } catch (error) {
    console.log(error);
    return;
  }

  // Combine the screen video track and the audio track into one stream.
  const outputStream = new MediaStream();
  outputStream.addTrack(screenStream.getVideoTracks()[0]);
  outputStream.addTrack(audioStream.getAudioTracks()[0]); // Add audio track

  const mimeTypes = [
    "video/webm;codecs=h264",
    "video/webm;codecs=vp9",
    "video/webm;codecs=vp8",
    "video/webm",
    "video/mp4",
  ];

  // Pick the first mime type this browser's MediaRecorder supports.
  let selectedMimeType = "";
  for (const mimeType of mimeTypes) {
    if (MediaRecorder.isTypeSupported(mimeType)) {
      selectedMimeType = mimeType;
      break;
    }
  }

  if (!selectedMimeType) {
    console.error("No supported mime type found");
    return;
  }

  const videoRecorderOptions = {
    mimeType: selectedMimeType,
  };

  let chunkIndex = 0;

  const startNewRecording = () => {
    // Stop the current recorder if it's running; stopping makes it emit
    // its final, self-contained (playable) blob via ondataavailable.
    if (
      videoRecorderRef.current &&
      videoRecorderRef.current.state === "recording"
    ) {
      videoRecorderRef.current.stop();
    }

    // Create a new MediaRecorder instance for the next segment.
    const newVideoRecorder = new MediaRecorder(
      outputStream,
      videoRecorderOptions
    );
    videoRecorderRef.current = newVideoRecorder;

    newVideoRecorder.ondataavailable = async (event) => {
      if (event.data.size > 0) {
        chunkIndex++;
        totalSegments.current++;
        const segmentIndex = totalSegments.current;
        console.log(event.data);
        handleUpload({ segmentIndex, chunk: event.data });
      }
    };

    // Start recording; the interval below stops and replaces this recorder
    // every 3 seconds, so each segment is an independent recording.
    newVideoRecorder.start();
  };

  // Start the first recording
  startNewRecording();

  // Set up an interval to restart the recording every 3 seconds
  recordingIntervalIdRef.current = setInterval(() => {
    startNewRecording();
  }, 3000);

  setIsRecording(true);
  setStartingRecording(false);
};
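The handleUpload helper called in the code above is not shown in the question. As a hypothetical illustration only (the endpoint path, field names and helper shape are my assumptions, not part of the question), an upload helper for these self-contained segments might look like this:

// Hypothetical upload helper: posts one self-contained segment to the server.
// The endpoint and field names are assumptions; videoId is the ref set by
// getVideoId() in the recording code above.
const handleUpload = async ({
  segmentIndex,
  chunk,
}: {
  segmentIndex: number;
  chunk: Blob;
}) => {
  const formData = new FormData();
  formData.append("segment_index", String(segmentIndex));
  formData.append("chunk", chunk, `segment-${segmentIndex}.webm`);

  try {
    // The Rails server would run FFmpeg on each uploaded chunk and push the
    // resulting HLS segments to S3.
    await fetch(`/videos/${videoId.current}/segments`, {
      method: "POST",
      body: formData,
    });
  } catch (error) {
    console.error(`Upload of segment ${segmentIndex} failed`, error);
  }
};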



-
Capture multiple individual streams ALSA FFMPEG
3 November 2017, by user3170450. There are similar questions available, but I couldn't work out how they apply to my use case. Since I am new to ALSA, let me explain my use case first.
I open my application in Google Chrome and I need to capture the audio being played in that Chrome window. I managed to do this by capturing the speaker output with the following command:
ffmpeg -f pulse -i alsa_output.pci-0000_00_1b.0.analog-stereo.monitor -ac 1 -ar 16000 test.wav
But the real problem is that I will be opening multiple windows at the same time, and I need to capture each audio stream separately. I know there is such a thing as a virtual audio device, but I don't know how to configure and use one for my use case (a possible setup is sketched after this question).
Please point me in the right direction.
Just to note: my system has only one physical sound card, which I am currently capturing for a single window.
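One possible direction (an assumption on my part rather than something stated in the question) is to create one PulseAudio null sink per Chrome window, move each window's playback stream onto its own sink, and then record each sink's monitor source separately, in the same style as the command above:
pactl load-module module-null-sink sink_name=chrome_win1
pactl move-sink-input <sink-input-index> chrome_win1
ffmpeg -f pulse -i chrome_win1.monitor -ac 1 -ar 16000 win1.wav
The sink-input index of each Chrome window can be found with pactl list sink-inputs; repeating the steps with a second null sink (chrome_win2, and so on) gives one separate capture per window.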
-
lavf/mpegts: add supplementary audio descriptor
20 February 2018, by Stefan Pöschel
lavf/mpegts: add supplementary audio descriptor
The supplementary audio descriptor is defined in ETSI EN 300 468 and provides more details regarding accessibility audio tracks; in particular, the normative annex J contains a detailed description of its use.
Its language code (if present) overrides the language code of an ISO 639 language descriptor that is also present.
Note that this also changes the priority of multiple descriptors with language from "the last descriptor with language within the ES loop" to "specific descriptor over general ISO 639 descriptor".
Signed-off-by: Aman Gupta <aman@tmm1.net>