
Media (1)
-
The conservation of net art in the museum. The strategies at work
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (61)
-
Contribute to documentation
13 April 2011 — Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation by users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; and translations of existing documentation into other languages.
To contribute, register on the project users' mailing (...) -
HTML5 audio and video support
10 April 2011 — MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player used was created specifically for MediaSPIP: it is fully customizable graphically to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...) -
HTML5 audio and video support
13 April 2011 — MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (7876)
-
mp4 Vj Animation video lagging hi res video
21 February 2020, by Ryan Stone — I am trying to get a video to play inside a video tag in the top left-hand corner of my page. It loads OK, the resolution is good and it seems to be looping, but it is lagging badly, definitely not achieving 60fps. It is in mp4 format and the original mp4's resolution is 1920x1080; it is a high-resolution free VJ loop called GlassVein, which you can find by searching on YouTube. Right-clicking Properties gives the following information:
Bitrate: 127 kbps
Data rate: 11270 kbps
Total bitrate: 11398 kbps
Audio sample rate: 44 kHz
File type: VLC media file (.mp4)
(but I do not want or need the audio). It also says 30fps, but I'm not sure I believe that, as it runs smooth as butter in VLC media player with no lagging, just a smooth looping animation.
I have searched https://trac.ffmpeg.org/wiki/Encode/AAC for encoding information, but it is complete gobbledygook to me; I don't understand a word it's saying.
My code so far is as follows:
<video src="GlassVeinColorful.mp4" autoplay="1" preload="auto" class="Vid" width="640" height="360" loop="1" viewport="" faststart="faststart" mpeg4="mpeg4" 320x240="320x240" 1080="1080" 128k="128k">
</video>

Does anyone know why this is lagging so much, or what I could do about it?
It is a quality animation and I don't really want to lose any of its resolution or crispness. The -s section was originally set to 1920x1080, as that is what the original file is, but I have changed it to try to render it quicker... Any helpful sites, articles or answers would be great.
2020 Update
The solution to this problem was to convert the video to WebM, then use JavaScript and an HTML5 canvas element to render the video to the page, instead of embedding it with the video tag.
Html
<section>
  <video src="Imgs/Vid/PurpGlassVein.webm" type="video/webm" width="684" height="auto" muted="muted" loop="loop" autoplay="autoplay"></video>
  <canvas style="filter:opacity(0);"></canvas>
</section>

Css
video {
  display: none !important;
  visibility: hidden;
}

Javascript
const Canv = document.querySelector("canvas");
const Video = document.querySelector("video");
const Ctx = Canv.getContext("2d");
Video.addEventListener('play', () => {
  function step() {
    Ctx.drawImage(Video, 0, 0, Canv.width, Canv.height);
    requestAnimationFrame(step);
  }
  requestAnimationFrame(step);
});
Canv.animate({
  filter: ['opacity(0) blur(5.28px)', 'opacity(1) blur(8.20px)']
}, {
  duration: 7288,
  fill: 'forwards',
  easing: 'ease-in',
  iterations: 1,
  delay: 728
});

I've also used the vanilla JavaScript .animate() API to fade the element into the page when the page loads. One caveat is that both the canvas and the off-screen video tag must match the original video's resolution, otherwise it starts to lag again; however, you can use CSS to scale it down via transform:scale(0.5);, which doesn't seem to affect performance at all.
It runs smooth as butter and doesn't lose any of the high-resolution image. I added a slight 0.34px blur onto it as well to smooth it even more. I could possibly still have used ffmpeg to get a better (smaller file size) WebM output file, but that's something I'll have to look into at a later date.
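As a rough illustration of that last idea (my own sketch, not part of the original post), the MP4-to-WebM conversion could be scripted with the fluent-ffmpeg Node wrapper; the file names and CRF value below are assumptions:

// Sketch only: convert the original MP4 loop to a smaller, silent WebM (VP9).
// Paths and quality settings are placeholders, not taken from the original post.
const ffmpeg = require('fluent-ffmpeg');

ffmpeg('GlassVeinColorful.mp4')
  .noAudio()                            // the audio track is not wanted
  .videoCodec('libvpx-vp9')             // VP9 video in a WebM container
  .outputOptions(['-crf 33', '-b:v 0']) // constant-quality mode; a higher CRF gives a smaller file
  .on('end', () => console.log('WebM written'))
  .on('error', (err) => console.error(err.message))
  .save('GlassVein.webm');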
-
Detect volume via mic, start recording, end on silence, transcribe and send to endpoint
15 June 2023, by alphadmon — I have been attempting to get this to work in many ways, but I can't seem to get it right. Most of the time I get part of it to work, and then when I try to make other parts work, I generally break other things.


I am intercepting the volume coming from the mic, and if it is louder than 50, I start a recording. I then keep recording until there is silence; if the silence lasts 5 seconds, I stop the recording.


I then send the recording to be transcribed by whisper using the OpenAI API.

Once that is returned, I then want to send it to the OpenAI chat endpoint and get the response.


After that, I would like to start listening again.


Here is what I have sort of working so far, but the recording is always an empty file:


// DETECT SPEECH
const recorder = require('node-record-lpcm16');

// TRANSCRIBE
const fs = require("fs");
const ffmpeg = require("fluent-ffmpeg");
const mic = require("mic");
const { Readable } = require("stream");
const ffmpegPath = require("@ffmpeg-installer/ffmpeg").path;
require('dotenv').config();

// CHAT
const { Configuration, OpenAIApi } = require("openai");

// OPEN AI
const configuration = new Configuration({
  organization: process.env.OPENAI_ORG,
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

// SETUP
ffmpeg.setFfmpegPath(ffmpegPath);

// VARS
let isRecording = false;
const audioFilename = 'recorded_audio.wav';
const micInstance = mic({
  rate: '16000',
  channels: '1',
  fileType: 'wav',
});

// DETECT SPEECH
const file = fs.createWriteStream('determine_speech.wav', { encoding: 'binary' });
const recording = recorder.record();
recording.stream().pipe(file);


recording.stream().on('data', async (data) => {
  let volume = parseInt(calculateVolume(data));
  if (volume > 50 && !isRecording) {
    console.log('You are talking.');
    await recordAudio(audioFilename);
  } else {
    setTimeout(async () => {
      console.log('You are quiet.');
      micInstance.stop();
      console.log('Finished recording');
      const transcription = await transcribeAudio(audioFilename);
      console.log('Transcription:', transcription);
      setTimeout(async () => {
        await askAI(transcription);
      }, 5000);
    }, 5000);
  }
});

function calculateVolume(data) {
  let sum = 0;

  for (let i = 0; i < data.length; i += 2) {
    const sample = data.readInt16LE(i);
    sum += sample * sample;
  }

  const rms = Math.sqrt(sum / (data.length / 2));

  return rms;
}

// TRANSCRIBE
function recordAudio(filename) {
  const micInputStream = micInstance.getAudioStream();
  const output = fs.createWriteStream(filename);
  const writable = new Readable().wrap(micInputStream);

  console.log('Listening...');

  writable.pipe(output);

  micInstance.start();

  micInputStream.on('error', (err) => {
    console.error(err);
  });
}

// Transcribe audio
async function transcribeAudio(filename) {
  const transcript = await openai.createTranscription(
    fs.createReadStream(filename),
    "whisper-1",
  );
  return transcript.data.text;
}

// CHAT
async function askAI(text) {
  let completion = await openai.createChatCompletion({
    model: "gpt-4",
    temperature: 0.2,
    stream: false,
    messages: [
      { role: "user", content: text },
      { role: "system", content: "Act like you are a rude person." }
    ],
  });

  completion = JSON.stringify(completion.data, null, 2);
  console.log(completion);
}
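
One thing that stands out in the code above is that isRecording is declared but never set to true, so the else branch schedules a stop/transcribe timer on every quiet chunk, and recordAudio() can be re-entered on every loud chunk. As a hedged sketch (my own, reusing the recording, micInstance and helper functions defined above; silenceTimer is a name I introduce), the gating could look like this:

// Sketch only: gate recording with the isRecording flag and a single silence timer.
// Reuses recording, micInstance, audioFilename, calculateVolume, recordAudio,
// transcribeAudio and askAI from the code above; silenceTimer is an assumption.
let silenceTimer = null;

recording.stream().on('data', (data) => {
  const volume = calculateVolume(data);

  if (volume > 50) {
    if (!isRecording) {
      isRecording = true;            // mark that an utterance is in progress
      recordAudio(audioFilename);    // start writing the mic stream exactly once
    }
    if (silenceTimer) {              // still talking: cancel any pending stop
      clearTimeout(silenceTimer);
      silenceTimer = null;
    }
  } else if (isRecording && !silenceTimer) {
    // Quiet: only stop if it stays quiet for a full 5 seconds.
    silenceTimer = setTimeout(async () => {
      micInstance.stop();
      isRecording = false;
      silenceTimer = null;
      const transcription = await transcribeAudio(audioFilename);
      console.log('Transcription:', transcription);
      await askAI(transcription);
    }, 5000);
  }
});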



-
Live stream failed on Youtube
20 September 2023, by kuldeep chopra — We are currently running an AWS container that uses FFmpeg for live streaming while concurrently saving the stream as an MP4 file. However, we encountered an issue during a recent live-stream session: it was functioning correctly until approximately 13 minutes in, at which point FFmpeg reported the following error logs:


ffmpeg [flv @ 0x555c4bdfe680] Failed to update header with correct duration.
2023-09-07T23:06:38.490+05:30 [flv @ 0x555c4bdfe680] Failed to update header with correct filesize.
2023-09-07T23:06:38.491+05:30 failed => rtmp://a.rtmp.youtube.com/live2/key......
2023-09-07T23:06:38.491+05:30 ffmpeg [tee @ 0x555c48843700] Slave muxer #1 failed: Broken pipe, continuing with 1/2 slaves.



It's worth noting that we have conducted hundreds of live streams on YouTube without encountering this error before; this is the first occurrence of such an issue.


We would greatly appreciate it if you could investigate and provide insights into the underlying problem causing these error messages.
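
For context on the kind of command the log implies (a single FFmpeg process whose tee muxer feeds both the YouTube RTMP ingest and a local MP4), here is a rough sketch in Node with fluent-ffmpeg; the input path, stream key and output name are placeholders, and onfail=ignore is only one way to let the MP4 slave survive an RTMP broken pipe, not a confirmed fix for the error above:

// Sketch only: one encode fanned out by the tee muxer to YouTube RTMP and a local MP4.
// Input path, stream key and archive name are placeholders, not from the original post.
const ffmpeg = require('fluent-ffmpeg');

ffmpeg('input_source.flv')                     // hypothetical live input
  .videoCodec('libx264')
  .audioCodec('aac')
  .outputOptions(['-map 0', '-f tee'])         // hand the encoded streams to the tee muxer
  .output('[f=flv:onfail=ignore]rtmp://a.rtmp.youtube.com/live2/STREAM_KEY|[f=mp4]archive.mp4')
  .on('stderr', (line) => console.log(line))   // keep FFmpeg's own log for diagnosis
  .on('error', (err) => console.error('ffmpeg exited:', err.message))
  .run();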