
Media (1)
-
GetID3 - File information block
9 April 2013, by
Updated: May 2013
Language: French
Type: Image
Other articles (62)
-
Sites built with MediaSPIP
2 May 2011, by
This page presents some of the sites running on MediaSPIP.
You can of course add your own using the form at the bottom of the page.
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
-
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (8176)
-
Create a timelapse video using the MediaRecorder API (and ffmpeg?)
24 August 2022, by The Blind Hawk

Summary


I have a version of my code already working on Chrome and Edge (Mac, Windows, and Android), but I need some fixes for it to work on iOS (Safari/Chrome).

My objective is to record around 25 minutes and download a timelapse version of the recording.

Final product requirements:

speed: 3 fps
length: ~25 s

(I need to record one frame every 20 seconds for 25 minutes.)
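As a quick sanity check on those numbers (not part of the app code):

// 25 minutes of capture, keeping one frame every 20 seconds,
// played back at 3 fps, gives roughly a 25-second clip.
const captureSeconds = 25 * 60;          // 1500 s of real time
const framesKept = captureSeconds / 20;  // 75 frames
const clipSeconds = framesKept / 3;      // 25 s at 3 fps
console.log(framesKept, clipSeconds);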



this.secondStream settings:


this.secondStream = await navigator.mediaDevices.getUserMedia({
 audio: false,
 video: {width: 430, height: 430, facingMode: "user"}
});



My code for iOS so far:


startIOSVideoRecording: function() {
  console.log("setting up recorder");
  var self = this;
  this.data = [];

  if (MediaRecorder.isTypeSupported('video/mp4')) {
    // iOS does not support webm, so I will be using mp4
    var options = {mimeType: 'video/mp4', videoBitsPerSecond: 1000000};
  } else {
    console.log("ERROR: mp4 is not supported, trying to default to webm");
    var options = {mimeType: 'video/webm'};
  }
  console.log("options settings:");
  console.log(options);

  this.recorder = new MediaRecorder(this.secondStream, options);

  this.recorder.ondataavailable = function(evt) {
    if (evt.data && evt.data.size > 0) {
      self.data.push(evt.data);
      console.log('chunk size: ' + evt.data.size);
    }
  };

  this.recorder.onstop = function(evt) {
    console.log('recorder stopping');
    var blob = new Blob(self.data, {type: "video/mp4"});
    self.download(blob, "mp4");
    self.sendMail(blob);
  };

  console.log("finished setup, starting");
  this.recorder.start(1200);

  function sleep(ms) { return new Promise(resolve => setTimeout(resolve, ms)); }

  async function looper() {
    // trying to keep roughly one second of footage out of every 20
    await sleep(500);
    self.recorder.pause();
    await sleep(18000);
    self.recorder.resume();
    looper();
  }
  looper();
},



Issues


Only one call to getUserMedia()


I am already using this.secondStream elsewhere, and I need its settings to stay as they are for the other functionality.

On Chrome and Edge, I could just call getUserMedia() again with different settings and the issue would be solved, but on iOS calling getUserMedia() a second time kills the first stream.

The settings that I was planning to use (they work for Chrome and Edge):

navigator.mediaDevices.getUserMedia({
  audio: false,
  video: {
    width: 360, height: 240, facingMode: "user",
    frameRate: { min: 0, ideal: 0.05, max: 0.1 }
  }
});
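A possible direction instead of a second getUserMedia() call (only a sketch, untested on iOS; it assumes the cloned track accepts applyConstraints() with a sub-1 frameRate, which is exactly what is in question):

// Sketch: reuse the existing stream rather than calling getUserMedia() again.
async function makeLowFpsStream(sourceStream) {
  const track = sourceStream.getVideoTracks()[0].clone();
  await track.applyConstraints({
    width: 360, height: 240,
    frameRate: { ideal: 0.05, max: 0.1 }
  });
  return new MediaStream([track]);
}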



The timelapse library I am using does not support mp4 (ffmpeg as an alternative?)


Apparently I am forced to use mp4 on iOS, but that means I cannot use the library I was relying on, so I need an alternative.

I am thinking of using ffmpeg, but I cannot find any documentation on making it interact with the blob before the download.

I do not want to edit the video after downloading it; I want to download the already-edited version, so terminal commands are not an option.
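For what it's worth, ffmpeg.wasm (the @ffmpeg/ffmpeg package) can run ffmpeg in the browser against an in-memory file, so something along these lines might allow building the timelapse from the recorded blob before downloading it. This is only a sketch: the 0.11-style API calls, the 20x setpts factor and the 3 fps output are assumptions, not tested on iOS Safari.

// Rough sketch using ffmpeg.wasm; filter values are placeholders for the
// target timelapse and would need tuning to whatever the recorder produces.
import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';

async function timelapseFromBlob(recordedBlob) {
  const ffmpeg = createFFmpeg({ log: true });
  await ffmpeg.load();
  ffmpeg.FS('writeFile', 'in.mp4', await fetchFile(recordedBlob));
  // speed the footage up 20x, force 3 fps and drop audio
  await ffmpeg.run('-i', 'in.mp4', '-vf', 'setpts=0.05*PTS', '-r', '3', '-an', 'out.mp4');
  const data = ffmpeg.FS('readFile', 'out.mp4');
  return new Blob([data.buffer], { type: 'video/mp4' });
}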

MediaRecorder pause and resume are not ideal


On Chrome and Edge I would keep one frame every 20 seconds by setting the frameRate to 0.05, but this does not seem to work on iOS, for two reasons.

The first is related to the issue above: I cannot change the settings of getUserMedia() without destroying the initial stream.

And even after changing the settings, it seems that a frame rate below 1 is not supported on iOS. Maybe I got something else wrong, but I was not able to open the downloaded file.

Therefore I tried relying on pausing and resuming the MediaRecorder, but this brings two more issues:

I am currently saving 1 second of footage every 20 seconds, not 1 frame every 20 seconds, and I cannot find any workaround.

Pause and resume take a little time, which makes the code unreliable: I sometimes capture 2 seconds out of 20 instead of 1, and I have no guarantee that the loop actually runs every 20 seconds (it might be 18, it might be 25).
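If pause/resume stays too coarse, another sketch worth considering (untested on iOS; it assumes Safari supports canvas.captureStream(0) and requestFrame() on the resulting track) is to record a canvas that is repainted once every 20 seconds from a video element playing the existing stream, so the recorder only ever sees one frame per interval:

// Sketch: push exactly one frame per interval into the recorder by recording
// a canvas stream instead of the camera stream. "video" is assumed to be a
// playing <video> element fed with this.secondStream.
function recordOneFrameEvery(video, ms) {
  const canvas = document.createElement('canvas');
  canvas.width = 430;
  canvas.height = 430;
  const ctx = canvas.getContext('2d');

  const stream = canvas.captureStream(0);          // 0 fps: frames only on request
  const track = stream.getVideoTracks()[0];
  const recorder = new MediaRecorder(stream, { mimeType: 'video/mp4' });

  const timer = setInterval(() => {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    if (track.requestFrame) track.requestFrame();  // hand this single frame to the recorder
  }, ms);

  recorder.start();
  return { recorder, stop: () => { clearInterval(timer); recorder.stop(); } };
}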

My working code for other platforms


This is my code for the other platforms; I hope it helps!

Quick note: you will need to give it a bit of time between setup and start.

The timelapse library is here


 setupVideoRecording: function() {
   let video = {
     width: 360, height: 240, facingMode: "user",
     frameRate: { min: 0, ideal: 0.05, max: 0.1 }
   };
   navigator.mediaDevices.getUserMedia({
     audio: false,
     video: video,
   }).then((stream) => {
     // this is a video element
     const recVideo = document.getElementById('self-recorder');
     recVideo.muted = true;
     recVideo.autoplay = true;
     recVideo.srcObject = stream;
     recVideo.play();
   });
 },

 startVideoRecording: function() {
   console.log("setting up recorder");
   var self = this;
   this.data = [];

   var video = document.getElementById('self-recorder');

   if (MediaRecorder.isTypeSupported('video/webm; codecs=vp9')) {
     var options = {mimeType: 'video/webm; codecs=vp9'};
   } else if (MediaRecorder.isTypeSupported('video/webm')) {
     var options = {mimeType: 'video/webm'};
   }
   console.log("options settings:");
   console.log(options);

   this.recorder = new MediaRecorder(video.captureStream(), options);

   this.recorder.ondataavailable = function(evt) {
     self.data.push(evt.data);
     console.log('chunk size: ' + evt.data.size);
   };

   this.recorder.onstop = function(evt) {
     console.log('recorder stopping');
     timelapse(self.data, 3, function(blob) {
       self.download(blob, "webm");
     });
   };

   console.log("finished setup, starting");
   this.recorder.start(40000);
 }
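A hypothetical usage of the two functions above (the object name and the delay are placeholders; the point is just the gap between setup and start mentioned in the note):

// Set up the low-fps preview stream first, then start recording a little
// later so the video element already has frames to capture.
app.setupVideoRecording();                          // "app" is a placeholder name
setTimeout(() => app.startVideoRecording(), 3000);  // delay chosen arbitrarily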



-
Video streaming to YouTube using JavaScript and Java
28 September 2020, by user1597121

I'm trying to stream live video from a user's browser to YouTube Live. I already have the following working:


- Capture video from the webcam using navigator.mediaDevices.getUserMedia
- Send video data to the server via WebSocket by periodically invoking this function:






function getFrame(video)
{
 var canvas = document.createElement('canvas');
 canvas.width = video.videoWidth;
 canvas.height = video.videoHeight;
 canvas.getContext('2d').drawImage(video, 0, 0);

 return canvas.toDataURL('image/png', 1);
}



- Creating a live broadcast and stream on YouTube via their API and receiving the RTMP info where they expect the video stream to be sent.




This is where I seem to be stuck. I'm not sure how to send the video data from my Java server to YouTube's RTMP endpoint. I've looked into using Red5 or ffmpeg, but haven't been able to find an example where the data is continually being sent via WebSocket. Rather, there's always some "stream" that is being redirected to YouTube, coming in on a dedicated port, or perhaps from a pre-recorded video file.
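For illustration, the piping pattern mentioned above might look roughly like the following (a Node.js sketch for brevity; a Java server could do the same by writing the decoded frames to an ffmpeg process started with ProcessBuilder; the RTMP URL, port, and frame rate are placeholders):

// Sketch: pipe PNG frames arriving over WebSocket into ffmpeg's stdin and let
// ffmpeg push the encoded stream to YouTube's RTMP ingest endpoint.
const { spawn } = require('child_process');
const WebSocket = require('ws');                    // the "ws" npm package

const RTMP_URL = 'rtmp://a.rtmp.youtube.com/live2/STREAM-KEY';  // placeholder key

const ffmpeg = spawn('ffmpeg', [
  '-f', 'image2pipe', '-framerate', '30', '-i', '-',   // PNG frames on stdin
  '-c:v', 'libx264', '-pix_fmt', 'yuv420p', '-preset', 'veryfast',
  '-f', 'flv', RTMP_URL
]);

new WebSocket.Server({ port: 8081 }).on('connection', (socket) => {
  socket.on('message', (dataUrl) => {
    // strip the "data:image/png;base64," prefix produced by canvas.toDataURL()
    const base64 = String(dataUrl).split(',')[1];
    ffmpeg.stdin.write(Buffer.from(base64, 'base64'));
  });
});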


I have very limited knowledge of how video streaming works, so that's presumably making things more difficult than they should be. I'd really appreciate some help with getting this figured out. Thank you!


-
Unrecognized option 'crf'
6 September 2022, by Arjit Kaushal

I am trying to compress a video using ffmpeg, but I am facing errors with the command, although it runs perfectly fine in my Linux terminal:
(ffmpeg -i input.avi -vcodec libx264 -crf 24 output.avi)

My code:


void _compress() {
  if (_videoModel == null) return;
  String inputPath = _videoModel!.originalCachePath;
  String outputPath = _videoModel!.editCachePath;

  FFmpegKit.execute("-i $inputPath -vcodec libx264 -crf 24 -y $outputPath")
      .then((session) async {
    final returnCode = await session.getReturnCode();
    if (ReturnCode.isSuccess(returnCode)) {
      Navigator.pushNamed(context, PreviewPage.routeName,
          arguments: _videoModel);
    } else if (ReturnCode.isCancel(returnCode)) {
      print("compress cancel");
    } else {
      print("compress error: $returnCode");
      FFmpegKitConfig.enableLogCallback((log) {
        final message = log.getMessage();
        print(message);
      });
    }
  });
}



I am facing the following errors:
Unrecognized option 'crf',
I/flutter (31056): Error splitting the argument list,
Option not found.