
Other articles (78)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for indexing by search engines, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • MediaSPIP Player: potential problems

    22 February 2011

    The player does not work in Internet Explorer
    On Internet Explorer (8 and 7 at least), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the cause may lie in the configuration of Apache's mod_deflate module.
    If the configuration of this Apache module contains a line resembling the following, try removing it or commenting it out to see whether the player then works correctly: (...)
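The example directive itself is truncated in the excerpt above. Purely as an illustration of the kind of line meant (this is a hypothetical configuration, not the one from the article), a mod_deflate setup that compresses every response, including the SWF player binary, looks like this:

```apache
<IfModule mod_deflate.c>
    # Hypothetical example: forcing DEFLATE on all output also compresses
    # the Flash player itself, which some Flash setups cannot handle.
    SetOutputFilter DEFLATE
</IfModule>
```

Commenting out the SetOutputFilter line, or scoping compression to text types only, is the experiment the article suggests.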

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats:
    images: png, gif, jpg, bmp and more
    audio: MP3, Ogg, Wav and more
    video: AVI, MP4, OGV, mpg, mov, wmv and more
    text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (4926)

  • Is there a way to work with fluent-ffmpeg library for audio encoding (webm to wav/mp3) in Angular 2+?

    24 April 2019, by binarysynthesis

    So I'm using the browser's media objects to record audio from the microphone for speech-to-text transcription. The recorded media gives me a blob/file in webm format. I want to convert the blob/file to wav or mp3 format and send it to AWS S3 storage, from where I intend to use the AWS Transcribe (speech-to-text) service to pick up the file and produce a transcript of the speech. AWS Transcribe doesn't support the webm format, so I need to encode the audio on the client side to either wav or mp3. I'm trying to use the fluent-ffmpeg (third-party) [https://www.npmjs.com/package/fluent-ffmpeg] npm library to accomplish that, but I keep getting the following errors from the TypeScript compiler when building with ng serve. I have already tried the RecorderJS and WebAudioRecorderJS npm libraries and I get the same 'Module not found' errors:

    ERROR in ./node_modules/fluent-ffmpeg/index.js
    Module not found: Error: Can't resolve './lib-cov/fluent-ffmpeg' in 'C:\Users\banshuman\Desktop\AWS-Transcribe-Angular\aws-transcribe-angular\node_modules\fluent-ffmpeg'
    ERROR in ./node_modules/fluent-ffmpeg/lib/ffprobe.js
    Module not found: Error: Can't resolve 'child_process' in 'C:\Users\banshuman\Desktop\AWS-Transcribe-Angular\aws-transcribe-angular\node_modules\fluent-ffmpeg\lib'
    ERROR in ./node_modules/fluent-ffmpeg/lib/processor.js
    Module not found: Error: Can't resolve 'child_process' in 'C:\Users\banshuman\Desktop\AWS-Transcribe-Angular\aws-transcribe-angular\node_modules\fluent-ffmpeg\lib'
    ERROR in ./node_modules/fluent-ffmpeg/lib/utils.js
    Module not found: Error: Can't resolve 'child_process' in 'C:\Users\banshuman\Desktop\AWS-Transcribe-Angular\aws-transcribe-angular\node_modules\fluent-ffmpeg\lib'
    ERROR in ./node_modules/fluent-ffmpeg/lib/recipes.js
    Module not found: Error: Can't resolve 'fs' in 'C:\Users\banshuman\Desktop\AWS-Transcribe-Angular\aws-transcribe-angular\node_modules\fluent-ffmpeg\lib'
    ERROR in ./node_modules/fluent-ffmpeg/lib/capabilities.js
    Module not found: Error: Can't resolve 'fs' in 'C:\Users\banshuman\Desktop\AWS-Transcribe-Angular\aws-transcribe-angular\node_modules\fluent-ffmpeg\lib'
    ERROR in ./node_modules/fluent-ffmpeg/lib/processor.js
    Module not found: Error: Can't resolve 'fs' in 'C:\Users\banshuman\Desktop\AWS-Transcribe-Angular\aws-transcribe-angular\node_modules\fluent-ffmpeg\lib'
    ERROR in ./node_modules/isexe/index.js
    Module not found: Error: Can't resolve 'fs' in 'C:\Users\banshuman\Desktop\AWS-Transcribe-Angular\aws-transcribe-angular\node_modules\isexe'
    ERROR in ./node_modules/isexe/windows.js
    Module not found: Error: Can't resolve 'fs' in 'C:\Users\banshuman\Desktop\AWS-Transcribe-Angular\aws-transcribe-angular\node_modules\isexe'
    ERROR in ./node_modules/isexe/mode.js
    Module not found: Error: Can't resolve 'fs' in 'C:\Users\banshuman\Desktop\AWS-Transcribe-Angular\aws-transcribe-angular\node_modules\isexe'
    ERROR in ./node_modules/fluent-ffmpeg/lib/utils.js
    Module not found: Error: Can't resolve 'os' in 'C:\Users\banshuman\Desktop\AWS-Transcribe-Angular\aws-transcribe-angular\node_modules\fluent-ffmpeg\lib'
    ERROR in ./node_modules/fluent-ffmpeg/lib/recipes.js
    Module not found: Error: Can't resolve 'path' in 'C:\Users\banshuman\Desktop\AWS-Transcribe-Angular\aws-transcribe-angular\node_modules\fluent-ffmpeg\lib'
    ERROR in ./node_modules/fluent-ffmpeg/lib/fluent-ffmpeg.js
    Module not found: Error: Can't resolve 'path' in 'C:\Users\banshuman\Desktop\AWS-Transcribe-Angular\aws-transcribe-angular\node_modules\fluent-ffmpeg\lib'
    ERROR in ./node_modules/fluent-ffmpeg/lib/capabilities.js
    Module not found: Error: Can't resolve 'path' in 'C:\Users\banshuman\Desktop\AWS-Transcribe-Angular\aws-transcribe-angular\node_modules\fluent-ffmpeg\lib'
    ERROR in ./node_modules/fluent-ffmpeg/lib/processor.js
    Module not found: Error: Can't resolve 'path' in 'C:\Users\banshuman\Desktop\AWS-Transcribe-Angular\aws-transcribe-angular\node_modules\fluent-ffmpeg\lib'
    ERROR in ./node_modules/fluent-ffmpeg/lib/options/misc.js
    Module not found: Error: Can't resolve 'path' in 'C:\Users\banshuman\Desktop\AWS-Transcribe-Angular\aws-transcribe-angular\node_modules\fluent-ffmpeg\lib\options'
    ERROR in ./node_modules/which/which.js
    Module not found: Error: Can't resolve 'path' in 'C:\Users\banshuman\Desktop\AWS-Transcribe-Angular\aws-transcribe-angular\node_modules\which'
    ERROR in ./node_modules/fluent-ffmpeg/lib/recipes.js
    Module not found: Error: Can't resolve 'stream' in 'C:\Users\banshuman\Desktop\AWS-Transcribe-Angular\aws-transcribe-angular\node_modules\fluent-ffmpeg\lib'
    i 「wdm」: Failed to compile.

    I am using Angular 7.2.0 and TypeScript 3.2.4.
    I have also installed the type definitions for fluent-ffmpeg [https://www.npmjs.com/package/@types/fluent-ffmpeg] to provide the typings for TypeScript.
    Below is my Angular component file, where I have implemented the audio-recording functionality in the browser:

    /// <reference types="@types/dom-mediacapture-record" />
    import { Component } from '@angular/core';
    import * as aws from 'aws-sdk';
    import * as TranscribeService from 'aws-sdk/clients/transcribeservice';
    import * as Ffmpeg from 'fluent-ffmpeg';

    @Component({
     selector: 'app-root',
     templateUrl: './app.component.html',
     styleUrls: ['./app.component.css']
    })
    export class AppComponent {

     speechToText() {

       console.log(Ffmpeg);    

       // Begin streaming audio
       navigator.mediaDevices.getUserMedia({ audio: true })
         .then(stream => {
           const mediaRecorder = new MediaRecorder(stream);
           // Start recording audio
           mediaRecorder.start();

           const audioChunks = [];
           // When recording starts
           mediaRecorder.addEventListener("dataavailable", event => {
             audioChunks.push((<any>event).data);
           });


           // When recording stops
           mediaRecorder.addEventListener("stop", () => {
             const audioBlob = new Blob(audioChunks, { type: 'audio/webm;codecs=opus' });
             const audioFile = new File([audioBlob], 'outputAudioFile');
             const audioUrl = URL.createObjectURL(audioBlob);
             // ... (rest of the handler is truncated in the original post)
           });
         });
     }
    }

    I’m not posting the entire component code as the rest is part of the AWS SDK and is irrelevant to the problem statement. I need to convert the audioBlob or the audioFile which are currently in the webm format to wav or mp3 for uploading to the AWS services. How can i achieve that in Angular using the ffmpeg library ? I’m open to other solutions as well and not just ffmpeg to get the job done on the client side.
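The 'Module not found' errors above are webpack failing to resolve Node core modules (child_process, fs, path, os, stream): fluent-ffmpeg works by spawning the ffmpeg executable through child_process, so it can never run inside a browser bundle, and installing type definitions cannot change that. One browser-side route is ffmpeg compiled to WebAssembly. The sketch below is an illustration under assumptions, using the @ffmpeg/ffmpeg package (ffmpeg.wasm, 0.11.x API); the 16 kHz mono output is a guess at what a speech-to-text service prefers, not a requirement:

```javascript
// Pure helper: the argument list handed to ffmpeg. 16 kHz mono WAV is an
// assumption about what the transcription service wants; adjust as needed.
function wavArgs(input, output) {
  return ['-i', input, '-ar', '16000', '-ac', '1', output];
}

// Browser-only usage (needs the @ffmpeg/ffmpeg package; not runnable in Node).
async function webmToWav(audioBlob) {
  const { createFFmpeg, fetchFile } = await import('@ffmpeg/ffmpeg');
  const ffmpeg = createFFmpeg({ log: false });
  await ffmpeg.load(); // downloads the wasm core on first call
  ffmpeg.FS('writeFile', 'input.webm', await fetchFile(audioBlob));
  await ffmpeg.run(...wavArgs('input.webm', 'output.wav'));
  const data = ffmpeg.FS('readFile', 'output.wav');
  return new Blob([data.buffer], { type: 'audio/wav' });
}
```

The returned Blob can then be uploaded to S3 like any other file.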

  • Using ffmpeg to merge video segments created by the MediaRecorder API

    10 April 2023, by Dario Cimmino

    I am recording live video from a webcam using the MediaRecorder API, in chunks of 3 seconds:

    startButton.addEventListener('click', () => {
      navigator.mediaDevices.getUserMedia({
        video: {
          width: 1280,
          height: 720,
          frameRate: { ideal: 30, max: 30 }
        }
      })
        .then(stream => {
          video.srcObject = stream;
          mediaRecorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
          mediaRecorder.ondataavailable = async (event) => {
            const blob = new Blob([event.data], { type: 'video/mp4' });
            const formData = new FormData();
            formData.append('segment', blob, `segment${segmentNumber}.mp4`);

            // When a new video segment is ready
            fetch('http://localhost:3000/upload', {
              method: 'POST',
              body: formData
            })
              .then((response) => response.text())
              .then((result) => {
                console.log('Upload result:', result);
              })
              .catch((error) => {
                console.error('Error uploading video segment:', error);
              });
            // Upload data to MySQL
            fetch('upload.php', {
              method: 'POST',
              body: formData
            })
              .then(response => response.text())
              .then(result => {
                console.log('Upload result to MYSQL:', result);
              })
              .catch(error => {
                console.error('Error uploading video segment to MYSQL:', error);
              });
            segmentNumber++;
          };

          mediaRecorder.start(3000);
        })
        .catch(error => {
          console.error('Error accessing camera:', error);
        });
    });

    I am left with only the first segment playable, as is expected.

    However, when the recording stops, I'd like to merge all those recorded segments using ffmpeg (or any other tool) with the help of my Node.js server.

    I am having difficulty understanding the parsing of MP4 files.

    If I try the command:

    ffmpeg -i segment1.mp4 -i segment2.mp4 -i segment3.mp4 out.mp4

    I get the following error:

    ffmpeg version N-110223-gb18a9c2971-20230410 Copyright (c) 2000-2023 the FFmpeg developers
      built with gcc 12.2.0 (crosstool-NG 1.25.0.152_89671bf)
      configuration: --prefix=/ffbuild/prefix --pkg-config-flags=--static --pkg-config=pkg-config --cross-prefix=x86_64-w64-mingw32- --arch=x86_64 --target-os=mingw32 --enable-gpl --enable-version3 --disable-debug --disable-w32threads --enable-pthreads --enable-iconv --enable-libxml2 --enable-zlib --enable-libfreetype --enable-libfribidi --enable-gmp --enable-lzma --enable-fontconfig --enable-libvorbis --enable-opencl --disable-libpulse --enable-libvmaf --disable-libxcb --disable-xlib --enable-amf --enable-libaom --enable-libaribb24 --enable-avisynth --enable-chromaprint --enable-libdav1d --enable-libdavs2 --disable-libfdk-aac --enable-ffnvcodec --enable-cuda-llvm --enable-frei0r --enable-libgme --enable-libkvazaar --enable-libass --enable-libbluray --enable-libjxl --enable-libmp3lame --enable-libopus --enable-librist --enable-libssh --enable-libtheora --enable-libvpx --enable-libwebp --enable-lv2 --disable-libmfx --enable-libvpl --enable-openal --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopenmpt --enable-librav1e --enable-librubberband --enable-schannel --enable-sdl2 --enable-libsoxr --enable-libsrt --enable-libsvtav1 --enable-libtwolame --enable-libuavs3d --disable-libdrm --disable-vaapi --enable-libvidstab --enable-vulkan --enable-libshaderc --enable-libplacebo --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libzimg --enable-libzvbi --extra-cflags=-DLIBTWOLAME_STATIC --extra-cxxflags= --extra-ldflags=-pthread --extra-ldexeflags= --extra-libs=-lgomp --extra-version=20230410
      libavutil      58.  6.100 / 58.  6.100
      libavcodec     60.  9.100 / 60.  9.100
      libavformat    60.  4.101 / 60.  4.101
      libavdevice    60.  2.100 / 60.  2.100
      libavfilter     9.  5.100 /  9.  5.100
      libswscale      7.  2.100 /  7.  2.100
      libswresample   4. 11.100 /  4. 11.100
      libpostproc    57.  2.100 / 57.  2.100
    Input #0, matroska,webm, from 'segment1.mp4':
      Metadata:
        encoder         : Chrome
      Duration: N/A, start: 0.000000, bitrate: N/A
      Stream #0:0(eng): Video: h264 (Constrained Baseline), yuv420p(progressive), 1280x720 [SAR 1:1 DAR 16:9], 30.30 fps, 30 tbr, 1k tbn (default)
    [mov,mp4,m4a,3gp,3g2,mj2 @ 000001d93cf25fc0] Format mov,mp4,m4a,3gp,3g2,mj2 detected only with low score of 1, misdetection possible!
    [mov,mp4,m4a,3gp,3g2,mj2 @ 000001d93cf25fc0] moov atom not found
    segment2.mp4: Invalid data found when processing input

    Any help or inputs are appreciated. Thanks!


  • How to create video from a stream webcam and canvas ?

    1 May 2024, by Stefdelec

    I am trying to generate a video in the browser from different cuts:
    Slide: stream from a canvas
    Video: stream from the webcam

    I just want to allow the user to download the video edited as
    slide1 + video1 + slide2 + video2 + slide3 + video3.

    Here is my code:

    const canvas = document.getElementById('myCanvas');
    const ctx = canvas.getContext('2d');
    const webcam = document.getElementById('webcam');
    const videoPlayer = document.createElement('video');
    videoPlayer.controls = true;
    document.body.appendChild(videoPlayer);
    const videoWidth = 640;
    const videoHeight = 480;
    let keepAnimating = true;
    const frameRate = 30;

    // Attempt to get webcam access
    function setupWebcam() {
      const constraints = {
        video: {
          frameRate: frameRate,
          width: videoWidth,
          height: videoHeight
        }
      };
      navigator.mediaDevices.getUserMedia(constraints)
        .then(stream => {
          webcam.srcObject = stream;
          webcam.addEventListener('loadedmetadata', () => {
            recordSegments();
            console.log('Webcam feed is now displayed');
          });
        })
        .catch(err => {
          console.error("Error accessing webcam:", err);
          alert('Could not access the webcam. Please ensure permissions are granted and try again.');
        });
    }

    // Function to continuously draw on the canvas
    function animateCanvas(content) {
      if (!keepAnimating) {
        console.log("keepAnimating", keepAnimating);
        return; // Stop the animation when keepAnimating is false
      }

      ctx.clearRect(0, 0, canvas.width, canvas.height); // Clear previous drawings
      ctx.fillStyle = `rgba(${Math.floor(Math.random() * 255)}, ${Math.floor(Math.random() * 255)}, ${Math.floor(Math.random() * 255)}, 0.5)`;
      ctx.fillRect(0, 0, canvas.width, canvas.height);
      ctx.fillStyle = '#000';
      ctx.font = '48px serif';
      ctx.fillText(content + ' ' + new Date().toLocaleTimeString(), 50, 100);

      // Request the next frame
      requestAnimationFrame(() => animateCanvas(content));
    }

    // Initialize recording segments array
    const recordedSegments = [];

    // Modified startRecording to manage animation
    function startRecording(stream, duration = 5000, content) {
      const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
      const data = [];

      recorder.ondataavailable = e => data.push(e.data);

      // Start animating the canvas
      keepAnimating = true;
      animateCanvas(content);
      recorder.start();
      return new Promise((resolve) => {
        // Automatically stop recording after 'duration' milliseconds
        setTimeout(() => {
          recorder.stop();
          // Stop the animation when recording stops
          keepAnimating = false;
        }, duration);

        recorder.onstop = () => {
          const blob = new Blob(data, { type: 'video/webm' });
          recordedSegments.push(blob);
          keepAnimating = true;
          resolve(blob);
        };
      });
    }

    // Sequence to record segments
    async function recordSegments() {
      // Record canvas with dynamic content
      await startRecording(canvas.captureStream(frameRate), 2000, 'Canvas Draw 1').then(() => console.log('Canvas 1 recorded'));

      await startRecording(webcam.srcObject, 3000).then(() => console.log('Webcam 1 recorded'));

      await startRecording(webcam.srcObject).then(() => console.log('Webcam 1 recorded'));
      mergeAndDownloadVideo();
    }

    function downLoadVideo(blob) {
      const url = URL.createObjectURL(blob);

      // Create an anchor element and trigger a download
      const a = document.createElement('a');
      a.style.display = 'none';
      a.href = url;
      a.download = 'merged-video.webm';
      document.body.appendChild(a);
      a.click();

      // Clean up by revoking the Blob URL and removing the anchor element after the download
      setTimeout(() => {
        document.body.removeChild(a);
        window.URL.revokeObjectURL(url);
      }, 100);
    }

    function mergeAndDownloadVideo() {
      console.log("recordedSegments length", recordedSegments.length);
      // Create a new Blob from all recorded video segments
      const superBlob = new Blob(recordedSegments, { type: 'video/webm' });

      downLoadVideo(superBlob);
    }

    // Start the process by setting up the webcam first
    setupWebcam();

    You can find it here: https://jsfiddle.net/Sulot/nmqf6wdj/25/

    I am unable to produce one "slide" + webcam video + "slide" + webcam video.

    It merges only the first two segments, but not the others. I tried with ffmpeg on the browser side.
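A likely reason the merge stops after the first cut: concatenating Blobs only works for chunks of one continuous MediaRecorder session, but here every startRecording call opens a new session, so each segment is an independent WebM file with its own header, and players give up at the first boundary. On top of that, the canvas stream has no audio track while the webcam stream does, so the stream layouts do not even match. Server-side, ffmpeg's concat filter with re-encoding can join such files; the helper below only builds the argument list (the file names and the video-only simplification are assumptions):

```javascript
// Build ffmpeg arguments that re-encode and join independent webm files.
// The concat *filter* (not the concat demuxer with -c copy) is used because
// each segment has its own header and timestamps. Audio is dropped here
// since the canvas segments have none; add silent audio tracks if the
// webcam cuts must keep sound.
function concatArgs(inputs, output) {
  const inputArgs = inputs.flatMap(f => ['-i', f]);
  const filter = inputs.map((_, i) => `[${i}:v]`).join('') +
    `concat=n=${inputs.length}:v=1:a=0[v]`;
  return [...inputArgs, '-filter_complex', filter, '-map', '[v]', output];
}
// e.g. child_process.spawn('ffmpeg',
//        concatArgs(['slide1.webm', 'video1.webm', 'slide2.webm'], 'merged.webm'))
```

If the canvas and webcam resolutions differ, each input also needs a scale (and fps) filter before concat, since the filter requires identical frame sizes and rates.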
