
Other articles (20)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; and creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Other interesting software

    13 April 2011

    We don't claim to be the only ones doing what we do, and we certainly don't claim to be the best; we simply try to do it well and to keep improving.
    The following list gathers software that is more or less similar to MediaSPIP, or whose goals MediaSPIP more or less shares.
    We haven't used or tested them, but you can take a look.
    Videopress
    Website: http://videopress.com/
    License: GNU/GPL v2
    Source code: (...)

On other sites (5381)

  • Is there a way to program the (Download) button to save a group of images as one video?

    9 February 2024, by Lina Al-fawzan

    This is my entire code. It returns images to the user that match whatever they write or say; the next image is shown after the user presses "Close", and each image can be saved separately. I want to make a simple modification: instead of a Close button, each image should be displayed for 3 seconds and then the next one, all in one window; the Download button should appear when the last image is displayed, and pressing it should save all of the images as a single video.

    


    import 'package:flutter/material.dart';
    import 'package:flutter/services.dart' show rootBundle;
    import 'dart:convert';
    import 'dart:typed_data';
    import 'package:image_gallery_saver/image_gallery_saver.dart';
    import 'package:speech_to_text/speech_to_text.dart' as stt;

    void main() {
      runApp(MyApp());
    }

    class MyApp extends StatelessWidget {
      @override
      Widget build(BuildContext context) {
        return MaterialApp(
          home: MyHomePage(),
        );
      }
    }

    class MyHomePage extends StatefulWidget {
      @override
      _MyHomePageState createState() => _MyHomePageState();
    }

    class _MyHomePageState extends State<MyHomePage> {
      TextEditingController _textEditingController = TextEditingController();
      late stt.SpeechToText _speech;
      bool _isListening = false;

      @override
      void initState() {
        super.initState();
        _speech = stt.SpeechToText();
      }

      // Starts/stops speech recognition; recognized words are shown as images.
      void _listen() async {
        if (!_isListening) {
          bool available = await _speech.initialize(
            onStatus: (val) => print('onStatus: $val'),
            onError: (val) => print('onError: $val'),
          );
          if (available) {
            setState(() => _isListening = true);
            _speech.listen(
              onResult: (val) => setState(() {
                _textEditingController.text = val.recognizedWords;
                if (val.hasConfidenceRating && val.confidence > 0) {
                  _showImages(val.recognizedWords);
                }
              }),
            );
          }
        } else {
          setState(() => _isListening = false);
          _speech.stop();
        }
      }

      @override
      Widget build(BuildContext context) {
        return Scaffold(
          appBar: AppBar(
            title: Text('Image Viewer'),
          ),
          body: Padding(
            padding: const EdgeInsets.all(16.0),
            child: Column(
              mainAxisAlignment: MainAxisAlignment.center,
              children: [
                TextField(
                  controller: _textEditingController,
                  decoration: const InputDecoration(
                    labelText: 'Enter a word',
                  ),
                ),
                SizedBox(height: 16.0),
                ElevatedButton(
                  onPressed: () {
                    String userInput = _textEditingController.text;
                    _showImages(userInput);
                  },
                  child: Text('Show Images'),
                ),
                SizedBox(height: 16.0),
                ElevatedButton(
                  onPressed: _listen,
                  child: Text(_isListening ? 'Stop Listening' : 'Start Listening'),
                ),
              ],
            ),
          ),
        );
      }

      // Looks up a GIF for each word (or JPG images letter by letter) and shows them one at a time.
      Future<void> _showImages(String userInput) async {
        String directoryPath = 'assets/output_images/';
        print("User Input: $userInput");
        print("Directory Path: $directoryPath");

        List<String> assetFiles = await rootBundle
            .loadString('AssetManifest.json')
            .then((String manifestContent) {
          final Map<String, dynamic> manifestMap = json.decode(manifestContent);
          return manifestMap.keys
              .where((String key) => key.startsWith(directoryPath))
              .toList();
        });

        List<String> imageFiles = assetFiles.where((String assetPath) =>
            assetPath.toLowerCase().endsWith('.jpg') ||
            assetPath.toLowerCase().endsWith('.gif')).toList();

        List<String> words = userInput.split(' '); // Tokenize the sentence into words

        for (String word in words) {
          String wordImagePath = '$directoryPath$word.gif';

          if (imageFiles.contains(wordImagePath)) {
            await _showDialogWithImage(wordImagePath);
          } else {
            for (int i = 0; i < word.length; i++) {
              String letter = word[i];
              String letterImagePath = imageFiles.firstWhere(
                (assetPath) => assetPath.toLowerCase().endsWith('$letter.jpg'),
                orElse: () => '',
              );
              if (letterImagePath.isNotEmpty) {
                await _showDialogWithImage(letterImagePath);
              } else {
                print('No image found for $letter');
              }
            }
          }
        }
      }

      // Shows one image in a dialog with Close and Download buttons.
      Future<void> _showDialogWithImage(String imagePath) async {
        await showDialog<void>(
          context: context,
          builder: (BuildContext context) {
            return AlertDialog(
              content: Image.asset(imagePath),
              actions: [
                TextButton(
                  onPressed: () {
                    Navigator.of(context).pop();
                  },
                  child: Text('Close'),
                ),
                TextButton(
                  onPressed: () async {
                    await _downloadImage(imagePath);
                    Navigator.of(context).pop();
                  },
                  child: Text('Download'),
                ),
              ],
            );
          },
        );
      }

      // Saves a single image asset to the device gallery.
      Future<void> _downloadImage(String assetPath) async {
        try {
          final ByteData data = await rootBundle.load(assetPath);
          final List<int> bytes = data.buffer.asUint8List();

          final result = await ImageGallerySaver.saveImage(Uint8List.fromList(bytes));

          if (result != null) {
            ScaffoldMessenger.of(context).showSnackBar(
              SnackBar(
                content: Text('Image saved to gallery.'),
              ),
            );
          } else {
            ScaffoldMessenger.of(context).showSnackBar(
              SnackBar(
                content: Text('Failed to save image to gallery.'),
              ),
            );
          }
        } catch (e) {
          print('Error downloading image: $e');
        }
      }
    }

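    The "save them all as one video" step is essentially an ffmpeg job: read the image sequence at one frame every 3 seconds and encode an MP4. The sketch below is not Flutter code; it only shows the relevant ffmpeg options through the Node fluent-ffmpeg package, and it assumes (purely for illustration) that the selected images have first been exported as numbered files such as frames/img001.jpg. Inside the Flutter app itself an equivalent command could be run through an FFmpeg wrapper plugin such as ffmpeg_kit_flutter.

    // Sketch only: turn a numbered image sequence into one MP4, each image shown for 3 seconds.
    // Assumes ffmpeg is installed and the images were exported as frames/img001.jpg, img002.jpg, ...
    // (the numbered filenames are an assumption, not part of the original app).
    const ffmpeg = require("fluent-ffmpeg");

    ffmpeg()
      .input("frames/img%03d.jpg")
      .inputOptions(["-framerate 1/3"])                              // one input frame every 3 seconds
      .outputOptions(["-c:v libx264", "-pix_fmt yuv420p", "-r 30"])  // re-time to an ordinary 30 fps video
      .on("end", () => console.log("slideshow.mp4 written"))
      .on("error", (err) => console.error("ffmpeg error:", err.message))
      .save("slideshow.mp4");

    The -framerate 1/3 input option is what gives each image its 3-second display time; -r 30 simply re-times the output to a normal playback rate.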

  • Capturing audio data (using JavaScript) and uploading it to a server as MP3

    4 September 2018, by Michel

    Following a number of resources on the internet, I am trying to build a simple web page where I can record something (my voice), then make an MP3 file out of the recording, and finally upload that file to a server.

    At this point I can record and play back, but I haven't got as far as uploading; it seems I cannot even make an MP3 file locally.
    Can someone tell me what I am doing wrong, or doing in the wrong order?

    Below is all the code I have at this point.

       
       
       
       


    <div>
       <h2>Audio record and playback</h2>
       <p>
           <button id="startRecord"><h3>Start</h3></button>
           <button id="stopRecord" disabled="disabled"><h3>Stop</h3></button>
           <audio id="player" controls="controls"></audio>
           <a id="audioDownload"></a>
       </p>
    </div>

    <script>
    var player = document.getElementById('player');

    var handleSuccess = function(stream) {
      rec = new MediaRecorder(stream);

      rec.ondataavailable = e => {
        audioChunks.push(e.data);
        if (rec.state == "inactive") {
          let blob = new Blob(audioChunks, {type: 'audio/x-mpeg-3'});
          player.src = URL.createObjectURL(blob);
          player.controls = true;
          player.autoplay = true;
          // audioDownload.href = player.src;
          // audioDownload.download = 'sound.data';
          // audioDownload.innerHTML = 'Download';
          mp3Build();
        }
      }

      player.src = stream;
    };

    navigator.mediaDevices.getUserMedia({audio: true/*, video: false */})
      .then(handleSuccess);

    startRecord.onclick = e => {
      startRecord.disabled = true;
      stopRecord.disabled = false;
      audioChunks = [];
      rec.start();
    }

    stopRecord.onclick = e => {
      startRecord.disabled = false;
      stopRecord.disabled = true;
      rec.stop();
    }

    var ffmpeg = require('ffmpeg');

    function mp3Build() {
      try {
        var process = new ffmpeg('sound.data');
        process.then(function (audio) {
          // Callback mode.
          audio.fnExtractSoundToMP3('sound.mp3', function (error, file) {
            if (!error) {
              console.log('Audio file: ' + file);
              audioDownload.href = player.src;
              audioDownload.download = 'sound.mp3';
              audioDownload.innerHTML = 'Download';
            } else {
              console.log('Error-fnExtractSoundToMP3: ' + error);
            }
          });
        }, function (err) {
          console.log('Error: ' + err);
        });
      } catch (e) {
        console.log(e.code);
        console.log(e.msg);
      }
    }
    </script>

    When I try to investigate what is happening, using the debugger in the web console, on the line:

    var process = new ffmpeg('sound.data');

    I get this message:

    Paused on exception
    TypeError: ffmpeg is not a constructor.

    And on the line:

    var ffmpeg = require('ffmpeg');

    I get this message:

    Paused on exception
    ReferenceError: require is not defined.

    Besides, when I watch the expression ffmpeg, I can see:

    ffmpeg: undefined

    After some further investigation, and using browserify, I use the following code:

       
       
       
       


    <div>
       <h2>Audio record and playback</h2>
       <p>
           <button id="startRecord"><h3>Start</h3></button>
           <button id="stopRecord" disabled="disabled"><h3>Stop</h3></button>
           <audio id="player" controls="controls"></audio>
           <a id="audioDownload"></a>
       </p>
    </div>

    <script src='bundle.js'></script>
    <script>
    var player = document.getElementById('player');

    var handleSuccess = function(stream) {
      rec = new MediaRecorder(stream);

      rec.ondataavailable = e => {
        if (rec.state == "inactive") {
          let blob = new Blob(audioChunks, {type: 'audio/x-mpeg-3'});
          //player.src = URL.createObjectURL(blob);
          //player.srcObject = URL.createObjectURL(blob);
          //player.srcObject = blob;
          player.srcObject = stream;
          player.controls = true;
          player.autoplay = true;
          // audioDownload.href = player.src;
          // audioDownload.download = 'sound.data';
          // audioDownload.innerHTML = 'Download';
          mp3Build();
        }
      }

      //player.src = stream;
      player.srcObject = stream;
    };

    navigator.mediaDevices.getUserMedia({audio: true/*, video: false */})
      .then(handleSuccess);

    startRecord.onclick = e => {
      startRecord.disabled = true;
      stopRecord.disabled = false;
      audioChunks = [];
      rec.start();
    }

    stopRecord.onclick = e => {
      startRecord.disabled = false;
      stopRecord.disabled = true;
      rec.stop();
    }

    var ffmpeg = require('ffmpeg');

    function mp3Build() {
      try {
        var process = new ffmpeg('sound.data');
        process.then(function (audio) {
          // Callback mode.
          audio.fnExtractSoundToMP3('sound.mp3', function (error, file) {
            if (!error) {
              console.log('Audio file: ' + file);
              //audioDownload.href = player.src;
              audioDownload.href = player.srcObject;
              audioDownload.download = 'sound.mp3';
              audioDownload.innerHTML = 'Download';
            } else {
              console.log('Error-fnExtractSoundToMP3: ' + error);
            }
          });
        }, function (err) {
          console.log('Error: ' + err);
        });
      } catch (e) {
        console.log(e.code);
        console.log(e.msg);
      }
    }
    </script>

    That solved the problem of the expression ffmpeg being undefined.

    But the playback is no longer working. I may not be doing the right thing with player.srcObject, and maybe other things too.

    When I use this line:

    player.srcObject = URL.createObjectURL(blob);

    I get this message:

    Paused on exception
    TypeError: Value being assigned to HTMLMediaElement.srcObject is not an object.

    And when I use this line:

    player.srcObject = blob;

    I get this message:

    Paused on exception
    TypeError: Value being assigned to HTMLMediaElement.srcObject does not implement interface MediaStream.

    Finally, if I use this:

    player.srcObject = stream;

    I do not get any error message, but the voice recording still does not work.
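
    For what it's worth, the two exceptions above come from the fact that require() and the npm ffmpeg package are Node.js tools: that package drives a locally installed ffmpeg binary, so it cannot run inside the browser even when bundled with browserify, and MediaRecorder never produces MP3 in the first place (it records audio/webm or audio/ogg depending on the browser). A common pattern is therefore to play the recorded Blob back through an object URL, upload it as-is, and convert it to MP3 on the server (for example with ffmpeg -i recording.webm recording.mp3). A rough sketch, assuming the same HTML as above and a hypothetical /upload endpoint on the server:

    <script>
    // Minimal sketch only. MediaRecorder output is webm/ogg, not MP3; conversion happens server-side.
    var player = document.getElementById('player');
    var startRecord = document.getElementById('startRecord');
    var stopRecord = document.getElementById('stopRecord');
    var rec;
    var audioChunks = [];

    navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
      rec = new MediaRecorder(stream);
      rec.ondataavailable = function (e) { audioChunks.push(e.data); };
      rec.onstop = function () {
        var blob = new Blob(audioChunks, { type: rec.mimeType });

        // Playback: a recorded Blob goes into .src via an object URL;
        // .srcObject is only meant for live MediaStream objects.
        player.src = URL.createObjectURL(blob);
        player.controls = true;

        // Upload the raw recording; the server can then convert it to MP3 with ffmpeg.
        var form = new FormData();
        form.append('audio', blob, 'recording.webm');
        fetch('/upload', { method: 'POST', body: form })
          .then(function (res) { console.log('upload status: ' + res.status); })
          .catch(function (err) { console.error('upload failed: ', err); });
      };
    });

    startRecord.onclick = function () { audioChunks = []; rec.start(); };
    stopRecord.onclick = function () { rec.stop(); };
    </script>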

  • Save video to disk from WebRTC MediaStream in Node

    27 November 2020, by SAGBO Aimé

    I'm building an app where the user can connect to the server through a WebRTC connection (I'm using the simple-peer library both server-side and client-side to set up the peer-to-peer connection). Once the client and the server are connected, the client app streams the user's camera and microphone to the server.

    Now, I want to save the streamed data to the filesystem server-side as an MP4 video file.


    I have heard about ffmpeg and fluent-ffmpeg for achieving this, but I don't know how to use them.

    • Server-side code to set up the peer connection

    const Peer = require("simple-peer");
    const wrtc = require("wrtc");

    const peer = new Peer({ initiator: false, wrtc: wrtc, trickle: false });

    peer.on("error", (err: any) => console.log("error", err));

    peer.on("signal", (data: any) => {
      if (data.type === "offer" || data.type === "answer")
        dispatchMessage(JSON.stringify(data));
      // if (data.renegotiate || data.transceiverRequest) return;
    });

    peer.on("connect", () => {
      console.log("CONNECTED");
      peer.send(JSON.stringify("HELLO DEER PEER FROM SERVER"));
    });

    peer.on("data", (data: any) => {
      console.log("data: ", data);
    });

    peer.on("stream", (stream: MediaStream) => {
      console.log("-------Stream received", stream);
    });

    peer.on("track", (track: MediaStreamTrack) => {
      console.log("-------trackEvent:", track);
    });

    • Client-side code

    const stream = await window.navigator.mediaDevices.getUserMedia({
      video: { width: { ideal: 4096 }, height: { ideal: 2160 } },
      audio: true,
    });

    const p = new SimplePeer({
      initiator: isInitiator,
      trickle: false
    });

    stream.getTracks().forEach(track => p.addTrack(
      track,
      stream
    ));

    // Here I set up the listeners for the peer connection

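    As a starting point for the ffmpeg / fluent-ffmpeg part: with node-webrtc (the wrtc package used above), one common approach is to pull raw frames out of the received video track with the nonstandard RTCVideoSink API and pipe them into ffmpeg as rawvideo input. The sketch below is only an illustration: it handles a single video track (no audio), and it assumes a fixed 640x480 resolution at 30 fps; real code has to handle resolution changes and mux the audio track separately (wrtc also provides RTCAudioSink).

    // Minimal sketch, video only, assuming fixed 640x480 @ 30 fps frames.
    const { PassThrough } = require("stream");
    const ffmpeg = require("fluent-ffmpeg");
    const { RTCVideoSink } = wrtc.nonstandard;

    peer.on("track", (track) => {
      if (track.kind !== "video") return;

      const sink = new RTCVideoSink(track);
      const rawVideo = new PassThrough();

      // Encode the incoming raw I420 frames as H.264 in an MP4 container.
      ffmpeg(rawVideo)
        .inputFormat("rawvideo")
        .inputOptions(["-pix_fmt yuv420p", "-s 640x480", "-r 30"]) // assumed frame size / rate
        .videoCodec("libx264")
        .outputOptions(["-pix_fmt yuv420p"])
        .on("end", () => console.log("recording.mp4 written"))
        .on("error", (err) => console.error("ffmpeg error:", err.message))
        .save("recording.mp4");

      sink.onframe = ({ frame }) => {
        // frame.data is a Uint8Array of I420 pixel data (width * height * 1.5 bytes).
        rawVideo.write(Buffer.from(frame.data));
      };

      // When the peer connection closes, stop the sink and let ffmpeg finalize the file.
      peer.on("close", () => {
        sink.stop();
        rawVideo.end();
      });
    });

    fluent-ffmpeg here is only building the command ffmpeg -f rawvideo -pix_fmt yuv420p -s 640x480 -r 30 -i - -c:v libx264 recording.mp4, so the same thing can be done by spawning ffmpeg directly and writing the frames to its stdin.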