Advanced search

Media (21)

Keyword: - Tags -/Nine Inch Nails

Other articles (67)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm" mode, you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to make other manual (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below to compare.
    To use it, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (6194)

  • Python Flask : Saving live stream video to a file periodically

    25 July 2023, by Sanjay Shahi

    I am creating a Flask application with JavaScript to save live video streams to a file.

    


    What I am trying to achieve is that the video stream is sent to the Flask application periodically (every 20 seconds). The first time this creates a video file; after that, each new chunk needs to be merged into the existing file.

    


    I am using SocketIO to transmit the video from JS.

    


async function startCapture() {
  try {
    // Access the user's webcam
    stream = await navigator.mediaDevices.getUserMedia({ 
      video: true,
      audio: { echoCancellation: true, noiseSuppression: true },
    });

    // Attach the stream to the video element
    video.srcObject = stream;

    // Create a new MediaRecorder instance to capture video chunks
    recorder = new MediaRecorder(stream);

    // Event handler for each data chunk received from the recorder
    recorder.ondataavailable = (e) => {
      const videoBlob = e.data;
      transmitVideoChunk(videoBlob);
      chunks.push(e.data);
    };

    // Start recording the video stream
    recorder.start();

    // Enable/disable buttons
    startButton.disabled = true;
    stopButton.disabled = false;

    // Start transmitting video chunks at the desired fps
    startTransmitting();
  } catch (error) {
    console.error('Error accessing webcam:', error);
  }
}


    


function transmitVideoBlob() {
    const videoBlob = new Blob(chunks, { type: 'video/webm' });
    socket.emit('video_data', videoBlob);
    // Clear the chunks array
    chunks = [];
}

// Start transmitting video chunks at the desired fps
function startTransmitting() {
    const videoInterval = 20000; // Interval between frames in milliseconds
    videoIntervalId = setInterval(() => {
        transmitVideoBlob();
    }, videoInterval);
}


    


    In Flask, I have created a function which calls create_videos, where:
    video_path: location to save the video
    filename: file name of the video
    new_video_data_blob: binary data received from JS

    


import os
import subprocess
import time
from uuid import uuid1


def create_videos(video_path, filename, new_video_data_blob):
    chunk_filename = os.path.join(video_path, f"{str(uuid1())}_{filename}")
    final_filename = os.path.join(video_path, filename)
    out_final_filename = os.path.join(video_path, "out_" + filename)
    # Save the current video chunk to a file
    with open(chunk_filename, "wb") as f:
        print("create file chunk ", chunk_filename)
        f.write(new_video_data_blob)

    if not os.path.exists(final_filename):
        # If the final video file doesn't exist, rename the current chunk file
        os.rename(chunk_filename, final_filename)
    else:
        while not os.path.exists(chunk_filename):
            time.sleep(0.1)
        # If the final video file exists, use FFmpeg to concatenate the current chunk with the existing file
        try:
            subprocess.run(
                [
                    "ffmpeg",
                    "-i",
                    f"concat:{final_filename}|{chunk_filename}",
                    "-c",
                    "copy",
                    "-y",
                    out_final_filename,
                ]
            )
            while not os.path.exists(out_final_filename):
                time.sleep(0.1)
            os.remove(final_filename)
            os.rename(out_final_filename, final_filename)

        except Exception as e:
            print(e)
        # Remove the current chunk file
        os.remove(chunk_filename)
    return final_filename
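
    For context, a minimal sketch of the server-side piece that would receive the SocketIO events and hand them to create_videos (this is an assumption, not code from the post: only the 'video_data' event name comes from the JS above; the app setup, UPLOAD_DIR and the stream.webm filename are placeholders):

import os

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

UPLOAD_DIR = "videos"              # placeholder directory
os.makedirs(UPLOAD_DIR, exist_ok=True)

@socketio.on("video_data")         # matches socket.emit('video_data', videoBlob) in the JS
def handle_video_data(blob):
    # Flask-SocketIO delivers the binary Blob payload as bytes
    create_videos(UPLOAD_DIR, "stream.webm", blob)

if __name__ == "__main__":
    socketio.run(app)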


    


    When I also record audio in JS using the option below,

    


    audio: { echoCancellation: true, noiseSuppression: true },


    


    I get the following error.

    


    [NULL @ 0x55e697e8c900] Invalid profile 5.
[webm @ 0x55e697ec3180] Non-monotonous DTS in output stream 0:0; previous: 37075, current: 37020; changing to 37075. This may result in incorrect timestamps in the output file.
[NULL @ 0x55e697e8d8c0] Error parsing Opus packet header.
    Last message repeated 1 times
[NULL @ 0x55e697e8c900] Invalid profile 5.
[NULL @ 0x55e697e8d8c0] Error parsing Opus packet header.


    



    


    But when I record video only, it works fine.

    


    How can I merge the new binary data into the existing video file?
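
    Offered as a suggestion rather than as part of the original post: FFmpeg's concat: protocol joins files at the byte level and is only intended for formats that support that (such as MPEG-TS), so it is a plausible source of the Opus/DTS errors above when applied to WebM. A rough, untested sketch of the same merge step using the concat demuxer instead, reusing the filenames from create_videos:

import os
import subprocess

def merge_with_concat_demuxer(final_filename, chunk_filename, out_final_filename):
    # The concat demuxer re-reads packets from each listed file instead of
    # splicing raw bytes, which is what a WebM/Opus merge needs.
    list_path = out_final_filename + ".txt"
    with open(list_path, "w") as f:
        f.write(f"file '{os.path.abspath(final_filename)}'\n")
        f.write(f"file '{os.path.abspath(chunk_filename)}'\n")
    subprocess.run(
        ["ffmpeg", "-f", "concat", "-safe", "0",
         "-i", list_path, "-c", "copy", "-y", out_final_filename],
        check=True,
    )
    os.remove(list_path)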

    


  • How to transcribe the recording for speech recognition

    29 May 2021, by DLim

    After downloading and uploading files related to Mozilla DeepSpeech, I started using Google Colab. I am using mozilla/deepspeech for speech recognition. The code shown below records my audio. After recording the audio, I want to use a function/method to transcribe the recording into text. Everything compiles, but the text does not come out correctly. Any thoughts on my code?

    


    """&#xA;To write this piece of code I took inspiration/code from a lot of places.&#xA;It was late night, so I&#x27;m not sure how much I created or just copied o.O&#xA;Here are some of the possible references:&#xA;https://blog.addpipe.com/recording-audio-in-the-browser-using-pure-html5-and-minimal-javascript/&#xA;https://stackoverflow.com/a/18650249&#xA;https://hacks.mozilla.org/2014/06/easy-audio-capture-with-the-mediarecorder-api/&#xA;https://air.ghost.io/recording-to-an-audio-file-using-html5-and-js/&#xA;https://stackoverflow.com/a/49019356&#xA;"""&#xA;from google.colab.output import eval_js&#xA;from base64 import b64decode&#xA;from scipy.io.wavfile import read as wav_read&#xA;import io&#xA;import ffmpeg&#xA;&#xA;AUDIO_HTML = """&#xA;<code class="echappe-js">&lt;script&gt;&amp;#xA;var my_div = document.createElement(&quot;DIV&quot;);&amp;#xA;var my_p = document.createElement(&quot;P&quot;);&amp;#xA;var my_btn = document.createElement(&quot;BUTTON&quot;);&amp;#xA;var t = document.createTextNode(&quot;Press to start recording&quot;);&amp;#xA;&amp;#xA;my_btn.appendChild(t);&amp;#xA;//my_p.appendChild(my_btn);&amp;#xA;my_div.appendChild(my_btn);&amp;#xA;document.body.appendChild(my_div);&amp;#xA;&amp;#xA;var base64data = 0;&amp;#xA;var reader;&amp;#xA;var recorder, gumStream;&amp;#xA;var recordButton = my_btn;&amp;#xA;&amp;#xA;var handleSuccess = function(stream) {&amp;#xA;  gumStream = stream;&amp;#xA;  var options = {&amp;#xA;    //bitsPerSecond: 8000, //chrome seems to ignore, always 48k&amp;#xA;    mimeType : &amp;#x27;audio/webm;codecs=opus&amp;#x27;&amp;#xA;    //mimeType : &amp;#x27;audio/webm;codecs=pcm&amp;#x27;&amp;#xA;  };            &amp;#xA;  //recorder = new MediaRecorder(stream, options);&amp;#xA;  recorder = new MediaRecorder(stream);&amp;#xA;  recorder.ondataavailable = function(e) {            &amp;#xA;    var url = URL.createObjectURL(e.data);&amp;#xA;    var preview = document.createElement(&amp;#x27;audio&amp;#x27;);&amp;#xA;    preview.controls = true;&amp;#xA;    preview.src = url;&amp;#xA;    document.body.appendChild(preview);&amp;#xA;&amp;#xA;    reader = new FileReader();&amp;#xA;    reader.readAsDataURL(e.data); &amp;#xA;    reader.onloadend = function() {&amp;#xA;      base64data = reader.result;&amp;#xA;      //console.log(&quot;Inside FileReader:&quot; &amp;#x2B; base64data);&amp;#xA;    }&amp;#xA;  };&amp;#xA;  recorder.start();&amp;#xA;  };&amp;#xA;&amp;#xA;recordButton.innerText = &quot;Recording... press to stop&quot;;&amp;#xA;&amp;#xA;navigator.mediaDevices.getUserMedia({audio: true}).then(handleSuccess);&amp;#xA;&amp;#xA;&amp;#xA;function toggleRecording() {&amp;#xA;  if (recorder &amp;amp;&amp;amp; recorder.state == &quot;recording&quot;) {&amp;#xA;      recorder.stop();&amp;#xA;      gumStream.getAudioTracks()[0].stop();&amp;#xA;      recordButton.innerText = &quot;Saving the recording... 
pls wait!&quot;&amp;#xA;  }&amp;#xA;}&amp;#xA;&amp;#xA;// https://stackoverflow.com/a/951057&amp;#xA;function sleep(ms) {&amp;#xA;  return new Promise(resolve =&gt; setTimeout(resolve, ms));&amp;#xA;}&amp;#xA;&amp;#xA;var data = new Promise(resolve=&gt;{&amp;#xA;//recordButton.addEventListener(&quot;click&quot;, toggleRecording);&amp;#xA;recordButton.onclick = ()=&gt;{&amp;#xA;toggleRecording()&amp;#xA;&amp;#xA;sleep(2000).then(() =&gt; {&amp;#xA;  // wait 2000ms for the data to be available...&amp;#xA;  // ideally this should use something like await...&amp;#xA;  //console.log(&quot;Inside data:&quot; &amp;#x2B; base64data)&amp;#xA;  resolve(base64data.toString())&amp;#xA;&amp;#xA;});&amp;#xA;&amp;#xA;}&amp;#xA;});&amp;#xA;      &amp;#xA;&lt;/script&gt;&#xA;"""&#xA;&#xA;def get_audio() :&#xA;  display(HTML(AUDIO_HTML))&#xA;  data = eval_js("data")&#xA;  binary = b64decode(data.split(',')[1])&#xA;  &#xA;  process = (ffmpeg&#xA;    .input('pipe:0')&#xA;    .output('pipe:1', format='wav')&#xA;    .run_async(pipe_stdin=True, pipe_stdout=True, pipe_stderr=True, quiet=True, overwrite_output=True)&#xA;  )&#xA;  output, err = process.communicate(input=binary)&#xA;  &#xA;  riff_chunk_size = len(output) - 8&#xA;  # Break up the chunk size into four bytes, held in b.&#xA;  q = riff_chunk_size&#xA;  b = []&#xA;  for i in range(4) :&#xA;      q, r = divmod(q, 256)&#xA;      b.append(r)&#xA;&#xA;  # Replace bytes 4:8 in proc.stdout with the actual size of the RIFF chunk.&#xA;  riff = output[:4] + bytes(b) + output[8 :]&#xA;&#xA;  sr, audio = wav_read(io.BytesIO(riff))&#xA;&#xA;  return audio, sr&#xA;&#xA;audio, sr = get_audio()&#xA;

    &#xA;

    def recordingTranscribe(audio):&#xA;  data16 = np.frombuffer(audio)&#xA;  return model.stt(data16)&#xA;

    &#xA;

    recordingTranscribe(audio)&#xA;

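    A guess at why the transcription comes out wrong (an assumption, not something stated in the post): the released DeepSpeech models expect 16 kHz, 16-bit PCM mono input, and model.stt() takes an int16 NumPy array, while np.frombuffer() with no dtype reinterprets the buffer as float64 and the browser recording is 48 kHz. A sketch of an adjusted helper, with model assumed to be the already-loaded DeepSpeech model:

import numpy as np

def recordingTranscribe(audio):
    # scipy's wav_read already returns a NumPy array (int16 for the PCM WAV
    # that ffmpeg emits); make the dtype explicit instead of reinterpreting
    # the raw bytes as float64 with a bare np.frombuffer().
    data16 = np.asarray(audio, dtype=np.int16)
    # Note: the browser records at 48 kHz; resampling to 16 kHz (for example
    # by adding ar=16000 to the ffmpeg .output(...) call in get_audio) is
    # also needed before feeding the released DeepSpeech models.
    return model.stt(data16)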

  • Audio recorded with MediaRecorder on Chrome missing duration

    27 October 2016, by suppp111

    I am recording audio (oga/vorbis) files with MediaRecorder. When I record these files through Chrome I get problems: I cannot edit the files in ffmpeg, and when I try to play them in Firefox it says they are corrupt (they do play fine in Chrome though).

    Looking at their metadata with ffmpeg I get this:

    Input #0, matroska,webm, from '91.oga':
     Metadata:
       encoder         : Chrome
     Duration: N/A, start: 0.000000, bitrate: N/A
       Stream #0:0(eng): Audio: opus, 48000 Hz, mono, fltp (default)
    [STREAM]
    index=0
    codec_name=opus
    codec_long_name=Opus (Opus Interactive Audio Codec)
    profile=unknown
    codec_type=audio
    codec_time_base=1/48000
    codec_tag_string=[0][0][0][0]
    codec_tag=0x0000
    sample_fmt=fltp
    sample_rate=48000
    channels=1
    channel_layout=mono
    bits_per_sample=0
    id=N/A
    r_frame_rate=0/0
    avg_frame_rate=0/0
    time_base=1/1000
    start_pts=0
    start_time=0.000000
    duration_ts=N/A
    duration=N/A
    bit_rate=N/A
    max_bit_rate=N/A
    bits_per_raw_sample=N/A
    nb_frames=N/A
    nb_read_frames=N/A
    nb_read_packets=N/A
    DISPOSITION:default=1
    DISPOSITION:dub=0
    DISPOSITION:original=0
    DISPOSITION:comment=0
    DISPOSITION:lyrics=0
    DISPOSITION:karaoke=0
    DISPOSITION:forced=0
    DISPOSITION:hearing_impaired=0
    DISPOSITION:visual_impaired=0
    DISPOSITION:clean_effects=0
    DISPOSITION:attached_pic=0
    TAG:language=eng
    [/STREAM]
    [FORMAT]
    filename=91.oga
    nb_streams=1
    nb_programs=0
    format_name=matroska,webm
    format_long_name=Matroska / WebM
    start_time=0.000000
    duration=N/A
    size=7195
    bit_rate=N/A
    probe_score=100
    TAG:encoder=Chrome

    As you can see, there are problems with the duration. I have looked at posts like this:
    How can I add predefined length to audio recorded from MediaRecorder in Chrome ?

    But even trying that, I got errors when trying to chop and merge files. For example, when running:

    ffmpeg -f concat  -i 89_inputs.txt -c copy final.oga

    I get a lot of this:

    [oga @ 00000000006789c0] Non-monotonous DTS in output stream 0:0; previous: 57612, current: 1980; changing to 57613. This may result in incorrect timestamps in the output file.
    [oga @ 00000000006789c0] Non-monotonous DTS in output stream 0:0; previous: 57613, current: 2041; changing to 57614. This may result in incorrect timestamps in the output file.
    DTS -442721849179034176, next:42521 st:0 invalid dropping
    PTS -442721849179034176, next:42521 invalid dropping st:0
    [oga @ 00000000006789c0] Non-monotonous DTS in output stream 0:0; previous: 57614, current: 2041; changing to 57615. This may result in incorrect timestamps in the output file.
    [oga @ 00000000006789c0] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
    DTS -442721849179031296, next:42521 st:0 invalid dropping
    PTS -442721849179031296, next:42521 invalid dropping st:0

    Does anyone know what we need to do to audio files recorded from Chrome for them to be useful? Or is there a problem with my setup?
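
    One approach that has worked for others with Chrome's MediaRecorder output, offered as a suggestion rather than something from this post: remux each uploaded file once with ffmpeg so the container gets proper duration and seeking metadata before any chopping or concatenation. A minimal sketch in Python (filenames are just the ones mentioned above):

import subprocess

def remux(in_path, out_path):
    # Rewriting the container without re-encoding lets ffmpeg fill in the
    # duration/cues that Chrome's MediaRecorder leaves out.
    subprocess.run(["ffmpeg", "-i", in_path, "-c", "copy", "-y", out_path],
                   check=True)

# e.g. remux("91.oga", "91_fixed.oga") before running the concat step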

    Recorder js:

    if (navigator.getUserMedia) {
     console.log('getUserMedia supported.');

     var constraints = { audio: true };
     var chunks = [];

     var onSuccess = function(stream) {
       var mediaRecorder = new MediaRecorder(stream);

       record.onclick = function() {
         mediaRecorder.start();
         console.log(mediaRecorder.state);
         console.log("recorder started");
         record.style.background = "red";

         stop.disabled = false;
         record.disabled = true;

         var aud = document.getElementById("audioClip");
         start = aud.currentTime;
       }

       stop.onclick = function() {
         console.log(mediaRecorder.state);
         console.log("Recording request sent.");
         mediaRecorder.stop();
       }

       mediaRecorder.onstop = function(e) {
         console.log("data available after MediaRecorder.stop() called.");

         var audio = document.createElement('audio');
         audio.setAttribute('controls', '');
         audio.setAttribute('id', 'audioClip');

         audio.controls = true;
         var blob = new Blob(chunks, { 'type' : 'audio/ogg; codecs="vorbis"' });
         chunks = [];
         var audioURL = window.URL.createObjectURL(blob);
         audio.src = audioURL;

         sendRecToPost(blob);   // this just send the audio blob to the server by post
         console.log("recorder stopped");

       }