Advanced search

Media (9)

Keyword: - Tags -/soundtrack

Other articles (57)

  • Writing a news item

    21 June 2013

    Present changes to your MédiaSPIP, or news about your projects, on your MédiaSPIP using the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the news-item creation form.
    News-item creation form: for a document of type news item, the default fields are: publication date (customize the publication date) (...)

  • Authorizations overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors can edit their own information on the authors page

  • Publishing on MédiaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MédiaSPIP installation is at version 0.2 or later. If in doubt, ask the administrator of your MédiaSPIP.

On other sites (10570)

  • python subprocess ffmpeg return code = 69

    13 June 2023, by Tim Chen

    I call ffmpeg from Python via subprocess.run(['ffmpeg', '-i', file_name, output_file_name], capture_output=True, text=True) to convert audio files coming from the front end to WAV. The backend code is as follows, using Python + FastAPI:

import os
import subprocess
import uuid

from fastapi import FastAPI, File, Request, UploadFile

app = FastAPI()

@app.post("/api/upload/convert")
async def convert_upload_file(request: Request, file: UploadFile = File(...)):
    # Save the upload to a uniquely named temporary file
    token = uuid.uuid4().hex
    tmpFileName = os.path.join(os.path.dirname(__file__), token)
    with open(tmpFileName, "wb") as buffer:
        buffer.write(await file.read())
    await file.seek(0)
    # Convert the saved file to WAV with ffmpeg
    output_path = tmpFileName + '-output.wav'
    command = ['ffmpeg', '-i', tmpFileName, output_path]
    result = subprocess.run(command, capture_output=True, text=True)

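    Since subprocess.run does not raise when ffmpeg exits non-zero, a guard like the following (a minimal sketch, not part of the snippet above) would surface failures immediately instead of letting them pass silently:

if result.returncode != 0:
    # Make ffmpeg's own diagnostics visible instead of continuing silently
    raise RuntimeError(f"ffmpeg exited with {result.returncode}:\n{result.stderr}")
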
    This code usually works, but there are scenarios where it does not. The audio is recorded by JS code (specifically navigator.mediaDevices.getUserMedia({audio: true})).
Audio recorded in Chrome on Windows converts normally and yields the WAV file, but audio recorded in Safari on iOS 15 that is longer than about 3 seconds cannot be converted; ffmpeg exits with returncode=69. The error message is as follows:

    CompletedProcess(args=['ffmpeg', '-i', '5cfb52c503a646bda0f422b517c8014a', '5cfb52c503a646bda0f422b517c8014a-output.wav'], returncode=69, stdout='', stderr="
ffmpeg version 4.4.2-0ubuntu0.22.04.1 Copyright (c) 2000-2021 the FFmpeg developers
built with gcc 11 (Ubuntu 11.2.0-19ubuntu1)
configuration: --prefix=/usr --extra-version=0ubuntu0.22.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-pocketsphinx --enable-librsvg --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil      56. 70.100 / 56. 70.100
libavcodec     58.134.100 / 58.134.100
libavformat    58. 76.100 / 58. 76.100
libavdevice    58. 13.100 / 58. 13.100
libavfilter     7.110.100 /  7.110.100
libswscale      5.  9.100 /  5.  9.100
libswresample   3.  9.100 /  3.  9.100
libpostproc    55.  9.100 / 55.  9.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '5cfb52c503a646bda0f422b517c8014a':
  Metadata:
    major_brand     : iso5
    minor_version   : 1
    compatible_brands: isomiso5hlsf
    creation_time   : 2023-06-11T16:36:53.000000Z
  Duration: 00:00:07.06, start: 0.000000, bitrate: 187 kb/s
  Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 184 kb/s (default)
    Metadata:
      creation_time   : 2023-06-11T16:36:53.000000Z
      handler_name    : Core Media Audio
      vendor_id       : [0][0][0][0]
Stream mapping:
  Stream #0:0 -> #0:0 (aac (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
Output #0, wav, to '5cfb52c503a646bda0f422b517c8014a-output.wav':
  Metadata:
    major_brand     : iso5
    minor_version   : 1
    compatible_brands: isomiso5hlsf
    ISFT            : Lavf58.76.100
  Stream #0:0(und): Audio: pcm_s16le ([1][0][0][0] / 0x0001), 48000 Hz, mono, s16, 768 kb/s (default)
    Metadata:
      creation_time   : 2023-06-11T16:36:53.000000Z
      handler_name    : Core Media Audio
      vendor_id       : [0][0][0][0]
      encoder         : Lavc58.134.100 pcm_s16le
size=       2kB time=00:00:00.00 bitrate=N/A speed=N/A    
[aac @ 0x55f1f8f19fc0] Sample rate index in program config element does not match the sample rate index configured by the container.
[aac @ 0x55f1f8f19fc0] Too large remapped id is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.
[aac @ 0x55f1f8f19fc0] If you want to help, upload a sample of this file to https://streams.videolan.org/upload/ and contact the ffmpeg-devel mailing list. (ffmpeg-devel@ffmpeg.org)
Error while decoding stream #0:0: Not yet implemented in FFmpeg, patches welcome
[aac @ 0x55f1f8f19fc0] Multiple frames in a packet.
[aac @ 0x55f1f8f19fc0] Reserved bit set.
[aac @ 0x55f1f8f19fc0] Number of bands (18) exceeds limit (13).
Error while decoding stream #0:0: Invalid data found when processing input
[aac @ 0x55f1f8f19fc0] Reserved bit set.
[aac @ 0x55f1f8f19fc0] Prediction is not allowed in AAC-LC.
Error while decoding stream #0:0: Invalid data found when processing input
[aac @ 0x55f1f8f19fc0] Reserved bit set.

    For the failing files, I tried running ffmpeg -i input output.wav on the command line after FastAPI had handled the request, and also re-running subprocess.run(['ffmpeg', '-i', file_name, output_path], capture_output=True, text=True); both succeeded. That means the final file on disk must be valid, otherwise these later attempts would hit the same error.

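    One way to test the assumption that the final file is intact (a debugging sketch of mine, not part of the original handler; sha256_of is a hypothetical helper) is to hash the temporary file immediately before invoking ffmpeg and compare that against the same file once the request has finished. If the digests differ, the upload was still incomplete when the conversion ran:

import hashlib

def sha256_of(path):
    # Hash the file so its contents at conversion time can be compared
    # against the "final" file inspected later from the command line.
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            h.update(chunk)
    return h.hexdigest()

# Just before subprocess.run(command, ...):
# print(sha256_of(tmpFileName), os.path.getsize(tmpFileName))
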
    This confuses me. Is there some information I'm missing?

  • How to add audio using ffmpeg when recording video from the browser and streaming to YouTube/Twitch?

    26 July 2021, by Tosh Velaga

    I have a web application I am working on that allows the user to stream video from their browser and simultaneously livestream to both YouTube and Twitch using ffmpeg. The application works fine when I don't need to send any of the audio. Currently I am getting the error below when I try to record video and audio. I am new to using ffmpeg, so any help would be greatly appreciated. Here is my repo if needed: https://github.com/toshvelaga/livestream

    Here is my Node.js server with ffmpeg:

    const child_process = require('child_process') // To be used later for running FFmpeg
const express = require('express')
const http = require('http')
const WebSocketServer = require('ws').Server
const NodeMediaServer = require('node-media-server')
const app = express()
const cors = require('cors')
const path = require('path')
const logger = require('morgan')
require('dotenv').config()

app.use(logger('dev'))
app.use(cors())

app.use(express.json({ limit: '200mb', extended: true }))
app.use(
  express.urlencoded({ limit: '200mb', extended: true, parameterLimit: 50000 })
)

var authRouter = require('./routes/auth')
var compareCodeRouter = require('./routes/compareCode')

app.use('/', authRouter)
app.use('/', compareCodeRouter)

if (process.env.NODE_ENV === 'production') {
  // serve static content
  // npm run build
  app.use(express.static(path.join(__dirname, 'client/build')))

  app.get('*', (req, res) => {
    res.sendFile(path.join(__dirname, 'client/build', 'index.html'))
  })
}

const PORT = process.env.PORT || 8080

app.listen(PORT, () => {
  console.log(`Server is starting on port ${PORT}`)
})

const server = http.createServer(app).listen(3000, () => {
  console.log('Listening on PORT 3000...')
})


const wss = new WebSocketServer({
  server: server,
})

wss.on('connection', (ws, req) => {
  const ffmpeg = child_process.spawn('ffmpeg', [
    // works fine when I use this but when I need audio problems arise
    // '-f',
    // 'lavfi',
    // '-i',
    // 'anullsrc',

    '-i',
    '-',

    '-f',
    'flv',
    '-c',
    'copy',
    `${process.env.TWITCH_STREAM_ADDRESS}`,
    '-f',
    'flv',
    '-c',
    'copy',
    `${process.env.YOUTUBE_STREAM_ADDRESS}`,
    // '-f',
    // 'flv',
    // '-c',
    // 'copy',
    // `${process.env.FACEBOOK_STREAM_ADDRESS}`,
  ])

  ffmpeg.on('close', (code, signal) => {
    console.log(
      'FFmpeg child process closed, code ' + code + ', signal ' + signal
    )
    ws.terminate()
  })

  ffmpeg.stdin.on('error', (e) => {
    console.log('FFmpeg STDIN Error', e)
  })

  ffmpeg.stderr.on('data', (data) => {
    console.log('FFmpeg STDERR:', data.toString())
  })

  ws.on('message', (msg) => {
    console.log('DATA', msg)
    ffmpeg.stdin.write(msg)
  })

  ws.on('close', (e) => {
    console.log('kill: SIGINT')
    ffmpeg.kill('SIGINT')
  })
})

const config = {
  rtmp: {
    port: 1935,
    chunk_size: 60000,
    gop_cache: true,
    ping: 30,
    ping_timeout: 60,
  },
  http: {
    port: 8000,
    allow_origin: '*',
  },
}

var nms = new NodeMediaServer(config)
nms.run()

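    One possible culprit (an assumption, not something confirmed in the post): with mimeType 'video/webm;codecs=h264', browsers typically encode the audio track as Opus, and the FLV container used for RTMP output does not support Opus, so '-c copy' can fail as soon as an audio track is present. Copying the video while re-encoding audio to AAC may be worth trying, e.g.:

ffmpeg -i - -c:v copy -c:a aac -ar 44100 -f flv rtmp://<ingest-address>

    (i.e. replacing '-c', 'copy' with '-c:v', 'copy', '-c:a', 'aac' in the spawn argument list for each output).
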
    Here is my frontend code that records the video/audio and sends it to the server:

import React, { useState, useEffect, useRef } from 'react'
import Navbar from '../../components/Navbar/Navbar'
import './Dashboard.css'

const CAPTURE_OPTIONS = {
  audio: true,
  video: true,
}

function Dashboard() {
  const [mute, setMute] = useState(false)
  const videoRef = useRef()
  const ws = useRef()
  const mediaStream = useUserMedia(CAPTURE_OPTIONS)

  let liveStream
  let liveStreamRecorder

  if (mediaStream && videoRef.current && !videoRef.current.srcObject) {
    videoRef.current.srcObject = mediaStream
  }

  const handleCanPlay = () => {
    videoRef.current.play()
  }

  useEffect(() => {
    ws.current = new WebSocket(
      window.location.protocol.replace('http', 'ws') +
        '//' + // http: -> ws:, https: -> wss:
        'localhost:3000'
    )

    ws.current.onopen = () => {
      console.log('WebSocket Open')
    }

    return () => {
      ws.current.close()
    }
  }, [])

  const startStream = () => {
    liveStream = videoRef.current.captureStream(30) // 30 FPS
    liveStreamRecorder = new MediaRecorder(liveStream, {
      mimeType: 'video/webm;codecs=h264',
      videoBitsPerSecond: 3 * 1024 * 1024,
    })
    liveStreamRecorder.ondataavailable = (e) => {
      ws.current.send(e.data)
      console.log('send data', e.data)
    }
    // Start recording, and dump data every second
    liveStreamRecorder.start(1000)
  }

  const stopStream = () => {
    liveStreamRecorder.stop()
    ws.current.close()
  }

  const toggleMute = () => {
    setMute(!mute)
  }

  return (
    <>
      <Navbar />
      <div className='main'>
        <div>
          {/* presumably the local preview element, wired to videoRef/handleCanPlay */}
          <video ref={videoRef} onCanPlay={handleCanPlay} autoPlay muted={mute} />
        </div>
        <div className='button-container'>
          <button onClick={startStream}>Go Live</button>
          <button onClick={stopStream}>Stop Recording</button>
          <button>Share Screen</button>
          <button onClick={toggleMute}>Mute</button>
        </div>
      </div>
    </>
  )
}

const useUserMedia = (requestedMedia) => {
  const [mediaStream, setMediaStream] = useState(null)

  useEffect(() => {
    async function enableStream() {
      try {
        const stream = await navigator.mediaDevices.getUserMedia(requestedMedia)
        setMediaStream(stream)
      } catch (err) {
        console.log(err)
      }
    }

    if (!mediaStream) {
      enableStream()
    } else {
      return function cleanup() {
        mediaStream.getVideoTracks().forEach((track) => {
          track.stop()
        })
      }
    }
  }, [mediaStream, requestedMedia])

  return mediaStream
}

export default Dashboard


  • ffmpeg failed to load audio file

    14 April 2024, by Vaishnav Ghenge

Failed to load audio: ffmpeg version 5.1.4-0+deb12u1 Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 12 (Debian 12.2.0-14)
  configuration: --prefix=/usr --extra-version=0+deb12u1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libglslang --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librist --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --disable-sndio --enable-libjxl --enable-pocketsphinx --enable-librsvg --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-libplacebo --enable-librav1e --enable-shared
  libavutil      57. 28.100 / 57. 28.100
  libavcodec     59. 37.100 / 59. 37.100
  libavformat    59. 27.100 / 59. 27.100
  libavdevice    59.  7.100 / 59.  7.100
  libavfilter     8. 44.100 /  8. 44.100
  libswscale      6.  7.100 /  6.  7.100
  libswresample   4.  7.100 /  4.  7.100
  libpostproc    56.  6.100 / 56.  6.100
/tmp/tmpjlchcpdm.wav: Invalid data found when processing input


    Backend:


import os
import tempfile
import time
import uuid

from flask import jsonify, request
from werkzeug.utils import secure_filename

# `app` (Flask), `celery_app` (Celery), and `transcribe_with_whisper`
# are defined elsewhere in the project.

@app.route("/transcribe", methods=["POST"])
def transcribe():
    # Check if audio file is present in the request
    if 'audio_file' not in request.files:
        return jsonify({"error": "No file part"}), 400

    audio_file = request.files.get('audio_file')

    # Check if audio_file is sent in files
    if not audio_file:
        return jsonify({"error": "`audio_file` is missing in request.files"}), 400

    # Check if the file is present
    if audio_file.filename == '':
        return jsonify({"error": "No selected file"}), 400

    # Save the file with a unique name
    filename = secure_filename(audio_file.filename)
    unique_filename = os.path.join("uploads", str(uuid.uuid4()) + '_' + filename)
    # audio_file.save(unique_filename)

    # Read the contents of the audio file
    contents = audio_file.read()

    max_file_size = 500 * 1024 * 1024
    if len(contents) > max_file_size:
        return jsonify({"error": "File is too large"}), 400

    # Check if the file extension suggests it's a WAV file
    if not filename.lower().endswith('.wav'):
        # The save above is commented out, so guard the removal
        if os.path.exists(unique_filename):
            os.remove(unique_filename)
        return jsonify({"error": "Only WAV files are supported"}), 400

    print(f"\033[92m{filename}\033[0m")

    # Call Celery task asynchronously
    result = transcribe_audio.delay(contents)

    return jsonify({
        "task_id": result.id,
        "status": "pending"
    })


@celery_app.task
def transcribe_audio(contents):
    # Transcribe the audio
    try:
        # Create a temporary file to save the audio data
        with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as temp_audio:
            temp_path = temp_audio.name
            temp_audio.write(contents)
            # Flush so the whole payload is on disk before it is read back
            temp_audio.flush()

            print(f"\033[92mFile temporary path: {temp_path}\033[0m")
            transcribe_start_time = time.time()

            # Transcribe the audio
            transcription = transcribe_with_whisper(temp_path)

            transcribe_end_time = time.time()
            print(f"\033[92mTranscripted text: {transcription}\033[0m")

            return transcription, transcribe_end_time - transcribe_start_time

    except Exception as e:
        print(f"\033[92mError: {e}\033[0m")
        return str(e)


    Frontend:


useEffect(() => {
    const init = () => {
        navigator.mediaDevices.getUserMedia({audio: true})
            .then((audioStream) => {
                const recorder = new MediaRecorder(audioStream);

                recorder.ondataavailable = e => {
                    if (e.data.size > 0) {
                        setChunks(prevChunks => [...prevChunks, e.data]);
                    }
                };

                recorder.onerror = (e) => {
                    console.log("error: ", e);
                }

                recorder.onstart = () => {
                    console.log("started");
                }

                recorder.start();

                setStream(audioStream);
                setRecorder(recorder);
            });
    }

    init();

    return () => {
        if (recorder && recorder.state === 'recording') {
            recorder.stop();
        }

        if (stream) {
            stream.getTracks().forEach(track => track.stop());
        }
    }
}, []);

useEffect(() => {
    // Send chunks of audio data to the backend at regular intervals
    const intervalId = setInterval(() => {
        if (recorder && recorder.state === 'recording') {
            recorder.requestData(); // Trigger data available event
        }
    }, 8000); // Adjust the interval as needed

    return () => {
        if (intervalId) {
            console.log("Interval cleared");
            clearInterval(intervalId);
        }
    };
}, [recorder]);

useEffect(() => {
    const processAudio = async () => {
        if (chunks.length > 0) {
            // Send the latest chunk to the server for transcription
            const latestChunk = chunks[chunks.length - 1];

            const audioBlob = new Blob([latestChunk]);
            convertBlobToAudioFile(audioBlob);
        }
    };

    void processAudio();
}, [chunks]);

const convertBlobToAudioFile = useCallback((blob: Blob) => {
    // Convert Blob to audio file (e.g., WAV)
    // This conversion may require using a third-party library or service
    // For example, you can use the MediaRecorder API to record audio in WAV format directly
    // Alternatively, you can use a library like recorderjs to perform the conversion

    const reader = new FileReader();
    reader.onload = () => {
        const audioBuffer = reader.result; // ArrayBuffer containing audio data

        // Send audioBuffer to Flask server or perform further processing
        sendAudioToFlask(audioBuffer as ArrayBuffer);
    };

    reader.readAsArrayBuffer(blob);
}, []);

const sendAudioToFlask = useCallback((audioBuffer: ArrayBuffer) => {
    const formData = new FormData();
    formData.append('audio_file', new Blob([audioBuffer]), `speech_audio.wav`);

    console.log(formData.get("audio_file"));

    fetch('http://34.87.75.138:8000/transcribe', {
        method: 'POST',
        body: formData
    })
        .then(response => response.json())
        .then((data: { task_id: string, status: string }) => {
            pendingTaskIdsRef.current.push(data.task_id);
        })
        .catch(error => {
            console.error('Error sending audio to Flask server:', error);
        });
}, []);


    I am trying to pass the audio from the frontend to the Whisper model, which runs in the Flask app.

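    A plausible cause (an assumption, not confirmed in the post): MediaRecorder does not emit WAV. Browsers typically record audio as WebM or Ogg with Opus, so naming the blob speech_audio.wav does not make it a WAV file, and ffmpeg then reports "Invalid data found when processing input" on /tmp/tmpjlchcpdm.wav. A sketch of a server-side fix, assuming ffmpeg is installed and using a hypothetical helper to_wav, is to save the bytes unchanged, let ffmpeg probe the real container, and transcode to WAV before handing the path to Whisper:

import subprocess
import tempfile

def to_wav(contents):
    # Hypothetical helper, not part of the app above.
    # Write the uploaded bytes unchanged; ffmpeg detects the real
    # container (WebM/Ogg/MP4) from the data, not the extension.
    with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as src:
        src.write(contents)
        src.flush()
        src_path = src.name
    wav_path = src_path + ".wav"
    # 16 kHz mono matches what Whisper's preprocessing expects
    result = subprocess.run(
        ["ffmpeg", "-y", "-i", src_path, "-ar", "16000", "-ac", "1", wav_path],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr)
    return wav_path

    The Celery task could then call to_wav(contents) and pass the resulting path to transcribe_with_whisper. Note also that when chunks are produced with recorder.requestData(), only the first chunk carries the container header; later chunks generally decode only when concatenated with everything recorded before them, so converting the latest chunk alone may still fail.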