
Other articles (63)

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and other formats (open office, microsoft office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • Managing creation and editing rights for objects

    8 February 2011

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, configurable in the form template management; adding notes to articles; adding captions and annotations to images;

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (7200)

  • FFmpeg "Non-monotonous DTS in output stream" error when processing video from Safari's MediaRecorder

    17 July 2024, by Hackermon

    I'm recording a video stream in Safari with MediaRecorder, then sending it to a remote server, which re-encodes the video with ffmpeg. When re-encoding with FFmpeg I get a lot of warnings, and the final video is broken: frames glitch and are out of sync, but the audio sounds fine.

    


    Here's my MediaRecorder script:

    


const camera = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
const recorder = new MediaRecorder(camera, {
    mimeType: 'video/mp4', // Safari only supports MP4
    bitsPerSecond: 1_000_000,
});

recorder.ondataavailable = async ({ data: blob }) => {
    // open the recorded contents in a new tab
    const fileURL = URL.createObjectURL(blob);
    window.open(fileURL, '_blank');
};

recorder.start();
setTimeout(() => recorder.stop(), 5000);
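
    Not from the original post: a common MediaRecorder pattern, shown below only as a sketch, is to request timesliced chunks and assemble them into a single Blob in onstop. It assumes the same camera stream as above; uploadBlob() is a hypothetical placeholder for sending the result to the server.

// Sketch (assumption, not the original code): collect timesliced chunks
// and build one Blob when recording stops.
const chunks = [];
const chunkedRecorder = new MediaRecorder(camera, { mimeType: 'video/mp4' });

chunkedRecorder.ondataavailable = ({ data }) => {
  if (data && data.size > 0) chunks.push(data);
};

chunkedRecorder.onstop = () => {
  const fullRecording = new Blob(chunks, { type: 'video/mp4' });
  uploadBlob(fullRecording); // hypothetical upload helper
};

chunkedRecorder.start(1000); // request a chunk roughly every second
setTimeout(() => chunkedRecorder.stop(), 5000);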


    


    I download the video blob from Safari and use this command to re-encode it:

    


    ffmpeg -i ./blob.mp4 -preset ultrafast -strict -2 -threads 10 -c copy ./output.mp4


    


    Logs:

    


    ffmpeg version 4.2.7-0ubuntu0.1 Copyright (c) 2000-2022 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.4.0-1ubuntu1~20.04.1)
  configuration: --prefix=/usr --extra-version=0ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x559826616000] DTS 29 < 313 out of order
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from './chunk1.mp4':
  Metadata:
    major_brand     : iso5
    minor_version   : 1
    compatible_brands: isomiso5hlsf
    creation_time   : 2024-07-17T14:30:47.000000Z
  Duration: 00:00:01.00, start: 0.000000, bitrate: 3937 kb/s
    Stream #0:0(und): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p(progressive), 640x480 [SAR 1:1 DAR 4:3], 6218 kb/s, 33.36 fps, 600 tbr, 600 tbn, 1200 tbc (default)
    Metadata:
      creation_time   : 2024-07-17T14:30:47.000000Z
      handler_name    : Core Media Video
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 420 kb/s (default)
    Metadata:
      creation_time   : 2024-07-17T14:30:47.000000Z
      handler_name    : Core Media Audio
File './safari3.mp4' already exists. Overwrite ? [y/N] Output #0, mp4, to './safari3.mp4':
  Metadata:
    major_brand     : iso5
    minor_version   : 1
    compatible_brands: isomiso5hlsf
    encoder         : Lavf58.29.100
    Stream #0:0(und): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p(progressive), 640x480 [SAR 1:1 DAR 4:3], q=2-31, 6218 kb/s, 33.36 fps, 600 tbr, 19200 tbn, 600 tbc (default)
    Metadata:
      creation_time   : 2024-07-17T14:30:47.000000Z
      handler_name    : Core Media Video
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 420 kb/s (default)
    Metadata:
      creation_time   : 2024-07-17T14:30:47.000000Z
      handler_name    : Core Media Audio
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[mp4 @ 0x559826644340] Non-monotonous DTS in output stream 0:0; previous: 10016, current: 928; changing to 10017. This may result in incorrect timestamps in the output file.
[mp4 @ 0x559826644340] Non-monotonous DTS in output stream 0:0; previous: 10017, current: 1568; changing to 10018. This may result in incorrect timestamps in the output file.
[mp4 @ 0x559826644340] Non-monotonous DTS in output stream 0:0; previous: 10018, current: 2208; changing to 10019. This may result in incorrect timestamps in the output file.
...x100
frame=  126 fps=0.0 q=-1.0 Lsize=     479kB time=00:00:00.97 bitrate=4026.2kbits/s speed= 130x    
video:425kB audio:51kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.652696%


    


    Not sure what's happening or how to fix it. This issue only happens in Safari; videos from Chrome are perfectly fine.

    


    I've tried various flags:

    


    -fflags +igndts
-bsf:a aac_adtstoasc
-c:v libvpx-vp9 -c:a libopus
etc


    


    None of them seem to fix the issue.
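
    Not from the original post, and not confirmed to fix this case: since -c copy keeps the original packet timestamps, one thing sometimes tried on the server is a full re-encode with regenerated timestamps instead of a stream copy. A minimal Node sketch, assuming the blob has been saved as ./blob.mp4:

// Sketch only (assumption): re-encode instead of stream-copying, and ask
// ffmpeg to regenerate presentation timestamps from the input.
const { execFile } = require('child_process');

execFile('ffmpeg', [
  '-fflags', '+genpts',   // input option: regenerate pts before demuxing
  '-i', './blob.mp4',
  '-c:v', 'libx264',      // full video re-encode instead of -c copy
  '-preset', 'ultrafast',
  '-c:a', 'aac',
  '-movflags', '+faststart',
  './output.mp4',
], (err, stdout, stderr) => {
  if (err) console.error(stderr);
});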

    


  • python subprocess ffmpeg return code = 69

    13 June 2023, by Tim Chen

    I am calling ffmpeg through subprocess.run(['ffmpeg', '-i', file_name, output_file_name], capture_output=True, text=True) in Python to convert the audio file coming from the front end into a WAV file. The backend code is as follows, using Python + FastAPI:

    


import os
import subprocess
import uuid

from fastapi import FastAPI, File, Request, UploadFile

app = FastAPI()

@app.post("/api/upload/convert")
async def convert_upload_file(request: Request, file: UploadFile = File(...)):
    token = uuid.uuid4().hex
    tmpFileName = os.path.join(os.path.dirname(__file__), token)
    # write the uploaded audio to a temporary file next to this module
    with open(tmpFileName, "wb") as buffer:
        buffer.write(await file.read())
    await file.seek(0)
    output_path = tmpFileName + '-output.wav'
    command = ['ffmpeg', '-i', tmpFileName, output_path]
    result = subprocess.run(command, capture_output=True, text=True)


    


    This code usually works, but there are some scenarios where it doesn't. The audio file is recorded by JS code (specifically navigator.mediaDevices.getUserMedia({audio: true})).
    Audio recorded in Chrome on Windows converts normally and yields the WAV file, but audio recorded in Safari on iOS 15 for longer than 3 seconds cannot be converted and fails with returncode=69. The error message is as follows:

    


    CompletedProcess(args=['ffmpeg', '-i', '5cfb52c503a646bda0f422b517c8014a', '5cfb52c503a646bda0f422b517c8014a-output.wav'], returncode=69, stdout='', stderr="
ffmpeg version 4.4.2-0ubuntu0.22.04.1 Copyright (c) 2000-2021 the FFmpeg developers
built with gcc 11 (Ubuntu 11.2.0-19ubuntu1)
configuration: --prefix=/usr --extra-version=0ubuntu0.22.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-pocketsphinx --enable-librsvg --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil      56. 70.100 / 56. 70.100
libavcodec     58.134.100 / 58.134.100
libavformat    58. 76.100 / 58. 76.100
libavdevice    58. 13.100 / 58. 13.100
libavfilter     7.110.100 /  7.110.100
libswscale      5.  9.100 /  5.  9.100
libswresample   3.  9.100 /  3.  9.100
libpostproc    55.  9.100 / 55.  9.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '5cfb52c503a646bda0f422b517c8014a':
  Metadata:
    major_brand     : iso5
    minor_version   : 1
    compatible_brands: isomiso5hlsf
    creation_time   : 2023-06-11T16:36:53.000000Z
  Duration: 00:00:07.06, start: 0.000000, bitrate: 187 kb/s
  Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 184 kb/s (default)
    Metadata:
      creation_time   : 2023-06-11T16:36:53.000000Z
      handler_name    : Core Media Audio
      vendor_id       : [0][0][0][0]
Stream mapping:
  Stream #0:0 -> #0:0 (aac (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
Output #0, wav, to '5cfb52c503a646bda0f422b517c8014a-output.wav':
  Metadata:
    major_brand     : iso5
    minor_version   : 1
    compatible_brands: isomiso5hlsf
    ISFT            : Lavf58.76.100
  Stream #0:0(und): Audio: pcm_s16le ([1][0][0][0] / 0x0001), 48000 Hz, mono, s16, 768 kb/s (default)
    Metadata:
      creation_time   : 2023-06-11T16:36:53.000000Z
      handler_name    : Core Media Audio
      vendor_id       : [0][0][0][0]
      encoder         : Lavc58.134.100 pcm_s16le
size=       2kB time=00:00:00.00 bitrate=N/A speed=N/A    
[aac @ 0x55f1f8f19fc0] Sample rate index in program config element does not match the sample rate index configured by the container.
[aac @ 0x55f1f8f19fc0] Too large remapped id is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.
[aac @ 0x55f1f8f19fc0] If you want to help, upload a sample of this file to https://streams.videolan.org/upload/ and contact the ffmpeg-devel mailing list. (ffmpeg-devel@ffmpeg.org)
Error while decoding stream #0:0: Not yet implemented in FFmpeg, patches welcome
[aac @ 0x55f1f8f19fc0] Multiple frames in a packet.
[aac @ 0x55f1f8f19fc0] Reserved bit set.
[aac @ 0x55f1f8f19fc0] Number of bands (18) exceeds limit (13).
Error while decoding stream #0:0: Invalid data found when processing input
[aac @ 0x55f1f8f19fc0] Reserved bit set.
[aac @ 0x55f1f8f19fc0] Prediction is not allowed in AAC-LC.
Error while decoding stream #0:0: Invalid data found when processing input
[aac @ 0x55f1f8f19fc0] Reserved bit set.


    


    For the failing file, after FastAPI has handled the request I tried running ffmpeg -i input output.wav on the command line, and also calling subprocess.run(['ffmpeg', '-i', file_name, output_path], capture_output=True, text=True) again manually, and both succeeded. That means the saved file itself must be fine; otherwise these later checks would hit the same error.

    


    This confuses me. Is there some information I'm missing?
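
    Not part of the original post: since the failing files come from the browser side, one small diagnostic sketch (an assumption, in the same JS recording context the question mentions) is to log which container/codec each browser actually records before uploading, so the working Chrome files and the failing iOS 15 Safari files can be compared.

// Sketch (assumption): inspect what the browser will actually record,
// to compare the working (Chrome) and failing (iOS 15 Safari) uploads.
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

for (const type of ['audio/webm;codecs=opus', 'audio/mp4', 'audio/webm']) {
  console.log(type, MediaRecorder.isTypeSupported(type));
}

const recorder = new MediaRecorder(stream);
console.log('recording as', recorder.mimeType); // container/codec sent to the backend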

    


  • How to add audio using ffmpeg when recording video from the browser and streaming to YouTube/Twitch?

    26 July 2021, by Tosh Velaga

    I am working on a web application that allows the user to stream video from their browser and simultaneously livestream to both YouTube and Twitch using ffmpeg. The application works fine when I don't need to send any of the audio. Currently I am getting the error below when I try to record video and audio. I am new to using ffmpeg, so any help would be greatly appreciated. Here is also my repo if needed: https://github.com/toshvelaga/livestream

    


    Here is my Node.js server with ffmpeg:

    


    const child_process = require('child_process') // To be used later for running FFmpeg
const express = require('express')
const http = require('http')
const WebSocketServer = require('ws').Server
const NodeMediaServer = require('node-media-server')
const app = express()
const cors = require('cors')
const path = require('path')
const logger = require('morgan')
require('dotenv').config()

app.use(logger('dev'))
app.use(cors())

app.use(express.json({ limit: '200mb', extended: true }))
app.use(
  express.urlencoded({ limit: '200mb', extended: true, parameterLimit: 50000 })
)

var authRouter = require('./routes/auth')
var compareCodeRouter = require('./routes/compareCode')

app.use('/', authRouter)
app.use('/', compareCodeRouter)

if (process.env.NODE_ENV === 'production') {
  // serve static content
  // npm run build
  app.use(express.static(path.join(__dirname, 'client/build')))

  app.get('*', (req, res) => {
    res.sendFile(path.join(__dirname, 'client/build', 'index.html'))
  })
}

const PORT = process.env.PORT || 8080

app.listen(PORT, () => {
  console.log(`Server is starting on port ${PORT}`)
})

const server = http.createServer(app).listen(3000, () => {
  console.log('Listening on PORT 3000...')
})


const wss = new WebSocketServer({
  server: server,
})

wss.on('connection', (ws, req) => {
  const ffmpeg = child_process.spawn('ffmpeg', [
    // works fine when I use this but when I need audio problems arise
    // '-f',
    // 'lavfi',
    // '-i',
    // 'anullsrc',

    '-i',
    '-',

    '-f',
    'flv',
    '-c',
    'copy',
    `${process.env.TWITCH_STREAM_ADDRESS}`,
    '-f',
    'flv',
    '-c',
    'copy',
    `${process.env.YOUTUBE_STREAM_ADDRESS}`,
    // '-f',
    // 'flv',
    // '-c',
    // 'copy',
    // `${process.env.FACEBOOK_STREAM_ADDRESS}`,
  ])

  ffmpeg.on('close', (code, signal) => {
    console.log(
      'FFmpeg child process closed, code ' + code + ', signal ' + signal
    )
    ws.terminate()
  })

  ffmpeg.stdin.on('error', (e) => {
    console.log('FFmpeg STDIN Error', e)
  })

  ffmpeg.stderr.on('data', (data) => {
    console.log('FFmpeg STDERR:', data.toString())
  })

  ws.on('message', (msg) => {
    console.log('DATA', msg)
    ffmpeg.stdin.write(msg)
  })

  ws.on('close', (e) => {
    console.log('kill: SIGINT')
    ffmpeg.kill('SIGINT')
  })
})

const config = {
  rtmp: {
    port: 1935,
    chunk_size: 60000,
    gop_cache: true,
    ping: 30,
    ping_timeout: 60,
  },
  http: {
    port: 8000,
    allow_origin: '*',
  },
}

var nms = new NodeMediaServer(config)
nms.run()


    


    Here is my frontend code that records the video/audio and sends it to the server:

    


import React, { useState, useEffect, useRef } from 'react'
import Navbar from '../../components/Navbar/Navbar'
import './Dashboard.css'

const CAPTURE_OPTIONS = {
  audio: true,
  video: true,
}

function Dashboard() {
  const [mute, setMute] = useState(false)
  const videoRef = useRef()
  const ws = useRef()
  const mediaStream = useUserMedia(CAPTURE_OPTIONS)

  let liveStream
  let liveStreamRecorder

  if (mediaStream && videoRef.current && !videoRef.current.srcObject) {
    videoRef.current.srcObject = mediaStream
  }

  const handleCanPlay = () => {
    videoRef.current.play()
  }

  useEffect(() => {
    ws.current = new WebSocket(
      window.location.protocol.replace('http', 'ws') +
        '//' + // http: -> ws:, https: -> wss:
        'localhost:3000'
    )

    ws.current.onopen = () => {
      console.log('WebSocket Open')
    }

    return () => {
      ws.current.close()
    }
  }, [])

  const startStream = () => {
    liveStream = videoRef.current.captureStream(30) // 30 FPS
    liveStreamRecorder = new MediaRecorder(liveStream, {
      mimeType: 'video/webm;codecs=h264',
      videoBitsPerSecond: 3 * 1024 * 1024,
    })
    liveStreamRecorder.ondataavailable = (e) => {
      ws.current.send(e.data)
      console.log('send data', e.data)
    }
    // Start recording, and dump data every second
    liveStreamRecorder.start(1000)
  }

  const stopStream = () => {
    liveStreamRecorder.stop()
    ws.current.close()
  }

  const toggleMute = () => {
    setMute(!mute)
  }

  return (
    <>
      <Navbar />
      <div className='main'>
        <div>
          <video ref={videoRef} onCanPlay={handleCanPlay} autoPlay muted={mute} />
        </div>
        <div className='button-container'>
          <button onClick={startStream}>Go Live</button>
          <button onClick={stopStream}>Stop Recording</button>
          <button>Share Screen</button>
          <button onClick={toggleMute}>Mute</button>
        </div>
      </div>
    </>
  )
}

const useUserMedia = (requestedMedia) => {
  const [mediaStream, setMediaStream] = useState(null)

  useEffect(() => {
    async function enableStream() {
      try {
        const stream = await navigator.mediaDevices.getUserMedia(requestedMedia)
        setMediaStream(stream)
      } catch (err) {
        console.log(err)
      }
    }

    if (!mediaStream) {
      enableStream()
    } else {
      return function cleanup() {
        mediaStream.getVideoTracks().forEach((track) => {
          track.stop()
        })
      }
    }
  }, [mediaStream, requestedMedia])

  return mediaStream
}

export default Dashboard
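
    Not from the original post, and only a sketch: standard FLV cannot carry Opus audio, which is the usual audio codec in video/webm recordings from Chrome's MediaRecorder, so one approach sometimes tried is to copy the video stream but transcode the audio to AAC in the spawn arguments. Whether that resolves this particular setup is an assumption.

// Sketch (assumption): keep the browser's H.264 video as-is but transcode
// the Opus audio to AAC, which FLV/RTMP can carry.
const ffmpegWithAudio = child_process.spawn('ffmpeg', [
  '-i', '-',        // read the WebSocket chunks from stdin
  '-c:v', 'copy',   // H.264 video can be passed through
  '-c:a', 'aac',    // Opus is not valid in FLV, so re-encode the audio
  '-ar', '44100',
  '-f', 'flv',
  `${process.env.TWITCH_STREAM_ADDRESS}`,
]);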
