
Media (91)
-
Head down (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Echoplex (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Discipline (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Letting you (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
1 000 000 (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
999 999 (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
Other articles (98)
-
Customize by adding your logo, banner or background image
5 September 2013. Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013. Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form: For a document of type news, the fields offered by default are: publication date (customize the publication date) (...)
-
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.
On other sites (8603)
-
python subprocess ffmpeg return code = 69
13 June 2023, by Tim Chen
I call ffmpeg from Python through
subprocess.run(['ffmpeg', '-i', file_name, output_file_name], capture_output=True, text=True)
to convert the audio file coming from the front end to a WAV file. The backend code is as follows, using Python + FastAPI:

import os
import subprocess
import uuid

from fastapi import FastAPI, File, Request, UploadFile

app = FastAPI()

@app.post("/api/upload/convert")
async def convert_upload_file(request: Request, file: UploadFile = File(...)):
    token = uuid.uuid4().hex
    tmpFileName = os.path.join(os.path.dirname(__file__), token)
    # Save the upload to disk so ffmpeg can read it
    with open(tmpFileName, "wb") as buffer:
        buffer.write(await file.read())
    await file.seek(0)
    output_path = tmpFileName + '-output.wav'
    command = ['ffmpeg', '-i', tmpFileName, output_path]
    result = subprocess.run(command, capture_output=True, text=True)
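
Independent of the root cause, it may help to check ffmpeg's exit status and surface its stderr instead of letting a failed conversion pass silently. Below is a minimal sketch of that idea; the convert_to_wav helper, the 422 status code and the -y flag are assumptions added here, not part of the original handler.

import subprocess

from fastapi import HTTPException
from fastapi.responses import FileResponse

def convert_to_wav(input_path: str, output_path: str) -> FileResponse:
    # Hypothetical helper: run ffmpeg and report its stderr on failure
    command = ['ffmpeg', '-y', '-i', input_path, output_path]
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        # Include the tail of stderr so the caller can see why ffmpeg gave up
        raise HTTPException(
            status_code=422,
            detail=f"ffmpeg exited with {result.returncode}: {result.stderr[-1000:]}",
        )
    return FileResponse(output_path, media_type='audio/wav')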



This code usually works, but in some scenarios it fails. The audio is recorded by JavaScript code (specifically
navigator.mediaDevices.getUserMedia({audio: true})
).
Audio recorded in Chrome on Windows converts to a WAV file without problems, but audio recorded in Safari on iOS 15 that is longer than about 3 seconds cannot be converted and fails with returncode=69
. The error message is as follows:

CompletedProcess(args=['ffmpeg', '-i', '5cfb52c503a646bda0f422b517c8014a', '5cfb52c503a646bda0f422b517c8014a-output.wav'], returncode=69, stdout='', stderr="
ffmpeg version 4.4.2-0ubuntu0.22.04.1 Copyright (c) 2000-2021 the FFmpeg developers
built with gcc 11 (Ubuntu 11.2.0-19ubuntu1)
configuration: --prefix=/usr --extra-version=0ubuntu0.22.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-pocketsphinx --enable-librsvg --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil 56. 70.100 / 56. 70.100
libavcodec 58.134.100 / 58.134.100
libavformat 58. 76.100 / 58. 76.100
libavdevice 58. 13.100 / 58. 13.100
libavfilter 7.110.100 / 7.110.100
libswscale 5. 9.100 / 5. 9.100
libswresample 3. 9.100 / 3. 9.100
libpostproc 55. 9.100 / 55. 9.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '5cfb52c503a646bda0f422b517c8014a':
 Metadata:
 major_brand : iso5
 minor_version : 1
 compatible_brands: isomiso5hlsf
 creation_time : 2023-06-11T16:36:53.000000Z
 Duration: 00:00:07.06, start: 0.000000, bitrate: 187 kb/s
 Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 184 kb/s (default)
 Metadata:
 creation_time : 2023-06-11T16:36:53.000000Z
 handler_name : Core Media Audio
 vendor_id : [0][0][0][0]
Stream mapping:
 Stream #0:0 -> #0:0 (aac (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
Output #0, wav, to '5cfb52c503a646bda0f422b517c8014a-output.wav':
 Metadata:
 major_brand : iso5
 minor_version : 1
 compatible_brands: isomiso5hlsf
 ISFT : Lavf58.76.100
 Stream #0:0(und): Audio: pcm_s16le ([1][0][0][0] / 0x0001), 48000 Hz, mono, s16, 768 kb/s (default)
 Metadata:
 creation_time : 2023-06-11T16:36:53.000000Z
 handler_name : Core Media Audio
 vendor_id : [0][0][0][0]
 encoder : Lavc58.134.100 pcm_s16le
size= 2kB time=00:00:00.00 bitrate=N/A speed=N/A 
[aac @ 0x55f1f8f19fc0] Sample rate index in program config element does not match the sample rate index configured by the container.
[aac @ 0x55f1f8f19fc0] Too large remapped id is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.
[aac @ 0x55f1f8f19fc0] If you want to help, upload a sample of this file to https://streams.videolan.org/upload/ and contact the ffmpeg-devel mailing list. (ffmpeg-devel@ffmpeg.org)
Error while decoding stream #0:0: Not yet implemented in FFmpeg, patches welcome
[aac @ 0x55f1f8f19fc0] Multiple frames in a packet.
[aac @ 0x55f1f8f19fc0] Reserved bit set.
[aac @ 0x55f1f8f19fc0] Number of bands (18) exceeds limit (13).
Error while decoding stream #0:0: Invalid data found when processing input
[aac @ 0x55f1f8f19fc0] Reserved bit set.
[aac @ 0x55f1f8f19fc0] Prediction is not allowed in AAC-LC.
Error while decoding stream #0:0: Invalid data found when processing input
[aac @ 0x55f1f8f19fc0] Reserved bit set.



For the failing uploads, I later tried converting the same file both on the command line with
ffmpeg -i input output.wav
and with
subprocess.run(['ffmpeg', '-i', file_name, output_path], capture_output=True, text=True)
after FastAPI had handled the request, and both succeeded. So the file that ends up on disk must be fine; otherwise these later checks would fail with the same error.

This confuses me. Is there some information I'm missing?
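
One way to narrow this down could be to probe the exact file the endpoint wrote to disk, right before running the conversion, and compare that with what ffprobe reports for the copy that converts successfully later; if the two differ, the upload path (chunked writes, truncation) is more suspect than ffmpeg itself. A small diagnostic sketch, assuming the probe helper below, which is not part of the original code:

import json
import subprocess

def probe(path: str) -> dict:
    # Ask ffprobe to describe the container and streams as JSON
    result = subprocess.run(
        ['ffprobe', '-v', 'error', '-print_format', 'json',
         '-show_format', '-show_streams', path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

info = probe('5cfb52c503a646bda0f422b517c8014a')
print(info['format']['format_name'], info['format'].get('duration'))
for stream in info['streams']:
    print(stream['codec_type'], stream['codec_name'], stream.get('sample_rate'))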


-
How to add audio using ffmpeg when recording video from the browser and streaming to YouTube/Twitch?
26 July 2021, by Tosh Velaga
I have a web application I am working on that allows the user to stream video from their browser and simultaneously livestream to both YouTube and Twitch using ffmpeg. The application works fine when I don't need to send any audio. Currently I am getting the error below when I try to record video and audio. I am new to using ffmpeg, so any help would be greatly appreciated. Here is my repo if needed: https://github.com/toshvelaga/livestream


Here is my Node.js server with ffmpeg:


const child_process = require('child_process') // To be used later for running FFmpeg
const express = require('express')
const http = require('http')
const WebSocketServer = require('ws').Server
const NodeMediaServer = require('node-media-server')
const app = express()
const cors = require('cors')
const path = require('path')
const logger = require('morgan')
require('dotenv').config()

app.use(logger('dev'))
app.use(cors())

app.use(express.json({ limit: '200mb', extended: true }))
app.use(
 express.urlencoded({ limit: '200mb', extended: true, parameterLimit: 50000 })
)

var authRouter = require('./routes/auth')
var compareCodeRouter = require('./routes/compareCode')

app.use('/', authRouter)
app.use('/', compareCodeRouter)

if (process.env.NODE_ENV === 'production') {
 // serve static content
 // npm run build
 app.use(express.static(path.join(__dirname, 'client/build')))

 app.get('*', (req, res) => {
 res.sendFile(path.join(__dirname, 'client/build', 'index.html'))
 })
}

const PORT = process.env.PORT || 8080

app.listen(PORT, () => {
 console.log(`Server is starting on port ${PORT}`)
})

const server = http.createServer(app).listen(3000, () => {
 console.log('Listening on PORT 3000...')
})


const wss = new WebSocketServer({
 server: server,
})

wss.on('connection', (ws, req) => {
 const ffmpeg = child_process.spawn('ffmpeg', [
 // works fine when I use this but when I need audio problems arise
 // '-f',
 // 'lavfi',
 // '-i',
 // 'anullsrc',

 '-i',
 '-',

 '-f',
 'flv',
 '-c',
 'copy',
 `${process.env.TWITCH_STREAM_ADDRESS}`,
 '-f',
 'flv',
 '-c',
 'copy',
 `${process.env.YOUTUBE_STREAM_ADDRESS}`,
 // '-f',
 // 'flv',
 // '-c',
 // 'copy',
 // `${process.env.FACEBOOK_STREAM_ADDRESS}`,
 ])

 ffmpeg.on('close', (code, signal) => {
 console.log(
 'FFmpeg child process closed, code ' + code + ', signal ' + signal
 )
 ws.terminate()
 })

 ffmpeg.stdin.on('error', (e) => {
 console.log('FFmpeg STDIN Error', e)
 })

 ffmpeg.stderr.on('data', (data) => {
 console.log('FFmpeg STDERR:', data.toString())
 })

 ws.on('message', (msg) => {
 console.log('DATA', msg)
 ffmpeg.stdin.write(msg)
 })

 ws.on('close', (e) => {
 console.log('kill: SIGINT')
 ffmpeg.kill('SIGINT')
 })
})

const config = {
 rtmp: {
 port: 1935,
 chunk_size: 60000,
 gop_cache: true,
 ping: 30,
 ping_timeout: 60,
 },
 http: {
 port: 8000,
 allow_origin: '*',
 },
}

var nms = new NodeMediaServer(config)
nms.run()
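
One detail that may matter here: MediaRecorder's WebM audio track is usually Opus, and FLV/RTMP ingest (YouTube, Twitch) expects AAC audio, so '-c copy' can fail as soon as an audio track is present. Below is a hedged variation of the spawn arguments that keeps copying the video but transcodes the audio to AAC; the sample rate and bitrate are illustrative assumptions, not values from the question.

const ffmpeg = child_process.spawn('ffmpeg', [
  '-i', '-', // read the recorded WebM chunks from stdin

  // First output: copy the H.264 video, re-encode the audio to AAC for FLV
  '-c:v', 'copy',
  '-c:a', 'aac',
  '-ar', '44100',
  '-b:a', '128k',
  '-f', 'flv',
  `${process.env.TWITCH_STREAM_ADDRESS}`,

  // Second output: same codecs for the YouTube ingest URL
  '-c:v', 'copy',
  '-c:a', 'aac',
  '-ar', '44100',
  '-b:a', '128k',
  '-f', 'flv',
  `${process.env.YOUTUBE_STREAM_ADDRESS}`,
])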



Here is my frontend code that records the video/audio and sends it to the server:


import React, { useState, useEffect, useRef } from 'react'
import Navbar from '../../components/Navbar/Navbar'
import './Dashboard.css'

const CAPTURE_OPTIONS = {
 audio: true,
 video: true,
}

function Dashboard() {
 const [mute, setMute] = useState(false)
 const videoRef = useRef()
 const ws = useRef()
 const mediaStream = useUserMedia(CAPTURE_OPTIONS)

 let liveStream
 let liveStreamRecorder

 if (mediaStream && videoRef.current && !videoRef.current.srcObject) {
 videoRef.current.srcObject = mediaStream
 }

 const handleCanPlay = () => {
 videoRef.current.play()
 }

 useEffect(() => {
 ws.current = new WebSocket(
 window.location.protocol.replace('http', 'ws') +
 '//' + // http: -> ws:, https: -> wss:
 'localhost:3000'
 )

 ws.current.onopen = () => {
 console.log('WebSocket Open')
 }

 return () => {
 ws.current.close()
 }
 }, [])

 const startStream = () => {
 liveStream = videoRef.current.captureStream(30) // 30 FPS
 liveStreamRecorder = new MediaRecorder(liveStream, {
 mimeType: 'video/webm;codecs=h264',
 videoBitsPerSecond: 3 * 1024 * 1024,
 })
 liveStreamRecorder.ondataavailable = (e) => {
 ws.current.send(e.data)
 console.log('send data', e.data)
 }
 // Start recording, and dump data every second
 liveStreamRecorder.start(1000)
 }

 const stopStream = () => {
 liveStreamRecorder.stop()
 ws.current.close()
 }

 const toggleMute = () => {
 setMute(!mute)
 }

 return (
    <>
      <Navbar />
      <div className='main'>
        <div>
          <video ref={videoRef} onCanPlay={handleCanPlay} autoPlay muted />
        </div>
        <div className='button-container'>
          <button onClick={startStream}>Go Live</button>
          <button onClick={stopStream}>Stop Recording</button>
          <button>Share Screen</button>
          <button onClick={toggleMute}>Mute</button>
        </div>
      </div>
    </>
  )
}

const useUserMedia = (requestedMedia) => {
 const [mediaStream, setMediaStream] = useState(null)

 useEffect(() => {
 async function enableStream() {
 try {
 const stream = await navigator.mediaDevices.getUserMedia(requestedMedia)
 setMediaStream(stream)
 } catch (err) {
 console.log(err)
 }
 }

 if (!mediaStream) {
 enableStream()
 } else {
 return function cleanup() {
 mediaStream.getVideoTracks().forEach((track) => {
 track.stop()
 })
 }
 }
 }, [mediaStream, requestedMedia])

 return mediaStream
}

export default Dashboard
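
Since the recorder is created with a hard-coded 'video/webm;codecs=h264' mimeType, it may also be worth confirming what the current browser can actually record, and that the captured stream carries an audio track at all. A small sketch using MediaRecorder.isTypeSupported; the candidate list is only an example, not taken from the repo.

// Pick the first mimeType the current browser is able to record
const candidates = [
  'video/webm;codecs=h264,opus',
  'video/webm;codecs=vp8,opus',
  'video/webm',
]
const mimeType = candidates.find((t) => MediaRecorder.isTypeSupported(t)) || ''
console.log('using mimeType:', mimeType)

// Sanity check: make sure the captured stream really contains audio
const stream = videoRef.current.captureStream(30)
console.log('audio tracks:', stream.getAudioTracks().length)

const recorder = new MediaRecorder(stream, {
  mimeType,
  videoBitsPerSecond: 3 * 1024 * 1024,
})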



-
Announcing our latest open source project: DeviceDetector
This blog post is an announcement for our latest open source project release: DeviceDetector! The Universal Device Detection library will parse any User Agent and detect the browser, operating system, device used (desktop, tablet, mobile, TV, car, console, etc.), brand and model.
Read on to learn more about this exciting release.
Why did we create DeviceDetector?
Our previous library, UserAgentParser, could only detect operating systems and browsers. But as more and more traffic comes from mobile devices such as smartphones and tablets, it is increasingly important to know which devices a website's visitors use.
To ensure that device detection within Piwik gets the attention it needs to be as accurate as possible, we decided to move that part of Piwik into a separate project that we will maintain on its own. As a standalone project, we hope DeviceDetector will gain better visibility as well as better support by and for the community!
DeviceDetector is hosted on GitHub at piwik/device-detector. It is also available as a composer package through Packagist.
How DeviceDetector works
Every client requesting data from a webserver identifies itself by sending a so-called User-Agent header with the request. Such User Agents can contain several pieces of information, such as:
- client name and version (clients can be browsers or other software like feed readers, media players, apps,…)
- operating system name and version
- device identifier, which can be used to detect the brand and model.
For example:
Mozilla/5.0 (Linux; Android 4.4.2; Nexus 5 Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.99 Mobile Safari/537.36
This User Agent contains the following information: the operating system is Android 4.4.2, the client uses the browser Chrome Mobile 32.0.1700.99, and the device is a Google Nexus 5 smartphone.
What DeviceDetector currently detects
DeviceDetector can detect bots (such as search engines, feed fetchers and site monitors); five different client types, including around 100 browsers, 15 feed readers, some media players, personal information managers (such as mail clients) and mobile apps using the AFNetworking framework; around 80 operating systems; and nine device types (smartphones, tablets, feature phones, consoles, TVs, car browsers, cameras, smart displays and desktop devices) from over 180 brands.
Note : Piwik itself currently does not use the full feature set of DeviceDetector. Client detection is currently not implemented in Piwik (only detected browsers are reported, other clients are marked as Unknown). Client detection will be implemented into Piwik in the future, follow #5413 to stay updated.
Performance of DeviceDetector
Our detections are currently handled by an enormous number of regexes defined in several .yml files. As parsing these .yml files is a bit slow, DeviceDetector can cache the parsed results. By default DeviceDetector uses a static cache, meaning everything is cached in static variables. Since that only speeds up repeated detections within a single process, there are also adapters for caching in files or memcache, to speed up detection across requests.
How can users help contribute to DeviceDetector?
Submit your devices that are not detected yet
If you own a device that is currently not detected correctly by DeviceDetector, please create an issue on GitHub.
To check whether your device is detected correctly, go to your Piwik server, click on the ‘Settings’ link, then click on ‘Device Detection’ under the Diagnostic menu. If the data does not match, please copy the displayed User Agent and use it, together with your device data, to create a ticket.
Submit a list of your User Agents
In order to create new detections or improve existing ones, we need lists of User Agents. If you have a website visited mostly by non-desktop devices, it would be useful if you sent us a list of the User Agents that visited your website. To do so, you need access to your access logs. The following command will extract the User Agents:
zcat ~/path/to/access/logs* | awk -F'"' '{print $6}' | sort | uniq -c | sort -rn | head -n20000 > /home/piwik/top-user-agents.txt
If you want to help us with this data, please get in touch at devicedetector@piwik.org.
Submit improvements on GitHub
As DeviceDetector is a free/libre library, we invite you to help us improve the detections as well as the code. Please feel free to create tickets and pull requests on GitHub.
What’s the next big thing for DeviceDetector?
Please check out the list of issues in the device-detector issue tracker.
We hope the community will answer our call for help. Together, we can make DeviceDetector the most powerful device detection library!
Happy Device Detection,