
Media (3)
-
Example of action buttons for a collaborative collection
27 February 2013
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013
Updated: February 2013
Language: English
Type: Image
-
Collections - Quick creation form
19 February 2013
Updated: February 2013
Language: French
Type: Image
Other articles (94)
-
Improving the base version
13 September 2013 - Nicer multiple selection
The Chosen plugin improves the usability of multiple-selection fields; see the two images that follow for a comparison.
To use it, activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
-
Emballe médias: what is it for?
4 February 2011 - This plugin is designed to manage sites that publish documents of all types.
It creates "media" items: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a given "media" article; (...)
-
User profiles
12 April 2011 - Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu entry is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
Users can also edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...)
On other sites (10116)
-
FFmpeg "Queue input is backward in time"
2 April 2022, by Spartan 117
I am trying to combine two audio files and delay the second one. Here's my command:


ffmpeg -i RTb295d0534191e1acb22a45bb971a12e6.mka -i RT103bfe5f4b129860f69cd8e820f3a10b.mka -filter_complex "[1:a]adelay=13500s:all=1[apad]; [0:a][apad]amix=inputs=2:weights=1|1[aout]" -map [aout] combined_audio.mka



Here is the output I'm getting. The problem is that the second audio ends up delayed by 5 hours and 45 minutes instead of 3 hours and 45 minutes:


ffmpeg -i RTb295d0534191e1acb22a45bb971a12e6.mka -i RT103bfe5f4b129860f69cd8e820f3a10b.mka -filter_complex "[1:a]adelay=13500s:all=1[apad]; [0:a][apad]amix=inputs=2:weights=1|1[aout]" -map [aout] combined_audio.mka
ffmpeg version n5.0-4-g911d7f167c-20220311 Copyright (c) 2000-2022 the FFmpeg developers
 built with gcc 11.2.0 (crosstool-NG 1.24.0.533_681aaef)
 configuration: --prefix=/ffbuild/prefix --pkg-config-flags=--static --pkg-config=pkg-config --cross-prefix=x86_64-w64-mingw32- --arch=x86_64 --target-os=mingw32 --enable-gpl --enable-version3 --disable-debug --disable-w32threads --enable-pthreads --enable-iconv --enable-libxml2 --enable-zlib --enable-libfreetype --enable-libfribidi --enable-gmp --enable-lzma --enable-fontconfig --enable-libvorbis --enable-opencl --disable-libpulse --enable-libvmaf --disable-libxcb --disable-xlib --enable-amf --enable-libaom --enable-avisynth --enable-libdav1d --enable-libdavs2 --disable-libfdk-aac --enable-ffnvcodec --enable-cuda-llvm --enable-frei0r --enable-libgme --enable-libass --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librist --enable-libtheora --enable-libvpx --enable-libwebp --enable-lv2 --enable-libmfx --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopenmpt --enable-librav1e --enable-librubberband --enable-schannel --enable-sdl2 --enable-libsoxr --enable-libsrt --enable-libsvtav1 --enable-libtwolame --enable-libuavs3d --disable-libdrm --disable-vaapi --enable-libvidstab --enable-vulkan --enable-libshaderc --enable-libplacebo --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libzimg --enable-libzvbi --extra-cflags=-DLIBTWOLAME_STATIC --extra-cxxflags= --extra-ldflags=-pthread --extra-ldexeflags= --extra-libs=-lgomp --extra-version=20220311
 libavutil 57. 17.100 / 57. 17.100
 libavcodec 59. 18.100 / 59. 18.100
 libavformat 59. 16.100 / 59. 16.100
 libavdevice 59. 4.100 / 59. 4.100
 libavfilter 8. 24.100 / 8. 24.100
 libswscale 6. 4.100 / 6. 4.100
 libswresample 4. 3.100 / 4. 3.100
 libpostproc 56. 3.100 / 56. 3.100
Input #0, matroska,webm, from 'RTb295d0534191e1acb22a45bb971a12e6.mka':
 Metadata:
 encoder : GStreamer matroskamux version 1.16.2
 creation_time : 2022-03-23T21:20:27.000000Z
 Duration: 03:45:00.47, start: 0.291000, bitrate: 19 kb/s
 Stream #0:0(eng): Audio: opus, 48000 Hz, stereo, fltp (default)
 Metadata:
 title : Audio
Input #1, matroska,webm, from 'RT103bfe5f4b129860f69cd8e820f3a10b.mka':
 Metadata:
 encoder : GStreamer matroskamux version 1.16.2
 creation_time : 2022-03-24T01:05:30.000000Z
 Duration: 02:45:03.51, start: 13502.587000, bitrate: 5 kb/s
 Stream #1:0(eng): Audio: opus, 48000 Hz, stereo, fltp (default)
 Metadata:
 title : Audio
Stream mapping:
 Stream #0:0 (opus) -> amix
 Stream #1:0 (opus) -> adelay:default
 amix:default -> Stream #0:0 (libvorbis)
Press [q] to stop, [?] for help
Output #0, matroska, to 'combined_audio.mka':
 Metadata:
 encoder : Lavf59.16.100
 Stream #0:0: Audio: vorbis (oV[0][0] / 0x566F), 48000 Hz, stereo, fltp
 Metadata:
 encoder : Lavc59.18.100 libvorbis
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time231x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time184x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time189x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time223x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time275x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time245x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time213x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time209x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time208x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time204x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time199x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time193x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time185x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time181x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time178x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time177x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time176x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time169x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time167x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time163x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time146x
[libvorbis @ 00000229f8a7bbc0] Queue input is backward in time139x
size= 75141kB time=06:07:52.57 bitrate= 27.9kbits/s speed= 130x
video:0kB audio:70470kB subtitle:0kB other streams:0kB global headers:4kB muxing overhead: 6.628071%



The audio files being mixed together - https://www.easyupload.io/m/durisk


How can I resolve this issue?
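
One detail that stands out in the log above: input #1 already reports start: 13502.587000, which is roughly the same 3 h 45 m offset that adelay=13500s adds again. As a hedged suggestion (not from the original post), resetting the second input's timestamps before applying the delay is one thing to try; asetpts=PTS-STARTPTS is the only change from the original command:

ffmpeg -i RTb295d0534191e1acb22a45bb971a12e6.mka -i RT103bfe5f4b129860f69cd8e820f3a10b.mka -filter_complex "[1:a]asetpts=PTS-STARTPTS,adelay=13500s:all=1[apad]; [0:a][apad]amix=inputs=2:weights=1|1[aout]" -map [aout] combined_audio.mka

Whether this also silences the libvorbis "Queue input is backward in time" warnings depends on the inputs' actual timestamps, so treat it as a sketch to verify rather than a confirmed fix.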


-
Getting "invalid_request_error" when trying to pass a converted audio file to the OpenAI API
19 April 2023, by Dummy Cron
I am working on a project where I receive a URL from a webhook on my server whenever a user shares a voice note on my WhatsApp. I am using WATI as my WhatsApp API provider.


The file URL received is in the .opus format, which I need to convert to WAV and pass to the OpenAI Whisper API translation task.


I am trying to convert it to .wav using ffmpeg and then pass it to the OpenAI API for translation.
However, I am getting an "invalid_request_error".


import requests
import io
import subprocess

file_url = #.opus file url
api_key = #WATI API Key

def transcribe_audio_to_text():
    # Fetch the audio file and convert to wav format
    headers = {'Authorization': f'Bearer {api_key}'}
    response = requests.get(file_url, headers=headers)
    audio_bytes = io.BytesIO(response.content)

    process = subprocess.Popen(['ffmpeg', '-i', '-', '-f', 'wav', '-acodec', 'libmp3lame', '-'],
                               stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    wav_audio, _ = process.communicate(input=audio_bytes.read())

    # Set the Whisper API endpoint and headers
    WHISPER_API_ENDPOINT = 'https://api.openai.com/v1/audio/translations'
    whisper_api_headers = {'Authorization': 'Bearer ' + WHISPER_API_KEY,
                           'Content-Type': 'application/json'}
    print(whisper_api_headers)

    # Send the audio file for transcription
    payload = {'model': 'whisper-1'}
    files = {'file': ('audio.wav', io.BytesIO(wav_audio), 'audio/wav')}
    # files = {'file': ('audio.wav', io.BytesIO(wav_audio), 'application/octet-stream')}
    # files = {'file': ('audio.mp3', io.BytesIO(mp3_audio), 'audio/mp3')}
    response = requests.post(WHISPER_API_ENDPOINT, headers=whisper_api_headers, data=payload)
    print(response)

    # Get the transcription text
    if response.status_code == 200:
        result = response.json()
        text = result['text']
        print(response, text)
    else:
        print('Error:', response)
        err = response.json()
        print(response.status_code)
        print(err)
        print(response.headers)

transcribe_audio_to_text()



Output:


Error: <Response [400]>
400
{'error': {'message': "We could not parse the JSON body of your request. (HINT: This likely means you aren't using your HTTP library correctly. The OpenAI API expects a JSON payload, but what was sent was not valid JSON. If you have trouble figuring out how to fix this, please send an email to support@openai.com and include any relevant code you'd like help with.)", 'type': 'invalid_request_error', 'param': None, 'code': None}}
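
Two things in the code above are worth flagging (my reading, not stated in the original post): requests.post is called without files=files, and the explicit Content-Type: application/json header stops requests from building the multipart form the endpoint expects. A minimal sketch of how the upload side might look instead, assuming wav_audio and the Whisper API key are available as in the code above:

import io
import requests

def send_to_whisper(wav_audio, whisper_api_key):
    # Let requests build the multipart body and its Content-Type header
    # (including the boundary) itself; only set Authorization explicitly.
    headers = {'Authorization': f'Bearer {whisper_api_key}'}
    files = {'file': ('audio.wav', io.BytesIO(wav_audio), 'audio/wav')}
    data = {'model': 'whisper-1'}
    response = requests.post('https://api.openai.com/v1/audio/translations',
                             headers=headers, data=data, files=files)
    response.raise_for_status()
    return response.json().get('text')

As a side note, -acodec libmp3lame inside a -f wav pipeline is unusual; pcm_s16le is the codec normally used for WAV output, though that is separate from the JSON error above.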


-
Streaming webm with ffserver error
4 August 2014, by user3614833
I have set up ffserver to stream MPEG-TS and FLV from a live RTSP feed via ffmpeg, but when I also include the WebM format in the configuration and try to play the WebM file in a browser, I get the following error in the log:
"Only VP8,VP9 video and Vorbis,Opus(experimental, use -strict -2) audio and WebVTT subtitles are supported for WebM"
The ffmpeg command I use is:
ffmpeg -i rtsp://192.168.1.1:5543/lowQ.sdp -c copy http://xxx.xxxx.xxxx:8080/feed1.ffm
The ffserver configuration is:
Feed feed1.ffm
Format webm
NoAudio
AVOptionVideo flags +global_header
VideoBitRate 500k
VideoBufferSize 40
VideoFrameRate 25
VideoCodec libvpx
StartSendOnKey
Preroll 15
Appreciate your help in this!
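
A hedged observation to go with this post (not part of the original): the error quoted above means the stream being pushed into the WebM feed is not VP8/VP9 video or Vorbis/Opus audio. With -c copy, ffmpeg forwards whatever codec the RTSP camera produces (typically H.264, though the post does not say), so one thing to try is re-encoding the video to VP8 on the feeding side, matching the VideoCodec libvpx and NoAudio settings in the configuration:

ffmpeg -i rtsp://192.168.1.1:5543/lowQ.sdp -c:v libvpx -b:v 500k -r 25 -an http://xxx.xxxx.xxxx:8080/feed1.ffm

This is only a sketch; once -c copy is dropped, ffserver may also impose the feed's own encoding parameters, so the explicit bitrate and frame rate flags above are there simply to mirror the configuration shown.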