
Other articles (111)
-
User profiles
12 April 2011
Every user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
Users can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...) -
Configuring language support
15 November 2010
Accessing the configuration and adding supported languages
To configure support for new languages, go to the "Administrer" section of the site.
From there, the navigation menu gives access to a "Gestion des langues" section where support for new languages can be enabled.
Each newly added language can still be disabled as long as no object has been created in that language; once one has, it becomes greyed out in the configuration and (...) -
XMP PHP
13 May 2011
According to Wikipedia, XMP stands for:
Extensible Metadata Platform (XMP), an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being based on XML, it manages a set of dynamic tags for use in the context of the Semantic Web.
XMP makes it possible to store information about a file as an XML document: title, author, history (...)
On other sites (11513)
-
FFMPEG HTTP to RTP then RTP to HTTP with OPUS
20 June 2020, by Brad Hambleton
I'm feeding an HTTP stream into FFmpeg and copying the audio (no video) out over RTP:
ffmpeg -i http://192.168.0.40:20110 -c:a copy -f rtp rtp://192.168.87.40:20210?pkt_size=1328 -sdp_file opus.sdp


At the other end, receiving the RTP and pushing it back out over HTTP:
ffmpeg -re -protocol_whitelist rtp,file,udp -i opus.sdp -c:a copy -listen 1 -method GET -f opus http://192.168.87.40:20220


2 problems:


- Currently the encoding process doesn't optimize packets.
92 1.004672 192.168.0.40 192.168.0.40 UDP 392 52954 → 20210 Len=332
93 1.004727 192.168.0.40 192.168.0.40 UDP 392 52954 → 20210 Len=332
94 1.004789 192.168.0.40 192.168.0.40 UDP 392 52954 → 20210 Len=332
95 1.004855 192.168.0.40 192.168.0.40 UDP 392 52954 → 20210 Len=332
96 1.004908 192.168.0.40 192.168.0.40 UDP 392 52954 → 20210 Len=332




Each packet's payload is only 332 bytes, which leaves a lot of wasted space. I'd like to get close to 1500 (stacking 4 together gives 1328, which is close enough).
Is there an FFmpeg/RTP option that will pack the packets more efficiently? I added ?pkt_size=1328 to the RTP URL, however that only sets the maximum, not a preferred size. (A sketch of one possible workaround is included after this post.)


- I get the following error when I try to go from RTP back to HTTP via stream copy:
C:\Decode>ffmpeg -re -protocol_whitelist rtp,file,udp -i opus.sdp -c:a copy -listen 1 -method GET -f opus http://192.168.0.40:20220
ffmpeg version git-2020-05-23-26b4509 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 9.3.1 (GCC) 20200523
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libsrt --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --disable-w32threads --enable-libmfx --enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf
libavutil 56. 48.100 / 56. 48.100
libavcodec 58. 87.101 / 58. 87.101
libavformat 58. 43.100 / 58. 43.100
libavdevice 58. 9.103 / 58. 9.103
libavfilter 7. 83.100 / 7. 83.100
libswscale 5. 6.101 / 5. 6.101
libswresample 3. 6.100 / 3. 6.100
libpostproc 55. 6.100 / 55. 6.100
Input #0, sdp, from 'opus.sdp':
  Metadata:
    title : No Name
  Duration: N/A, start: 0.000000, bitrate: N/A
    Stream #0:0: Audio: opus, 48000 Hz, stereo, fltp
[opus @ 00000221a9a4d280] No extradata present
Could not write header for output file #0 (incorrect codec parameters ?): Invalid data found when processing input
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Last message repeated 1 times




I've tried a variety of additions to the RTP-to-HTTP command line to get it to work, but still nothing:


-flags -global_header -reconnect_streamed 1 -headers "X-Forwarded-For: 13.14.15.66"


Is there a specific Opus or HTTP header that can be added to get it to work? Decoding and re-encoding does work for RTP to HTTP, but the idea isn't to decode/encode at either point, just to copy the audio and change the container.
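For what it's worth, the "No extradata present" message comes from the Ogg/Opus muxer, which needs an OpusHead header (codec extradata) that a plain stream copy from the RTP demuxer does not provide. If re-encoding at the receiving end is acceptable, a minimal sketch along these lines (same addresses and flags as above, untested, bitrate chosen arbitrarily) regenerates that header, at the cost of giving up the pure-copy goal:

# Re-encode with libopus so the Ogg/Opus muxer gets the OpusHead extradata it needs
ffmpeg -re -protocol_whitelist rtp,file,udp -i opus.sdp -c:a libopus -b:a 128k -listen 1 -method GET -f opus http://192.168.0.40:20220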


Cheers
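On the packet-size question: the RTP payload format for Opus (RFC 7587) allows exactly one Opus packet per RTP payload, so the muxer cannot stack several packets into one datagram, and pkt_size only caps the size. The usual way to get larger payloads is to make each Opus frame longer, which again means re-encoding rather than copying. A hedged sketch, assuming the libopus encoder is available (bitrate chosen arbitrarily):

# 60 ms frames instead of the default 20 ms, so each RTP packet carries roughly three times as much audio
ffmpeg -i http://192.168.0.40:20110 -c:a libopus -b:a 128k -frame_duration 60 -f rtp rtp://192.168.87.40:20210?pkt_size=1328 -sdp_file opus.sdp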


-
Updated (reproducible) - Gaps when recording using the MediaRecorder API (audio/webm opus)
25 March 2019, by Jack Juiceson
----- UPDATE HAS BEEN ADDED BELOW -----
I have an issue with MediaRecorder API (https://www.w3.org/TR/mediastream-recording/#mediarecorder-api).
I'm using it to record speech from the web page (Chrome was used in this case) and save it as chunks.
I need to be able to play it while and after it is recorded, so it's important to keep those chunks. Here is the code which records the data:
navigator.mediaDevices.getUserMedia({ audio: true, video: false }).then(function(stream) {
  recorder = new MediaRecorder(stream, { mimeType: 'audio/webm; codecs="opus"' })
  recorder.ondataavailable = function(e) {
    // Read blob from `e.data`, decode64 and send to server
  }
  // Emit a chunk roughly every 1000 ms
  recorder.start(1000)
})
The issue is that the WebM file which I get when I concatenate all the parts is (rarely) corrupted. I can play it as WebM, but when I try to convert it (with ffmpeg) to something else, it gives me a file with shifted timings.
For example, I'm trying to convert a file which has a duration of 00:36:27.78 to wav, but I get a file with a duration of 00:36:26.04, which is 1.74 s less. At the beginning of the file the audio is the same, but after about 10 minutes the WebM file plays with a small delay.
After some research, I found out that it also does not play correctly with the browser's MediaSource API, which I use for playing the chunks. I tried two ways of playing those chunks:
- When I just merge all the parts into a single blob, it works fine.
- When I add them via the sourceBuffer object, there are some gaps (I can see them by inspecting the buffered property):
697.196 - 697.528 ( 330ms)
996.198 - 996.754 ( 550ms)
1597.16 - 1597.531 ( 370ms)
1896.893 - 1897.183 ( 290ms)
Those gaps are 1.55 s in total and they are exactly in the places where the desync between the wav and webm files starts. Unfortunately, the file where this is reproducible cannot be shared because it is the customer's private data, and I have not been able to reproduce the issue on other media yet.
What can be the cause of such an issue?
----- UPDATE -----
I was able to reproduce the issue on https://jsfiddle.net/96uj34nf/4/. In order to see the problem, click on the "Print buffer zones" button and it will display the buffered time ranges. You can see that there are two gaps:
0 - 136.349, 141.388 - 195.439, 197.57 - 198.589
- 136.349 - 141.388
- 195.439 - 197.57
So, as you can see, there are 5 and 2 second gaps. I would be happy if someone could shed some light on why it is happening or how to avoid this issue.
Thank you
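One way to narrow this down, sketched with hypothetical file names (recording.webm and recording.wav are placeholders for the concatenated file), is to dump the audio packet timestamps with ffprobe and check whether the jumps line up with the gaps reported by the buffered property, and then let ffmpeg's aresample filter fill those gaps when converting to wav:

# List packet timestamps for the audio stream; gaps show up as jumps in pts_time
ffprobe -hide_banner -select_streams a:0 -show_entries packet=pts_time,duration_time -of csv recording.webm

# aresample with async=1 enables filling and trimming, which can pad timestamp gaps with silence
ffmpeg -i recording.webm -af aresample=async=1 recording.wav

This does not explain why MediaRecorder produced the gaps, but it should help keep the wav and webm durations in sync.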
-
FFmpeg Error: Only VP8 or VP9 or AV1 video and Vorbis or Opus audio and WebVTT subtitles are supported for WebM and incorrect codec parameters
24 March 2023, by Not A Bot
Using FFmpeg to record the live stream.


Actually, it is a platform where someone can stream live (camera or screen) and others can join the stream as viewers.


The person who is streaming live has the option to record that stream. FFmpeg is being used for the recording, and the stream is recorded in the WebM file format.

The issue is that FFmpeg is throwing the following error.




Only VP8 or VP9 or AV1 video and Vorbis or Opus audio and WebVTT subtitles are supported for WebM.

Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument



The command that records the stream is below.


ffmpeg -loglevel debug -protocol_whitelist pipe,udp,rtp \
  -fflags +genpts -f sdp -i pipe:0 -map 0:v:0 -c:v copy \
  -map 0:a:0 -strict -2 -c:a copy -flags +global_header 1234_VIDEO_1671647432529.webm








Service: Version
Server (EC2): AWS Linux 2
FFmpeg version: 4.3.2-static









NOTE: The same command was working two weeks ago.
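The "Only VP8 or VP9 or AV1 video and Vorbis or Opus audio and WebVTT subtitles are supported for WebM" error means that whatever video codec the incoming SDP describes (quite possibly H.264) cannot be stream copied into a WebM container. Two hedged workarounds, sketched with the same pipe/SDP input and output name as above and assuming the audio track is already Opus or Vorbis: keep the stream copy but write a Matroska file, which accepts H.264, or keep WebM and transcode the video to VP8.

# Option 1: keep -c:v copy but switch the container to Matroska
ffmpeg -protocol_whitelist pipe,udp,rtp -fflags +genpts -f sdp -i pipe:0 \
  -map 0:v:0 -c:v copy -map 0:a:0 -c:a copy 1234_VIDEO_1671647432529.mkv

# Option 2: keep WebM and transcode the video to VP8 (costs CPU on the server)
ffmpeg -protocol_whitelist pipe,udp,rtp -fflags +genpts -f sdp -i pipe:0 \
  -map 0:v:0 -c:v libvpx -b:v 2M -map 0:a:0 -c:a copy 1234_VIDEO_1671647432529.webm

Since the command worked two weeks ago, it is also worth checking whether the client side recently changed the codec it negotiates in the SDP.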