
Other articles (26)
-
Sites built with MediaSPIP
2 May 2011 — This page presents some of the sites running MediaSPIP.
You can of course add your own via the form at the bottom of the page.
-
Customizing by adding a logo, banner, or background image
5 September 2013 — Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013 — Present changes in your MediaSPIP, or news about your projects, using the news section.
In MediaSPIP's default spipeo theme, news items are displayed at the bottom of the main page, below the editorials.
You can customize the news item creation form.
News item creation form — In the case of a news-type document, the fields offered by default are: Publication date (customize the publication date) (...)
On other sites (4517)
-
Audio Lag Issue in Long-term FFmpeg Live Streaming with x11grab and Pulse
8 July 2024, by Dhairya Verma — I am currently working on a live streaming project where I use FFmpeg with x11grab and PulseAudio to stream headlessly from a Linux server to an RTMP endpoint. While the setup generally works well, I am encountering an issue where the audio begins to lag behind the video after approximately two days of continuous streaming.


"-hwaccel", "cuda",

"-f", "x11grab",

"-s", "1920x1080",

"-draw_mouse", "0",

"-thread_queue_size", "1024",

"-i", ":1",

"-f", "pulse",

"-r", "60",

"-thread_queue_size", "1024",

"-i", "VirtualSink.monitor",

"-c:v", "h264_nvenc",

"-preset:v", "hq",

"-b:v", "2500k",

"-maxrate", "2500k",

"-bufsize", "10000k",

"-vf", "fps=60,crop=1280:720:320:180,format=yuv420p",

"-g", "60",

"-c:a", "aac",

"-af", "adelay=900|900",

"-b:a", "128k",

"-ar", "44100",

"-fps_mode", "cfr",

"-async", "1",

"-f", "flv",

'RTMP_LINK',



After two days of streaming, the audio noticeably lags behind the video. I have tried adjusting various settings and buffers, but the issue persists.
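This kind of slow divergence is consistent with a small mismatch between the audio capture clock and the wall clock. The sketch below is illustrative arithmetic only (the 44099.5 Hz figure is a made-up assumption, not a measurement): it shows how a mismatch of half a sample per second compounds into seconds of lag over a two-day run.

```python
# Illustrative estimate (not a diagnosis): how a tiny clock mismatch
# between the audio capture clock and the wall clock accumulates.
NOMINAL_RATE = 44100    # what FFmpeg is told (-ar 44100)
ACTUAL_RATE = 44099.5   # hypothetical real capture clock (assumption)

def drift_seconds(hours: float) -> float:
    """Audio/video offset accumulated after `hours` of streaming."""
    wall_seconds = hours * 3600
    # Samples actually produced vs. samples expected at the nominal rate.
    produced = wall_seconds * ACTUAL_RATE
    return wall_seconds - produced / NOMINAL_RATE

print(round(drift_seconds(48), 2))  # ≈ 1.96 seconds of lag after two days
```

If drift of this shape is the cause, one commonly suggested mitigation (worth testing, not guaranteed) is to let FFmpeg stretch audio toward its timestamps with `-af "aresample=async=1"`; note that the `-async` option used above is deprecated in FFmpeg in favor of the aresample filter.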


Could anyone please provide any insights or suggestions on how to address this issue?


-
Using FFmpeg to extract audio track from video results in extra samples at beginning
25 January 2017, by jreikes — If I extract audio from a video using ffmpeg -i video.mp4 -c copy audio.m4a, the resulting audio file is slightly out of sync with the original video file. It's only a tiny fraction of a second, but it causes problems in my application. It looks like the audio file gets a few extra samples at the beginning that are ignored in the original video file (not 100% sure, but I think this is what's happening). I've tried a bunch of different seek parameters, but can't seem to get the audio file to match up properly yet.
The top waveform is the audio directly from the video file and the bottom waveform is the audio from the extracted audio file.
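A plausible explanation for those extra leading samples is AAC encoder delay ("priming" samples): the encoder emits silence at the head of the stream, and the MP4 container's edit list tells players to skip it — but a stream-copied .m4a, or a tool that ignores edit lists, may expose it. The sketch below just quantifies the assumption; 2112 samples is a value commonly cited for AAC-LC, used here purely as an illustration (the real count depends on the encoder).

```python
# Illustrative: how much leading audio typical AAC "priming" samples
# represent at 44.1 kHz. The 2112-sample count is an assumption
# (commonly cited for AAC-LC), not a measured value for this file.
SAMPLE_RATE = 44100
PRIMING_SAMPLES = 2112

offset_ms = PRIMING_SAMPLES / SAMPLE_RATE * 1000
print(round(offset_ms, 1))  # ≈ 47.9 ms of extra audio at the head
```

An offset in this ballpark matches the "tiny fraction of a second" described above.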
One of the end goals here was to do an audio crossfade at the beginning of the file. I suspect I stand a better chance of success by doing the crossfade against the original video file instead of separating the audio, doing the crossfade, then re-combining it with the video. I'm currently using the following:
ffmpeg -i 01.tail.m4a -i 02.mp4 -filter_complex "[0]asetpts=PTS-STARTPTS[a0]; [1]asetpts=PTS-STARTPTS[a1]; [a0][a1]acrossfade=d=0.5" -vcodec copy -mpegts_copyts 1 02.crossfade.mp4
But the difference in timing still comes through with this approach, and I can hear a short echo during the crossfade. Any idea how to fix this command so that doesn't happen?
-
Sequencing MIDI From A Chiptune
28 April 2013, by Multimedia Mike — Outlandish Brainstorms
The feature requests for my game music appreciation website project continue to pour in. Many of them take the form of "please add player support for system XYZ and the chiptune library to go with it." Most of these requests are A) plausible, and B) in process. I have also received recommendations for UI improvements which I take under consideration. Then there are the numerous requests to port everything from Native Client to JavaScript so that it will work everywhere, even on mobile, a notion which might take a separate post to debunk entirely.
But here's an interesting request about which I would like to speculate: automatically convert a chiptune into a MIDI file. I immediately wanted to dismiss it as impossible, or at least highly implausible. But, as is my habit, I started pondering the concept a little more carefully and decided there's an outside chance of getting some part of the idea to work.
Intro to MIDI
MIDI stands for Musical Instrument Digital Interface. It's a standard musical interchange format that allows music instruments and computers to exchange musical information. The file interchange format bears the extension .mid and contains a sequence of numbers that translate into commands separated by time deltas. E.g.: turn key on (this note, this velocity); wait x ticks; turn key off; wait y ticks; etc. I'm vastly oversimplifying, as usual.

MIDI fascinated me back in the days of dialup internet and discrete sound cards (see also my write-up on the Gravis Ultrasound). Typical song-length MIDI files often ranged from a few kilobytes to a few tens of kilobytes. They were significantly smaller than the MOD et al. family of tracker music formats, mostly by virtue of the fact that MIDI files aren't burdened by transporting digital audio samples.
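Those "wait x ticks" deltas are stored in .mid files as variable-length quantities: 7 data bits per byte, with the high bit set on every byte except the last. A minimal sketch of that encoding, per the Standard MIDI File format:

```python
def encode_vlq(value: int) -> bytes:
    """Encode a MIDI delta time as a variable-length quantity:
    7 data bits per byte, high bit set on all bytes but the last."""
    out = [value & 0x7F]          # last byte: high bit clear
    value >>= 7
    while value:
        out.append((value & 0x7F) | 0x80)  # continuation bytes
        value >>= 7
    return bytes(reversed(out))

print(encode_vlq(0).hex())         # '00'
print(encode_vlq(127).hex())       # '7f'
print(encode_vlq(128).hex())       # '8100'
print(encode_vlq(0x0FFFFF).hex())  # 'bfff7f'
```

The test values above match the worked examples given in the Standard MIDI File specification.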
I know I'm missing a lot of details. I haven't dealt much with MIDI in the past… 15 years or so (ever since computer audio became a blur of MP3 and AAC audio). But I'm led to believe it's still relevant. The individual who requested this feature expressed an interest in being able to import the sequenced data into any of the many music programs that can interpret .mid files.

The Pitch
To limit the scope, let's focus on music that comes from the 8-bit Nintendo Entertainment System or the original Game Boy. The former features 2 square wave channels, a triangle wave, a noise channel, and a limited digital channel. The latter creates music via 2 square waves, a wave channel, and a noise channel. The roles these channels play typically break down as: square waves carry the primary melody, the triangle wave simulates a bass line, the noise channel approximates a variety of percussive sounds, and the DPCM/wave channels are fairly free-form. They can carry random game sound effects or, if they are to assist in the music, are often used for more authentic percussive sounds.

The various channels are controlled via an assortment of memory-mapped hardware registers. These registers are fed values such as frequency, volume, and duty cycle. My idea is to modify the music playback engine to track when various events occur. Whenever a channel is turned on or off, that corresponds to a MIDI key-on or key-off event. If a channel is already playing but a new frequency is written, that would likely count as a note change, so log a key-off event followed by a new key-on event.
There is the major obstacle of determining which specific note is represented by a channel in a particular state. The MIDI standard defines 128 different notes spanning nearly 11 octaves. Empirically, I wonder if I could create a table that maps the assorted frequencies to different MIDI notes?
I think this strategy would only work with the square and triangle waves. Noise and digital channels? I'm not prepared to tackle that challenge.
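The frequency-to-note table might not even be necessary, since the mapping has a closed form. A sketch under two assumptions: the NES pulse channels produce f = CPU / (16 × (timer + 1)) from an 11-bit timer register (NTSC clock ≈ 1.789773 MHz), and MIDI note 69 is A4 at 440 Hz with 12 notes per octave:

```python
import math

NES_CPU_HZ = 1789773  # NTSC 2A03 clock (assumed; PAL differs)

def pulse_freq(timer: int) -> float:
    """NES APU pulse-channel frequency for an 11-bit timer value."""
    return NES_CPU_HZ / (16 * (timer + 1))

def freq_to_midi(freq: float) -> int:
    """Nearest MIDI note number (69 = A4 = 440 Hz, equal temperament)."""
    return round(69 + 12 * math.log2(freq / 440.0))

# Timer value 0x0FD yields roughly 440 Hz on a pulse channel,
# which rounds to MIDI note 69 (A4).
f = pulse_freq(0x0FD)
print(round(f, 1), freq_to_midi(f))
```

The triangle channel uses a different divisor (its period formula halves the output frequency), so it would need its own version of pulse_freq; freq_to_midi applies unchanged.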
Prior Work?
I have to wonder if there is any existing work in this area. I'm certain that people have wanted to do this before; I wonder if anyone has succeeded?

Just like reverse engineering a binary program entails trying to obtain a higher-level abstraction from a very low-level representation, this challenge feels like reverse engineering a piece of music as it is being performed and automatically expressing it in a higher-level form.