
Other articles (73)
- General document management
13 May 2011
MédiaSPIP never modifies the original document that is uploaded. For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while keeping the original downloadable in case the original document cannot be read in a web browser; and retrieving the original document's metadata in order to describe the file textually.
The tables below explain what MédiaSPIP can do (...)
- Personalizing by adding your logo, banner, or background image
5 September 2013
Some themes support three personalization elements: adding a logo; adding a banner; adding a background image.
- Writing a news item
21 June 2013
Present the changes in your MédiaSPIP, or news about your projects, using the news section.
In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form: for a document of type news item, the fields offered by default are: Publication date (customize the publication date) (...)
On other sites (6648)
- How To Write An Oscilloscope
I’m trying to figure out how to write a software oscilloscope audio visualization. It’s made more frustrating by the knowledge that I am certain that I have accomplished this task before.
In this context, the oscilloscope is used to draw the time-domain samples of an audio wave form. I have written such a plugin as part of the xine project. However, for that project, I didn't have to write the full playback pipeline; my plugin was just handed some PCM data and drew some graphical data in response. Now I'm trying to write the entire engine in a standalone program and I'm wondering how to get it just right.
This is an SDL-based oscilloscope visualizer and audio player for the Game Music Emu library. My approach is to have an audio buffer that holds one second of audio (44100 stereo 16-bit samples). The player updates the visualization at 30 frames per second, and the o-scope is 512 pixels wide. So, at every 1/30th-second interval, the player dips into the audio buffer at position ((frame_number % 30) * 44100 / 30) and takes the first 512 stereo frames for plotting on the graph.
It seems to be working okay, I guess. The only problem is that the A/V sync seems to be slightly misaligned. I am just wondering if this is the correct approach. Perhaps the player should be performing some slightly more complicated calculation over those (44100/30) audio frames during each update in order to obtain a more accurate graph? I described my process to an electrical engineer friend of mine and he insisted that I needed to apply something called hysteresis to the output or I would never get accurate A/V sync in this scenario.
Further, I know that some schools of thought on these matters require that the dots in those graphs be connected, that the scattered points simply won’t do. I guess it’s a stylistic choice.
Still, I think I have a reasonable, workable approach here. I might just be starting the visualization 1/30th of a second too late.
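For illustration, here is a minimal sketch of the indexing arithmetic described above, written in Python rather than the post's own C/SDL; the buffer layout (a list of (left, right) sample pairs) and all helper names are my assumptions, not the author's code:

# A sketch of the oscilloscope indexing described above, using the
# post's stated parameters: a one-second 44100 Hz stereo buffer,
# 30 visualization updates per second, and a 512-pixel-wide scope.
SAMPLE_RATE = 44100
FPS = 30
SCOPE_WIDTH = 512

def scope_window(frame_number, audio_buffer):
    # The read position cycles through the one-second buffer,
    # advancing by 44100/30 = 1470 sample frames per video frame.
    start = (frame_number % FPS) * SAMPLE_RATE // FPS
    # Plot only the first 512 of those 1470 stereo frames.
    return audio_buffer[start:start + SCOPE_WIDTH]

def plot_points(window, height=128):
    # Map each sample to a pixel column (x) and a vertical offset (y),
    # scaling a signed 16-bit left-channel sample into the scope height.
    points = []
    for x, (left, right) in enumerate(window):
        y = height // 2 - (left * (height // 2)) // 32768
        points.append((x, y))
    return points

# Example with a silent one-second stereo buffer:
silence = [(0, 0)] * SAMPLE_RATE
points = plot_points(scope_window(0, silence))

Connecting the dots, as mentioned above, would then be a matter of drawing a line segment between each consecutive pair in points instead of plotting them individually.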
- Detailed explanation of struct AVPacket
27 June 2013, by Appy
I am following this tutorial to make a video player in C/C++, using the SDL and FFmpeg libraries. Unfortunately, the FFmpeg libraries lack proper documentation. I am in the third part of the tutorial.
Trying to properly understand the code requires me to understand struct AVPacket completely. This is all the information I could find about AVPacket by searching on Google:
And most of that information is redundant. Can anyone explain at least the first seven members of AVPacket in detail?
PS: What happens when we allocate an AVPacket with AVPacket pkt;? What are the various members of pkt when it is just allocated? How is it different in the case of static AVPacket pkt;? Thanks :)
- Python FFmpeg: Unable to set audio stream language
15 December 2019, by Cryptonaut
I'm using this Python library to programmatically generate a .MOV using a single .WAV (pcm_s24le, 24-bit, 48000 Hz) as input.
I’ve already asked a few questions relating to other aspects of my video pipeline, seen here and here.
All I'm trying to do is assign the eng language tag to a single audio stream in the .MOV output. Here's my code:
audio = ffmpeg.input(input_audio)
output = ffmpeg.output(audio, output_audio,
                       acodec='copy',
                       audio_bitrate=bitrate,
                       metadata='s:a:0 language=eng')
output.run()

The .MOV output this generates displays the following via FFprobe:
Stream #0:0: Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, stereo, s32 (24 bit), 2304 kb/s (default)
However, when I run the same input file with the same options/parameters via the command line:
ffmpeg -y -i input_audio.wav -c:a copy -metadata:s:a:0 language=eng output_audio.mov
FFprobe states the stream language is eng:
Stream #0:0(eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, stereo, s32 (24 bit), 2304 kb/s (default)
Why does the command line approach output Stream #0:0(eng) but not the programmatic approach?
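One plausible explanation (my assumption, based on how ffmpeg-python maps keyword arguments onto flags): metadata='s:a:0 language=eng' is emitted as a single global option, -metadata "s:a:0 language=eng", rather than the per-stream option -metadata:s:a:0 language=eng. Since a Python keyword argument cannot contain a colon, the stream specifier has to be passed through a **kwargs dictionary, roughly like this (untested sketch; the filenames and bitrate are stand-ins taken from the question):

# Put the stream specifier in the option name itself so that
# ffmpeg-python emits -metadata:s:a:0 language=eng.
import ffmpeg

input_audio = 'input_audio.wav'    # as in the command-line example
output_audio = 'output_audio.mov'
bitrate = '2304k'                  # hypothetical value

audio = ffmpeg.input(input_audio)
output = ffmpeg.output(audio, output_audio,
                       acodec='copy',
                       audio_bitrate=bitrate,
                       **{'metadata:s:a:0': 'language=eng'})
output.run()

The same **{'option:spec': value} pattern is the usual way to pass any colon-qualified FFmpeg option through ffmpeg-python.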