
Media (1)
-
Bee video in portrait orientation
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (56)
-
Customisable form
21 June 2013, by
This page presents the fields available in the media publication form and lists the additional fields that can be added.
Media creation form
For a media-type document, the default fields are: text, enable/disable the forum (the comment prompt can be disabled for each article), licence, author addition/removal, and tags.
This form can be modified under:
Administration > Configuration des masques de formulaire. (...) -
Improvements to the base version
13 September 2013
Nicer multiple selection
The Chosen plugin improves the ergonomics of multiple-selection fields. Compare the two following images.
To use it, activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...) -
What is a form mask?
13 June 2013, by
A form mask is a customisation of the publication form for media, sections, news items, editorials, and links to external sites.
Each object's publication form can therefore be customised.
To customise form fields, go to the administration area of your MediaSPIP site and select "Configuration des masques de formulaires".
Then select the form to modify by clicking on its object type. (...)
On other sites (9179)
-
WASAPI resampling
30 June 2013, by magingax
I am trying to play 24-bit/48000 Hz audio on a PC. My PC does not seem to support 32-bit samples (even though GetMixFormat returns a 32-bit/44100 Hz device format),
so I am trying to convert the 24-bit/48000 Hz audio into 16-bit/44100 Hz,
but it sounds like noise.
The sample audio file 'titan.wav' is 24-bit/48000 Hz in s32 format.
Below is the sample code:
#define WINVER 0x0600
#define _WIN32_WINNT 0x0600
#include <stdio.h> // for fopen_s, fread, fseek, ftell
#include <iostream>
#include "Mmdeviceapi.h"
#include "Audioclient.h"
#include "Endpointvolume.h"
using namespace std;
extern "C"
{
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
#include <libavutil/opt.h>
#include <libavutil/channel_layout.h>
#include <libavutil/samplefmt.h>
#include <libswresample/swresample.h>
}
#define MAX_AUDIO_FRAME_SIZE 192000
#define REFTIMES_PER_SEC 10000000 // 1 sec 100ns
#define REFTIMES_PERMILLISEC 10000
const CLSID CLSID_MMDeviceEnumerator = __uuidof(MMDeviceEnumerator);
const IID IID_IMMDeviceEnumerator = __uuidof(IMMDeviceEnumerator);
const IID IID_IAudioClient = __uuidof(IAudioClient);
const IID IID_IAudioRenderClient = __uuidof(IAudioRenderClient);
int alloc_samples_array_and_data(uint8_t*** data, int *linesize, int nb_channels,int nb_samples, enum AVSampleFormat sample_fmt, int align)
{
int nb_planes = av_sample_fmt_is_planar(sample_fmt) ? nb_channels : 1;
*data = (uint8_t**)av_malloc(sizeof(*data) * nb_planes);
return av_samples_alloc(*data, linesize, nb_channels,nb_samples, sample_fmt, align);
}
int main()
{
HRESULT hr;
REFERENCE_TIME buf_duration_request = REFTIMES_PER_SEC;
REFERENCE_TIME buf_duration_actual;
IMMDeviceEnumerator *pEnumerator = NULL;
IMMDevice *pDevice = NULL;
IAudioClient *pAudioClient = NULL;
IAudioRenderClient *pRenderClient = NULL;
IAudioEndpointVolume *endpoint_vol = NULL;
WAVEFORMATEX *fmt = NULL;
UINT32 frame_total = 0; // total frames in buffer
UINT32 frame_avail = 0; // available frame number in buffer
UINT32 frame_fill = 0; // filled frames
BYTE *pData = NULL;
FILE *file_audio = NULL;
BYTE *buf = NULL;
DWORD flags = 0;
CoInitializeEx(NULL,COINIT_MULTITHREADED);
CoCreateInstance (CLSID_MMDeviceEnumerator,NULL,CLSCTX_ALL,IID_IMMDeviceEnumerator, (void**)&pEnumerator);
pEnumerator->GetDefaultAudioEndpoint(eRender, eConsole, &pDevice);
pDevice->Activate(IID_IAudioClient, CLSCTX_ALL, NULL, (void**)&pAudioClient);
pDevice->Activate(__uuidof(IAudioEndpointVolume),CLSCTX_ALL,NULL,(void**)&endpoint_vol);
pAudioClient->GetMixFormat((WAVEFORMATEX**)&fmt);
fmt->wFormatTag = WAVE_FORMAT_PCM;
fmt->nChannels = 2 ;
fmt->nSamplesPerSec = 44100;
fmt->wBitsPerSample = 16;
fmt->nBlockAlign = fmt->nChannels * (fmt->wBitsPerSample/8);
fmt->nAvgBytesPerSec = fmt->nSamplesPerSec * fmt->nBlockAlign;
fmt->cbSize = 0;
WAVEFORMATEX* closest_format = NULL;
hr = pAudioClient->IsFormatSupported(AUDCLNT_SHAREMODE_SHARED, fmt ,&closest_format);
hr = pAudioClient->Initialize(AUDCLNT_SHAREMODE_SHARED, NULL ,buf_duration_request,0, fmt ,NULL);
hr = pAudioClient->GetBufferSize(&frame_total);
hr = pAudioClient->GetService(IID_IAudioRenderClient,(void**)&pRenderClient);
int file_size = 0;
int buf_size = 44100 * 4;
int read_ret = 0;
int read_acc = 0;
fopen_s(&file_audio,"E:\\MOVIE\\titan.wav","rb");
fseek (file_audio , 0 , SEEK_END);
file_size = ftell (file_audio);
rewind (file_audio);
buf = (BYTE*) malloc(buf_size);
pRenderClient->GetBuffer(frame_total,&pData);
read_ret = fread(buf,1,buf_size,file_audio);
read_acc += read_ret;
memcpy(pData,buf,buf_size);
pRenderClient->ReleaseBuffer(frame_total,flags);
buf_duration_actual = (double) REFTIMES_PER_SEC * frame_total / fmt->nSamplesPerSec;
pAudioClient->Start();
//-- resample
uint8_t** src_data;
uint8_t** dst_data;
int src_bufsize = 0;
int dst_bufsize = 0;
int src_linesize = 0;
int dst_linesize = 0;
int src_nb_channels = 0;
int dst_nb_channels = 0;
int src_nb_samples = 1024;
int dst_nb_samples = 0;
int max_dst_nb_samples = 0;
int ret = 0;
SwrContext* swr_ctx = NULL;
swr_ctx = swr_alloc();
av_opt_set_int (swr_ctx,"in_channel_layout" , AV_CH_LAYOUT_MONO, 0);
av_opt_set_int (swr_ctx,"in_sample_rate" , 48000,0);
av_opt_set_sample_fmt (swr_ctx,"in_sample_fmt" , AV_SAMPLE_FMT_S32, 0);
av_opt_set_int (swr_ctx,"out_channel_layout", AV_CH_LAYOUT_STEREO, 0);
av_opt_set_int (swr_ctx,"out_sample_rate" , 44100,0);
av_opt_set_sample_fmt (swr_ctx,"out_sample_fmt" , AV_SAMPLE_FMT_S16, 0);
ret = swr_init(swr_ctx);
src_nb_channels = av_get_channel_layout_nb_channels(AV_CH_LAYOUT_MONO);
src_nb_samples = src_nb_channels * 48000;
dst_nb_samples = av_rescale_rnd(src_nb_samples, 44100,48000,AV_ROUND_UP);
max_dst_nb_samples = dst_nb_samples;
alloc_samples_array_and_data(&src_data,&src_linesize,src_nb_channels,src_nb_samples, AV_SAMPLE_FMT_S32,0);
dst_nb_channels = av_get_channel_layout_nb_channels(AV_CH_LAYOUT_STEREO);
alloc_samples_array_and_data(&dst_data,&dst_linesize,dst_nb_channels,dst_nb_samples, AV_SAMPLE_FMT_S16,0);
//-- end resample
while (read_acc < file_size)
{
Sleep(buf_duration_actual/REFTIMES_PERMILLISEC/2);
pAudioClient->GetCurrentPadding(&frame_fill);
frame_avail = frame_total - frame_fill;
dst_nb_samples = frame_avail;
src_nb_samples = av_rescale_rnd(dst_nb_samples,48000,44100,AV_ROUND_UP);
src_bufsize = av_samples_get_buffer_size(&src_linesize,src_nb_channels,src_nb_samples,AV_SAMPLE_FMT_S32,1);
cout << "FILLED:" << frame_fill << endl;
hr = pRenderClient->GetBuffer(frame_avail, &pData);
read_ret = fread(src_data[0],1,src_bufsize,file_audio);
read_acc += read_ret;
ret = swr_convert(swr_ctx,dst_data,dst_nb_samples,(const uint8_t**)src_data,src_nb_samples);
dst_bufsize = av_samples_get_buffer_size(&dst_linesize,dst_nb_channels,ret,AV_SAMPLE_FMT_S16,1);
memcpy(pData, dst_data[0], dst_bufsize);
hr = pRenderClient->ReleaseBuffer(frame_avail,flags);
}
pAudioClient->Stop();
CoTaskMemFree(fmt);
pEnumerator->Release();
pDevice->Release();
pAudioClient->Release();
pRenderClient->Release();
CoUninitialize();
return 0;
}
-
EventHandler while changing frame using ffmpeg.exe
7 June 2015, by Abdur Rahim
I am reading a video file using
ffmpeg.exe
in C#.
Is there a good wrapper that can fire an event when the frame changes and
keep a pointer to the frame? Or how can I generate my own event handler for that specific event, i.e. the changing of frames?
-
Managing Music Playback Channels
30 June 2013, by Multimedia Mike — General
My Game Music Appreciation site allows users to interact with old video game music by toggling various channels, as long as the underlying synthesizer engine supports it.
Users often find their way to the Nintendo DS section pretty quickly. This is when they notice an obnoxious quirk with the channel toggling feature: specifically, one channel doesn’t seem to map to a particular instrument or track.
When it comes to computer music playback methodologies, I have long observed that there are 2 general strategies: fixed channel and dynamic channel allocation.
Fixed Channel Approach
One of my primary sources of computer-based entertainment used to be watching music. Sure, I listened to it as well. But for things like Amiga MOD files and related tracker formats, there was a rich ecosystem of fun music playback programs that visualized the music. There exist music visualization modes in various music players these days (such as iTunes and Windows Media Player), but those largely just show you a single waveform. These files were real-time syntheses based on multiple audio channels and usually showed some form of analysis for each channel. My personal favorite was Cubic Player:
Most of these players supported the concept of masking individual channels. In doing so, the user could isolate, study, and enjoy different components of the song. For many 4-channel Amiga MOD files, I observed that the common arrangement was to use the 4 channels for beat (percussion track), bass line, chords, and melody. Thus, it was easy to just listen to, e.g., the bass line in isolation.
MODs and similar formats specified precisely which digital audio sample to play at what time and on which specific audio channel. To view the internals of one of these formats, one gets the impression that they contain an extremely computer-centric view of music.
Dynamic Channel Allocation Algorithm
MODs et al. enjoyed a lot of popularity, but the standard for computer music is MIDI. While MOD and friends took a computer-centric view of music, MIDI takes, well, a music-centric view of music. There are MIDI visualization programs as well. The one that came with my Gravis Ultrasound was called PLAYMIDI.EXE. It looked like this…
… and it confused me. There are 16 distinct channels being visualized but some channels are shown playing multiple notes. When I dug into the technical details, I learned that MIDI just specifies what notes need to be played, at what times and frequencies and using which instrument samples, and it was the MIDI playback program’s job to make it happen.
Thus, if a MIDI file specifies that track 1 should play a C major chord consisting of notes C, E, and G, it would transmit events “key-on C; delta time 0; key-on E; delta time 0; key-on G; delta time …; [other commands]“. If the playback program has access to multiple channels (say, up to 32, in the case of the GUS), the intuitive approach would be to maintain a pool of all available channels. Then, when it’s time to process the “key-on C” event, fetch the first available channel from the pool, mark it as in-use, play C on the channel, and return that channel to the pool when either the sample runs its course or the corresponding “key-off C” event is encountered in the MIDI command stream.
About That Game Music
Circling back around to my game music website, numerous supported systems use the fixed channel approach for playback while others use the dynamic channel allocation approach, including every Nintendo DS game I have so far analyzed. Which approach is better? As in many technical matters, there are trade-offs either way. For many systems, the fixed channel approach is necessary because, on many older audio synthesis systems, different channels had very specific purposes. The 8-bit NES had 5 channels: 2 square wave generators (used musically for melody/treble), 1 triangle wave generator (usually used for bass line), a noise generator (subverted for all manner of percussive sounds), and a limited digital channel (sometimes assigned richer percussive sounds). Dynamic channel allocation wouldn’t work here.
But the dynamic approach works great on hardware with 16 digital channels available like, for example, the Nintendo DS. Digital channels are very general-purpose. What about the SNES, with its 8 digital channels? Either approach could work. In practice, most games used a fixed channel approach: games might use 4-6 channels for music while reserving the remainder for various in-game sound effects. Some notable exceptions to this pattern were David Wise’s compositions for Rare’s SNES games (think Battletoads and the various Donkey Kong Country titles). These clearly use some dynamic channel approach, since masking all but one channel will give you a variety of instrument sounds.
Epilogue
There! That took a long time to explain, but I find it fascinating for some reason. I need to distill it down to far fewer words because I want to make it a FAQ on my website for “Why can’t I isolate specific tracks for Nintendo DS games?” Actually, perhaps I should remove the ability to toggle Nintendo DS channels in the first place. Here’s a funny tale of needless work: I found the Vio2sf engine for synthesizing Nintendo DS music and incorporated it into the program. It didn’t support toggling of individual channels, so I figured out a way to add that feature to the engine. And then I noticed that most Nintendo DS games render that feature moot. After I released the webapp, I learned that I was out of date on the Vio2sf engine. The final insult was that the latest version already supports channel toggling. So I did the work for nothing. But then again, since I want to remove that feature from the UI anyway, doubly so.