
Media (10)
-
Demon Seed
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Demon seed (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
The four of us are dying (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Corona radiata (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Lights in the sky (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Head down (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
Other articles (98)
-
MediaSPIP 0.1 Beta version
25 April 2011 - MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)
-
Multilang: improving the interface for multilingual blocks
18 February 2011 - Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
Once activated, a preconfiguration is set up automatically by MediaSPIP init so that the new feature is immediately operational; no separate configuration step is required for this.
-
Customising by adding your logo, banner or background image
5 September 2013 - Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
On other sites (9794)
-
Audio effect (a 20 ms delay between the right and the left channel) using the Web Audio API or any JavaScript audio library like howler.js or tone.js?
15 July 2020, by questionare_101
I was wondering if there is any option in howler.js, tone.js or any other JavaScript audio library that I can use to add a 20 ms delay between the right and the left channel, which makes the listening experience more immersive.

Can it be achieved using audio sprites with howler.js? (But I guess sprites can't separate the right and the left channels.)
https://medium.com/game-development-stuff/how-to-create-audiosprites-to-use-with-howler-js-beed5d006ac1

Is there any way to do this?

I have also asked the same question here: https://github.com/goldfire/howler.js/issues/1374

I usually enable this option in the ffdshow audio processor while playing audio with MPC-HC (Mega version) on my PC. I was wondering how I can do it using the Web Audio API or howler.js.

Somewhat like this kind of effect: just delay either channel by 20 ms, like we do in Adobe Audition.

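One way to get this effect with the plain Web Audio API (so not tied to howler.js or tone.js) is to split the stereo signal into its two channels, route the right channel through a DelayNode set to 20 ms, and merge the channels back together. A minimal sketch, assuming an <audio id="track"> element on the page; the element id and the hard-coded 0.02 s are only placeholders for illustration:

// Delay the right channel by 20 ms relative to the left using the Web Audio API.
const ctx = new AudioContext();
const el = document.getElementById("track") as HTMLAudioElement;

const source = ctx.createMediaElementSource(el);
const splitter = ctx.createChannelSplitter(2); // output 0 = left, output 1 = right
const merger = ctx.createChannelMerger(2);     // input 0 = left, input 1 = right
const delay = ctx.createDelay(1.0);            // max delay of 1 s, far more than needed
delay.delayTime.value = 0.02;                  // 20 ms

source.connect(splitter);
splitter.connect(merger, 0, 0); // left channel passes straight through
splitter.connect(delay, 1);     // right channel goes through the delay...
delay.connect(merger, 0, 1);    // ...and into the right input of the merger
merger.connect(ctx.destination);

el.play(); // most browsers require a user gesture before the AudioContext will run

howler.js and tone.js are built on top of this same node graph, so an equivalent splitter → delay → merger chain should be possible wherever the library exposes its underlying AudioContext. A small inter-channel delay like this leans on the Haas (precedence) effect, which is presumably also what the ffdshow delay filter mentioned above does.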
-
avformat/smacker: Improve timestamps
24 June 2020, by Andreas Rheinhardt
avformat/smacker: Improve timestamps

A Smacker file can contain up to seven audio tracks. Up until now,
the pts for the i-th audio packet contained in a Smacker frame was
simply the end timestamp of the last i-th audio packet contained in
an earlier Smacker frame.

The problem with this is that a Smacker stream need not contain data in
every Smacker frame, and so the current i-th audio packet present may come
from a different underlying stream than the last i-th audio packet
contained in an earlier frame.

The sample hypnotix.smk* exhibits this. It has three audio tracks and
the first of the three has a longer first packet, so that the audio for
the first track is contained in only 235 packets contained in the first
235 Smacker frames; the end timestamp of this track is 166696 (about 7.56 s
at a timebase of 1/22050); the other two audio tracks both have 253 packets
contained in the first 253 Smacker frames. Up until now, the 236th
packet of the second track, being the first audio packet in the 236th
Smacker frame, would get the end timestamp of the last first audio packet
from the last Smacker frame containing a first audio packet, and said
last audio packet is the first audio packet from the 235th Smacker frame
from the first audio track, so that the timestamp is 166696. In contrast,
the 236th packet from the third track (whose packets contain the same number
of samples as the packets from the second track) has a timestamp of
156116 (because its timestamp is derived from the end timestamp of the
235th packet of the second audio track). In the end, the second track
ended up being 177360/22050 s = 8.044 s long; in contrast, the third
track was 166780/22050 s = 7.56 s long, which also coincided with the
video.

This commit fixes this by not using timestamps from other tracks for
a packet's pts.

*: https://samples.ffmpeg.org/game-formats/smacker/wetlands/hypnotix.smk
Reviewed-by: Timotej Lazar <timotej.lazar@araneo.si>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
-
Display FFMPEG decoded frame in a GLFW window
17 June 2020, by Infecto
I am implementing the client program of a game where the server sends encoded frames of the game to the client (via UDP), while the client decodes them (via FFmpeg) and displays them in a GLFW window.
My program has two threads:

- Thread 1: renders the content of the uint8_t* variable dataToRender
- Thread 2: keeps obtaining frames from the server, decodes them and updates dataToRender accordingly

Thread 1 does the typical rendering of a GLFW window in a while-loop. I have already tried to display some dummy frame data (a completely red frame) and it worked:

while (!glfwWindowShouldClose(window)) {
 glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
 ...

 glBindTexture(GL_TEXTURE_2D, tex_handle);
 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, window_width, window_height, 0, GL_RGB, GL_UNSIGNED_BYTE, dataToRender);
 ...
 glfwSwapBuffers(window);
}




Thread 2 is where I am having trouble. I am unable to properly store the decoded frame into my dataToRender variable. On top of that, the frame data is originally in YUV format and needs to be converted to RGB. I use FFmpeg's sws_scale for that, which also gives me a "bad dst image pointers" error output in the console. Here's the code snippet responsible for that part:


size_t data_size = frameBuffer.size(); // frameBuffer is a std::vector where I accumulate the frame data chunks
uint8_t* data = frameBuffer.data();    // convert the vector to a pointer
picture->format = AV_PIX_FMT_RGB24;
av_frame_get_buffer(picture, 1);
while (data_size > 0) {
    // split the buffered bytes into packets
    int ret = av_parser_parse2(parser, c, &pkt->data, &pkt->size,
                               data, data_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
    if (ret < 0) {
        fprintf(stderr, "Error while parsing\n");
        exit(1);
    }
    data += ret;
    data_size -= ret;

    if (pkt->size) {
        swsContext = sws_getContext(
            c->width, c->height, AV_PIX_FMT_YUV420P,
            c->width, c->height, AV_PIX_FMT_RGB24,
            SWS_BILINEAR, NULL, NULL, NULL);
        // note: "data" here still points at the raw parsed bytes, not at a decoded YUV frame
        uint8_t* rgb24[1] = { data };
        int rgb24_stride[1] = { 3 * c->width };
        sws_scale(swsContext, rgb24, rgb24_stride, 0, c->height, picture->data, picture->linesize);

        decode(c, picture, pkt, outname);
        // TODO: copy content of picture->data[0] to "dataToRender" maybe?
    }
}




I have already tried doing another sws_scale to copy the content to dataToRender and I cannot get rid of the "bad dst image pointers" error. Any advice or solution to the problem would be greatly appreciated, as I have been stuck on this for days.
