
Other articles (57)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
-
Emballe médias: what is it for?
4 February 2011. This plugin is intended to manage sites for publishing documents of all types online.
It creates "media" items, namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a so-called "media" article;
On other sites (6474)
-
Writing A Dreamcast Media Player
6 January 2017, by Multimedia Mike — Sega Dreamcast
I know I’m not the only person to have the idea to port a media player to the Sega Dreamcast video game console. But I did make significant progress on an implementation. I’m a little surprised to realize that I haven’t written anything about it on this blog yet, given my propensity for publishing my programming misadventures.
This old effort had been on my mind lately due to its architectural similarities to something else I was recently brainstorming.
Early Days
Porting a multimedia player was one of the earliest endeavors that I embarked upon in the multimedia domain. It’s a bit fuzzy for me now, but I’m pretty sure that my first exposure to the MPlayer project in 2001 arose from looking for a multimedia player to port. I fed it through the Dreamcast development toolchain but encountered roadblocks pretty quickly. However, this got me looking at the MPlayer source code and made me wonder how I could contribute, which is how I finally broke into practical open source multimedia hacking after studying the concepts and technology for more than a year at that point.
Eventually, I jumped over to the xine project. After hacking on that for a while, I remembered my DC media player efforts and endeavored to compile xine for the console. The first attempt was to simply compile the codebase using the Dreamcast hobbyist community’s toolchain. This is when I came to fear the multithreaded snake pit in xine’s core. Again, my memories are hazy on the specifics, but I remember the engine having a bunch of threading hacks with comments along the lines of “this code deadlocks sometimes, so on shutdown, monitor this lock and deliberately break it if it has been more than 3 seconds”.
Something Workable
Eventually, I settled on a combination of FFmpeg’s libavcodec library for audio and video decoders, xine’s demuxer library, and xine’s input API, combined with my own engine code to tie it all together along with video and output drivers provided by the KallistiOS hobbyist OS for Dreamcast. Here is a simple diagram of the data movement through this player:
Details and Challenges
This is a rare occasion when I actually got to write the core of a media player engine. I made some mistakes.
xine’s internal clock ran at 90000 Hz. At least, its internal timestamps were all in reference to a 90 kHz clock. I got this brilliant idea to trigger timer interrupts at 6000 Hz to drive the engine. Given whatever timer facilities the Dreamcast offered, I found that 6 kHz was the greatest common divisor with 90 kHz. This means that if I could have found an even higher GCD frequency, I would have used that instead.
So the idea was that, for a 30 fps video, the engine would know to render a frame on every 200th timer interrupt. I eventually realized that servicing 6000 timer interrupts every second would incur a ridiculous amount of overhead. After that, my engine’s philosophy was to set a timer to fire for the next frame while beginning to process the current frame. I.e., when rendering a frame, set a timer to call back in 1/30th of a second. That worked a lot better.
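A minimal sketch of that second scheme, with stand-in stubs in place of the real KallistiOS timer facilities and libavcodec decoding calls, looks something like this:

#include <stdio.h>
#include <stdint.h>

#define FRAME_RATE 30   /* assume a 30 fps video for illustration */

/* Stand-ins for the real engine pieces (hardware timer + decoder). */
static void arm_oneshot_timer(uint32_t usec, void (*callback)(void))
{
    printf("timer armed for %u microseconds\n", usec);
    (void)callback;   /* real code would program a timer interrupt here */
}

static void decode_and_render_frame(void)
{
    printf("decode + render one frame\n");
}

/* Conceptually called from the timer interrupt: schedule the *next*
 * frame first, then spend the rest of the interval on the current one. */
static void frame_callback(void)
{
    arm_oneshot_timer(1000000 / FRAME_RATE, frame_callback);
    decode_and_render_frame();
}

int main(void)
{
    frame_callback();   /* kick off the first frame */
    return 0;
}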
As I was still keen on 8-bit paletted image codecs at the time (especially since they were simple and small for bootstrapping this project), I got to output palette images directly thanks to the Dreamcast’s paletted textures. So that was exciting. The engine didn’t need to convert the paletted images to a different colorspace before rendering. However, I seem to recall that the Dreamcast’s PowerVR graphics hardware required that 8-bit textures be twiddled/swizzled. Thus, it was still necessary to manipulate the 8-bit image before rendering.
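If I remember the hardware correctly, the PVR’s twiddled layout is essentially a Morton/Z-order curve, so the manipulation amounts to reordering pixels. Here is a rough sketch (not the actual player code) of converting a linear 8-bit paletted image into that order, assuming a square power-of-two texture:

#include <stdint.h>

/* Interleave the bits of x and y (Morton / Z-order); y lands in the even
 * bit positions and x in the odd ones, matching my recollection of the
 * Dreamcast's twiddled texture order for square textures. */
static uint32_t morton_index(uint32_t x, uint32_t y)
{
    uint32_t index = 0;
    for (int bit = 0; bit < 16; bit++) {
        index |= ((y >> bit) & 1u) << (2 * bit);
        index |= ((x >> bit) & 1u) << (2 * bit + 1);
    }
    return index;
}

/* Copy a linear 8-bit paletted image into twiddled order before upload. */
static void twiddle_8bpp(uint8_t *dst, const uint8_t *src, uint32_t size)
{
    for (uint32_t y = 0; y < size; y++)
        for (uint32_t x = 0; x < size; x++)
            dst[morton_index(x, y)] = src[y * size + x];
}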
I made good progress on this player concept. However, a huge blocker for me was that I didn’t know how to make a proper user interface for the media player. Obviously, programming the Dreamcast occurred at a very low level (at least with the approach I was using), so there were no UI widgets easily available.
This was circa 2003. I assumed there must have been some embedded UI widget libraries with amenable open source licenses that I could leverage. I remember searching and checking out a library named libSTK. I think STK stood for “set-top toolkit” and was positioned specifically for doing things like media player UIs on low-spec embedded computing devices. The domain hosting the project is no longer useful but this appears to be a backup of the core code.
It sounded promising, but the libSTK developers had a different definition of “low-spec embedded” device than I did. I seem to recall that they were targeting something along the lines of a Pentium III clocked at 800 MHz with 128 MB RAM. The Dreamcast, by contrast, has a 200 MHz SH-4 CPU and 16 MB RAM. LibSTK was also authored in C++ and leveraged the Boost library (my first exposure to that code), and this all had the effect of making binaries quite large while I was trying to keep the player in lean C.
Regrettably, I never made any serious progress on a proper user interface. I think that’s when the player effort ran out of steam.
The Code
So, that’s another project that I never got around to finishing or publishing. I was able to find the source code, so I decided to toss it up on github, along with 2 old architecture outlines that I was able to dig up. It looks like I was starting small, just porting over a few of the demuxers and decoders that I knew well.
I’m wondering if it would still be as straightforward to separate out such components now, more than 13 years later?
The post Writing A Dreamcast Media Player first appeared on Breaking Eggs And Making Omelettes.
-
Managing Music Playback Channels
30 June 2013, by Multimedia Mike — General
My Game Music Appreciation site allows users to interact with old video game music by toggling various channels, as long as the underlying synthesizer engine supports it.
Users often find their way to the Nintendo DS section pretty quickly. This is when they notice an obnoxious quirk with the channel toggling feature: specifically, one channel doesn’t seem to map to a particular instrument or track.
When it comes to computer music playback methodologies, I have long observed that there are 2 general strategies: fixed channel allocation and dynamic channel allocation.
Fixed Channel Approach
One of my primary sources of computer-based entertainment used to be watching music. Sure I listened to it as well. But for things like Amiga MOD files and related tracker formats, there was a rich ecosystem of fun music playback programs that visualized the music. There exist music visualization modes in various music players these days (such as iTunes and Windows Media Player), but those largely just show you a single wave form. These files were real time syntheses based on multiple audio channels and usually showed some form of analysis for each channel. My personal favorite was Cubic Player:
Most of these players supported the concept of masking individual channels. In doing so, the user could isolate, study, and enjoy different components of the song. For many 4-channel Amiga MOD files, I observed that the common arrangement was to use the 4 channels for beat (percussion track), bass line, chords, and melody. Thus, it was easy to just listen to, e.g., the bass line in isolation.
MODs and similar formats specified precisely which digital audio sample to play at what time and on which specific audio channel. To view the internals of one of these formats, one gets the impression that they contain an extremely computer-centric view of music.
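To make the channel-masking idea concrete, here is a rough sketch of a mixing loop in which a user-controlled mask simply mutes a channel’s contribution while its playback position keeps advancing; the data structures are invented for illustration and don’t correspond to any real tracker format:

#include <stdint.h>

#define NUM_CHANNELS 4   /* classic 4-channel Amiga MOD arrangement */

/* Toy per-channel playback state; real MOD playback also tracks
 * period (pitch), volume, and effect commands. */
typedef struct {
    const int16_t *sample;   /* PCM data currently assigned to the channel */
    int length;
    int position;
} Channel;

/* Mix one output sample from all channels, honoring a user-set mask.
 * Bit N of channel_mask cleared means "channel N is muted". */
static int16_t mix_one_sample(Channel channels[NUM_CHANNELS], unsigned channel_mask)
{
    int32_t acc = 0;
    for (int c = 0; c < NUM_CHANNELS; c++) {
        Channel *ch = &channels[c];
        if (!ch->sample || ch->position >= ch->length)
            continue;
        int16_t s = ch->sample[ch->position++];   /* channel keeps running */
        if (channel_mask & (1u << c))             /* only audible if unmasked */
            acc += s;
    }
    if (acc > 32767) acc = 32767;                 /* naive clipping */
    if (acc < -32768) acc = -32768;
    return (int16_t)acc;
}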
Dynamic Channel Allocation Algorithm
MODs et al. enjoyed a lot of popularity, but the standard for computer music is MIDI. While MOD and friends took a computer-centric view of music, MIDI takes, well, a music-centric view of music.
There are MIDI visualization programs as well. The one that came with my Gravis Ultrasound was called PLAYMIDI.EXE. It looked like this…
… and it confused me. There are 16 distinct channels being visualized but some channels are shown playing multiple notes. When I dug into the technical details, I learned that MIDI just specifies what notes need to be played, at what times and frequencies and using which instrument samples, and it was the MIDI playback program’s job to make it happen.
Thus, if a MIDI file specifies that track 1 should play a C major chord consisting of notes C, E, and G, it would transmit events “key-on C; delta time 0; key-on E; delta time 0; key-on G; delta time …; [other commands]”. If the playback program has access to multiple channels (say, up to 32, in the case of the GUS), the intuitive approach would be to maintain a pool of all available channels. Then, when it’s time to process the “key-on C” event, fetch the first available channel from the pool, mark it as in-use, play C on the channel, and return that channel to the pool when either the sample runs its course or the corresponding “key-off C” event is encountered in the MIDI command stream.
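A bare-bones sketch of that pool idea, ignoring instrument selection and voice stealing (which a real synthesizer would need), might look like:

#include <stdint.h>

#define NUM_HW_CHANNELS 32   /* e.g. the GUS's 32 digital channels */

/* One hardware channel/voice: which MIDI note it is currently sounding. */
typedef struct {
    int in_use;
    uint8_t note;
} Voice;

static Voice voices[NUM_HW_CHANNELS];

/* key-on: grab the first free hardware channel from the pool. */
static int voice_key_on(uint8_t note)
{
    for (int i = 0; i < NUM_HW_CHANNELS; i++) {
        if (!voices[i].in_use) {
            voices[i].in_use = 1;
            voices[i].note = note;
            /* real code would also load the instrument and start playback */
            return i;
        }
    }
    return -1;   /* pool exhausted; a real synth would steal a voice */
}

/* key-off: return the channel that was sounding this note to the pool. */
static void voice_key_off(uint8_t note)
{
    for (int i = 0; i < NUM_HW_CHANNELS; i++) {
        if (voices[i].in_use && voices[i].note == note) {
            voices[i].in_use = 0;
            return;
        }
    }
}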
About That Game Music
Circling back around to my game music website: numerous supported systems use the fixed channel approach for playback while others use the dynamic channel allocation approach, including every Nintendo DS game I have analyzed so far.
Which approach is better? As in many technical matters, there are trade-offs either way. For many systems, the fixed channel approach is necessary because on older audio synthesis hardware, different channels had very specific purposes. The 8-bit NES had 5 channels: 2 square wave generators (used musically for melody/treble), 1 triangle wave generator (usually used for bass line), a noise generator (subverted for all manner of percussive sounds), and a limited digital channel (sometimes assigned richer percussive sounds). Dynamic channel allocation wouldn’t work here.
But the dynamic approach works great on hardware with 16 digital channels available like, for example, the Nintendo DS. Digital channels are very general-purpose. What about the SNES, with its 8 digital channels? Either approach could work. In practice, most games used a fixed channel approach: games might use 4-6 channels for music while reserving the remainder for various in-game sound effects. Some notable exceptions to this pattern were David Wise’s compositions for Rare’s SNES games (think Battletoads and the various Donkey Kong Country titles). These clearly use some dynamic channel approach, since masking all but one channel will give you a variety of instrument sounds.
Epilogue
There! That took a long time to explain, but I find it fascinating for some reason. I need to distill it down to far fewer words because I want to make it a FAQ on my website for “Why can’t I isolate specific tracks for Nintendo DS games?”
Actually, perhaps I should remove the ability to toggle Nintendo DS channels in the first place. Here’s a funny tale of needless work: I found the Vio2sf engine for synthesizing Nintendo DS music and incorporated it into the program. It didn’t support toggling of individual channels, so I figured out a way to add that feature to the engine. And then I noticed that most Nintendo DS games render that feature moot. After I released the webapp, I learned that I was out of date on the Vio2sf engine. The final insult was that the latest version already supports channel toggling. So I did the work for nothing. But then again, since I want to remove that feature from the UI anyway, doubly so.
-
libav ffmpeg codec copy rtp_mpegts streaming with very bad quality
27 December 2017, by Dinkan
I am trying to do a codec copy of a stream (testing with a file for now; a live stream will come later) in the rtp_mpegts format over the network, and play it with the VLC player. I started my proof-of-concept code from a slightly modified remuxing.c from the examples. What I am essentially trying to do is replicate:

./ffmpeg -re -i TEST_VIDEO.ts -acodec copy -vcodec copy -f rtp_mpegts rtp://239.245.0.2:5002

Streaming is happening, but the quality is terrible. It looks like many frames are skipped, and streaming happens really slowly (buffer underflow reported by the VLC player). The file plays perfectly fine directly in the VLC player.
Please help. Stream details:
Input #0, mpegts, from 'TEST_VIDEO.ts':
  Duration: 00:10:00.40, start: 41313.400811, bitrate: 2840 kb/s
  Program 1
    Stream #0:0[0x11]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709, top first), 1440x1080 [SAR 4:3 DAR 16:9], 29.97 fps, 59.94 tbr, 90k tbn, 59.94 tbc
    Stream #0:1[0x14]: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, fltp, 448 kb/s
Output #0, rtp_mpegts, to 'rtp://239.255.0.2:5004':
  Metadata:
    encoder         : Lavf57.83.100
    Stream #0:0: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709, top first), 1440x1080 [SAR 4:3 DAR 16:9], q=2-31, 29.97 fps, 59.94 tbr, 90k tbn, 29.97 tbc
    Stream #0:1: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, fltp, 448 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
frame=  418 fps=5.2 q=-1.0 size=    3346kB time=00:00:08.50 bitrate=3223.5kbits/s speed=0.106x

My complete source code (this is almost the same as remuxing.c):
#include <libavutil/timestamp.h>
#include <libavformat/avformat.h>
static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt, const char *tag)
{
    AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;

    printf("%s: pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
           tag,
           av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
           av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
           av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
           pkt->stream_index);
}
int main(int argc, char **argv)
{
    AVOutputFormat *ofmt = NULL;
    AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
    AVPacket pkt;
    const char *in_filename, *out_filename;
    int ret, i;
    int stream_index = 0;
    int *stream_mapping = NULL;
    int stream_mapping_size = 0;
    AVRational mux_timebase;
    int64_t start_time = 0; //(of->start_time == AV_NOPTS_VALUE) ? 0 : of->start_time;
    int64_t ost_tb_start_time = 0; //av_rescale_q(start_time, AV_TIME_BASE_Q, ost->mux_timebase);

    if (argc < 3) {
        printf("usage: %s input output\n"
               "API example program to remux a media file with libavformat and libavcodec.\n"
               "The output format is guessed according to the file extension.\n"
               "\n", argv[0]);
        return 1;
    }

    in_filename  = argv[1];
    out_filename = argv[2];

    av_register_all();
    avcodec_register_all();
    avformat_network_init();

    if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
        fprintf(stderr, "Could not open input file '%s'", in_filename);
        goto end;
    }

    if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
        fprintf(stderr, "Failed to retrieve input stream information");
        goto end;
    }

    av_dump_format(ifmt_ctx, 0, in_filename, 0);

    avformat_alloc_output_context2(&ofmt_ctx, NULL, "rtp_mpegts", out_filename);
    if (!ofmt_ctx) {
        fprintf(stderr, "Could not create output context\n");
        ret = AVERROR_UNKNOWN;
        goto end;
    }

    stream_mapping_size = ifmt_ctx->nb_streams;
    stream_mapping = av_mallocz_array(stream_mapping_size, sizeof(*stream_mapping));
    if (!stream_mapping) {
        ret = AVERROR(ENOMEM);
        goto end;
    }

    ofmt = ofmt_ctx->oformat;
    for (i = 0; i < ifmt_ctx->nb_streams; i++)
    {
        AVStream *out_stream;
        AVStream *in_stream = ifmt_ctx->streams[i];
        AVCodecParameters *in_codecpar = in_stream->codecpar;

        if (in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
            in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
            in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE) {
            stream_mapping[i] = -1;
            continue;
        }

        stream_mapping[i] = stream_index++;

        out_stream = avformat_new_stream(ofmt_ctx, NULL);
        if (!out_stream) {
            fprintf(stderr, "Failed allocating output stream\n");
            ret = AVERROR_UNKNOWN;
            goto end;
        }

        //out_stream->codecpar->codec_tag = 0;
        if (0 == out_stream->codecpar->codec_tag)
        {
            unsigned int codec_tag_tmp;
            if (!out_stream->codecpar->codec_tag ||
                av_codec_get_id(ofmt->codec_tag, in_codecpar->codec_tag) == in_codecpar->codec_id ||
                !av_codec_get_tag2(ofmt->codec_tag, in_codecpar->codec_id, &codec_tag_tmp))
                out_stream->codecpar->codec_tag = in_codecpar->codec_tag;
        }

        //ret = avcodec_parameters_to_context(ost->enc_ctx, ist->st->codecpar);
        ret = avcodec_parameters_copy(out_stream->codecpar, in_codecpar);
        if (ret < 0) {
            fprintf(stderr, "Failed to copy codec parameters\n");
            goto end;
        }
        //out_stream->codecpar->codec_tag = codec_tag;

        // copy timebase while removing common factors
        printf("bit_rate %lld sample_rate %d frame_size %d\n",
               in_codecpar->bit_rate, in_codecpar->sample_rate, in_codecpar->frame_size);

        out_stream->avg_frame_rate = in_stream->avg_frame_rate;
        ret = avformat_transfer_internal_stream_timing_info(ofmt, out_stream, in_stream, AVFMT_TBCF_AUTO);
        if (ret < 0) {
            fprintf(stderr, "avformat_transfer_internal_stream_timing_info failed\n");
            goto end;
        }

        if (out_stream->time_base.num <= 0 || out_stream->time_base.den <= 0)
            out_stream->time_base = av_add_q(av_stream_get_codec_timebase(out_stream), (AVRational){0, 1});

        // copy estimated duration as a hint to the muxer
        if (out_stream->duration <= 0 && in_stream->duration > 0)
            out_stream->duration = av_rescale_q(in_stream->duration, in_stream->time_base, out_stream->time_base);

        // copy disposition
        out_stream->disposition = in_stream->disposition;
        out_stream->sample_aspect_ratio = in_stream->sample_aspect_ratio;
        out_stream->avg_frame_rate = in_stream->avg_frame_rate;
        out_stream->r_frame_rate = in_stream->r_frame_rate;

        if (in_codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
        {
            mux_timebase = in_stream->time_base;
        }

        if (in_stream->nb_side_data) {
            for (i = 0; i < in_stream->nb_side_data; i++) {
                const AVPacketSideData *sd_src = &in_stream->side_data[i];
                uint8_t *dst_data;

                dst_data = av_stream_new_side_data(out_stream, sd_src->type, sd_src->size);
                if (!dst_data)
                    return AVERROR(ENOMEM);
                memcpy(dst_data, sd_src->data, sd_src->size);
            }
        }
    }
    av_dump_format(ofmt_ctx, 0, out_filename, 1);

    start_time = ofmt_ctx->duration;
    ost_tb_start_time = av_rescale_q(ofmt_ctx->duration, AV_TIME_BASE_Q, mux_timebase);

    if (!(ofmt->flags & AVFMT_NOFILE))
    {
        ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
        if (ret < 0) {
            fprintf(stderr, "Could not open output file '%s'", out_filename);
            goto end;
        }
    }

    ret = avformat_write_header(ofmt_ctx, NULL);
    if (ret < 0) {
        fprintf(stderr, "Error occurred when opening output file\n");
        goto end;
    }
    while (1)
    {
        AVStream *in_stream, *out_stream;

        ret = av_read_frame(ifmt_ctx, &pkt);
        if (ret < 0)
            break;

        in_stream = ifmt_ctx->streams[pkt.stream_index];
        if (pkt.stream_index >= stream_mapping_size ||
            stream_mapping[pkt.stream_index] < 0) {
            av_packet_unref(&pkt);
            continue;
        }

        pkt.stream_index = stream_mapping[pkt.stream_index];
        out_stream = ofmt_ctx->streams[pkt.stream_index];
        //log_packet(ifmt_ctx, &pkt, "in");

        //ofmt_ctx->bit_rate = ifmt_ctx->bit_rate;
        ofmt_ctx->duration = ifmt_ctx->duration;

        /* copy packet */
        //pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
        //pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);

        if (pkt.pts != AV_NOPTS_VALUE)
            pkt.pts = av_rescale_q(pkt.pts, in_stream->time_base, mux_timebase) - ost_tb_start_time;
        else
            pkt.pts = AV_NOPTS_VALUE;

        if (pkt.dts == AV_NOPTS_VALUE)
            pkt.dts = av_rescale_q(pkt.dts, AV_TIME_BASE_Q, mux_timebase);
        else
            pkt.dts = av_rescale_q(pkt.dts, in_stream->time_base, mux_timebase);
        pkt.dts -= ost_tb_start_time;

        pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, mux_timebase);
        //pkt.duration = av_rescale_q(1, av_inv_q(out_stream->avg_frame_rate), mux_timebase);
        pkt.pos = -1;
        //log_packet(ofmt_ctx, &pkt, "out");

        ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
        if (ret < 0) {
            fprintf(stderr, "Error muxing packet\n");
            break;
        }
        av_packet_unref(&pkt);
    }
    av_write_trailer(ofmt_ctx);

end:
    avformat_close_input(&ifmt_ctx);

    /* close output */
    if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
        avio_closep(&ofmt_ctx->pb);
    avformat_free_context(ofmt_ctx);

    av_freep(&stream_mapping);

    if (ret < 0 && ret != AVERROR_EOF) {
        fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
        return 1;
    }

    return 0;
}
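Note: the -re flag in the ffmpeg command I am replicating throttles reads to the input's real-time rate, whereas the loop above writes packets out as fast as av_read_frame() returns them. A rough, untested sketch of similar pacing, using av_gettime()/av_usleep() from libavutil and called just before av_interleaved_write_frame(), would be something like:

#include <libavutil/mathematics.h>   /* av_rescale_q() */
#include <libavutil/time.h>          /* av_gettime(), av_usleep() */

/* Sleep until the packet's DTS has been reached in wall-clock time,
 * roughly what ffmpeg's -re option does on the input side. */
static void pace_to_realtime(int64_t dts, AVRational time_base, int64_t *wall_start)
{
    if (dts == AV_NOPTS_VALUE)
        return;
    int64_t dts_us = av_rescale_q(dts, time_base, AV_TIME_BASE_Q);
    if (*wall_start == 0)
        *wall_start = av_gettime() - dts_us;   /* anchor on the first packet */
    int64_t elapsed_us = av_gettime() - *wall_start;
    if (dts_us > elapsed_us)
        av_usleep((unsigned)(dts_us - elapsed_us));
}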