
Ogg objections
3 March 2010, by Mans — Multimedia

The Ogg container format is being promoted by the Xiph Foundation for use with its Vorbis and Theora codecs. Unfortunately, a number of technical shortcomings in the format render it ill-suited to most, if not all, use cases. This article examines the most severe of these flaws.
Overview of Ogg
The basic unit in an Ogg stream is the page consisting of a header followed by one or more packets from a single elementary stream. A page can contain up to 255 packets, and a packet can span any number of pages. The following table describes the page header.
Field                     Size (bits)   Description
capture_pattern           32            magic number “OggS”
version                   8             always zero
flags                     8
granule_position          64            abstract timestamp
bitstream_serial_number   32            elementary stream number
page_sequence_number      32            incremented by 1 each page
checksum                  32            CRC of entire page
page_segments             8             length of segment_table
segment_table             variable      list of packet sizes

Elementary stream types are identified by looking at the payload of the first few pages, which contain any setup data required by the decoders. For full details, see the official format specification.
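For reference, here is a minimal sketch of this header as a C++ struct. It is an illustration following the table above, not code from the specification; Ogg's multi-byte fields are little-endian, and the segment table is read separately since its length is given by page_segments.

#include <cstdint>
#include <vector>

struct OggPageHeader
{
    char     capture_pattern[4];        // magic number "OggS"
    uint8_t  version;                   // always zero
    uint8_t  flags;
    uint64_t granule_position;          // abstract timestamp
    uint32_t bitstream_serial_number;   // elementary stream number
    uint32_t page_sequence_number;      // incremented by 1 each page
    uint32_t checksum;                  // CRC of entire page
    uint8_t  page_segments;             // length of segment_table
    std::vector<uint8_t> segment_table; // list of packet sizes
};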
Generality
Ogg, legend tells, was designed to be a general-purpose container format. To most multimedia developers, a general-purpose format is one in which encoded data of any type can be encapsulated with a minimum of effort.
The Ogg format defined by the specification does not fit this description. For every format one wishes to use with Ogg, a complex mapping must first be defined. This mapping defines how to identify a codec, how to extract setup data, and even how timestamps are to be interpreted. All this is done differently for every codec. To correctly parse an Ogg stream, every such mapping ever defined must be known.
Under this premise, a centralised repository of codec mappings would seem like a sensible idea, but alas, no such thing exists. It is simply impossible to obtain an exhaustive list of defined mappings, which makes the task of creating a complete implementation somewhat daunting.
One brave soul, Tobias Waldvogel, created a mapping, OGM, capable of storing any Microsoft AVI compatible codec data in Ogg files. This format saw some use in the wild, but was frowned upon by Xiph, and it was eventually displaced by other formats.
True generality is evidently not to be found with the Ogg format.
A good example of a general-purpose format is Matroska. This container can trivially accommodate any codec, all it requires is a unique string to identify the codec. For codecs requiring setup data, a standard location for this is provided in the container. Furthermore, an official list of codec identifiers is maintained, meaning all information required to fully support Matroska files is available from one place.
Matroska also has probably the greatest advantage of all: it is in active, widespread use. Historically, standards derived from existing practice have proven more successful than those created by a design committee.
Overhead
When designing a container format, one important consideration is that of overhead, i.e. the extra space required in addition to the elementary stream data being combined. For any given container, the overhead can be divided into a fixed part, independent of the total file size, and a variable part growing with increasing file size. The fixed overhead is not of much concern, its relative contribution being negligible for typical file sizes.
The variable overhead in the Ogg format comes from the page headers, mostly from the segment_table field. This field uses a most peculiar encoding, somewhat reminiscent of Roman numerals. In Roman times, numbers were written as a sequence of symbols, each representing a value, the combined value being the sum of the constituent values.
The segment_table field lists the sizes of all packets in the page. Each value in the list is coded as a number of bytes equal to 255 followed by a final byte with a smaller value. The packet size is simply the sum of all these bytes. Any strictly additive encoding, such as this, has the distinct drawback of coded length being linearly proportional to the encoded value. A value of 5000, a reasonable packet size for video of moderate bitrate, requires no less than 20 bytes to encode.
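To make the cost concrete, here is a minimal C++ sketch of the lacing encoding described above; for a value of 5000 it produces 19 bytes of 255 followed by a final byte of 155, 20 bytes in total.

#include <cstddef>
#include <cstdint>
#include <vector>

// Encode one packet size as Ogg lacing values: as many 255 bytes as
// fit, then a final byte smaller than 255 (possibly zero).
std::vector<uint8_t> encode_lacing(size_t packet_size)
{
    std::vector<uint8_t> lacing(packet_size / 255, 255);
    lacing.push_back(static_cast<uint8_t>(packet_size % 255));
    return lacing;
}

// The decoder simply sums the values: 19 * 255 + 155 == 5000,
// at a cost of 20 bytes in the segment_table.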
On top of this we have the 27-byte page header which, although paling in comparison to the packet size encoding, is still much larger than necessary. Starting at the top of the list:
- The version field could be disposed of, a single-bit marker being adequate to separate this first version from hypothetical future versions. One of the unused positions in the flags field could be used for this purpose.
- A 64-bit granule_position is completely overkill. 32 bits would be more than enough for the vast majority of use cases. In extreme cases, a one-bit flag could be used to signal an extended timestamp field.
- 32-bit elementary stream number? Are they anticipating files with four billion elementary streams? An eight-bit field, if not smaller, would seem more appropriate here.
- The 32-bit page_sequence_number is inexplicable. The intent is to allow detection of page loss due to transmission errors. ISO MPEG-TS uses a 4-bit counter per 188-byte packet for this purpose, and that format is used where packet loss actually happens, unlike any use of Ogg to date.
- A mandatory 32-bit checksum is nothing but a waste of space when using a reliable storage/transmission medium. Again, a flag could be used to signal the presence of an optional checksum field.
With the changes suggested above, the page header would shrink from 27 bytes to 12 bytes in size.
We thus see that in an Ogg file, the packet size fields alone contribute an overhead of 1/255 or approximately 0.4%. This is a hard lower bound on the overhead, not attainable even in theory. In reality the overhead tends to be closer to 1%.
Contrast this with the ISO MP4 file format, which can easily achieve an overhead of less than 0.05% with a 1 Mbps elementary stream.
Latency
In many applications end-to-end latency is an important factor. Examples include video conferencing, telephony, live sports events, interactive gaming, etc. With the codec layer contributing as little as 10 milliseconds of latency, the amount imposed by the container becomes an important factor.
Latency in an Ogg-based system is introduced at both the sender and the receiver. Since the page header depends on the entire contents of the page (packet sizes and checksum), a full page of packets must be buffered by the sender before a single bit can be transmitted. This sets a lower bound for the sending latency at the duration of a page.
On the receiving side, playback cannot commence until packets from all elementary streams are available. Hence, with two streams (audio and video) interleaved at the page level, playback is delayed by at least one page duration (two if checksums are verified).
Taking both send and receive latencies into account, the minimum end-to-end latency for Ogg is thus twice the duration of a page, triple if strict checksum verification is required. If page durations are variable, the maximum value must be used in order to avoid buffer underflows.
Minimum latency is clearly achieved by minimising the page duration, which in turn implies sending only one packet per page. This is where the size of the page header becomes important. The header for a single-packet page is 27 + packet_size/255 bytes in size. For a 1 Mbps video stream at 25 fps this gives an overhead of approximately 1%. With a typical audio packet size of 400 bytes, the overhead becomes a staggering 7%. The average overhead for a multiplex of these two streams is 1.4%.
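As a sanity check on these figures, here is a small sketch of the arithmetic, assuming one packet per page and the 27 + packet_size/255 header size given above.

#include <cstdio>

// Per-page overhead for a single-packet page: the fixed 27-byte header
// plus one lacing byte per 255 bytes of payload.
double single_packet_overhead(double packet_size)
{
    double header = 27.0 + packet_size / 255.0;
    return header / packet_size;
}

int main()
{
    // 1 Mbps video at 25 fps: 1000000 / 8 / 25 = 5000 bytes per packet.
    std::printf("video: %.2f%%\n", 100.0 * single_packet_overhead(5000.0)); // ~0.94%
    // Typical 400-byte audio packet.
    std::printf("audio: %.2f%%\n", 100.0 * single_packet_overhead(400.0));  // ~7.1%
    return 0;
}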
As it stands, the Ogg format is clearly not a good choice for a low-latency application. The key to low latency is small packets and fine-grained interleaving of streams, and although Ogg can provide both of these, by sending a single packet per page, the price in overhead is simply too high.
ISO MPEG-PS has an overhead of 9 bytes on most packets (a 5-byte timestamp is added a few times per second), and Microsoft’s ASF has a 12-byte packet header. My suggestions for compacting the Ogg page header would bring it in line with these formats.
Random access
Any general-purpose container format needs to allow random access for direct seeking to any given position in the file. Despite this goal being explicitly mentioned in the Ogg specification, the format only allows the most crude of random access methods.
While many container formats include an index allowing a time to be directly translated into an offset into the file, Ogg has nothing of this kind, the stated rationale for the omission being that this would require two-pass multiplexing, the second pass creating the index. This is obviously not true; the index could simply be written at the end of the file. Those objecting that this index would be unavailable in a streaming scenario are forgetting that seeking is impossible there regardless.
The method for seeking suggested by the Ogg documentation is to perform a binary search on the file, after each file-level seek operation scanning for a page header, extracting the timestamp, and comparing it to the desired position. When the elementary stream encoding allows only certain packets as random access points (video key frames), a second search will have to be performed to locate the entry point closest to the desired time. In a large file (sizes upwards of 10 GB are common), 50 seeks might be required to find the correct position.
A typical hard drive has an average seek time of roughly 10 ms, giving a total time for the seek operation of around 500 ms, an annoyingly long time. On a slow medium, such as an optical disc or files served over a network, the times are orders of magnitude longer.
A factor further complicating the seeking process is the possibility of header emulation within the elementary stream data. To safeguard against this, one has to read the entire page and verify the checksum. If the storage medium cannot provide data much faster than during normal playback, this provides yet another substantial delay towards finishing the seeking operation. This too applies to both network delivery and optical discs.
Although optical disc usage is perhaps in decline today, one should bear in mind that the Ogg format was designed at a time when CDs and DVDs were rapidly gaining ground, and network-based storage is most certainly on the rise.
The final nail in the coffin of seeking is the codec-dependent timestamp format. At each step in the seeking process, the timestamp parsing specified by the codec mapping corresponding to the current page must be invoked. If the mapping is not known, the best one can do is skip pages until one with a known mapping is found. This delays the seeking and complicates the implementation, both bad things.
Timestamps
A problem as old as multimedia itself is that of synchronising multiple elementary streams (e.g. audio and video) during playback; badly synchronised A/V is highly unpleasant to view. By the time Ogg was invented, solutions to this problem were long since explored and well-understood. The key to proper synchronisation lies in tagging elementary stream packets with timestamps, packets carrying the same timestamp being intended for simultaneous presentation. The concept is as simple as it seems, so it is astonishing to see the amount of complexity with which the Ogg designers managed to imbue it. So bizarre is it that I have devoted an entire article to the topic, and will not cover it further here.
Complexity
Video and audio decoding are time-consuming tasks, so containers should be designed to minimise extra processing required. With the data volumes involved, even an act as simple as copying a packet of compressed data can have a significant impact. Once again, however, Ogg lets us down. Despite the brevity of the specification, the format is remarkably complicated to parse properly.
The unusual and inefficient encoding of the packet sizes limits the page size to somewhat less than 64 kB. To still allow individual packets larger than this limit, it was decided to allow packets spanning multiple pages, a decision with unfortunate implications. A page-spanning packet as it arrives in the Ogg stream will be discontiguous in memory, a situation most decoders are unable to handle, and reassembly, i.e. copying, is required.
The knowledgeable reader may at this point remark that the MPEG-TS format also splits packets into pieces requiring reassembly before decoding. There is, however, a significant difference there. MPEG-TS was designed for hardware demultiplexing feeding directly into hardware decoders. In such an implementation the fragmentation is not a problem. Rather, the fine-grained interleaving is a feature allowing smaller on-chip buffers.
Buffering is also an area in which Ogg suffers. To keep the overhead down, pages must be made as large as practically possible, and page size translates directly into demultiplexer buffer size. Playback of a file with two elementary streams thus requires 128 kB of buffer space. On a modern PC this is perhaps nothing to be concerned about, but in a small embedded system, e.g. a portable media player, it can be relevant.
In addition to the above, a number of other issues, some of them minor, others more severe, make Ogg processing a painful experience. A selection follows:
- 32-bit random elementary stream identifiers mean a simple table-lookup cannot be used. Instead the list of streams must be searched for a match. While trivial to do in software, it is still annoying, and a hardware demultiplexer would be significantly more complicated than with a smaller identifier.
- Semantically ambiguous streams are possible. For example, the continuation flag (bit 1) may conflict with continuation (or lack thereof) implied by the segment table on the preceding page. Such invalid files have been spotted in the wild.
- Concatenating independent Ogg streams forms a valid stream. While finding a use case for this strange feature is difficult, an implementation must of course be prepared to encounter such streams. Detecting and dealing with these adds pointless complexity.
- Unusual terminology: inventing new terms for well-known concepts is confusing for the developer trying to understand the format in relation to others. A few examples:

Ogg name            Usual name
logical bitstream   elementary stream
grouping            multiplexing
lacing value        packet size (approximately)
segment             imaginary element serving no real purpose
granule position    timestamp
Final words
We have found the Ogg format to be a dubious choice in just about every situation. Why then do certain organisations and individuals persist in promoting it with such ferocity?
When challenged, three types of reaction are characteristic of the Ogg campaigners.
On occasion, these people will assume an apologetic tone, explaining how Ogg was only ever designed for simple audio-only streams (ignoring that it is as bad for these as for anything else), and this is no doubt true. Why then, I ask again, do they continue to tout Ogg as the one-size-fits-all solution they have already admitted it is not?
More commonly, the Ogg proponents will respond with hand-waving arguments best summarised as “Ogg isn’t bad, it’s just different”. My reply to this assertion is twofold:
- Being too different is bad. We live in a world where multimedia files come in many varieties, and a decent media player will need to handle the majority of them. Fortunately, most multimedia file formats share some basic traits, and they can easily be processed in the same general framework, the specifics being taken care of at the input stage. A format deviating too far from the standard model becomes problematic.
- Ogg is bad. When every angle of examination reveals serious flaws, bad is the only fitting description.
The third reaction bypasses all technical analysis: Ogg is patent-free, a claim I am not qualified to directly discuss. Assuming it is true, it still does not alter the fact that Ogg is a bad format. Being free from patents does not magically make Ogg a good choice as a file format. If all the standard formats are indeed covered by patents, the only proper solution is to design a new, good format which is not, this time hopefully avoiding the old mistakes.
FFMPEG Presentation Time Stamps (PTS) calculation in RTSP stream
8 December 2020, by BadaBudaBudu

Below please find a raw example of my code for your better understanding of what it does. Please note that this is example code from the official FFMPEG documentation, updated by myself (deprecated methods removed, etc.) and complemented by my encoder.


/// STD
#include <iostream>
#include <string>
#include <thread> // std::this_thread::sleep_for is used in the main loop
#include <chrono>

/// FFMPEG
extern "C"
{
 #include <libavformat/avformat.h>
 #include <libswscale/swscale.h>
 #include <libavutil/imgutils.h>
}

/// VideoLib
#include <tools/multimediaprocessing.h>
#include 
#include 
#include <enums/codec.h>
#include <enums/pixelformat.h>

/// OpenCV
#include <opencv2/opencv.hpp>

inline static const char *inputRtspAddress = "rtsp://192.168.0.186:8080/video/h264";

int main()
{
 AVFormatContext* formatContext = nullptr;

 AVStream* audioStream = nullptr;
 AVStream* videoStream = nullptr;
 AVCodec* audioCodec = nullptr;
 AVCodec* videoCodec = nullptr;
 AVCodecContext* audioCodecContext = nullptr;
 AVCodecContext* videoCodecContext = nullptr;
 vl::AudioSettings audioSettings;
 vl::VideoSettings videoSettings;

 int audioIndex = -1;
 int videoIndex = -1;

 SwsContext* swsContext = nullptr;
 std::vector<uint8_t> frameBuffer; // byte buffer backing the RGBA frame
 AVFrame* frame = av_frame_alloc();
 AVFrame* decoderFrame = av_frame_alloc();

 AVPacket packet;
 cv::Mat mat;

 vl::tools::MultimediaProcessing multimediaProcessing("rtsp://127.0.0.1:8080/stream", vl::configs::rtspStream, 0, vl::enums::EPixelFormat::ABGR);

 // *** OPEN STREAM *** //
 if(avformat_open_input(&formatContext, inputRtspAddress, nullptr, nullptr) < 0)
 {
 std::cout << "Failed to open input." << std::endl;
 return EXIT_FAILURE;
 }

 if(avformat_find_stream_info(formatContext, nullptr) < 0)
 {
 std::cout << "Failed to find stream info." << std::endl;
 return EXIT_FAILURE;
 }

 // *** FIND DECODER FOR BOTH AUDIO AND VIDEO STREAM *** //
 audioCodec = avcodec_find_decoder(AVCodecID::AV_CODEC_ID_AAC);
 videoCodec = avcodec_find_decoder(AVCodecID::AV_CODEC_ID_H264);

 if(audioCodec == nullptr || videoCodec == nullptr)
 {
 std::cout << "No AUDIO or VIDEO in stream." << std::endl;
 return EXIT_FAILURE;
 }

 // *** FIND STREAM FOR BOTH AUDIO AND VIDEO STREAM *** //

 audioIndex = av_find_best_stream(formatContext, AVMEDIA_TYPE_AUDIO, -1, -1, &audioCodec, 0);
 videoIndex = av_find_best_stream(formatContext, AVMEDIA_TYPE_VIDEO, -1, -1, &videoCodec, 0);

 if(audioIndex < 0 || videoIndex < 0)
 {
 std::cout << "Failed to find AUDIO or VIDEO stream." << std::endl;
 return EXIT_FAILURE;
 }

 audioStream = formatContext->streams[audioIndex];
 videoStream = formatContext->streams[videoIndex];

 // *** ALLOC CODEC CONTEXT FOR BOTH AUDIO AND VIDEO STREAM *** //
 audioCodecContext = avcodec_alloc_context3(audioCodec);
 videoCodecContext = avcodec_alloc_context3(videoCodec);

 if(audioCodecContext == nullptr || videoCodecContext == nullptr)
 {
 std::cout << "Can not allocate AUDIO or VIDEO context." << std::endl;
 return EXIT_FAILURE;
 }

 if(avcodec_parameters_to_context(audioCodecContext, formatContext->streams[audioIndex]->codecpar) < 0 || avcodec_parameters_to_context(videoCodecContext, formatContext->streams[videoIndex]->codecpar) < 0)
 {
 std::cout << "Can not fill AUDIO or VIDEO codec context." << std::endl;
 return EXIT_FAILURE;
 }

 if(avcodec_open2(audioCodecContext, audioCodec, nullptr) < 0 || avcodec_open2(videoCodecContext, videoCodec, nullptr) < 0)
 {
 std::cout << "Failed to open AUDIO codec" << std::endl;
 return EXIT_FAILURE;
 }

 // *** INITIALIZE MULTIMEDIA PROCESSING *** //
 std::vector<unsigned char> extraData(audioStream->codecpar->extradata_size);
 std::copy_n(audioStream->codecpar->extradata, extraData.size(), extraData.begin());

 audioSettings.sampleRate = audioStream->codecpar->sample_rate;
 audioSettings.bitrate = audioStream->codecpar->bit_rate;
 audioSettings.codec = vl::enums::EAudioCodec::AAC;
 audioSettings.channels = audioStream->codecpar->channels;
 audioSettings.bitsPerCodedSample = audioStream->codecpar->bits_per_coded_sample;
 audioSettings.bitsPerRawSample = audioStream->codecpar->bits_per_raw_sample;
 audioSettings.blockAlign = audioStream->codecpar->block_align;
 audioSettings.channelLayout = audioStream->codecpar->channel_layout;
 audioSettings.format = audioStream->codecpar->format;
 audioSettings.frameSize = audioStream->codecpar->frame_size;
 audioSettings.codecExtraData = std::move(extraData);

 videoSettings.width = 1920;
 videoSettings.height = 1080;
 videoSettings.framerate = 25;
 videoSettings.pixelFormat = vl::enums::EPixelFormat::ARGB;
 videoSettings.bitrate = 8000 * 1000;
 videoSettings.codec = vl::enums::EVideoCodec::H264;

 multimediaProcessing.initEncoder(videoSettings, audioSettings);

 // *** INITIALIZE SWS CONTEXT *** //
 swsContext = sws_getCachedContext(nullptr, videoCodecContext->width, videoCodecContext->height, videoCodecContext->pix_fmt, videoCodecContext->width, videoCodecContext->height, AV_PIX_FMT_RGBA, SWS_FAST_BILINEAR, nullptr, nullptr, nullptr);

 if (const auto inReturn = av_image_get_buffer_size(AV_PIX_FMT_RGBA, videoCodecContext->width, videoCodecContext->height, 1); inReturn > 0)
 {
 frameBuffer.reserve(inReturn);
 }
 else
 {
 std::cout << "Can not get buffer size." << std::endl;
 return EXIT_FAILURE;
 }

 if (const auto inReturn = av_image_fill_arrays(frame->data, frame->linesize, frameBuffer.data(), AV_PIX_FMT_RGBA, videoCodecContext->width, videoCodecContext->height, 1); inReturn < 0)
 {
 std::cout << "Can not fill buffer arrays." << std::endl;
 return EXIT_FAILURE;
 }

 // *** MAIN LOOP *** //
 while(true)
 {
 // Read the next packet from the input stream.
 if(av_read_frame(formatContext, &packet) == 0)
 {
 if(packet.stream_index == videoIndex) // Check if it is video packet.
 {
 // Send packet to decoder.
 if(avcodec_send_packet(videoCodecContext, &packet) == 0)
 {
 int returnCode = avcodec_receive_frame(videoCodecContext, decoderFrame); // Get Frame from decoder.

 if (returnCode == 0) // Transform frame and send it to encoder. And re-stream that.
 {
 sws_scale(swsContext, decoderFrame->data, decoderFrame->linesize, 0, decoderFrame->height, frame->data, frame->linesize);

 mat = cv::Mat(videoCodecContext->height, videoCodecContext->width, CV_8UC4, frameBuffer.data(), frame->linesize[0]);

 cv::resize(mat, mat, cv::Size(1920, 1080), 0, 0, cv::INTER_NEAREST); // the interpolation flag belongs in the sixth argument

 multimediaProcessing.encode(mat.data, packet.dts, packet.dts, (packet.flags & AV_PKT_FLAG_KEY) != 0); // This line sends the cv::Mat to the encoder and re-streams it. AV_PKT_FLAG_KEY is a bit flag, so test it with a bitwise AND.

 av_packet_unref(&packet);
 }
 else if(returnCode == AVERROR(EAGAIN))
 {
 // The decoder needs more input before it can emit a frame;
 // keep decoderFrame allocated for reuse.
 av_frame_unref(decoderFrame);
 }
 else
 {
 av_frame_unref(decoderFrame);
 av_frame_free(&decoderFrame); // av_frame_free(), not av_freep(), releases an AVFrame

 std::cout << "Error during decoding." << std::endl;
 return EXIT_FAILURE;
 }
 }
 }
 else if(packet.stream_index == audioIndex) // Check if it is an audio packet.
 {
 std::vector<uint8_t> vectorPacket(packet.data, packet.data + packet.size);

 multimediaProcessing.addAudioPacket(vectorPacket, packet.dts, packet.dts);
 av_packet_unref(&packet); // release the packet buffer once it has been copied
 }
 else
 {
 av_packet_unref(&packet);
 }
 }
 else
 {
 // av_read_frame() failed, e.g. the input closed; wait and retry.
 std::cout << "Can not read next packet from the input." << std::endl;
 std::this_thread::sleep_for(std::chrono::seconds(1));
 }
 }

 return EXIT_SUCCESS;
 }


What does it do?


It takes a single RTSP stream, decodes its data so I can, for example, draw something onto its frames, and then streams it under a different address.


Basically, I am opening the RTSP stream, checking that it contains both audio and video streams, and finding a decoder for them. Then I create an encoder, telling it what the output stream should look like, and that's it.


At this point I create an endless loop where I read all packets coming from the input stream, decode them, do something to them, and then encode and re-stream them.


What is the issue?


If you take a closer look, you can see that I am sending both video and audio frames to the encoder together with the last received PTS and DTS contained in the AVPacket.


The PTS and DTS, from the point when I receive the first AVPacket, look for example like this.


IN AUDIO STREAM:




-22783, -21759, -20735, -19711, -18687, -17663, -16639, -15615, -14591, -13567, -12543, -11519, -10495, -9471, -8447, -7423, -6399, -5375, -4351, -3327, -2303, -1279, -255, 769, 1793, 2817, 3841, 4865, 5889, 6913, 7937, 8961, 9985, 11009, 12033, 13057, 14081, 15105, 16129, 17153




As you can see, it is incremented by 1024 each time, which is the number of samples in one AAC frame. Quite clear here.
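(For reference, a minimal sketch of converting such raw ticks into seconds with FFmpeg's av_q2d; for an AAC stream whose time_base is 1/sample_rate, a step of 1024 ticks is exactly one 1024-sample frame.)

extern "C"
{
 #include <libavutil/rational.h>
}

// Convert a timestamp from stream time_base units into seconds.
double to_seconds(int64_t ts, AVRational timeBase)
{
 return ts * av_q2d(timeBase);
}
// e.g. to_seconds(packet.dts, audioStream->time_base) inside the loop above.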


IN VIDEO STREAM:




86400, 90000, 93600, 97200, 100800, 104400, 108000, 111600, 115200, 118800, 122400, 126000, 129600, 133200, 136800, 140400, 144000, 147600, 151200, 154800, 158400, 162000, 165600




As you can see, it is incremented by 3600 each time, but WHY? What does this number actually mean?
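(A likely explanation, for what it is worth: RTP, and therefore most RTSP servers, uses a 90 kHz clock for video, so at 25 fps each frame advances 90000 / 25 = 3600 ticks. A minimal sketch of that arithmetic, assuming a constant frame rate and reusing formatContext and videoStream from the code above.)

extern "C"
{
 #include <libavformat/avformat.h>
}

// Per-frame timestamp increment expressed in the stream's time_base.
// With time_base = 1/90000 and 25 fps this yields 90000 / 25 = 3600.
int64_t pts_increment(AVFormatContext* ctx, AVStream* st)
{
 AVRational fps = av_guess_frame_rate(ctx, st, nullptr);
 return (int64_t)st->time_base.den * fps.den
 / ((int64_t)st->time_base.num * fps.num);
}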


From what I can understand, those received PTS and DTS are for the following:


DTS should tell the encoder when it should start encoding the frame, so that the frames are in the correct time order and not jumbled.


PTS should give the correct time at which the frame should be played/displayed in the output stream, so that the frames are in the correct time order and not jumbled.


What am I trying to achieve?


As I said, I need to re-stream an RTSP stream. I cannot use the PTS and DTS that come from the received AVPackets, because at some point the input stream can randomly close and I need to open it again. The problem is that when I actually do so, the PTS and DTS start to be generated again from negative values, the same as you can see in the samples. I CANNOT send those "new" PTS and DTS to the encoder because they are now lower than the encoder/muxer expects.


I need to continually stream something (both audio and video), even if it is a blank black screen or silent audio, and each frame the PTS and DTS should rise by a specific number. I need to figure out how that increment is calculated.
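(Not an authoritative recipe, but one common approach, sketched below under the assumption that input and output share a time_base: keep a running offset per stream so the timestamps handed to the encoder/muxer stay monotonic across reconnects.)

#include <cstdint>

// A minimal sketch of keeping output timestamps monotonic across input
// reconnects. ptsIncrement is the per-frame step (e.g. 3600 for 25 fps
// video in a 90 kHz time_base, or 1024 for AAC audio).
struct Restamper
{
 int64_t lastOutputPts = 0;
 int64_t offset = 0;

 int64_t restamp(int64_t inputPts, int64_t ptsIncrement, bool streamReopened)
 {
 if (streamReopened)
 {
 // New input epoch (possibly negative PTS again): shift it so
 // the next output timestamp lands right after the last one.
 offset = lastOutputPts + ptsIncrement - inputPts;
 }
 lastOutputPts = inputPts + offset;
 return lastOutputPts;
 }
};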


----------------------------------


The final result should look like a mosaic of multiple input streams in a single output stream. A single input stream (main) has both audio and video, and the rest (side) have just video. Some of those streams can randomly close over time, and I need to ensure that they will be back again once it is possible.


Dreamcast Serial Extractor
31 December 2017, by Multimedia Mike — Sega Dreamcast

It has not been a very productive year for blogging. But I started the year by describing an unfinished project that I developed for the Sega Dreamcast, so I may as well end the year the same way. The previous project was a media player. That initiative actually met with some amount of success and could have developed into something interesting if I had kept at it.
By contrast, this post describes an effort that was ultimately a fool’s errand that I spent way too much time trying to make work.
Problem Statement
In my neverending quest to analyze the structure of video games while also hoarding a massive collection of them (though I’m proud to report that I did play at least a few of them this past year), I wanted to be able to extract the data from my many Dreamcast titles, both games and demo discs. I had a tool called the DC Coder’s Cable, a serial cable that enables communication between a Dreamcast and a PC. With the right software, you could dump an entire Dreamcast GD-ROM, which contained a gigabyte’s worth of sectors.

Problem: The dumping software (named ‘dreamrip’ and written by noted game hacker BERO) operated in a very basic mode, methodically dumping sector after sector and sending it down the serial cable. This meant that it took about 28 hours to extract all the data on a single disc by running at the maximum speed of 115,200 bits/second, or about 11 kilobytes/second. I wanted to create a faster method.
The Pitch
I formed a mental model of dreamrip’s operation that looked like this:
As an improvement, I envisioned this beautiful architecture:
Architectural Assumptions
My proposed architecture was predicated on the assumption that the disc reading and serial output functions were both I/O-bound operations and that the CPU would be idle much of the time. My big idea was to use that presumably idle CPU time to compress the sectors before sending them over the wire. As long as the CPU could compress the data faster than 11 kbytes/sec, it should be a win. In order to achieve this, I broke the main program into 3 threads, as sketched after this list:
- The first thread reads the sectors; more specifically, it asks the drive firmware to please read the sectors and make the data available in system RAM
- The second thread waits for sector data to appear in memory and then compresses it
- The third thread takes the compressed data when it is ready and shuffles it out through the serial cable
Simple and elegant, right?
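For illustration only, here is a minimal modern C++ sketch of that three-stage pipeline. The Dreamcast build predated std::thread, and read_sector_from_drive/compress_sector/serial_write are hypothetical stand-ins for the real stages.

#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

using Buffer = std::vector<uint8_t>;

// Tiny thread-safe queue connecting the pipeline stages.
struct Channel
{
    std::queue<Buffer> q;
    std::mutex m;
    std::condition_variable cv;

    void put(Buffer b)
    {
        {
            std::lock_guard<std::mutex> lock(m);
            q.push(std::move(b));
        }
        cv.notify_one();
    }

    Buffer get()
    {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return !q.empty(); });
        Buffer b = std::move(q.front());
        q.pop();
        return b;
    }
};

// Hypothetical stubs standing in for the GD-ROM read, the
// zlib/bzip2/FLAC compressor, and the serial output.
Buffer read_sector_from_drive() { return Buffer(2048, 0); }
Buffer compress_sector(const Buffer& b) { return b; }
void serial_write(const Buffer&) {}

int main()
{
    Channel raw, packed;

    std::thread reader([&] { for (;;) raw.put(read_sector_from_drive()); });
    std::thread compressor([&] { for (;;) packed.put(compress_sector(raw.get())); });
    std::thread writer([&] { for (;;) serial_write(packed.get()); });

    reader.join();
    compressor.join();
    writer.join();
}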
For data track compression, I wanted to start with zlib in order to prove the architecture, but then also try bzip2 or lzma. As long as they could compress data faster than the serial port could write it, it should be a win. For audio track compression, I wanted to use the Flake FLAC encoder. According to my notes, I did get both bzip2 compression and the Flake compressor working on the Dreamcast. I recall choosing Flake over the official FLAC encoder because it was much simpler and had fewer dependencies, always an important consideration for platforms such as this.
Problems
I worked for quite a while on this project. I have a lot of notes recorded, but a lot of the problems I had remain a bit vague in my memory. However, there was one problem I discovered that eventually sank the entire initiative:

The serial output operation is CPU-bound.
My initial mental model was that a buffer could be “handed off” to the serial subsystem and the CPU could go back to doing other work. Nope. It turns out that the CPU was participating in every step of the serial transfer.
Further, I eventually dug into the serial driver code and learned that there was already some compression taking place via the miniLZO library.
Lessons Learned
- Recognize the assumptions that you’re making up front, at the start of the project.
- Prototype in order to ensure plausibility.
- Profile to make sure you’re optimizing the right thing (this is something I have learned again and again).
Another interesting tidbit from my notes: it doesn’t matter how many sectors you read at a time; the overall speed is roughly the same. I endeavored to read 1000 2048-byte data sectors, 1, 10, or 100 at a time, or all 1000 at once. My results:
- 1: 19442 ms
- 10: 19207 ms
- 100: 19194 ms
- 1000: 19320 ms
No difference. That surprised me.
Side Benefits
At one point, I needed to understand how BERO’s dreamrip software was operating. I knew I used to have the source code, but I could no longer find it. Instead, I decided to try to reverse engineer what I needed from the SH-4 binary image that I had. It wasn’t an ELF image; rather, it was a raw binary meant to be loaded at a particular memory location, which makes it extra challenging for ‘objdump’. This led to me asking my most viewed and upvoted question on Stack Overflow: “Disassembling A Flat Binary File Using objdump”. The next day, it also led me to post one of my most upvoted answers when I found the solution elsewhere.

Strangely, I have since tried out the command line shown in my answer and have been unable to make it work. But people keep upvoting both the question and the answer.
Eventually this all became moot when I discovered a misplaced copy of the source code on one of my computers.
I strongly recall binging through the Alias TV show while I was slogging away on this project, so I guess that’s a positive association since I got so many fun screenshots out of it.
The Final Resolution
Strangely, I was still determined to make this project work even though the Dreamcast SD adapter arrived for me about halfway through the effort. Part of this was just stubbornness, but part of it was my assumptions about serial port speeds; in particular, my assumption that there was a certain speed-of-light type of limitation on serial port speeds, so that the SD adapter, operating over the DC’s serial port, would not be appreciably faster than the serial cable.

This turned out to be very incorrect. In fact, the SD adapter is capable of extracting an entire gigabyte disc image in 35-40 minutes. This is the method I have since been using to extract Dreamcast disc images.