
Media (29)
-
#7 Ambience
16 October 2011
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#3 The Safest Place
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#2 Typewriter Dance
15 October 2011
Updated: February 2013
Language: English
Type: Audio
Other articles (64)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document should be attached automatically; objet, the type of object to which (...)
-
Automatic backup of SPIP channels
1 April 2010
When setting up an open platform, it is important for hosting providers to have reasonably regular backups available in order to guard against any potential problem.
This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (documents, the elements (...)
-
Automatic installation script for MediaSPIP
25 April 2011
To work around the installation difficulties caused mainly by server-side software dependencies, an all-in-one bash installation script was created to make this step easier on a server running a compatible Linux distribution.
You need SSH access to your server and a "root" account in order to use it, which makes it possible to install the dependencies. Contact your hosting provider if you do not have these.
The documentation on using the installation script (...)
On other sites (8156)
-
Developing A Shader-Based Video Codec
22 June 2013, by Multimedia Mike — Outlandish Brainstorms
Early last month, this thing called ORBX.js was in the news. It ostensibly has something to do with streaming video and codec technology, which naturally catches my interest. The hype was kicked off by Mozilla honcho Brendan Eich when he posted an article asserting that HD video decoding could be entirely performed in JavaScript. We've seen this kind of thing before using Broadway, an H.264 decoder implemented entirely in JS. But that exposes some very obvious limitations (notably CPU usage).
But this new video codec promises 1080p HD playback directly in JavaScript, which is a lofty claim. How could it possibly do this? I got the impression that performance was achieved using WebGL, an extension which allows JavaScript access to accelerated 3D graphics hardware. Browsing through the conversations surrounding the ORBX.js announcement, I found this confirmation from Eich himself:
You’re right that WebGL does heavy lifting.
As of this writing, ORBX.js remains some kind of private tech demo. If there were a public demo available, it would necessarily be easy to reverse engineer the downloadable JavaScript decoder.
But the announcement was enough to make me wonder how it could be possible to create a video codec which effectively leverages 3D hardware.
Prior Art
In theorizing about this, it continually occurs to me that I can't possibly be the first person to attempt to do this (or the ORBX.js people, for that matter). In googling on the matter, I found various forums and Q&A posts where people asked if it were possible to, e.g., accelerate JPEG decoding and presentation using 3D hardware, with no answers. I also found a blog post which describes a plan to use 3D hardware to accelerate VP8 video decoding. It was a project done under the banner of Google's Summer of Code in 2011, though I'm not sure which open source group mentored the effort. The project did not end up producing the shader-based VP8 codec originally chartered but mentions that "The 'client side' of the VP8 VDPAU implementation is working and is currently being reviewed by the libvdpau maintainers." I'm not sure what that means. Perhaps it includes modifications to the public API that supports VP8, but is waiting for the underlying hardware to actually implement VP8 decoding blocks in hardware.
What's So Hard About This?
Video decoding is a computationally intensive task. GPUs are known to be really awesome at chewing through computationally intensive tasks. So why aren't GPUs a natural fit for decoding video codecs?
Generally, it boils down to parallelism, or the lack of opportunities thereof. GPUs are really good at doing the exact same operations over lots of data at once. The problem is that decoding compressed video usually requires multiple phases that cannot be parallelized, and the individual phases themselves often cannot be parallelized. In strictly mathematical terms, a compressed data stream needs to be decoded by applying a function f(x) over each data element, x_0 .. x_n. However, the function relies on having already been applied to the previous data element, i.e.:
f(x_n) = f(f(x_(n-1)))
What happens when you try to parallelize such an algorithm? Temporal rifts in the space/time continuum, if you're in a Star Trek episode. If you're in the real world, you'll get incorrect, unusable data as the parallel computation is seeded with a bunch of invalid data at multiple points (which is illustrated in some of the pictures in the aforementioned blog post about accelerated VP8).
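To make that dependency concrete, here is a small C sketch (my own illustration, not code from the post) of a decode loop in which each output depends on the previously decoded output, which is exactly the shape that defeats naive parallelization:

/* Each decoded value depends on the previously decoded one, so iteration i
 * cannot begin until iteration i-1 has finished: f(x_i) needs f(x_(i-1)). */
void decode_prefix_dependent(const int *delta, int *out, int n)
{
    int prev = 0;
    for (int i = 0; i < n; i++) {
        out[i] = prev + delta[i];
        prev = out[i];
    }
}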
Example: JPEG
Let's take a very general look at the various stages involved in decoding the ubiquitous JPEG format:
What are the opportunities to parallelize these various phases?
- Huffman decoding (run length decoding and zig-zag reordering are assumed to be rolled into this phase): not many opportunities for parallelizing the various Huffman formats out there, including this one. Decoding most Huffman streams is necessarily a sequential operation. I once hypothesized that it would be possible to engineer a codec to achieve some parallelism during the entropy decoding phase, and later found that On2's VP8 codec employs the scheme. However, such a scheme is unlikely to break the work down to the fine granularity that WebGL would require.
- Reverse DC prediction: JPEG — and many other codecs — doesn't store full DC coefficients. It stores differences between successive DC coefficients. Reversing this process can't be parallelized. See the discussion in the previous section.
- Dequantize coefficients: this could be heavily parallelized (see the short sketch after this list). It should be noted that software decoders often don't dequantize all coefficients, since many coefficients are 0 and it's a waste of a multiplication operation to dequantize them. Thus, this phase is sometimes rolled into the Huffman decoding phase.
- Invert discrete cosine transform: this seems like it could be highly parallelizable. I will be exploring this further in this post.
- Convert YUV -> RGB for final display: this is a well-established use case for 3D acceleration.
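As a rough illustration of why dequantization parallelizes so well (my own sketch, not taken from the post), every coefficient is scaled independently of every other one:

/* Each of the 64 coefficients in a block is multiplied by its own quantizer
 * value, with no dependency between iterations, so every multiply in every
 * block of the image could in principle run concurrently. */
void dequantize_block(const short coeffs[64], const unsigned short quant[64],
                      int out[64])
{
    for (int i = 0; i < 64; i++)
        out[i] = coeffs[i] * quant[i];
}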
Crash Course in 3D Shaders and Humility
So I wanted to see if I could accelerate some parts of JPEG decoding using something called shaders. I made an effort to understand 3D programming and its associated math throughout the 1990s, but 3D technology left me behind a very long time ago while I got mixed up in this multimedia stuff. So I plowed through a few books concerning WebGL (thanks to my new Safari Books Online subscription). After I learned enough about WebGL/JS to be dangerous and just enough about shader programming to be absolutely lethal, I set out to try my hand at optimizing IDCT using shaders.
Here's my extremely high level (and probably hopelessly naive) view of the modern GPU shader programming model:
The WebGL program written in JavaScript drives the show. It sends a set of vertices into the WebGL system and each vertex is processed through a vertex shader. Then, each pixel that falls within a set of vertices is sent through a fragment shader to compute the final pixel attributes (R, G, B, and alpha value). Another consideration is textures: this is data that the program uploads to GPU memory and which can be accessed programmatically by the shaders.
These shaders (vertex and fragment) are key to the GPU's programmability. How are they programmed? Using a special C-like shading language. Thought I: "C-like language? I know C! I should be able to master this in short order!" So I charged forward with my assumptions and proceeded to get smacked down repeatedly by the overall programming paradigm. I came to recognize this as a variation of the scientific method: develop a hypothesis (in my case, a mental model of how the system works); devise an experiment (a short program) to prove or disprove the model; realize something fundamental that I was overlooking; formulate a new hypothesis and repeat.
First Approach: Vertex Workhorse
My first pitch goes like this:
- Upload DCT coefficients to GPU memory in the form of textures
- Program a vertex mesh that encapsulates 16×16 macroblocks
- Distribute the IDCT effort among multiple vertex shaders
- Pass transformed Y, U, and V blocks to fragment shader which will convert the samples to RGB
So the idea is that decoding of 16×16 macroblocks is parallelized. A macroblock embodies 6 blocks:
It would be nice to process one of these 6 blocks in each vertex. But that means drawing a square with 6 vertices. How do you do that? I eventually realized that drawing a square with 6 vertices is the recommended method for drawing a square on 3D hardware. Using 2 triangles, each with 3 vertices (0, 1, 2; 3, 4, 5):
A vertex shader knows which (x, y) coordinates it has been assigned, so it could figure out which sections of coefficients it needs to access within the textures. But how would a vertex shader know which of the 6 blocks it should process? Solution: misappropriate the vertex's z coordinate. It's not used for anything else in this case.
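Purely to make the layout concrete, here is a hypothetical C declaration (my own sketch, not from the post) of the vertex data such a scheme implies: two triangles forming one square, with each vertex's otherwise-unused z coordinate carrying the index of the macroblock sub-block that vertex is meant to transform:

/* One square drawn as 2 triangles = 6 vertices; the z field smuggles in the
 * sub-block index (0..5) so each vertex shader invocation knows its block. */
typedef struct { float x, y, z; } Vertex;

static const Vertex macroblock_quad[6] = {
    {0.0f, 0.0f, 0.0f}, {1.0f, 0.0f, 1.0f}, {1.0f, 1.0f, 2.0f},   /* vertices 0, 1, 2 */
    {0.0f, 0.0f, 3.0f}, {1.0f, 1.0f, 4.0f}, {0.0f, 1.0f, 5.0f},   /* vertices 3, 4, 5 */
};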
So I set all of that up. Then I hit a new roadblock: how to get the reconstructed Y, U, and V samples transported to the fragment shader? I have found that communicating between shaders is quite difficult. Texture memory? WebGL doesn't allow shaders to write back to texture memory; shaders can only read it. The standard way to communicate data from a vertex shader to a fragment shader is to declare variables as "varying". Up until this point, I knew about varying variables but there was something I didn't quite understand about them and it nagged at me: if 3 different executions of a vertex shader set 3 different values to a varying variable, what value is passed to the fragment shader?
It turns out that the varying variable varies, which means that the GPU passes interpolated values to each fragment shader invocation. This completely destroys this idea.
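To see why this sinks the plan, here is a toy C stand-in (my own illustration of what the GPU does, not the author's code): a fragment receives a barycentric blend of the three vertex outputs rather than the value any single vertex wrote.

/* A "varying" reaching a fragment is a weighted mix of the three vertices'
 * outputs (the weights sum to 1), so distinct per-vertex values get smeared together. */
float interpolate_varying(float v0, float v1, float v2,
                          float w0, float w1, float w2)
{
    return w0 * v0 + w1 * v1 + w2 * v2;
}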
Second Idea: Vertex Workhorse, Take 2
The revised pitch is to work around the interpolation issue by just having each vertex shader invocation perform all 6 block transforms. That seems like a lot of redundant work. However, I figured out that I can draw a square with only 4 vertices by arranging them in an 'N' pattern and asking WebGL to draw a TRIANGLE_STRIP instead of TRIANGLES. Now it's only doing 4x the extra work, not 6x. GPUs are supposed to be great at this type of work, so it shouldn't matter, right?
I wired up an experiment and then ran into a new problem: while I was able to transform a block (or at least pretend to), and load up a varying array (that wouldn't vary since all vertex shaders wrote the same values) to transmit to the fragment shader, the fragment shader can't access specific values within the varying block. To clarify, a WebGL shader can use a constant value — or a value that can be evaluated as a constant at compile time — to index into arrays; a WebGL shader cannot compute an index into an array. Per my reading, this is a WebGL security consideration and the limitation may not be present in other OpenGL(-ES) implementations.
Not Giving Up Yet: Choking The Fragment Shader
You might want to be sitting down for this pitch:
- Vertex shader only interpolates texture coordinates to transmit to fragment shader
- Fragment shader performs IDCT for a single Y sample, U sample, and V sample
- Fragment shader converts YUV -> RGB
Seems straightforward enough. However, that step concerning IDCT for Y, U, and V entails a gargantuan number of operations. When computing the IDCT for an entire block of samples, it's possible to leverage a lot of redundancy in the math, which equates to far fewer overall operations. If you absolutely have to compute each sample individually, for an 8×8 block, that requires 64 multiplication/accumulation (MAC) operations per sample. For 3 color planes, and including a few extra multiplications involved in the RGB conversion, that tallies up to about 200 MACs per pixel. Then there's the fact that this approach means 4x redundant operations on the color planes.
It’s crazy, but I just want to see if it can be done. My approach is to pre-compute a pile of IDCT constants in the JavaScript and transmit them to the fragment shader via uniform variables. For a first order optimization, the IDCT constants are formatted as 4-element vectors. This allows computing 16 dot products rather than 64 individual multiplication/addition operations. Ideally, GPU hardware executes the dot products faster (and there is also the possibility of lining these calculations up as matrices).
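As a rough C sketch of the arithmetic being described (my own illustration of the operation counts above, not the author's shader code), one output sample of an 8×8 block costs 64 multiply-accumulates against a per-position table of constants, which can be regrouped as 16 four-element dot products:

/* Naive per-sample IDCT: accumulate all 64 coefficients against precomputed
 * basis constants for this sample's (x, y) position. Grouping the loop in
 * strides of 4 mirrors the vec4 dot-product formulation a shader would use. */
float idct_one_sample(const float coeffs[64], const float basis[64])
{
    float acc = 0.0f;
    for (int i = 0; i < 64; i += 4) {
        acc += coeffs[i + 0] * basis[i + 0]
             + coeffs[i + 1] * basis[i + 1]
             + coeffs[i + 2] * basis[i + 2]
             + coeffs[i + 3] * basis[i + 3];
    }
    return acc;
}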
I can report that I actually got a sample correctly transformed using this approach. Just one sample, though. Then I ran into some new problems:
Problem #1: Computing sample #1 vs. sample #0 requires a different table of 64 IDCT constants. Okay, so create a long table of 64 * 64 IDCT constants. However, this suffers from the same problem as seen in the previous approach: I can't dynamically compute the index into this array. What's the alternative? Maintain 64 separate named arrays and implement 64 branches, when branching of any kind is ill-advised in shader programming to begin with? I started to go down this path until I ran into…
Problem #2: Shaders can only be so large. 64 * 64 floats (4 bytes each) requires 16 kbytes of data, and this well exceeds the amount of shader storage that I can assume is allowed. That brings this path of exploration to a screeching halt.
Further Brainstorming
I suppose I could forgo pre-computing the constants and directly compute the IDCT for each sample, which would entail lots more multiplications as well as 128 cosine calculations per sample (384 considering all 3 color planes). I'm a little stuck with the transform idea right now. Maybe there are some other transforms I could try.
Another idea would be vector quantization. What little ORBX.js literature is available indicates that there is a method to allow real-time streaming but that it requires GPU assistance to yield enough horsepower to make it feasible. When I think of such severe asymmetry between compression and decompression, my mind drifts towards VQ algorithms. As I come to understand the benefits and limitations of GPU acceleration, I think I can envision a way that something similar to SVQ1, with its copious, hierarchical vector tables stored as textures, could be implemented using shaders.
So far, this all pertains to intra-coded video frames. What about opportunities for inter-coded frames? The only approach that I can envision here is to use WebGL's readPixels() function to fetch the rasterized frame out of the GPU, and then upload it again as a new texture which a new frame processing pipeline could reference. Whether this idea is plausible would require some profiling.
Using interframes in such a manner seems to imply that the entire codec would need to operate in RGB space and not YUV.
Conclusions
The people behind ORBX.js have apparently figured out a way to create a shader-based video codec. I have yet to even begin to reason out a plausible approach. However, I'm glad I did this exercise since I have finally broken through my ignorance regarding modern GPU shader programming. It's nice to have a topic like multimedia that allows me a jumping-off point to explore other areas.
-
Conversion from mp3 to aac/mp4 container (FFmpeg/c++)
1 July 2013, by taansari
I have made a small application to extract audio from an mp4 file, or simply convert an existing audio file to AAC/mp4 format (both raw AAC and inside an mp4 container). I have run this application with existing mp4 files as input, and it properly extracts audio and encodes it to mp4 (audio only: AAC), or even directly to AAC format (i.e. test.aac also works). But when I tried running it on mp3 files, the output clip plays faster than it should (a clip of 1:12 plays back only until 1:05).
Edit: I have made improvements in the code. Now it no longer plays back faster, but the output is still only converted up to 1:05; the rest of the clip is missing (about 89% of the conversion is done, and the remaining 11% is lost).
Here is the code I have written to achieve this:
////////////////////////////////////////////////
#include "stdafx.h"
#include <iostream>
#include <fstream>
#include <string>
#include <vector>
#include <map>
#include <deque>
#include <queue>
#include
#include
#include
#include
extern "C"
{
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libavdevice/avdevice.h"
#include "libswscale/swscale.h"
#include "libavutil/dict.h"
#include "libavutil/error.h"
#include "libavutil/opt.h"
#include <libavutil/fifo.h>
#include <libavutil/imgutils.h>
#include <libavutil/samplefmt.h>
#include <libswresample/swresample.h>
}
AVFormatContext* fmt_ctx= NULL;
int audio_stream_index = -1;
AVCodecContext * codec_ctx_audio = NULL;
AVCodec* codec_audio = NULL;
AVFrame* decoded_frame = NULL;
uint8_t** audio_dst_data = NULL;
int got_frame = 0;
int audiobufsize = 0;
AVPacket input_packet;
int audio_dst_linesize = 0;
int audio_dst_bufsize = 0;
SwrContext * swr = NULL;
AVOutputFormat * output_format = NULL ;
AVFormatContext * output_fmt_ctx= NULL;
AVStream * audio_st = NULL;
AVCodec * audio_codec = NULL;
double audio_pts = 0.0;
AVFrame * out_frame = avcodec_alloc_frame();
int audio_input_frame_size = 0;
uint8_t * audio_data_buf = NULL;
uint8_t * audio_out = NULL;
int audio_bit_rate;
int audio_sample_rate;
int audio_channels;
int decode_packet();
int open_audio_input(char* src_filename);
int decode_frame();
int open_encoder(char* output_filename);
AVStream *add_audio_stream(AVFormatContext *oc, AVCodec **codec,
enum AVCodecID codec_id);
int open_audio(AVFormatContext *oc, AVCodec *codec, AVStream *st);
void close_audio(AVFormatContext *oc, AVStream *st);
void write_audio_frame(uint8_t ** audio_src_data, int audio_src_bufsize);
int open_audio_input(char* src_filename)
{
int i =0;
/* open input file, and allocate format context */
if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0)
{
fprintf(stderr, "Could not open source file %s\n", src_filename);
exit(1);
}
// Retrieve stream information
if(avformat_find_stream_info(fmt_ctx, NULL)<0)
return -1; // Couldn't find stream information
// Dump information about file onto standard error
av_dump_format(fmt_ctx, 0, src_filename, 0);
// Find the first audio stream
for(i=0; i<fmt_ctx->nb_streams; i++)
{
if(fmt_ctx->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO)
{
audio_stream_index=i;
break;
}
}
if ( audio_stream_index != -1 )
{
// Get a pointer to the codec context for the audio stream
codec_ctx_audio=fmt_ctx->streams[audio_stream_index]->codec;
// Find the decoder for the audio stream
codec_audio=avcodec_find_decoder(codec_ctx_audio->codec_id);
if(codec_audio==NULL) {
fprintf(stderr, "Unsupported audio codec!\n");
return -1; // Codec not found
}
// Open codec
AVDictionary *codecDictOptions = NULL;
if(avcodec_open2(codec_ctx_audio, codec_audio, &codecDictOptions)<0)
return -1; // Could not open codec
// Set up SWR context once you've got codec information
swr = swr_alloc();
av_opt_set_int(swr, "in_channel_layout", codec_ctx_audio->channel_layout, 0);
av_opt_set_int(swr, "out_channel_layout", codec_ctx_audio->channel_layout, 0);
av_opt_set_int(swr, "in_sample_rate", codec_ctx_audio->sample_rate, 0);
av_opt_set_int(swr, "out_sample_rate", codec_ctx_audio->sample_rate, 0);
av_opt_set_sample_fmt(swr, "in_sample_fmt", codec_ctx_audio->sample_fmt, 0);
av_opt_set_sample_fmt(swr, "out_sample_fmt", AV_SAMPLE_FMT_S16, 0);
swr_init(swr);
// Allocate audio frame
if ( decoded_frame == NULL ) decoded_frame = avcodec_alloc_frame();
int nb_planes = 0;
AVStream* audio_stream = fmt_ctx->streams[audio_stream_index];
nb_planes = av_sample_fmt_is_planar(codec_ctx_audio->sample_fmt) ? codec_ctx_audio->channels : 1;
int tempSize = sizeof(uint8_t *) * nb_planes;
audio_dst_data = (uint8_t**)av_mallocz(tempSize);
if (!audio_dst_data)
{
fprintf(stderr, "Could not allocate audio data buffers\n");
}
else
{
for ( int i = 0 ; i < nb_planes ; i ++ )
{
audio_dst_data[i] = NULL;
}
}
}
}
int decode_frame()
{
int rv = 0;
got_frame = 0;
if ( fmt_ctx == NULL )
{
return rv;
}
int ret = 0;
audiobufsize = 0;
rv = av_read_frame(fmt_ctx, &input_packet);
if ( rv < 0 )
{
return rv;
}
rv = decode_packet();
// Free the input_packet that was allocated by av_read_frame
//av_free_packet(&input_packet);
return rv;
}
int decode_packet()
{
int rv = 0;
int ret = 0;
//audio stream?
if(input_packet.stream_index == audio_stream_index)
{
/* decode audio frame */
rv = avcodec_decode_audio4(codec_ctx_audio, decoded_frame, &got_frame, &input_packet);
if (rv < 0)
{
fprintf(stderr, "Error decoding audio frame\n");
//return ret;
}
else
{
if (got_frame)
{
if ( audio_dst_data[0] == NULL )
{
ret = av_samples_alloc(audio_dst_data, &audio_dst_linesize, decoded_frame->channels,
decoded_frame->nb_samples, (AVSampleFormat)decoded_frame->format, 1);
if (ret < 0)
{
fprintf(stderr, "Could not allocate audio buffer\n");
return AVERROR(ENOMEM);
}
/* TODO: extend return code of the av_samples_* functions so that this call is not needed */
audio_dst_bufsize = av_samples_get_buffer_size(NULL, audio_st->codec->channels,
decoded_frame->nb_samples, (AVSampleFormat)decoded_frame->format, 1);
//int16_t* outputBuffer = ...;
swr_convert( swr, audio_dst_data, out_frame->nb_samples, (const uint8_t**) decoded_frame->extended_data, decoded_frame->nb_samples );
}
/* copy audio data to destination buffer:
* this is required since rawaudio expects non aligned data */
//av_samples_copy(audio_dst_data, decoded_frame->data, 0, 0,
// decoded_frame->nb_samples, decoded_frame->channels, (AVSampleFormat)decoded_frame->format);
}
}
}
return rv;
}
int open_encoder(char* output_filename )
{
int rv = 0;
/* allocate the output media context */
AVOutputFormat *opfmt = NULL;
avformat_alloc_output_context2(&output_fmt_ctx, opfmt, NULL, output_filename);
if (!output_fmt_ctx) {
printf("Could not deduce output format from file extension: using MPEG.\n");
avformat_alloc_output_context2(&output_fmt_ctx, NULL, "mpeg", output_filename);
}
if (!output_fmt_ctx) {
rv = -1;
}
else
{
output_format = output_fmt_ctx->oformat;
}
/* Add the audio stream using the default format codecs
* and initialize the codecs. */
audio_st = NULL;
if ( output_fmt_ctx )
{
if (output_format->audio_codec != AV_CODEC_ID_NONE)
{
audio_st = add_audio_stream(output_fmt_ctx, &audio_codec, output_format->audio_codec);
}
/* Now that all the parameters are set, we can open the audio and
* video codecs and allocate the necessary encode buffers. */
if (audio_st)
{
rv = open_audio(output_fmt_ctx, audio_codec, audio_st);
if ( rv < 0 ) return rv;
}
av_dump_format(output_fmt_ctx, 0, output_filename, 1);
/* open the output file, if needed */
if (!(output_format->flags & AVFMT_NOFILE))
{
if (avio_open(&output_fmt_ctx->pb, output_filename, AVIO_FLAG_WRITE) < 0) {
fprintf(stderr, "Could not open '%s'\n", output_filename);
rv = -1;
}
else
{
/* Write the stream header, if any. */
if (avformat_write_header(output_fmt_ctx, NULL) < 0)
{
fprintf(stderr, "Error occurred when opening output file\n");
rv = -1;
}
}
}
}
return rv;
}
AVStream *add_audio_stream(AVFormatContext *oc, AVCodec **codec,
enum AVCodecID codec_id)
{
AVCodecContext *c;
AVStream *st;
/* find the audio encoder */
*codec = avcodec_find_encoder(codec_id);
if (!(*codec)) {
fprintf(stderr, "Could not find codec\n");
exit(1);
}
st = avformat_new_stream(oc, *codec);
if (!st) {
fprintf(stderr, "Could not allocate stream\n");
exit(1);
}
st->id = 1;
c = st->codec;
/* put sample parameters */
c->sample_fmt = AV_SAMPLE_FMT_S16;
c->bit_rate = audio_bit_rate;
c->sample_rate = audio_sample_rate;
c->channels = audio_channels;
// some formats want stream headers to be separate
if (oc->oformat->flags & AVFMT_GLOBALHEADER)
c->flags |= CODEC_FLAG_GLOBAL_HEADER;
return st;
}
int open_audio(AVFormatContext *oc, AVCodec *codec, AVStream *st)
{
int ret=0;
AVCodecContext *c;
st->duration = fmt_ctx->duration;
c = st->codec;
/* open it */
ret = avcodec_open2(c, codec, NULL) ;
if ( ret < 0)
{
fprintf(stderr, "could not open codec\n");
return -1;
//exit(1);
}
if (c->codec->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE)
audio_input_frame_size = 10000;
else
audio_input_frame_size = c->frame_size;
int tempSize = audio_input_frame_size *
av_get_bytes_per_sample(c->sample_fmt) *
c->channels;
return ret;
}
void close_audio(AVFormatContext *oc, AVStream *st)
{
avcodec_close(st->codec);
}
void write_audio_frame(uint8_t ** audio_src_data, int audio_src_bufsize)
{
AVFormatContext *oc = output_fmt_ctx;
AVStream *st = audio_st;
if ( oc == NULL || st == NULL ) return;
AVCodecContext *c;
AVPacket pkt = { 0 }; // data and size must be 0;
int got_packet;
av_init_packet(&pkt);
c = st->codec;
out_frame->nb_samples = audio_input_frame_size;
int buf_size = audio_src_bufsize *
av_get_bytes_per_sample(c->sample_fmt) *
c->channels;
avcodec_fill_audio_frame(out_frame, c->channels, c->sample_fmt,
(uint8_t *) *audio_src_data,
buf_size, 1);
avcodec_encode_audio2(c, &pkt, out_frame, &got_packet);
if (!got_packet)
{
}
else
{
if (pkt.pts != AV_NOPTS_VALUE)
pkt.pts = av_rescale_q(pkt.pts, st->codec->time_base, st->time_base);
if (pkt.dts != AV_NOPTS_VALUE)
pkt.dts = av_rescale_q(pkt.dts, st->codec->time_base, st->time_base);
if ( c && c->coded_frame && c->coded_frame->key_frame)
pkt.flags |= AV_PKT_FLAG_KEY;
pkt.stream_index = st->index;
pkt.flags |= AV_PKT_FLAG_KEY;
/* Write the compressed frame to the media file. */
if (av_interleaved_write_frame(oc, &pkt) != 0)
{
fprintf(stderr, "Error while writing audio frame\n");
exit(1);
}
}
av_free_packet(&pkt);
}
void write_delayed_frames(AVFormatContext *oc, AVStream *st)
{
AVCodecContext *c = st->codec;
int got_output = 0;
int ret = 0;
AVPacket pkt;
pkt.data = NULL;
pkt.size = 0;
av_init_packet(&pkt);
int i = 0;
for (got_output = 1; got_output; i++)
{
ret = avcodec_encode_audio2(c, &pkt, NULL, &got_output);
if (ret < 0)
{
fprintf(stderr, "error encoding frame\n");
exit(1);
}
static int64_t tempPts = 0;
static int64_t tempDts = 0;
/* If size is zero, it means the image was buffered. */
if (got_output)
{
if (pkt.pts != AV_NOPTS_VALUE)
pkt.pts = av_rescale_q(pkt.pts, st->codec->time_base, st->time_base);
if (pkt.dts != AV_NOPTS_VALUE)
pkt.dts = av_rescale_q(pkt.dts, st->codec->time_base, st->time_base);
if ( c && c->coded_frame && c->coded_frame->key_frame)
pkt.flags |= AV_PKT_FLAG_KEY;
pkt.stream_index = st->index;
/* Write the compressed frame to the media file. */
ret = av_interleaved_write_frame(oc, &pkt);
}
else
{
ret = 0;
}
av_free_packet(&pkt);
}
}
int main(int argc, char **argv)
{
/* register all formats and codecs */
av_register_all();
avcodec_register_all();
avformat_network_init();
avdevice_register_all();
int i =0;
char src_filename[90] = "mp3.mp3";
char dst_filename[90] = "test.mp4";
open_audio_input(src_filename);
audio_bit_rate = codec_ctx_audio->bit_rate;
audio_sample_rate = codec_ctx_audio->sample_rate;
audio_channels = codec_ctx_audio->channels;
open_encoder( dst_filename );
while(1)
{
int rv = decode_frame();
if ( rv < 0 )
{
break;
}
if (audio_st)
{
audio_pts = (double)audio_st->pts.val * audio_st->time_base.num /
audio_st->time_base.den;
}
else
{
audio_pts = 0.0;
}
if ( codec_ctx_audio )
{
if ( got_frame)
{
write_audio_frame( audio_dst_data, audio_dst_bufsize );
}
}
if ( audio_dst_data[0] )
{
av_freep(&audio_dst_data[0]);
audio_dst_data[0] = NULL;
}
av_free_packet(&input_packet);
printf("\naudio_pts: %.3f", audio_pts);
}
write_delayed_frames( output_fmt_ctx, audio_st );
av_write_trailer(output_fmt_ctx);
close_audio( output_fmt_ctx, audio_st);
swr_free(&swr);
avcodec_free_frame(&out_frame);
return 0;
}
///////////////////////////////////////////////
I have been looking at this problem from many angles for about two days now, but I can't seem to figure out what I'm doing wrong.
Note also: the printf() statement I've inserted shows audio_pts going up to 64.551 (that's about 1:05, which also shows the encoder is not reaching the full duration of the input file: 1:12).
Can anyone please guide me as to what I may be doing wrong?
Thanks in advance for any guidance!
P.S. When run through the command line, e.g. ffmpeg -i test.mp3 test.mp4, it converts the file just fine.
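An editorial note (not part of the original question): one plausible cause is a frame-size mismatch. An MP3 decoder emits 1152 samples per frame while an AAC encoder consumes 1024 per frame, and write_audio_frame() always submits exactly audio_input_frame_size samples per decoded frame, so roughly 1024/1152, about 89%, of the audio would survive, which matches the 1:05 out of 1:12 reported above. A common remedy is to buffer the resampled output in an AVAudioFifo and drain it to the encoder in exact frame_size chunks. A minimal sketch under that assumption (hypothetical variable names, error handling omitted; write_audio_frame() would also need to derive its buffer size from the sample count it is given):

#include <libavutil/audio_fifo.h>

/* Created once, after the decoder and resampler are set up. */
AVAudioFifo *fifo = av_audio_fifo_alloc(AV_SAMPLE_FMT_S16,
                                        codec_ctx_audio->channels, 1);

/* After each swr_convert() call that produced 'converted' samples: */
av_audio_fifo_write(fifo, (void **)audio_dst_data, converted);

/* Feed the encoder only in chunks of its own frame size (1024 for AAC): */
while (av_audio_fifo_size(fifo) >= audio_input_frame_size) {
    av_audio_fifo_read(fifo, (void **)audio_dst_data, audio_input_frame_size);
    write_audio_frame(audio_dst_data, audio_input_frame_size);
}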
-
Metal Gear Solid VP3 Easter Egg
4 August 2011, by Multimedia Mike — Game Hacking
Metal Gear Solid: The Twin Snakes for the Nintendo GameCube is very heavy on the cutscenes. Most of them are animated in real-time but there are a bunch of clips — normally of a more photo-realistic nature — that the developers needed to compress using a conventional video codec. What did they decide to use for this task? On2 VP3 (forerunner of Theora) in a custom transport format. This is only the second game I have seen in the wild that uses pure On2 VP3 (the first was a horse game). Reimar and I sorted out most of the details some time ago. I sat down today and wrote an FFmpeg / Libav demuxer for the format, mostly to prove to myself that I still could.
Things went pretty smoothly. We suspected that there was an integer field that indicated the frame rate, but 18 fps is a bit strange. I kept fixating on a header field that read 0x41F00000. Where have I seen that number before? Oh, of course — it's the number 30.0 expressed as an IEEE 32-bit float. The 4XM format pulled the same trick.
Hexadecimal Easter Egg
I know I finished the game years ago but I really can't recall any of the clips present in the samples directory. The file mgs1-60.vp3 contains a computer screen granting the player access and illustrates this with a hexdump. It looks something like this:
Funny, there are only 22 bytes on a line when there should be 32 according to the offsets. But, leave it to me to try to figure out what the file type is, regardless. I squinted and copied the first 22 bytes into a file:
1F 8B 08 00 85 E2 17 38 00 03 EC 3A 0D 78 54 D5 38 00 03 EC 3A 0D
And the answer to the big question:
$ file mgsfile
mgsfile: gzip compressed data, from Unix, last modified: Wed Oct 27 22:43:33 1999
A gzip'd file from 1999. I don't know why I find this stuff so interesting, but I do. I guess it's no more or less strange than writing playback systems like this.
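For the curious, both identifications are easy to reproduce in a few lines of C (my own sketch, not from the post): reinterpret the 32-bit header value as an IEEE float, and check the leading bytes against the gzip magic.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    /* 0x41F00000 interpreted as an IEEE 754 single-precision float. */
    uint32_t raw = 0x41F00000;
    float as_float;
    memcpy(&as_float, &raw, sizeof(as_float));
    printf("0x41F00000 as float: %f\n", as_float);   /* prints 30.000000 */

    /* A gzip stream starts with the two magic bytes 0x1F 0x8B. */
    const unsigned char dump[] = { 0x1F, 0x8B, 0x08, 0x00, 0x85, 0xE2, 0x17, 0x38 };
    if (dump[0] == 0x1F && dump[1] == 0x8B)
        printf("looks like gzip data\n");
    return 0;
}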