
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (111)
-
Automatic installation script for MediaSPIP
25 April 2011, by
To work around installation difficulties caused mainly by server-side software dependencies, an "all-in-one" bash installation script was created to make this step easier on a server running a compatible Linux distribution.
You need SSH access to your server and a "root" account in order to use it, which makes it possible to install the dependencies. Contact your hosting provider if you do not have these.
The documentation on using the installation script (...) -
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
Adding user-specific information and other changes to author-related behaviour
12 April 2011, by
The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviours (refer to its documentation for more information).
It is also possible to add fields to authors by installing the plugins Champs Extras 2 and Interface pour champs extras.
On other sites (6030)
-
What Every Programmer Should Know
24 December 2012, by Multimedia Mike — General
During my recent effort to force myself to understand Unicode and modern text encoding/processing, I was reminded that this is something that “every programmer should just know”, an idea that comes up every so often, usually in relation to a subject in which the speaker is already an expert. One of the most absurd examples I ever witnessed was a blog post along the lines of “What every working programmer ought to know about [some very specific niche of enterprise-level Java programming]”. I remember reading through the article and recognizing that I had almost no knowledge of the material. Disturbing, since I am demonstrably a “working programmer”.
For fun, I queried the googles on the matter of what every programmer ought to know.
Specific Topics
Here is what every programmer should know about: Unicode, time, memory (simple), memory (extremely in-depth), regular expressions, search engine optimization, floating point, security, basic number theory, race conditions, managed C++, VIM commands, distributed systems, object-oriented design, latency numbers, rate monotonic algorithm, merging branches in Mercurial, classes of algorithms, and human names.
Broader Topics
20 subjects every programmer should know, 97 things every programmer should know, 12 things every programmer should know, things every programmer should know (27 items), 10 papers every programmer should read at least twice, 10 things every programmer should know for their first job.
Meanwhile, I remain fond of this xkcd comic whose mouseover text describes all that a person genuinely needs to know. Still, the new year is upon us, a time when people often make commitments to bettering themselves, and it couldn’t hurt (much) to at least skim some of the lists and find out what you never knew that you never knew.
What About Multimedia?
Reading the foregoing (or the titles of the foregoing pieces), I naturally wonder if I should write something about what every programmer should know about multimedia. I think it would look something like a multimedia programming FAQ. These are some items that I can think of:
- YUV: The other colorspace (since most programmers are only familiar with RGB and have no idea what to make of the YUV that comes out of most video decoding APIs; see the sketch after this list)
- Why you can’t easily seek randomly to any specific frame in a video file (keyframe/interframe discussion and their implications)
- Understand your platform before endeavoring to implement multimedia software (modern platforms, particularly mobile platforms, probably provide everything you need in the native APIs and there is likely little reason to compile libavcodec for the platform)
- Difference between containers and codecs (longstanding item, but I would argue it’s less relevant these days due to standardization on the MPEG — MP4/H.264/AAC — stack)
- What counts as a multimedia standard in this day and age (comparing the foregoing MPEG stack with the WebM/VP8/Vorbis stack)
- Trade-offs to consider when engineering a multimedia solution
- Optimization doesn’t always work the way you think it does (not everything touted as a massive speed-up in the world of computing — whether it be multithreaded CPUs, GPGPUs, new SIMD instruction sets — will necessarily be applicable to multimedia processing)
- A practical guide to legal issues would not be amiss
- ???
What other items count as “something multimedia-related that every programmer should know”?
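As an illustration of the first item, here is a rough sketch of converting a single YUV pixel to RGB. It assumes BT.601 coefficients and full-range 8-bit samples, which is only one of several possibilities; decoders frequently emit limited-range or BT.709 data, so treat the constants as illustrative rather than authoritative.

#include <stdint.h>

/* Clamp an intermediate value to the 0..255 range of an 8-bit sample. */
static uint8_t clamp_u8(int v) {
    if (v < 0) return 0;
    if (v > 255) return 255;
    return (uint8_t) v;
}

/* Convert one YUV pixel to RGB using BT.601 full-range coefficients
   (an assumption; other matrices and sample ranges are common). */
static void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                       uint8_t *r, uint8_t *g, uint8_t *b) {
    double d = u - 128.0; /* Cb centred around zero */
    double e = v - 128.0; /* Cr centred around zero */
    *r = clamp_u8((int) (y + 1.402 * e));
    *g = clamp_u8((int) (y - 0.344 * d - 0.714 * e));
    *b = clamp_u8((int) (y + 1.772 * d));
}

Decoders usually hand back planar, chroma-subsampled frames (4:2:0 being the most common), so in practice each U/V pair is shared by a 2x2 block of luma samples.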
-
avcodec/tdsc: use ff_codec_open2_recursive()
13 March 2015, by Michael Niedermayer -
Muxing with libav
14 February 2014, by LordDoskias
I have a program which is supposed to demux an input MPEG-TS stream, transcode the MPEG-2 video into H.264 and then mux the audio alongside the transcoded video. When I open the resulting muxed file with VLC I get neither audio nor video. Here is the relevant code.
My main worker loop is as follows:
void *writer_thread(void *thread_ctx) {
    struct transcoder_ctx_t *ctx = (struct transcoder_ctx_t *) thread_ctx;
    AVStream *video_stream = NULL, *audio_stream = NULL;
    AVFormatContext *output_context = init_output_context(ctx, &video_stream, &audio_stream);
    struct mux_state_t mux_state = {0};

    //from omxtx
    mux_state.pts_offset = av_rescale_q(ctx->input_context->start_time, AV_TIME_BASE_Q,
                                        output_context->streams[ctx->video_stream_index]->time_base);

    //write stream header if any
    avformat_write_header(output_context, NULL);

    //do not start doing anything until we get an encoded packet
    pthread_mutex_lock(&ctx->pipeline.video_encode.is_running_mutex);
    while (!ctx->pipeline.video_encode.is_running) {
        pthread_cond_wait(&ctx->pipeline.video_encode.is_running_cv, &ctx->pipeline.video_encode.is_running_mutex);
    }

    while (!ctx->pipeline.video_encode.eos || !ctx->processed_audio_queue->queue_finished) {
        //FIXME a memory barrier is required here so that we don't race
        //on above variables

        //fill a buffer with video data
        OERR(OMX_FillThisBuffer(ctx->pipeline.video_encode.h, omx_get_next_output_buffer(&ctx->pipeline.video_encode)));

        write_audio_frame(output_context, audio_stream, ctx); //write full audio frame
        //FIXME no guarantee that we have a full frame per packet?
        write_video_frame(output_context, video_stream, ctx, &mux_state); //write full video frame
        //encoded_video_queue is being filled by the previous command
    }

    av_write_trailer(output_context);

    //free all the resources
    avcodec_close(video_stream->codec);
    avcodec_close(audio_stream->codec);

    /* Free the streams. */
    for (int i = 0; i < output_context->nb_streams; i++) {
        av_freep(&output_context->streams[i]->codec);
        av_freep(&output_context->streams[i]);
    }

    if (!(output_context->oformat->flags & AVFMT_NOFILE)) {
        /* Close the output file. */
        avio_close(output_context->pb);
    }

    /* free the stream */
    av_free(output_context);
    free(mux_state.pps);
    free(mux_state.sps);
}

The code for initialising the libav output context is this:
static AVFormatContext *
init_output_context(const struct transcoder_ctx_t *ctx, AVStream **video_stream, AVStream **audio_stream) {
    AVFormatContext *oc;
    AVOutputFormat *fmt;
    AVStream *input_stream, *output_stream;
    AVCodec *c;
    AVCodecContext *cc;
    int audio_copied = 0; //copy just 1 stream

    fmt = av_guess_format("mpegts", NULL, NULL);
    if (!fmt) {
        fprintf(stderr, "[DEBUG] Error guessing format, dying\n");
        exit(199);
    }

    oc = avformat_alloc_context();
    if (!oc) {
        fprintf(stderr, "[DEBUG] Error allocating context, dying\n");
        exit(200);
    }

    oc->oformat = fmt;
    snprintf(oc->filename, sizeof(oc->filename), "%s", ctx->output_filename);
    oc->debug = 1;
    oc->start_time_realtime = ctx->input_context->start_time;
    oc->start_time = ctx->input_context->start_time;
    oc->duration = 0;
    oc->bit_rate = 0;

    for (int i = 0; i < ctx->input_context->nb_streams; i++) {
        input_stream = ctx->input_context->streams[i];
        output_stream = NULL;

        if (input_stream->index == ctx->video_stream_index) {
            //copy stuff from input video index
            c = avcodec_find_encoder(CODEC_ID_H264);
            output_stream = avformat_new_stream(oc, c);
            *video_stream = output_stream;
            cc = output_stream->codec;
            cc->width = input_stream->codec->width;
            cc->height = input_stream->codec->height;
            cc->codec_id = CODEC_ID_H264;
            cc->codec_type = AVMEDIA_TYPE_VIDEO;
            cc->bit_rate = ENCODED_BITRATE;
            cc->time_base = input_stream->codec->time_base;
            output_stream->avg_frame_rate = input_stream->avg_frame_rate;
            output_stream->r_frame_rate = input_stream->r_frame_rate;
            output_stream->start_time = AV_NOPTS_VALUE;
        } else if ((input_stream->codec->codec_type == AVMEDIA_TYPE_AUDIO) && !audio_copied) {
            /* i care only about audio */
            c = avcodec_find_encoder(input_stream->codec->codec_id);
            output_stream = avformat_new_stream(oc, c);
            *audio_stream = output_stream;
            avcodec_copy_context(output_stream->codec, input_stream->codec);
            /* Apparently fixes a crash on .mkvs with attachments: */
            av_dict_copy(&output_stream->metadata, input_stream->metadata, 0);
            /* Reset the codec tag so as not to cause problems with output format */
            output_stream->codec->codec_tag = 0;
            audio_copied = 1;
        }
    }

    for (int i = 0; i < oc->nb_streams; i++) {
        if (oc->oformat->flags & AVFMT_GLOBALHEADER)
            oc->streams[i]->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
        if (oc->streams[i]->codec->sample_rate == 0)
            oc->streams[i]->codec->sample_rate = 48000; /* ish */
    }

    if (!(fmt->flags & AVFMT_NOFILE)) {
        fprintf(stderr, "[DEBUG] AVFMT_NOFILE set, allocating output container\n");
        if (avio_open(&oc->pb, ctx->output_filename, AVIO_FLAG_WRITE) < 0) {
            fprintf(stderr, "[DEBUG] error creating the output context\n");
            exit(1);
        }
    }

    return oc;
}

Finally this is the code for writing audio:
static void
write_audio_frame(AVFormatContext *oc, AVStream *st, struct transcoder_ctx_t *ctx) {
    AVPacket pkt = {0}; // data and size must be 0;
    struct packet_t *source_audio;

    av_init_packet(&pkt);
    if (!(source_audio = packet_queue_get_next_item_asynch(ctx->processed_audio_queue))) {
        return;
    }

    pkt.stream_index = st->index;
    pkt.size = source_audio->data_length;
    pkt.data = source_audio->data;
    pkt.pts = source_audio->PTS;
    pkt.dts = source_audio->DTS;
    pkt.duration = source_audio->duration;
    pkt.destruct = avpacket_destruct;

    /* Write the compressed frame to the media file. */
    if (av_interleaved_write_frame(oc, &pkt) != 0) {
        fprintf(stderr, "[DEBUG] Error while writing audio frame\n");
    }

    packet_queue_free_packet(source_audio, 0);
}

A resulting mpeg4 file can be obtained from here:
http://87.120.131.41/dl/mpeg4.h264
I have omitted the write_video_frame code since it is a lot more complicated and I might be doing something wrong there, as I'm doing timebase conversion etc. For audio, however, I'm doing a 1:1 copy. Each packet_t packet contains data from av_read_frame from the input MPEG-TS container. In the worst case I'd expect my audio to work and not my video; however, I cannot get either of them to work. The documentation seems rather vague on doing things like this; I've tried both the libav and ffmpeg IRC channels to no avail. Any information on how I can debug the issue will be greatly appreciated.
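For reference, my current understanding is that when packets are copied from one container to another, their timestamps have to be rescaled from the input stream's time base to the output stream's before muxing, roughly along these lines (a sketch only; the input_stream/output_stream naming and the assumption that my stored PTS/DTS values are still in the demuxer's time base are mine):

#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>

/* Rescale a copied packet's timing from the input stream's time base
   to the output stream's before handing it to the muxer. */
static void rescale_packet_timing(AVPacket *pkt,
                                  const AVStream *input_stream,
                                  const AVStream *output_stream) {
    if (pkt->pts != AV_NOPTS_VALUE)
        pkt->pts = av_rescale_q(pkt->pts, input_stream->time_base,
                                output_stream->time_base);
    if (pkt->dts != AV_NOPTS_VALUE)
        pkt->dts = av_rescale_q(pkt->dts, input_stream->time_base,
                                output_stream->time_base);
    pkt->duration = av_rescale_q(pkt->duration, input_stream->time_base,
                                 output_stream->time_base);
}

If the values I store in packet_t come straight from av_read_frame, they are still in the input stream's time base, so writing them unchanged into an output stream with a different time base could by itself produce broken timing, though I may be wrong about that, hence the question.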