
Other articles (100)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; and creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (15032)
-
MediaCodec - save timing info for ffmpeg?
18 November 2014, by Mark. I have a requirement to encrypt video before it hits the disk. It seems on Android the only way to do this is to use MediaCodec, and to encrypt and save the raw H.264 elementary stream. (The MediaRecorder and MediaMuxer classes operate on FileDescriptors, not an OutputStream, so I can’t wrap them with a CipherOutputStream.)
Using the grafika code as a base, I’m able to save a raw H.264 elementary stream by replacing the muxer in the VideoEncoderCore class with a WritableByteChannel, backed by a CipherOutputStream (code below, minus the CipherOutputStream).
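For reference, a minimal sketch of how that channel could be backed by a CipherOutputStream in the VideoEncoderCore constructor; this is not part of the original code, and keyBytes / ivBytes stand in for however the AES key and IV are actually obtained (exception handling omitted):
import javax.crypto.Cipher;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
// keyBytes and ivBytes are placeholders for a real 128-bit AES key and IV.
Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(keyBytes, "AES"), new IvParameterSpec(ivBytes));
outChannel = Channels.newChannel(
new CipherOutputStream(new BufferedOutputStream(new FileOutputStream(outputFile)), cipher));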
If I take the resulting output file over to the desktop, I’m able to use ffmpeg to mux the H.264 stream into a playable mp4 file. What’s missing, however, is timing information: ffmpeg always assumes 25fps. What I’m looking for is a way to save the timing info, perhaps to a separate file, that I can use to give ffmpeg the right information on the desktop.
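One possible approach, sketched here rather than taken from the question: append each frame’s presentationTimeUs to a sidecar file as drainEncoder() runs, and use that data on the desktop; if the capture is effectively constant frame rate, simply telling ffmpeg the rate up front (something like ffmpeg -framerate 30 -i video.h264 -c copy video.mp4) already avoids the 25fps default. The timestampWriter name below is hypothetical:
import java.io.FileWriter;
import java.io.PrintWriter;
// Opened in the VideoEncoderCore constructor, next to outChannel:
PrintWriter timestampWriter = new PrintWriter(new FileWriter(outputFile.getPath() + ".pts"));
// In drainEncoder(), right after outChannel.write(encodedData), one line per frame:
timestampWriter.println(mBufferInfo.presentationTimeUs);
// In release(), alongside outChannel.close():
timestampWriter.close();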
I’m not doing audio yet, but I can imagine I’ll need to do the same thing there, if I’m to have any hope of remotely accurate syncing.
FWIW, I’m a total newbie here, and I really don’t know much of anything about SPS, NAL, Atoms, etc.
/*
* Copyright 2014 Google Inc. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.util.Log;
import android.view.Surface;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.WritableByteChannel;
/**
* This class wraps up the core components used for surface-input video encoding.
* <p>
* Once created, frames are fed to the input surface. Remember to provide the presentation
* time stamp, and always call drainEncoder() before swapBuffers() to ensure that the
* producer side doesn't get backed up.
* <p>
* This class is not thread-safe, with one exception: it is valid to use the input surface
* on one thread, and drain the output on a different thread.
*/
public class VideoEncoderCore {
private static final String TAG = MainActivity.TAG;
private static final boolean VERBOSE = false;
// TODO: these ought to be configurable as well
private static final String MIME_TYPE = "video/avc"; // H.264 Advanced Video Coding
private static final int FRAME_RATE = 30; // 30fps
private static final int IFRAME_INTERVAL = 5; // 5 seconds between I-frames
private Surface mInputSurface;
private MediaCodec mEncoder;
private MediaCodec.BufferInfo mBufferInfo;
private int mTrackIndex;
//private MediaMuxer mMuxer;
//private boolean mMuxerStarted;
private WritableByteChannel outChannel;
/**
* Configures encoder and muxer state, and prepares the input Surface.
*/
public VideoEncoderCore(int width, int height, int bitRate, File outputFile)
throws IOException {
mBufferInfo = new MediaCodec.BufferInfo();
MediaFormat format = MediaFormat.createVideoFormat(MIME_TYPE, width, height);
// Set some properties. Failing to specify some of these can cause the MediaCodec
// configure() call to throw an unhelpful exception.
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, bitRate);
format.setInteger(MediaFormat.KEY_FRAME_RATE, FRAME_RATE);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, IFRAME_INTERVAL);
if (VERBOSE) Log.d(TAG, "format: " + format);
// Create a MediaCodec encoder, and configure it with our format. Get a Surface
// we can use for input and wrap it with a class that handles the EGL work.
mEncoder = MediaCodec.createEncoderByType(MIME_TYPE);
mEncoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mInputSurface = mEncoder.createInputSurface();
mEncoder.start();
// Create a MediaMuxer. We can't add the video track and start() the muxer here,
// because our MediaFormat doesn't have the Magic Goodies. These can only be
// obtained from the encoder after it has started processing data.
//
// We're not actually interested in multiplexing audio. We just want to convert
// the raw H.264 elementary stream we get from MediaCodec into a .mp4 file.
//mMuxer = new MediaMuxer(outputFile.toString(),
// MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
mTrackIndex = -1;
//mMuxerStarted = false;
outChannel = Channels.newChannel(new BufferedOutputStream(new FileOutputStream(outputFile)));
}
/**
* Returns the encoder's input surface.
*/
public Surface getInputSurface() {
return mInputSurface;
}
/**
* Releases encoder resources.
*/
public void release() {
if (VERBOSE) Log.d(TAG, "releasing encoder objects");
if (mEncoder != null) {
mEncoder.stop();
mEncoder.release();
mEncoder = null;
}
try {
outChannel.close();
}
catch (Exception e) {
Log.e(TAG,"Couldn't close output stream.");
}
}
/**
* Extracts all pending data from the encoder and forwards it to the muxer.
* <p>
* If endOfStream is not set, this returns when there is no more data to drain. If it
* is set, we send EOS to the encoder, and then iterate until we see EOS on the output.
* Calling this with endOfStream set should be done once, right before stopping the muxer.
* <p>
* We're just using the muxer to get a .mp4 file (instead of a raw H.264 stream). We're
* not recording audio.
*/
public void drainEncoder(boolean endOfStream) {
final int TIMEOUT_USEC = 10000;
if (VERBOSE) Log.d(TAG, "drainEncoder(" + endOfStream + ")");
if (endOfStream) {
if (VERBOSE) Log.d(TAG, "sending EOS to encoder");
mEncoder.signalEndOfInputStream();
}
ByteBuffer[] encoderOutputBuffers = mEncoder.getOutputBuffers();
while (true) {
int encoderStatus = mEncoder.dequeueOutputBuffer(mBufferInfo, TIMEOUT_USEC);
if (encoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
// no output available yet
if (!endOfStream) {
break; // out of while
} else {
if (VERBOSE) Log.d(TAG, "no output available, spinning to await EOS");
}
} else if (encoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
// not expected for an encoder
encoderOutputBuffers = mEncoder.getOutputBuffers();
} else if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
// should happen before receiving buffers, and should only happen once
//if (mMuxerStarted) {
// throw new RuntimeException("format changed twice");
//}
MediaFormat newFormat = mEncoder.getOutputFormat();
Log.d(TAG, "encoder output format changed: " + newFormat);
// now that we have the Magic Goodies, start the muxer
//mTrackIndex = mMuxer.addTrack(newFormat);
//mMuxer.start();
//mMuxerStarted = true;
} else if (encoderStatus < 0) {
Log.w(TAG, "unexpected result from encoder.dequeueOutputBuffer: " +
encoderStatus);
// let's ignore it
} else {
ByteBuffer encodedData = encoderOutputBuffers[encoderStatus];
if (encodedData == null) {
throw new RuntimeException("encoderOutputBuffer " + encoderStatus +
" was null");
}
/*
FFMPEG needs this info.
if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
// The codec config data was pulled out and fed to the muxer when we got
// the INFO_OUTPUT_FORMAT_CHANGED status. Ignore it.
if (VERBOSE) Log.d(TAG, "ignoring BUFFER_FLAG_CODEC_CONFIG");
mBufferInfo.size = 0;
}
*/
if (mBufferInfo.size != 0) {
/*
if (!mMuxerStarted) {
throw new RuntimeException("muxer hasn't started");
}
*/
// adjust the ByteBuffer values to match BufferInfo (not needed?)
encodedData.position(mBufferInfo.offset);
encodedData.limit(mBufferInfo.offset + mBufferInfo.size);
try {
outChannel.write(encodedData);
}
catch (Exception e) {
Log.e(TAG,"Error writing output.",e);
}
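// Note (not part of the original code): this is the point where a sidecar timestamp
// could be recorded, e.g. timestampWriter.println(mBufferInfo.presentationTimeUs),
// so the desktop-side mux can recover the real frame timing instead of assuming 25fps.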
if (VERBOSE) {
Log.d(TAG, "sent " + mBufferInfo.size + " bytes to muxer, ts=" +
mBufferInfo.presentationTimeUs);
}
}
mEncoder.releaseOutputBuffer(encoderStatus, false);
if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
if (!endOfStream) {
Log.w(TAG, "reached end of stream unexpectedly");
} else {
if (VERBOSE) Log.d(TAG, "end of stream reached");
}
break; // out of while
}
}
}
}
}
-
Python: call ffmpeg command line with subprocess
15 January 2015, by Jacques le lezard. I’m trying to call a simple ffmpeg command line with subprocess.call.
(Example: ffmpeg -i input\video.mp4 -r 30 input\video.avi
) Typing the ffmpeg command directly works, but when I try to call it with subprocess.call:
subprocess.call('ffmpeg -i input\video.mp4 -r 30 input\video.avi', shell=True)
there is no error, but it doesn’t produce anything. Any idea where the problem could be? (I’m working with Python 3.4 or 2.7; I tried both.)
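For what it’s worth, a likely culprit (an assumption, not something stated in the question) is that \v in 'input\video.mp4' is read as a vertical-tab escape in a normal Python string literal, so ffmpeg never receives the path typed in the shell. Passing the command as a list of arguments with raw strings sidesteps both that and any shell quoting issues:
import subprocess
# Raw strings keep the backslashes literal; a list of arguments needs no shell=True.
subprocess.call(["ffmpeg", "-i", r"input\video.mp4", "-r", "30", r"input\video.avi"])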
-
ffmpeg mux video and audio into an mp4 file, no sound in QuickTime Player
10 November 2014, by user2789801. I’m using ffmpeg to mux a video file and an audio file into a single mp4 file. The mp4 file plays fine on Windows, but it has no sound in QuickTime Player on Mac, and I get an error message "2041 invalid sample description".
Here’s what I’m doing:
First, I open the video file and the audio file and init an output format context.
Then I add a video stream and an audio stream to the output according to the video and audio files.
Then I write the header, start muxing, and write the trailer. Here’s my code:
#include "CoreRender.h"
CoreRender::CoreRender(const char* _vp, const char * _ap, const char * _op)
{
sprintf(videoPath, "%s", _vp);
sprintf(audioPath, "%s", _ap);
sprintf(outputPath, "%s", _op);
formatContext_video = NULL;
formatContext_audio = NULL;
formatContext_output = NULL;
videoStreamIdx = -1;
outputVideoStreamIdx = -1;
videoStreamIdx = -1;
audioStreamIdx = -1;
outputVideoStreamIdx = -1;
outputAudioStreamIdx = -1;
av_init_packet(&pkt);
init();
}
void CoreRender::init()
{
av_register_all();
avcodec_register_all();
// allocate a memory for the AVFrame object
frame = (AVFrame *)av_mallocz(sizeof(AVFrame));
rgbFrame = (AVFrame *)av_mallocz(sizeof(AVFrame));
if (avformat_open_input(&formatContext_video, videoPath, 0, 0) < 0)
{
release();
}
if (avformat_find_stream_info(formatContext_video, 0) < 0)
{
release();
}
if (avformat_open_input(&formatContext_audio, audioPath, 0, 0) < 0)
{
release();
}
if (avformat_find_stream_info(formatContext_audio, 0) < 0)
{
release();
}
avformat_alloc_output_context2(&formatContext_output, NULL, NULL, outputPath);
if (!formatContext_output)
{
release();
}
ouputFormat = formatContext_output->oformat;
for (int i = 0; i < formatContext_video->nb_streams; i++)
{
// create the output AVStream according to the input AVStream
if (formatContext_video->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
{
videoStreamIdx = i;
AVStream * in_stream = formatContext_video->streams[i];
AVStream * out_stream = avformat_new_stream(formatContext_output, in_stream->codec->codec);
if (! out_stream)
{
release();
}
outputVideoStreamIdx = out_stream->index;
if (avcodec_copy_context(out_stream->codec, in_stream->codec) < 0)
{
release();
}
out_stream->codec->codec_tag = 0;
if (formatContext_output->oformat->flags & AVFMT_GLOBALHEADER)
{
out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
break;
}
}
for (int i = 0; i < formatContext_audio->nb_streams; i++)
{
if (formatContext_audio->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
{
AVCodec *encoder;
encoder = avcodec_find_encoder(AV_CODEC_ID_AAC);
audioStreamIdx = i;
AVStream *in_stream = formatContext_audio->streams[i];
AVStream *out_stream = avformat_new_stream(formatContext_output, encoder);
if (!out_stream)
{
release();
}
outputAudioStreamIdx = out_stream->index;
AVCodecContext *dec_ctx, *enc_ctx;
dec_ctx = in_stream->codec;
enc_ctx = out_stream->codec;
enc_ctx->sample_rate = dec_ctx->sample_rate;
enc_ctx->channel_layout = dec_ctx->channel_layout;
enc_ctx->channels = av_get_channel_layout_nb_channels(enc_ctx->channel_layout);
enc_ctx->time_base = { 1, enc_ctx->sample_rate };
enc_ctx->bit_rate = 480000;
if (avcodec_open2(enc_ctx, encoder, NULL) < 0)
{
release();
}
if (formatContext_output->oformat->flags & AVFMT_GLOBALHEADER)
{
out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
break;
}
}
if (!(ouputFormat->flags & AVFMT_NOFILE))
{
if (avio_open(&formatContext_output->pb, outputPath, AVIO_FLAG_WRITE) < 0)
{
release();
}
}
if (avformat_write_header(formatContext_output, NULL) < 0)
{
release();
}
}
void CoreRender::mux()
{
// find the decoder for the audio codec
codecContext_a = formatContext_audio->streams[audioStreamIdx]->codec;
codec_a = avcodec_find_decoder(codecContext_a->codec_id);
if (codec == NULL)
{
avformat_close_input(&formatContext_audio);
release();
}
codecContext_a = avcodec_alloc_context3(codec_a);
if (codec_a->capabilities&CODEC_CAP_TRUNCATED)
codecContext_a->flags |= CODEC_FLAG_TRUNCATED; /* we do not send complete frames */
if (avcodec_open2(codecContext_a, codec_a, NULL) < 0)
{
avformat_close_input(&formatContext_audio);
release();
}
int frame_index = 0;
int64_t cur_pts_v = 0, cur_pts_a = 0;
while (true)
{
AVFormatContext *ifmt_ctx;
int stream_index = 0;
AVStream *in_stream, *out_stream;
if (av_compare_ts(cur_pts_v,
formatContext_video->streams[videoStreamIdx]->time_base,
cur_pts_a,
formatContext_audio->streams[audioStreamIdx]->time_base) <= 0)
{
ifmt_ctx = formatContext_video;
stream_index = outputVideoStreamIdx;
if (av_read_frame(ifmt_ctx, &pkt) >=0)
{
do
{
if (pkt.stream_index == videoStreamIdx)
{
cur_pts_v = pkt.pts;
break;
}
} while (av_read_frame(ifmt_ctx, &pkt) >= 0);
}
else
{
break;
}
}
else
{
ifmt_ctx = formatContext_audio;
stream_index = outputAudioStreamIdx;
if (av_read_frame(ifmt_ctx, &pkt) >=0)
{
do
{
if (pkt.stream_index == audioStreamIdx)
{
cur_pts_a = pkt.pts;
break;
}
} while (av_read_frame(ifmt_ctx, &pkt) >=0);
processAudio();
}
else
{
break;
}
}
in_stream = ifmt_ctx->streams[pkt.stream_index];
out_stream = formatContext_output->streams[stream_index];
if (pkt.pts == AV_NOPTS_VALUE)
{
AVRational time_base1 = in_stream->time_base;
int64_t calc_duration = (double)AV_TIME_BASE / av_q2d(in_stream->r_frame_rate);
pkt.pts = (double)(frame_index * calc_duration) / (double)(av_q2d(time_base1) * AV_TIME_BASE);
pkt.dts = pkt.pts;
pkt.duration = (double)calc_duration / (double)(av_q2d(time_base1) * AV_TIME_BASE);
frame_index++;
}
pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, (enum AVRounding) (AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, (enum AVRounding) (AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
pkt.pos = -1;
pkt.stream_index = stream_index;
LOGE("Write 1 Packet. size:%5d\tpts:%8d", pkt.size, pkt.pts);
if (av_interleaved_write_frame(formatContext_output, &pkt) < 0)
{
break;
}
av_free_packet(&pkt);
}
av_write_trailer(formatContext_output);
}
void CoreRender::processAudio()
{
int got_frame_v = 0;
AVFrame *tempFrame = (AVFrame *)av_mallocz(sizeof(AVFrame));
avcodec_decode_audio4(formatContext_audio->streams[audioStreamIdx]->codec, tempFrame, &got_frame_v, &pkt);
if (got_frame_v)
{
tempFrame->pts = av_frame_get_best_effort_timestamp(tempFrame);
int ret;
int got_frame_local;
int * got_frame = &got_frame_v;
AVPacket enc_pkt;
int(*enc_func)(AVCodecContext *, AVPacket *, const AVFrame *, int *) = avcodec_encode_audio2;
if (!got_frame)
{
got_frame = &got_frame_local;
}
// encode filtered frame
enc_pkt.data = NULL;
enc_pkt.size = 0;
av_init_packet(&enc_pkt);
ret = enc_func(codecContext_a, &enc_pkt, tempFrame, got_frame);
av_frame_free(&tempFrame);
av_frame_free(&tempFrame);
if (ret < 0)
{
return ;
}
if (!(*got_frame))
{
return ;
}
enc_pkt.stream_index = outputAudioStreamIdx;
av_packet_rescale_ts(&enc_pkt,
formatContext_output->streams[outputAudioStreamIdx]->codec->time_base,
formatContext_output->streams[outputAudioStreamIdx]->time_base);
}
}
void CoreRender::release()
{
avformat_close_input(&formatContext_video);
avformat_close_input(&formatContext_audio);
if (formatContext_output&& !(ouputFormat->flags & AVFMT_NOFILE))
avio_close(formatContext_output->pb);
avformat_free_context(formatContext_output);
}
CoreRender::~CoreRender()
{
}
As you can see, I transcode the audio into AAC and keep the video as it is.
Here’s how I use it:
CoreRender render("d:\\bg.mp4", "d:\\music.mp3", "d:\\output.mp4");
render.mux();
return 0;
The video file is always in H.264 format.
So what am I doing wrong?