
Media (5)
-
ED-ME-5 1-DVD
11 October 2011
Updated: October 2011
Language: English
Type: Audio
-
Revolution of Open-source and film making towards open film making
6 October 2011
Updated: July 2013
Language: English
Type: Text
-
Valkaama DVD Cover Outside
4 October 2011
Updated: October 2011
Language: English
Type: Image
-
Valkaama DVD Label
4 October 2011
Updated: February 2013
Language: English
Type: Image
-
Valkaama DVD Cover Inside
4 October 2011
Updated: October 2011
Language: English
Type: Image
Other articles (36)
-
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analysed to extract the data needed for indexing by search engines, and is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...) -
Support for all types of media
10 April 2011
Unlike many modern document-sharing software packages and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others); audio (MP3, Ogg, Wav and others); video (Avi, MP4, Ogv, mpg, mov, wmv and others); textual content, code or other formats (Open Office, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
-
List of compatible distributions
26 April 2011
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

Distribution name | Version name | Version number
Debian | Squeeze | 6.x.x
Debian | Wheezy | 7.x.x
Debian | Jessie | 8.x.x
Ubuntu | The Precise Pangolin | 12.04 LTS
Ubuntu | The Trusty Tahr | 14.04
If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)
On other sites (7167)
-
Add mp3 file sound in the middle of the mp4/avi/mpeg [on hold]
17 November 2014, by user1018697
I tried to do something and ran into some difficulties:
1. I have a short movie file of 1 minute (movie.mp4)
2. I have an mp3 file of 10 seconds (voice.mp3)
I wish to add this mp3 file in the middle of my film (at 30 seconds).
My question is:
Is it possible to simply create a new video (movie2.mp4) with my mp3 added in the middle of the film (at 30 s)?
Or is there a C# or C++ library to do this? Thanks
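Whether in C# or C++, the core of the operation described above, once both files are decoded to PCM at the same sample rate, is just sample addition at an offset; the container work (demuxing movie.mp4, re-encoding, muxing) would be done with a library such as FFmpeg. A minimal, hypothetical sketch of the mixing step alone (the mixAt name and the mono 16-bit PCM assumption are illustrative, not any library's API):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Mix `clip` into `track` starting at `offsetSamples`, clamping the sum
// to the 16-bit range. Both buffers are assumed to be mono PCM decoded
// at the same sample rate.
std::vector<int16_t> mixAt(std::vector<int16_t> track,
                           const std::vector<int16_t>& clip,
                           std::size_t offsetSamples)
{
    for (std::size_t i = 0;
         i < clip.size() && offsetSamples + i < track.size(); ++i) {
        int sum = track[offsetSamples + i] + clip[i];
        track[offsetSamples + i] =
            static_cast<int16_t>(std::clamp(sum, -32768, 32767));
    }
    return track;
}
```

At 44100 Hz, the 30-second mark would be offsetSamples = 30 * 44100; the mixed PCM is then re-encoded and muxed back with the untouched video stream.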
-
ffmpeg mux video and audio into an mp4 file, no sound in QuickTime Player
10 November 2014, by user2789801
I'm using ffmpeg to mux a video file and an audio file into a single mp4 file. The mp4 file plays fine on Windows, but it has no sound in QuickTime Player on Mac, and I get an error message "2041 invalid sample description".
Here's what I'm doing.
First, I open the video file and the audio file and init an output format context.
Then I add a video stream and an audio stream according to the video and audio files.
Then I write the header, start muxing, and write the trailer. Here's my code:
#include "CoreRender.h"
CoreRender::CoreRender(const char* _vp, const char * _ap, const char * _op)
{
sprintf(videoPath, "%s", _vp);
sprintf(audioPath, "%s", _ap);
sprintf(outputPath, "%s", _op);
formatContext_video = NULL;
formatContext_audio = NULL;
formatContext_output = NULL;
videoStreamIdx = -1;
audioStreamIdx = -1;
outputVideoStreamIdx = -1;
outputAudioStreamIdx = -1;
av_init_packet(&pkt);
init();
}
void CoreRender::init()
{
av_register_all();
avcodec_register_all();
// allocate the AVFrame objects (av_frame_alloc() also sets field defaults,
// unlike a raw av_mallocz of sizeof(AVFrame))
frame = av_frame_alloc();
rgbFrame = av_frame_alloc();
if (avformat_open_input(&formatContext_video, videoPath, 0, 0) < 0)
{
release();
}
if (avformat_find_stream_info(formatContext_video, 0) < 0)
{
release();
}
if (avformat_open_input(&formatContext_audio, audioPath, 0, 0) < 0)
{
release();
}
if (avformat_find_stream_info(formatContext_audio, 0) < 0)
{
release();
}
avformat_alloc_output_context2(&formatContext_output, NULL, NULL, outputPath);
if (!formatContext_output)
{
release();
}
ouputFormat = formatContext_output->oformat;
for (int i = 0; i < formatContext_video->nb_streams; i++)
{
// create the output AVStream according to the input AVStream
if (formatContext_video->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
{
videoStreamIdx = i;
AVStream * in_stream = formatContext_video->streams[i];
AVStream * out_stream = avformat_new_stream(formatContext_output, in_stream->codec->codec);
if (! out_stream)
{
release();
}
outputVideoStreamIdx = out_stream->index;
if (avcodec_copy_context(out_stream->codec, in_stream->codec) < 0)
{
release();
}
out_stream->codec->codec_tag = 0;
if (formatContext_output->oformat->flags & AVFMT_GLOBALHEADER)
{
out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
break;
}
}
for (int i = 0; i < formatContext_audio->nb_streams; i++)
{
if (formatContext_audio->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
{
AVCodec *encoder;
encoder = avcodec_find_encoder(AV_CODEC_ID_AAC);
audioStreamIdx = i;
AVStream *in_stream = formatContext_audio->streams[i];
AVStream *out_stream = avformat_new_stream(formatContext_output, encoder);
if (!out_stream)
{
release();
}
outputAudioStreamIdx = out_stream->index;
AVCodecContext *dec_ctx, *enc_ctx;
dec_ctx = in_stream->codec;
enc_ctx = out_stream->codec;
enc_ctx->sample_rate = dec_ctx->sample_rate;
enc_ctx->channel_layout = dec_ctx->channel_layout;
enc_ctx->channels = av_get_channel_layout_nb_channels(enc_ctx->channel_layout);
enc_ctx->time_base = { 1, enc_ctx->sample_rate };
enc_ctx->bit_rate = 480000;
if (avcodec_open2(enc_ctx, encoder, NULL) < 0)
{
release();
}
if (formatContext_output->oformat->flags & AVFMT_GLOBALHEADER)
{
out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
break;
}
}
if (!(ouputFormat->flags & AVFMT_NOFILE))
{
if (avio_open(&formatContext_output->pb, outputPath, AVIO_FLAG_WRITE) < 0)
{
release();
}
}
if (avformat_write_header(formatContext_output, NULL) < 0)
{
release();
}
}
void CoreRender::mux()
{
// find the decoder for the audio codec
codecContext_a = formatContext_audio->streams[audioStreamIdx]->codec;
codec_a = avcodec_find_decoder(codecContext_a->codec_id);
if (codec_a == NULL)
{
avformat_close_input(&formatContext_audio);
release();
}
codecContext_a = avcodec_alloc_context3(codec_a);
if (codec_a->capabilities & CODEC_CAP_TRUNCATED)
codecContext_a->flags |= CODEC_FLAG_TRUNCATED; /* we do not send complete frames */
if (avcodec_open2(codecContext_a, codec_a, NULL) < 0)
{
avformat_close_input(&formatContext_audio);
release();
}
int frame_index = 0;
int64_t cur_pts_v = 0, cur_pts_a = 0;
while (true)
{
AVFormatContext *ifmt_ctx;
int stream_index = 0;
AVStream *in_stream, *out_stream;
if (av_compare_ts(cur_pts_v,
formatContext_video->streams[videoStreamIdx]->time_base,
cur_pts_a,
formatContext_audio->streams[audioStreamIdx]->time_base) <= 0)
{
ifmt_ctx = formatContext_video;
stream_index = outputVideoStreamIdx;
if (av_read_frame(ifmt_ctx, &pkt) >=0)
{
do
{
if (pkt.stream_index == videoStreamIdx)
{
cur_pts_v = pkt.pts;
break;
}
} while (av_read_frame(ifmt_ctx, &pkt) >= 0);
}
else
{
break;
}
}
else
{
ifmt_ctx = formatContext_audio;
stream_index = outputAudioStreamIdx;
if (av_read_frame(ifmt_ctx, &pkt) >=0)
{
do
{
if (pkt.stream_index == audioStreamIdx)
{
cur_pts_a = pkt.pts;
break;
}
} while (av_read_frame(ifmt_ctx, &pkt) >=0);
processAudio();
}
else
{
break;
}
}
in_stream = ifmt_ctx->streams[pkt.stream_index];
out_stream = formatContext_output->streams[stream_index];
if (pkt.pts == AV_NOPTS_VALUE)
{
AVRational time_base1 = in_stream->time_base;
int64_t calc_duration = (double)AV_TIME_BASE / av_q2d(in_stream->r_frame_rate);
pkt.pts = (double)(frame_index * calc_duration) / (double)(av_q2d(time_base1) * AV_TIME_BASE);
pkt.dts = pkt.pts;
pkt.duration = (double)calc_duration / (double)(av_q2d(time_base1) * AV_TIME_BASE);
frame_index++;
}
pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, (enum AVRounding) (AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, (enum AVRounding) (AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
pkt.pos = -1;
pkt.stream_index = stream_index;
LOGE("Write 1 Packet. size:%5d\tpts:%8lld", pkt.size, (long long)pkt.pts);
if (av_interleaved_write_frame(formatContext_output, &pkt) < 0)
{
break;
}
av_free_packet(&pkt);
}
av_write_trailer(formatContext_output);
}
void CoreRender::processAudio()
{
int got_frame_v = 0;
AVFrame *tempFrame = av_frame_alloc();
avcodec_decode_audio4(formatContext_audio->streams[audioStreamIdx]->codec, tempFrame, &got_frame_v, &pkt);
if (got_frame_v)
{
tempFrame->pts = av_frame_get_best_effort_timestamp(tempFrame);
int ret;
int got_frame_local;
int * got_frame = &got_frame_v;
AVPacket enc_pkt;
int(*enc_func)(AVCodecContext *, AVPacket *, const AVFrame *, int *) = avcodec_encode_audio2;
if (!got_frame)
{
got_frame = &got_frame_local;
}
// encode filtered frame
enc_pkt.data = NULL;
enc_pkt.size = 0;
av_init_packet(&enc_pkt);
ret = enc_func(codecContext_a, &enc_pkt, tempFrame, got_frame);
av_frame_free(&tempFrame);
if (ret < 0)
{
return ;
}
if (!(*got_frame))
{
return ;
}
enc_pkt.stream_index = outputAudioStreamIdx;
av_packet_rescale_ts(&enc_pkt,
formatContext_output->streams[outputAudioStreamIdx]->codec->time_base,
formatContext_output->streams[outputAudioStreamIdx]->time_base);
}
}
void CoreRender::release()
{
avformat_close_input(&formatContext_video);
avformat_close_input(&formatContext_audio);
if (formatContext_output&& !(ouputFormat->flags & AVFMT_NOFILE))
avio_close(formatContext_output->pb);
avformat_free_context(formatContext_output);
}
CoreRender::~CoreRender()
{
}
As you can see, I transcode the audio into AAC and keep the video as it is.
Here's how I use it:
CoreRender render("d:\\bg.mp4", "d:\\music.mp3", "d:\\output.mp4");
render.mux();
return 0;
The video file is always in H.264 format.
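One thing worth sanity-checking in muxing code like this is the timestamp arithmetic: av_rescale_q_rnd() converts a tick count from one time base (a rational, in seconds per tick) to another, and timestamps rescaled against the wrong base are a classic way to produce a file that some players reject. The conversion itself can be sketched standalone (a simplified, hypothetical stand-in for FFmpeg's av_rescale_q, without its 64-bit overflow protection):

```cpp
#include <cstdint>

// A rational time base: each tick lasts num/den seconds (e.g. {1, 90000}).
struct Rational { int64_t num, den; };

// Convert `ticks` counted in time base `from` into time base `to`,
// rounding to the nearest tick. Simplified: no overflow guard like
// FFmpeg's av_rescale_q_rnd, and positive values only.
int64_t rescale(int64_t ticks, Rational from, Rational to)
{
    // ticks * (from.num / from.den) seconds, expressed in `to` ticks:
    // ticks * from.num * to.den / (from.den * to.num), rounded half up.
    int64_t n = ticks * from.num * to.den;
    int64_t d = from.den * to.num;
    return (n + d / 2) / d;
}
```

For example, a PTS of 90000 in a {1, 90000} base is one second of media time, which must come out as 1000 ticks in a {1, 1000} base.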
So what am I doing wrong?
-
Apply sound effects on a video file
31 March 2013, by talhamalik22
I am a little misguided here and it seems I am totally lost. I am developing an Android app whose core idea is a video recorder and video player that applies sound effects to the voices, or any sound, that it records. By "sound effect" I mean that if I make a video of a person giving a speech, the video should be untouched but the voice should sound like the voice in the Talking Tom Cat app. I hope you understand the idea. A similar app is Helium Booth; you can check it here. I am trying to use libraries like libSonic and libpd, and I tried to use XUGGLE too.
I read somewhere that Xuggle is not really developed for mobile devices, so I left it. Now what I want is to apply the effect to the voice at run time, i.e. while recording, the pitch of the sound should be altered and saved immediately. What I get with these libraries is that I can only apply the sound effect after the video is recorded. That means I would need to rip the audio from the video, apply the change in pitch and frequency, and then concatenate this audio file with the old video file again. And I have no idea how to do that.
Please show me the right approach and tools if possible. Regards
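For intuition, the crudest version of a Talking-Tom-style voice is pitch shifting by resampling: keep every n-th sample and play the result at the original rate, which raises the pitch but also shortens the clip; duration-preserving pitch shifters such as the one in libSonic avoid that by combining the resampling with time-stretching. A hypothetical sketch of the naive approach on raw PCM (names and the nearest-neighbour resampling are illustrative only):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Naive pitch shift: resample mono PCM by `factor` (> 1 raises the pitch)
// using nearest-neighbour sampling. Played back at the original sample
// rate, the result sounds `factor` times higher but lasts 1/factor as
// long; real-time voice changers keep the duration with time-stretching.
std::vector<int16_t> pitchShift(const std::vector<int16_t>& in, double factor)
{
    std::vector<int16_t> out;
    out.reserve(static_cast<std::size_t>(in.size() / factor) + 1);
    for (double pos = 0.0; pos < in.size(); pos += factor)
        out.push_back(in[static_cast<std::size_t>(pos)]);
    return out;
}
```

Applied per recorded audio buffer, this kind of transform is what would run between the microphone callback and the encoder in the record-time pipeline the question asks for.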