
Media (91)
-
Chuck D with Fine Arts Militia - No Meaning No
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Paul Westerberg - Looking Up in Heaven
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Le Tigre - Fake French
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Thievery Corporation - DC 3000
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Dan the Automator - Relaxation Spa Treatment
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Gilberto Gil - Oslodum
15 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (98)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
Once it is enabled, MediaSPIP init automatically sets up a preconfiguration so that the new feature is immediately operational; no separate configuration step is required.
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can of course add your own via the form at the bottom of the page.
On other sites (12748)
-
FFMPEG: AV out of sync when writing a part of a video to a new file
30 March 2017, by IT_Layman
I'm developing a data preprocessing program for a computer vision project using FFMPEG and a face detection API. The program needs to extract the shots that contain human faces from a given input video file and write them to a new file. But when I play the output video file generated by the program, the video and audio tracks are out of sync. A possible reason is that the timestamps of the video or audio frames are set incorrectly, but I can't fix it myself as I'm not very familiar with the FFMPEG library. Please help me resolve this out-of-sync issue.
To simplify the code shown below, I have removed all face detection code and represent it with an empty function called faceDetect instead.
// ffmpegAPI.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <iostream>
extern "C" {
#include <libavutil/opt.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
#include <libavutil/channel_layout.h>
#include <libavutil/common.h>
#include <libavutil/imgutils.h>
#include <libavutil/mathematics.h>
#include <libavutil/samplefmt.h>
#include <libavutil/pixdesc.h>
#include <libswscale/swscale.h>
}
bool faceDetect(AVFrame *frame)
{
/*...*/
return true;
}
int main(int argc, char **argv)
{
int64_t videoPts = 0, audioPts = 0;
int samples_count = 0;
AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
AVOutputFormat *ofmt = NULL;
AVPacket pkt;
AVFrame *frame = NULL;
int videoindex = -1; int audioindex = -1;
double videoTime = DBL_MAX;
const char *in_filename, *out_filename;
int ret, i;
in_filename = "C:\\input.flv";//Input file name
out_filename = "C:\\output.avi";//Output file name
av_register_all();
//Open input file
if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
fprintf(stderr, "Could not open input file '%s'", in_filename);
goto end;
}
//Find input streams
if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
fprintf(stderr, "Failed to retrieve input stream information");
goto end;
}
//Retrieve AV stream information
for (i = 0; i < ifmt_ctx->nb_streams; i++)
{
AVStream *stream;
AVCodecContext *codec_ctx;
stream = ifmt_ctx->streams[i];//Get current stream
codec_ctx = stream->codec;//Get current stream codec
if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO)
{
videoindex = i;//video stream index
}
else if (codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO)
{
audioindex = i;//audio stream index
}
if (videoindex == -1)//no video stream is found
{
printf("can't find video stream\n");
goto end;
}
}
av_dump_format(ifmt_ctx, 0, in_filename, 0);
//Configure output
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
if (!ofmt_ctx) {
fprintf(stderr, "Could not create output context\n");
ret = AVERROR_UNKNOWN;
goto end;
}
ofmt = ofmt_ctx->oformat;
//Configure output streams
for (i = 0; i < ifmt_ctx->nb_streams; i++) {//Traversal input streams
AVStream *in_stream = ifmt_ctx->streams[i];//Get current stream
AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);//Create a corresponding output stream
if (!out_stream) {
fprintf(stderr, "Failed allocating output stream\n");
ret = AVERROR_UNKNOWN;
goto end;
}
//Copy codec from current input stream to corresponding output stream
ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
if (ret < 0) {
fprintf(stderr, "Failed to copy context from input to output stream codec context\n");
goto end;
}
if (i == videoindex)//Video stream
{
if (out_stream->codec->codec_id == AV_CODEC_ID_H264)
{
out_stream->codec->me_range = 16;
out_stream->codec->max_qdiff = 4;
out_stream->codec->qmin = 10;
out_stream->codec->qmax = 51;
out_stream->codec->qcompress = 1;
}
}
AVCodecContext *codec_ctx = out_stream->codec;
if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
|| codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
//Find codec encoder
AVCodec *encoder = avcodec_find_encoder(codec_ctx->codec_id);
if (!encoder) {
av_log(NULL, AV_LOG_FATAL, "Necessary encoder not found\n");
ret = AVERROR_INVALIDDATA;
goto end;
}
//Open encoder
ret = avcodec_open2(codec_ctx, encoder, NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
goto end;
}
out_stream->codec->codec_tag = 0;
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
//Open the decoder for input stream
codec_ctx = in_stream->codec;
if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
|| codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
ret = avcodec_open2(codec_ctx,
avcodec_find_decoder(codec_ctx->codec_id), NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
}
}
}
av_dump_format(ofmt_ctx, 0, out_filename, 1);
//Open output file for writing
if (!(ofmt->flags & AVFMT_NOFILE)) {
ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
if (ret < 0) {
fprintf(stderr, "Could not open output file '%s'", out_filename);
goto end;
}
}
//Write video header
ret = avformat_write_header(ofmt_ctx, NULL);
if (ret < 0) {
fprintf(stderr, "Error occurred when opening output file\n");
goto end;
}
//Write frames in a loop
while (1) {
AVStream *in_stream, *out_stream;
//Read one frame from the input file
ret = av_read_frame(ifmt_ctx, &pkt);
if (ret < 0)
break;
in_stream = ifmt_ctx->streams[pkt.stream_index];//Get current input stream
out_stream = ofmt_ctx->streams[pkt.stream_index];//Get current output stream
if (pkt.stream_index == videoindex)//video frame
{
int got_frame;
frame = av_frame_alloc();
if (!frame) {
ret = AVERROR(ENOMEM);
break;
}
//Readjust packet timestamp for decoding
av_packet_rescale_ts(&pkt,
in_stream->time_base,
in_stream->codec->time_base);
//Decode video frame
int len = avcodec_decode_video2(in_stream->codec, frame, &got_frame, &pkt);
if (len < 0)
{
av_frame_free(&frame);
av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
break;
}
if (got_frame)//Got a decoded video frame
{
int64_t pts = av_frame_get_best_effort_timestamp(frame);
//determine if the frame image contains human face
bool result = faceDetect(frame);
if (result) //face contained
{
videoTime = pts* av_q2d(out_stream->time_base);
frame->pts = videoPts++;//Set pts of video frame
AVPacket enc_pkt;
av_log(NULL, AV_LOG_INFO, "Encoding video frame\n");
//Create packet for encoding
enc_pkt.data = NULL;
enc_pkt.size = 0;
av_init_packet(&enc_pkt);
//Encoding frame
ret = avcodec_encode_video2(out_stream->codec, &enc_pkt,
frame, &got_frame);
av_frame_free(&frame);
if (!(got_frame))
ret = 0;
/* Configure encoding properties */
enc_pkt.stream_index = videoindex;
av_packet_rescale_ts(&enc_pkt,
out_stream->codec->time_base,
out_stream->time_base);
av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n");
/* Write encoded frame */
ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
if (ret < 0)
break;
}
else //no face contained
{
//Set the videoTime as maximum double value,
//making the corresponding audio frame not been processed
if (videoTime < DBL_MAX)
videoTime = DBL_MAX;
}
}
else
{
av_frame_free(&frame);
}
}
else//Audio frame
{
//Get current frame time
double audioTime = pkt.pts * av_q2d(in_stream->time_base);
if (audioTime >= videoTime)
{//The current frame should be written into output file
int got_frame;
frame = av_frame_alloc();
if (!frame) {
ret = AVERROR(ENOMEM);
break;
}
//Readjust packet timestamp for decoding
av_packet_rescale_ts(&pkt,
in_stream->time_base,
in_stream->codec->time_base);
//Decode audio frame
int len = avcodec_decode_audio4(in_stream->codec, frame, &got_frame, &pkt);
if (len < 0)
{
av_frame_free(&frame);
av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
break;
}
if (got_frame)//Got a decoded audio frame
{
//Set pts of audio frame
frame->pts = audioPts;
audioPts += frame->nb_samples;
AVPacket enc_pkt;
av_log(NULL, AV_LOG_INFO, "Encoding audio frame");
//Create packet for encoding
enc_pkt.data = NULL;
enc_pkt.size = 0;
av_init_packet(&enc_pkt);
//Encode audio frame
ret = avcodec_encode_audio2(out_stream->codec, &enc_pkt,
frame, &got_frame);
av_frame_free(&frame);
if (!(got_frame))
ret = 0;
/* Configure encoding properties */
enc_pkt.stream_index = audioindex;
av_packet_rescale_ts(&enc_pkt,
out_stream->codec->time_base,
out_stream->time_base);
av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n");
/* Write encoded frame */
ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
if (ret < 0)
break;
}
else //Shouldn't be written
{
av_frame_free(&frame);
}
}
}
av_packet_unref(&pkt);
}
//Write video trailer
av_write_trailer(ofmt_ctx);
end://Clean up
av_log(NULL, AV_LOG_INFO, "Clean up\n");
av_frame_free(&frame);
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
avcodec_close(ifmt_ctx->streams[i]->codec);
if (ofmt_ctx && ofmt_ctx->nb_streams > i && ofmt_ctx->streams[i] && ofmt_ctx->streams[i]->codec)
avcodec_close(ofmt_ctx->streams[i]->codec);
}
avformat_close_input(&ifmt_ctx);
/* Close output file */
if (ofmt_ctx && !(ofmt_ctx->oformat->flags & AVFMT_NOFILE))
avio_closep(&ofmt_ctx->pb);
avformat_free_context(ofmt_ctx);
if (ret < 0 && ret != AVERROR_EOF) {
char buf[256];
av_strerror(ret, buf, sizeof(buf));
av_log(NULL, AV_LOG_ERROR, "Error occurred:%s\n", buf);
system("Pause");
return 1;
}
//Program end
printf("The End.\n");
system("Pause");
return 0;
}
-
FFMPEG: Fill/change (part of) audio waveform color as per actual progress with respect to time progress
13 August 2018, by Software Development Consultant
I am trying to build a command that generates a waveform from an mp3 file, shows it over a background image, and plays the audio. Together with this, I want the waveform color to change from left to right (something like a progress bar) as the overall video time elapses.
I have created the following command, which shows a progress bar by using drawbox to fill a box color according to the current time position:
ffmpeg -y -loop 1 -threads 0 -i sample_background.png -i input.mp3 -filter_complex "color=red@0.5:s=1280x100[Color];[0:v]drawbox=0:155:1280:100:gray@1:t=fill[baserect];[1:a]aformat=channel_layouts=mono,showwaves=s=1280x100:rate=7:mode=cline:scale=sqrt:colors=0xffffff[waveform];[baserect][waveform]overlay=0:155[v1];[v1][Color]overlay=x='if(gte(t,0), -W+(t)*64, NAN)':y=155:format=yuv444[v2]" -map "[v2]" -map 1:a -c:v libx264 -crf 35 -ss 0 -t 20 -c:a copy -shortest -pix_fmt yuv420p -threads 0 output_withwave_and_progresbar.mp4
But I want to show progress inside the generated audio waveform instead of filling a rectangle with drawbox.
So I tried generating two waveforms in two different colors and overlaying one on the other, intending the top waveform to be displayed only from the left edge up to the x position corresponding to the current time:
ffmpeg -y -loop 1 -threads 0 -i sample_background.png -i input.mp3 -filter_complex "[0:v]drawbox=0:155:1280:100:gray@1:t=fill[baserect];[1:a]aformat=channel_layouts=mono,showwaves=s=1280x100:rate=7:mode=cline:scale=sqrt:colors=0xff0000[waveform];[1:a]aformat=channel_layouts=mono,showwaves=s=1280x100:rate=7:mode=cline:scale=sqrt:colors=0xffffff[waveform2];[baserect][waveform]overlay=0:155[v1];[v1][waveform2]overlay=x='if(gte(t,0), -W+(t)*64, NAN)':y=155:format=yuv444[v2]" -map "[v2]" -map 1:a -c:v libx264 -crf 35 -ss 0 -t 20 -c:a copy -shortest -pix_fmt yuv420p -threads 0 test.mp4
But I cannot find a way to get a wipe effect from left to right; currently the top waveform slides (since I am changing the x of the overlay).
It could probably be done with alphamerge, making all other pixels transparent and showing only the pixels whose x is less than the current position, but I cannot find out how to do this. Any mp3 file can be used; I have currently set a 20-second duration.
Can someone please guide me on how to do this?
Thanks.
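One way to get the wipe, following the alphamerge idea from the question, is to build a grayscale mask whose white region grows with time and merge it in as the alpha plane of the white waveform. This is an untested sketch: the geq expression and the hard-coded 20 s duration are assumptions, and the filter labels are illustrative.

```shell
# Untested sketch: geq's T variable is the frame time in seconds, so the mask
# is white for X < W*T/20 and black elsewhere; alphamerge uses it as the alpha
# plane of the white waveform, which is then revealed in place, left to right.
ffmpeg -y -loop 1 -i sample_background.png -i input.mp3 -filter_complex "
[0:v]drawbox=0:155:1280:100:gray@1:t=fill[base];
[1:a]aformat=channel_layouts=mono,showwaves=s=1280x100:rate=7:mode=cline:scale=sqrt:colors=0xff0000[red];
[1:a]aformat=channel_layouts=mono,showwaves=s=1280x100:rate=7:mode=cline:scale=sqrt:colors=0xffffff[white];
color=white:s=1280x100:r=7,format=gray,geq=lum='if(lt(X,W*T/20),255,0)'[mask];
[white][mask]alphamerge[wipe];
[base][red]overlay=0:155[v1];
[v1][wipe]overlay=0:155[v2]" \
-map "[v2]" -map 1:a -t 20 -c:v libx264 -crf 35 -c:a copy -shortest -pix_fmt yuv420p output.mp4
```

Forcing r=7 on the color source is meant to keep the mask's frame rate in step with showwaves' rate=7 output; both waveform layers stay at a fixed x, so nothing slides.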
-
ffmpeg replace part of audio file with looped audio
1 December 2015, by user1202648
I am quite new to ffmpeg and I am trying to replace a part of a first audio file with a second file. The second file can be too short, so some sort of loop is needed.
After some research I came up with the following command arguments, and they give me the expected output as long as I only do one replacement. But I would like to do multiple replacements. Any help on what I am doing wrong? Any suggestions or remarks on the approach are also very welcome.
(Any typos in the commands below can be ignored; I generate the command by script and simplified the names for readability.)
Works (one replacement):
"ffmpeg.exe" -y -i "first.wav" -i "second.wav" -filter_complex "[1:a][1:a][1:a]concat=n=3:v=0:a=1,asetpts=PTS-STARTPTS[replaceBase];[0:a]atrim=0:3,asetpts=PTS-STARTPTS[partA];[replaceBase]atrim=0:2,asetpts=PTS-STARTPTS[replaceA];[0:a]atrim=start=5,asetpts=PTS-STARTPTS[partB];[partA][replaceA][partB]concat=n=3:v=0:a=1[aout]" -map "[aout]" Out.wav
Does not work (multiple replacements):
"ffmpeg.exe" -y -i "first.wav" -i "second.wav" -filter_complex "[1:a][1:a][1:a]concat=n=3:v=0:a=1,asetpts=PTS-STARTPTS[replaceBase];[0:a]atrim=0:3,asetpts=PTS-STARTPTS[partA];[replaceBase]atrim=0:2,asetpts=PTS-STARTPTS[replaceA];[0:a]atrim=5:4,asetpts=PTS-STARTPTS[partB];[replaceBase]atrim=0:2,asetpts=PTS-STARTPTS[replaceB];[0:a]atrim=start=6,asetpts=PTS-STARTPTS[partC];[partA][replaceA][partB][replaceB][PartC]concat=n=4:v=0:a=1[aout]" -map "[aout]" Out.wav
ffmpeg version N-76860-g72eaf72 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 5.2.0 (GCC)
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
libavutil 55. 9.100 / 55. 9.100
libavcodec 57. 16.100 / 57. 16.100
libavformat 57. 19.100 / 57. 19.100
libavdevice 57. 0.100 / 57. 0.100
libavfilter 6. 15.100 / 6. 15.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, wav, from '3897583stereo.wav':
Duration: 00:00:12.07, bitrate: 256 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 8000 Hz, 2 channels, s16, 256 kb/s
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, wav, from 'beep-021.wav':
Metadata:
encoder : Lavf57.19.100
Duration: 00:00:00.30, bitrate: 1413 kb/s
Stream #1:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 2 channels, s16, 1411 kb/s
[wav @ 057242c0] Invalid stream specifier: replaceBase.
Last message repeated 1 times
Stream specifier 'replaceBase' matches no streams.
Thanks in advance!
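The "Invalid stream specifier: replaceBase" error is consistent with a general filtergraph rule: a labelled output pad can be consumed by only one filter input, and the failing command reads [replaceBase] twice, so the second reference is reinterpreted as an input-stream specifier and fails. Duplicating the pad with asplit avoids that. Two further points worth checking: atrim takes start and end in seconds (not start and duration), so atrim=5:4 selects an empty range, and the final concat lists five segments, so it needs n=5 and a case-matching label ([partC], not [PartC]). A hedged sketch; the end times 9 and 11 below are illustrative assumptions, not taken from the question:

```shell
# Sketch: asplit gives each replacement its own copy of the looped beep, so
# [replaceBase] is never consumed twice; concat now joins five segments.
ffmpeg -y -i first.wav -i second.wav -filter_complex "
[1:a][1:a][1:a]concat=n=3:v=0:a=1,asetpts=PTS-STARTPTS,asplit=2[rep1][rep2];
[0:a]atrim=0:3,asetpts=PTS-STARTPTS[partA];
[rep1]atrim=0:2,asetpts=PTS-STARTPTS[replaceA];
[0:a]atrim=5:9,asetpts=PTS-STARTPTS[partB];
[rep2]atrim=0:2,asetpts=PTS-STARTPTS[replaceB];
[0:a]atrim=start=11,asetpts=PTS-STARTPTS[partC];
[partA][replaceA][partB][replaceB][partC]concat=n=5:v=0:a=1[aout]" \
-map "[aout]" Out.wav
```

Each additional replacement needs one more asplit output and one more pair of segments in the final concat (with n raised accordingly).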