
Media (9)
-
Stereo master soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Audio
-
Elephants Dream - Cover of the soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Image
-
#7 Ambience
16 October 2011, by
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#3 The Safest Place
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
Other articles (75)
-
Creating farms of unique websites
13 April 2011, by
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; and the creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
-
Websites made with MediaSPIP
2 May 2011, by
This page lists some websites based on MediaSPIP.
-
Permissions overridden by plugins
27 April 2010, by
Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page
On other sites (13558)
-
Why aren't the videos merging?
25 December 2023, by user9987656
I have 100 videos (a total duration of 10 hours) from one author, and I'm trying to merge them into one large video, but I'm encountering an issue. ffmpeg gives me several errors with the following message:


"mp4 @ 000002067f56ecc0] Non-monotonic DTS in output stream 0:0 ; previous : 968719404, current : 434585687 ; changing to 968719405. This may result in incorrect timestamps in the output file."


As a result, I end up with a 10-hour video, but I can only view the first 3 hours and the last 2 hours.


What could be causing this problem? I'm using the stream copy command below.


-f concat -safe 0 -i input.txt -c copy
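
For reference, a minimal sketch of the full concat invocation being described (the file names and the -fflags +genpts flag are illustrative assumptions, not taken from the post). The concat demuxer reads a list file and stream-copies each entry; regenerating presentation timestamps is one commonly suggested workaround when the sources' timestamps are inconsistent, and re-encoding instead of -c copy may be needed if that is not enough.

# input.txt - one entry per source clip (hypothetical file names)
file 'part001.mp4'
file 'part002.mp4'

# same stream-copy concat, but asking ffmpeg to regenerate missing presentation timestamps
ffmpeg -fflags +genpts -f concat -safe 0 -i input.txt -c copy output.mp4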



-
Resampling audio using libswresample leaves a small amount of noise after resampling
20 July 2020, by Milo
I'm trying to resample audio from 44 kHz to 48 kHz, and I'm getting a small, faint noise after resampling, as if someone is gently tapping the mic. This happens both ways, from 48 kHz to 44 kHz and vice versa.


I've read that this can happen because the SwrContext still has some data left, and that I should flush the context before resampling the next frame. Although this helps a little (the noise is less noticeable), it's still present.
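
(For reference, a flush in libswresample is just a swr_convert call with a NULL input. Below is a minimal sketch of that call, assuming an already initialized SwrContext *swr and an output buffer array out_bufs sized for max_out samples per channel; these names are placeholders and not taken from the code further down.)

    // Drain whatever samples the resampler is still buffering internally.
    // swr, out_bufs and max_out are assumed to exist in the caller (sketch only).
    int flushed = swr_convert(swr, out_bufs, max_out, NULL, 0);
    if (flushed < 0)
    {
        printf("swr_convert flush error: %s\n", av_err2str(flushed));
    }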


I've tried using the FFmpeg resample filter instead, but the output is just loud, incoherent noise. I'm fairly sure that libswresample should not produce any noise on resampling, which means I just don't know how to use it properly and am missing some options.


This is the code for the resampler.


int ResampleFrame(VideoState *videoState, AVFrame *decoded_audio_frame, enum AVSampleFormat out_sample_fmt, uint8_t *out_buf)
{
    int in_sample_rate = videoState->audio->ptrAudioCodecCtx_->sample_rate;
    int out_sample_rate = SAMPLE_RATE;

    // get an instance of the AudioResamplingState struct, create if NULL
    AudioResamplingState *arState = getAudioResampling(videoState->audio->ptrAudioCodecCtx_->channel_layout);

    if (!arState->swr_ctx)
    {
        printf("swr_alloc error.\n");
        return -1;
    }

    // get input audio channels
    arState->in_channel_layout = (videoState->audio->ptrAudioCodecCtx_->channels ==
                                  av_get_channel_layout_nb_channels(videoState->audio->ptrAudioCodecCtx_->channel_layout)) ?
                                 videoState->audio->ptrAudioCodecCtx_->channel_layout :
                                 av_get_default_channel_layout(videoState->audio->ptrAudioCodecCtx_->channels);

    // check input audio channels correctly retrieved
    if (arState->in_channel_layout <= 0)
    {
        printf("in_channel_layout error.\n");
        return -1;
    }

    arState->out_channel_layout = AV_CH_LAYOUT_STEREO;

    // retrieve number of audio samples (per channel)
    arState->in_nb_samples = decoded_audio_frame->nb_samples;
    if (arState->in_nb_samples <= 0)
    {
        printf("in_nb_samples error.\n");
        return -1;
    }

    // Set SwrContext parameters for resampling
    av_opt_set_int(arState->swr_ctx, "in_channel_layout", arState->in_channel_layout, 0);
    av_opt_set_int(arState->swr_ctx, "in_sample_rate", in_sample_rate, 0);
    av_opt_set_sample_fmt(arState->swr_ctx, "in_sample_fmt", videoState->audio->ptrAudioCodecCtx_->sample_fmt, 0);

    // Set SwrContext parameters for resampling
    av_opt_set_int(arState->swr_ctx, "out_channel_layout", arState->out_channel_layout, 0);
    av_opt_set_int(arState->swr_ctx, "out_sample_rate", out_sample_rate, 0);
    av_opt_set_sample_fmt(arState->swr_ctx, "out_sample_fmt", out_sample_fmt, 0);

    // initialize SWR context after user parameters have been set
    int ret = swr_init(arState->swr_ctx);
    if (ret < 0)
    {
        printf("Failed to initialize the resampling context.\n");
        return -1;
    }

    // retrieve output samples number taking into account the progressive delay
    int64_t delay = swr_get_delay(arState->swr_ctx, videoState->audio->ptrAudioCodecCtx_->sample_rate) + arState->in_nb_samples;
    arState->out_nb_samples = av_rescale_rnd(delay, out_sample_rate, in_sample_rate, AV_ROUND_UP);

    // check output samples number was correctly rescaled
    if (arState->out_nb_samples <= 0)
    {
        printf("av_rescale_rnd error\n");
        return -1;
    }

    // get number of output audio channels
    arState->out_nb_channels = av_get_channel_layout_nb_channels(arState->out_channel_layout);

    // allocate data pointers array for arState->resampled_data and fill data
    // pointers and linesize accordingly
    // check memory allocation for the resampled data was successful
    ret = av_samples_alloc_array_and_samples(&arState->resampled_data, &arState->out_linesize, arState->out_nb_channels, arState->out_nb_samples, out_sample_fmt, 0);
    if (ret < 0)
    {
        printf("av_samples_alloc_array_and_samples() error: Could not allocate destination samples.\n");
        return -1;
    }

    if (arState->swr_ctx)
    {
        // do the actual audio data resampling
        // check audio conversion was successful
        int ret_num_samples = swr_convert(arState->swr_ctx, arState->resampled_data, arState->out_nb_samples, (const uint8_t **)decoded_audio_frame->data, decoded_audio_frame->nb_samples);
        //int ret_num_samples = swr_convert_frame(arState->swr_ctx, arState->resampled_data, arState->out_nb_samples, (const uint8_t **)decoded_audio_frame->data, decoded_audio_frame->nb_samples);

        if (ret_num_samples < 0)
        {
            printf("swr_convert_error.\n");
            return -1;
        }

        // get the required buffer size for the given audio parameters
        // check audio buffer size
        arState->resampled_data_size = av_samples_get_buffer_size(&arState->out_linesize, arState->out_nb_channels, ret_num_samples, out_sample_fmt, 1);

        if (arState->resampled_data_size < 0)
        {
            printf("av_samples_get_buffer_size error.\n");
            return -1;
        }
    }
    else
    {
        printf("swr_ctx null error.\n");
        return -1;
    }

    // copy the resampled data to the output buffer
    memcpy(out_buf, arState->resampled_data[0], arState->resampled_data_size);

    // flush the swr context
    int delayed = swr_convert(arState->swr_ctx, arState->resampled_data, arState->out_nb_samples, NULL, 0);

    if (arState->resampled_data)
    {
        av_freep(&arState->resampled_data[0]);
    }

    av_freep(&arState->resampled_data);
    arState->resampled_data = NULL;

    int ret_data_size = arState->resampled_data_size;

    return ret_data_size;
}



I also tried using the filter as shown here, but my output is just noise.


This is my filter code:


int ResampleFrame(AVFrame *frame, uint8_t *out_buf)
{
 /* Push the decoded frame into the filtergraph */
 qint32 ret;
 ret = av_buffersrc_add_frame_flags(buffersrc_ctx1, frame, AV_BUFFERSRC_FLAG_KEEP_REF);
 if (ret < 0) 
 {
 printf("ResampleFrame: Error adding frame to buffer\n");
 // Delete input frame and return null
 av_frame_unref(frame);
 return 0;
 }


 //printf("resampling\n");
 AVFrame *resampled_frame = av_frame_alloc();


 /* Pull filtered frames from the filtergraph */
 ret = av_buffersink_get_frame(buffersink_ctx1, resampled_frame);

 /* Set the timestamp on the resampled frame */
 resampled_frame->best_effort_timestamp = resampled_frame->pts;

 if (ret < 0) 
 {
 av_frame_unref(frame);
 av_frame_unref(resampled_frame);
 return 0;
 }


 int buffer_size = av_samples_get_buffer_size(NULL, 2,resampled_frame->nb_samples,AV_SAMPLE_FMT_S16,1);

 memcpy(out_buf,resampled_frame->data,buffer_size);

 //av_frame_unref(frame);
 av_frame_unref(resampled_frame);
 return buffer_size;
}





QString filter_description1 = "aresample=48000,aformat=sample_fmts=s16:channel_layouts=stereo,asetnsamples=n=1024:p=0";

int InitAudioFilter(AVStream *inputStream) 
{

 char args[512];
 int ret;
 const AVFilter *buffersrc = avfilter_get_by_name("abuffer");
 const AVFilter *buffersink = avfilter_get_by_name("abuffersink");
 AVFilterInOut *outputs = avfilter_inout_alloc();
 AVFilterInOut *inputs = avfilter_inout_alloc();
 filter_graph = avfilter_graph_alloc();


 const enum AVSampleFormat out_sample_fmts[] = {AV_SAMPLE_FMT_S16, AV_SAMPLE_FMT_NONE};
 const int64_t out_channel_layouts[] = {AV_CH_LAYOUT_STEREO, -1};
 const int out_sample_rates[] = {48000, -1};

 snprintf(args, sizeof(args), "time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%" PRIx64,
 inputStream->codec->time_base.num, inputStream->codec->time_base.den,
 inputStream->codec->sample_rate,
 av_get_sample_fmt_name(inputStream->codec->sample_fmt),
 inputStream->codec->channel_layout);


 ret = avfilter_graph_create_filter(&buffersrc_ctx1, buffersrc, "in", args, NULL, filter_graph);

 if (ret < 0) 
 {
 printf("InitAudioFilter: Unable to create buffersrc\n");
 return -1;
 }

 ret = avfilter_graph_create_filter(&buffersink_ctx1, buffersink, "out", NULL, NULL, filter_graph);

 if (ret < 0) 
 {
 printf("InitAudioFilter: Unable to create buffersink\n");
 return ret;
 }

 // set opt SAMPLE FORMATS
 ret = av_opt_set_int_list(buffersink_ctx1, "sample_fmts", out_sample_fmts, -1, AV_OPT_SEARCH_CHILDREN);

 if (ret < 0) 
 {
 printf("InitAudioFilter: Cannot set output sample format\n");
 return ret;
 }

 // set opt CHANNEL LAYOUTS
 ret = av_opt_set_int_list(buffersink_ctx1, "channel_layouts", out_channel_layouts, -1, AV_OPT_SEARCH_CHILDREN);

 if (ret < 0) {
 printf("InitAudioFilter: Cannot set output channel layout\n");
 return ret;
 }

 // set opt OUT SAMPLE RATES
 ret = av_opt_set_int_list(buffersink_ctx1, "sample_rates", out_sample_rates, -1, AV_OPT_SEARCH_CHILDREN);

 if (ret < 0) 
 {
 printf("InitAudioFilter: Cannot set output sample rate\n");
 return ret;
 }

 /* Endpoints for the filter graph. */
 outputs -> name = av_strdup("in");
 outputs -> filter_ctx = buffersrc_ctx1;
 outputs -> pad_idx = 0;
 outputs -> next = NULL;

 /* Endpoints for the filter graph. */
 inputs -> name = av_strdup("out");
 inputs -> filter_ctx = buffersink_ctx1;
 inputs -> pad_idx = 0;
 inputs -> next = NULL;


 if ((ret = avfilter_graph_parse_ptr(filter_graph, filter_description1.toStdString().c_str(), &inputs, &outputs, NULL)) < 0) 
 {
 printf("InitAudioFilter: Could not add the filter to graph\n");
 }


 if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0) 
 {
 printf("InitAudioFilter: Could not configure the graph\n");
 }

 /* Print summary of the sink buffer
 * Note: args buffer is reused to store channel layout string */
 AVFilterLink *outlink = buffersink_ctx1->inputs[0];
 av_get_channel_layout_string(args, sizeof(args), -1, outlink->channel_layout);

 QString str = args;
 printf("Output: srate:%dHz fmt:%s chlayout: %s\n", (int) outlink->sample_rate, 
 av_get_sample_fmt_name((AVSampleFormat) outlink->format),
 str.toStdString().c_str());


 filterGraphInitialized_ = true;
 return 0;
}



Since I don't have much experience with filters, or with audio for that matter, I'm probably also missing something here, but I can't figure out what.


Thanks


-
FFmpeg conversion failed with "Subtitle encoding failed" and "canvas_size is too small" [closed]
21 April, by Hillol Talukdar
I'm converting video files using FFmpeg, but the process fails and shows some errors whose cause I don't fully understand. How can I avoid the errors and fix the issue? Below is the command I used and the output I received.


FFmpeg command:


ffmpeg -y -hide_banner -i saf:12.VOB -map 0:3 -c:s:0 dvdsub -map 0:2 -map 0:1 -f mp4 -vcodec copy -map_metadata 0:g -acodec aac -async 1 saf:13.mp4



Error message:


Input #0, mpeg, from 'saf:12.VOB':
 Duration: 00:00:21.99, start: 0.280633, bitrate: 7147 kb/s
 Stream #0:0[0x1bf]: Data: dvd_nav_packet
 Stream #0:1[0x1e0]: Video: mpeg2video, yuv420p(tv, top first), 720x480 [SAR 32:27 DAR 16:9], 29.97 fps, 29.97 tbr, 90k tbn
 Side data:
 cpb: bitrate max/min/avg: 7816000/0/0 buffer size: 1835008 vbv_delay: N/A
 Stream #0:2[0xa0]: Audio: pcm_dvd, 48000 Hz, stereo, s16, 1536 kb/s
 Stream #0:3[0x20]: Subtitle: dvd_subtitle
 Stream #0:4[0x21]: Subtitle: dvd_subtitle

Stream mapping:
 Stream #0:3 -> #0:0 (dvd_subtitle (dvdsub) -> dvd_subtitle (dvdsub))
 Stream #0:2 -> #0:1 (pcm_dvd (native) -> aac (native))
 Stream #0:1 -> #0:2 (copy)

Press [q] to stop, [?] for help

Output #0, mp4, to 'saf:13.mp4':
 Metadata:
 encoder : Lavf60.3.100
 Stream #0:0: Subtitle: dvd_subtitle (mp4s / 0x7334706D), 720x480
 Metadata:
 encoder : Lavc60.3.100 dvdsub
 Stream #0:1: Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 128 kb/s
 Metadata:
 encoder : Lavc60.3.100 aac
 Stream #0:2: Video: mpeg2video (mp4v / 0x7634706D), yuv420p(tv, top first), 720x480 [SAR 32:27 DAR 16:9], q=2-31, 29.97 fps, 29.97 tbr, 90k tbn
 Side data:
 cpb: bitrate max/min/avg: 7816000/0/0 buffer size: 1835008 vbv_delay: N/A

frame= 15 fps=0.0 q=-1.0 size= 0kB time=00:00:00.73 bitrate= 0.5kbits/s speed= 212x 
frame= 577 fps=0.0 q=-1.0 size= 5120kB time=00:00:19.18 bitrate=2186.2kbits/s speed=38.1x 

[dvdsub @ 0xb400006ffb03dde0] canvas_size(0:0) is too small(719:479) for render
[sost#0:0/dvdsub @ 0xb400006feaec5730] Subtitle encoding failed
[aac @ 0xb400006ffb02f500] Qavg: 36253.348
[aac @ 0xb400006ffb02f500] 2 frames left in the queue on closing

Conversion failed!
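
The "canvas_size(0:0) is too small(719:479)" line indicates the dvdsub encoder was handed a 0x0 canvas while the decoded subtitles render at roughly 720x480. A hedged sketch of one workaround that is often suggested for this situation (not verified against this particular VOB): give the subtitle decoder an explicit canvas size matching the video with ffmpeg's -canvas_size option, placed before the input.

ffmpeg -y -hide_banner -canvas_size 720x480 -i saf:12.VOB -map 0:3 -c:s:0 dvdsub -map 0:2 -map 0:1 -f mp4 -vcodec copy -map_metadata 0:g -acodec aac -async 1 saf:13.mp4

If the encoder still complains, it may also be worth checking which of the two dvd_subtitle streams in the input (0:3 or 0:4) actually carries the subtitles you want.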