
On other sites (11067)
-
Add watermark-overlay with -filter_complex to multiple outputs (dash)
13 July 2020, by Werner
I'm creating a set of files for DASH (without audio) with:


ffmpeg -i "input.mov" \
 -y \
 -keyint_min 100 -g 100 \
 -sc_threshold 0 \
 -c:v libx264 \
 -pix_fmt yuv420p \
 -map v:0 -s:0 320x180 -b:v:0 681.125k -maxrate:0 681.125k -bufsize:0 340.5625k \
 -map v:0 -s:1 640x360 -b:v:1 2724.5k -maxrate:1 2724.5k -bufsize:1 1362.25k \
 -map v:0 -s:2 1280x720 -b:v:2 5449k -maxrate:2 5449k -bufsize:2 2724.5k \
 -map v:0 -s:3 1920x1080 -b:v:3 10898k -maxrate:3 10898k -bufsize:3 5449k \
 -init_seg_name "myname_$RepresentationID$.$ext$" \
 -media_seg_name "myname_$RepresentationID$-$Number%05d$.$ext$" \
 -use_template 1 -use_timeline 1 \
 -seg_duration 4 -adaptation_sets "id=0,streams=v" \
 -f dash "myname.mpd"



Now I want to add a watermark. How is it done? I tried adding the watermark input and an overlay filter (the two lines after -y):


ffmpeg -i "input.mov" \
 -y \
 -i "watermark.png" \
 -filter_complex "overlay=24:960" \
 -keyint_min 100 -g 100 \
 -sc_threshold 0 \
 -c:v libx264 \
 -pix_fmt yuv420p \
 -map v:0 -s:0 320x180 -b:v:0 681.125k -maxrate:0 681.125k -bufsize:0 340.5625k \
 -map v:0 -s:1 640x360 -b:v:1 2724.5k -maxrate:1 2724.5k -bufsize:1 1362.25k \
 -map v:0 -s:2 1280x720 -b:v:2 5449k -maxrate:2 5449k -bufsize:2 2724.5k \
 -map v:0 -s:3 1920x1080 -b:v:3 10898k -maxrate:3 10898k -bufsize:3 5449k \
 -init_seg_name "myname_$RepresentationID$.$ext$" \
 -media_seg_name "myname_$RepresentationID$-$Number%05d$.$ext$" \
 -use_template 1 -use_timeline 1 \
 -seg_duration 4 -adaptation_sets "id=0,streams=v" \
 -f dash "myname.mpd"



But this results in only the 180p version of the video. How can I still get all renditions of the video with the overlay?


Added:
I also tried:


ffmpeg -i "input.mov" \
 -y \
 -i "watermark.png" \
 -filter_complex "[0:v][1:v]overlay=24:960[out0][out1][out2][out3]" \
 -keyint_min 100 -g 100 \
 -sc_threshold 0 \
 -c:v libx264 \
 -pix_fmt yuv420p \
 -map "[out0]" -s:0 320x180 -b:v:0 681.125k -maxrate:0 681.125k -bufsize:0 340.5625k \
 -map "[out1]" -s:1 640x360 -b:v:1 2724.5k -maxrate:1 2724.5k -bufsize:1 1362.25k \
 -map "[out2]" -s:2 1280x720 -b:v:2 5449k -maxrate:2 5449k -bufsize:2 2724.5k \
 -map "[out3]" -s:3 1920x1080 -b:v:3 10898k -maxrate:3 10898k -bufsize:3 5449k \
 -init_seg_name "myname_$RepresentationID$.$ext$" \
 -media_seg_name "myname_$RepresentationID$-$Number%05d$.$ext$" \
 -use_template 1 -use_timeline 1 \
 -seg_duration 4 -adaptation_sets "id=0,streams=v" \
 -f dash "myname.mpd"



which fails with the error:
No output pad can be associated to link label 'out1'.
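For reference, the overlay filter has exactly one output pad, so four link labels cannot hang off it directly (hence the error). One common approach, sketched here and not verified against this exact input, is to split the watermarked stream inside the filter graph and map each copy:

```shell
ffmpeg -i "input.mov" \
 -y \
 -i "watermark.png" \
 -filter_complex "[0:v][1:v]overlay=24:960,split=4[out0][out1][out2][out3]" \
 -keyint_min 100 -g 100 \
 -sc_threshold 0 \
 -c:v libx264 \
 -pix_fmt yuv420p \
 -map "[out0]" -s:0 320x180 -b:v:0 681.125k -maxrate:0 681.125k -bufsize:0 340.5625k \
 -map "[out1]" -s:1 640x360 -b:v:1 2724.5k -maxrate:1 2724.5k -bufsize:1 1362.25k \
 -map "[out2]" -s:2 1280x720 -b:v:2 5449k -maxrate:2 5449k -bufsize:2 2724.5k \
 -map "[out3]" -s:3 1920x1080 -b:v:3 10898k -maxrate:3 10898k -bufsize:3 5449k \
 -init_seg_name "myname_$RepresentationID$.$ext$" \
 -media_seg_name "myname_$RepresentationID$-$Number%05d$.$ext$" \
 -use_template 1 -use_timeline 1 \
 -seg_duration 4 -adaptation_sets "id=0,streams=v" \
 -f dash "myname.mpd"
```

With this shape, ffmpeg still inserts a scaler per output because of the -s:0 … -s:3 options, so each rendition is watermarked at source resolution and then downscaled; scaling each split branch explicitly inside the filter graph is an alternative.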


-
Exoplayer with FFmpeg module and filtering crash with aac and alac audio formats
25 June 2020, by Aleksej Otjan
I have code that plays audio with ExoPlayer and the FFmpeg decoder. It works. Then I needed to add equalizer functionality, which I implemented with FFmpeg avfilters. But now it crashes on some audio formats (without the avfilters, those formats play fine).


Decode function:


int decodePacket(AVCodecContext *context, AVPacket *packet,
                 uint8_t *outputBuffer, int outputSize) {
    int result = 0;
    // Queue input data.
    result = avcodec_send_packet(context, packet);
    if (result) {
        logError("avcodec_send_packet", result);
        return result == AVERROR_INVALIDDATA ? DECODER_ERROR_INVALID_DATA
                                             : DECODER_ERROR_OTHER;
    }

    // Dequeue output data until it runs out.
    int outSize = 0;
    if (EQUALIZER != nullptr) {
        LOGE("INIT FILTER GRAPH");
        init_filter_graph(context, EQUALIZER);
    }

    while (true) {
        AVFrame *frame = av_frame_alloc();
        if (!frame) {
            LOGE("Failed to allocate output frame.");
            return -1;
        }
        result = avcodec_receive_frame(context, frame);
        if (result) {
            av_frame_free(&frame);
            if (result == AVERROR(EAGAIN)) {
                break;
            }
            logError("avcodec_receive_frame", result);
            return result;
        }

        // Resample output.
        AVSampleFormat sampleFormat = context->sample_fmt;
        int channelCount = context->channels;
        int channelLayout = context->channel_layout;
        int sampleRate = context->sample_rate;
        int sampleCount = frame->nb_samples;
        int dataSize = av_samples_get_buffer_size(NULL, channelCount, sampleCount,
                                                  sampleFormat, 1);
        SwrContext *resampleContext;
        if (context->opaque) {
            resampleContext = (SwrContext *) context->opaque;
        } else {
            resampleContext = swr_alloc();
            av_opt_set_int(resampleContext, "in_channel_layout", channelLayout, 0);
            av_opt_set_int(resampleContext, "out_channel_layout", channelLayout, 0);
            av_opt_set_int(resampleContext, "in_sample_rate", sampleRate, 0);
            av_opt_set_int(resampleContext, "out_sample_rate", sampleRate, 0);
            av_opt_set_int(resampleContext, "in_sample_fmt", sampleFormat, 0);
            // The output format is always the requested format.
            av_opt_set_int(resampleContext, "out_sample_fmt",
                           context->request_sample_fmt, 0);
            result = swr_init(resampleContext);
            if (result < 0) {
                logError("swr_init", result);
                av_frame_free(&frame);
                return -1;
            }
            context->opaque = resampleContext;
        }
        int inSampleSize = av_get_bytes_per_sample(sampleFormat);
        int outSampleSize = av_get_bytes_per_sample(context->request_sample_fmt);
        int outSamples = swr_get_out_samples(resampleContext, sampleCount);
        int bufferOutSize = outSampleSize * channelCount * outSamples;
        if (outSize + bufferOutSize > outputSize) {
            LOGE("Output buffer size (%d) too small for output data (%d).",
                 outputSize, outSize + bufferOutSize);
            av_frame_free(&frame);
            return -1;
        }
        if (EQUALIZER != nullptr && graph != nullptr) {
            result = av_buffersrc_add_frame_flags(src, frame, AV_BUFFERSRC_FLAG_KEEP_REF);
            if (result < 0) {
                av_frame_unref(frame);
                LOGE("Error submitting the frame to the filtergraph:");
                return -1;
            }
            // Get all the filtered output that is available.
            result = av_buffersink_get_frame(sink, frame);
            LOGE("ERROR SWR %s", av_err2str(result));
            if (result == AVERROR(EAGAIN) || result == AVERROR_EOF) {
                av_frame_unref(frame);
                break;
            }
            if (result < 0) {
                av_frame_unref(frame);
                return -1;
            }
            result = swr_convert(resampleContext, &outputBuffer, bufferOutSize,
                                 (const uint8_t **) frame->data, frame->nb_samples);
        } else {
            result = swr_convert(resampleContext, &outputBuffer, bufferOutSize,
                                 (const uint8_t **) frame->data, frame->nb_samples);
        }

        av_frame_free(&frame);
        if (result < 0) {
            logError("swr_convert", result);
            return result;
        }
        int available = swr_get_out_samples(resampleContext, 0);
        if (available != 0) {
            LOGE("Expected no samples remaining after resampling, but found %d.",
                 available);
            return -1;
        }
        outputBuffer += bufferOutSize;
        outSize += bufferOutSize;
    }
    avfilter_graph_free(&graph);
    return outSize;
}



Init graph function:


int init_filter_graph(AVCodecContext *dec_ctx, const char *eq) {
    char args[512];
    int ret = 0;
    graph = avfilter_graph_alloc();
    const AVFilter *abuffersrc = avfilter_get_by_name("abuffer");
    const AVFilter *abuffersink = avfilter_get_by_name("abuffersink");
    AVFilterInOut *outputs = avfilter_inout_alloc();
    AVFilterInOut *inputs = avfilter_inout_alloc();
    static const enum AVSampleFormat out_sample_fmts[] = {
            dec_ctx->request_sample_fmt,
            static_cast<enum AVSampleFormat>(-1)};
    static const int64_t out_channel_layouts[] = {
            static_cast<int64_t>(dec_ctx->channel_layout), -1};
    static const int out_sample_rates[] = {dec_ctx->sample_rate, -1};
    const AVFilterLink *outlink;
    AVRational time_base = dec_ctx->time_base;

    if (!outputs || !inputs || !graph) {
        ret = AVERROR(ENOMEM);
        goto end;
    }

    /* buffer audio source: the decoded frames from the decoder will be inserted here. */
    if (!dec_ctx->channel_layout)
        dec_ctx->channel_layout = av_get_default_channel_layout(dec_ctx->channels);
    snprintf(args, sizeof(args),
             "time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%" PRIx64,
             1, dec_ctx->sample_rate, dec_ctx->sample_rate,
             av_get_sample_fmt_name(dec_ctx->sample_fmt), dec_ctx->channel_layout);
    ret = avfilter_graph_create_filter(&src, abuffersrc, "in",
                                       args, NULL, graph);
    if (ret < 0) {
        LOGE("Cannot create audio buffer source\n");
        goto end;
    }

    /* buffer audio sink: to terminate the filter chain. */
    ret = avfilter_graph_create_filter(&sink, abuffersink, "out",
                                       NULL, NULL, graph);
    if (ret < 0) {
        LOGE("Cannot create audio buffer sink\n");
        goto end;
    }

    ret = av_opt_set_int_list(sink, "sample_fmts", out_sample_fmts, -1,
                              AV_OPT_SEARCH_CHILDREN);
    if (ret < 0) {
        LOGE("Cannot set output sample format\n");
        goto end;
    }

    ret = av_opt_set_int_list(sink, "channel_layouts", out_channel_layouts, -1,
                              AV_OPT_SEARCH_CHILDREN);
    if (ret < 0) {
        LOGE("Cannot set output channel layout\n");
        goto end;
    }

    ret = av_opt_set_int_list(sink, "sample_rates", out_sample_rates, -1,
                              AV_OPT_SEARCH_CHILDREN);
    if (ret < 0) {
        LOGE("Cannot set output sample rate\n");
        goto end;
    }

    /*
     * Set the endpoints for the filter graph. The graph will
     * be linked to the graph described by filters_descr.
     */

    /*
     * The buffer source output must be connected to the input pad of
     * the first filter described by filters_descr; since the first
     * filter input label is not specified, it is set to "in" by
     * default.
     */
    outputs->name = av_strdup("in");
    outputs->filter_ctx = src;
    outputs->pad_idx = 0;
    outputs->next = NULL;

    /*
     * The buffer sink input must be connected to the output pad of
     * the last filter described by filters_descr; since the last
     * filter output label is not specified, it is set to "out" by
     * default.
     */
    inputs->name = av_strdup("out");
    inputs->filter_ctx = sink;
    inputs->pad_idx = 0;
    inputs->next = NULL;

    if ((ret = avfilter_graph_parse_ptr(graph, eq,
                                        &inputs, &outputs, NULL)) < 0) {
        goto end;
    }

    if ((ret = avfilter_graph_config(graph, NULL)) < 0)
        goto end;

    /* Print summary of the sink buffer
     * Note: args buffer is reused to store channel layout string */
    outlink = sink->inputs[0];
    av_get_channel_layout_string(args, sizeof(args), -1, outlink->channel_layout);
    LOGE("Output: srate:%dHz chlayout:%s\n",
         (int) outlink->sample_rate,
         args);
end:
    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);
    return ret;
}


It crashes when playing aac or alac audio at this line:


result = swr_convert(resampleContext, &outputBuffer, bufferOutSize, (const uint8_t **) frame->data, frame->nb_samples);



with


Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x0 



but works fine when playing mp3 or flac. What is wrong? Thanks for any help.


-
How to set a dictionary based off of subprocess.check_output
5 July 2020, by Jessie Wilson
This is a confusing question, but I will try to make it as clear as possible.
Currently, if I run my app via the .py file it works perfectly. However, once I compile it, some parts of the app stop functioning, specifically this code:


def ffprobe_run():
    global output
    global acodec_choices
    run = subprocess.check_output("ffprobe " + videoinputquoted + " " + ffprobecommand,
                                  universal_newlines=True)
    print(run)
    if run[-2] == '3':
        acodec_choices = {"One": "1",
                          "Two": "2",
                          "Three": "3"}
    elif run[-2] == '2':
        acodec_choices = {"One": "1",
                          "Two": "2"}
    elif run[-2] == '1':
        acodec_choices = {"One": "1"}
    print(acodec_choices.values())



I am able to get the results I want with this command. It uses ffprobe to check how many audio tracks there are in a file, and it returns values like so:


1
2
3



if there are 3 tracks, or


1 
2



if there are two tracks. I use run[-2],
which gives me the result '2'.
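Concretely, the captured output ends in a newline, so the second-to-last character is the last track number printed; note this only works for single-digit counts:

```python
run = "1\n2\n3\n"  # what check_output returns for a three-track file
assert run[-2] == '3'

# with ten tracks the scheme silently breaks:
run = "".join("%d\n" % n for n in range(1, 11))
assert run[-2] == '0'  # the last character of "10", not the count
```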


So I'm taking that result and defining a dictionary to automatically populate/change an OptionMenu


It defines this in my main app:


# Audio Stream Selection
acodec_stream = StringVar(audio_window)
# ffprobe prints one line per audio track, so the last token is the track
# count; parsing it as an int replaces the old elif chain (whose
# ffprobeinfo[-2] == '10' branch could never match, because [-2] is a
# single character).
track_count = int(ffprobeinfo.split()[-1])
acodec_stream_choices = {'Track %d' % (i + 1): "-map 0:a:%d" % i
                         for i in range(track_count)}
acodec_stream.set('Track 1')  # set the default option
acodec_stream_label = Label(audio_window, text="Track :")
acodec_stream_label.grid(row=0, column=0, columnspan=1, padx=5, pady=5)
acodec_stream_menu = OptionMenu(audio_window, acodec_stream, *acodec_stream_choices.keys())
acodec_stream_menu.grid(row=1, column=0, columnspan=1, padx=5, pady=5)



This all works great if I run the app via the .py file. Once compiled, the defined dictionary selection is missing entirely.


This is what it's supposed to look like: (screenshot in the original post)


However, this is what it looks like with the code above: (screenshot in the original post)


If I define the dictionary myself, it works fine. However, then I can't automatically supply the correct number of available audio tracks.


I hope this isn't too much code. I'm very new at this.


EDIT:


If I compile via pyinstaller and remove the -w flag, the program runs correctly and shows the tracks.


I'm assuming I'm not using subprocess correctly. I don't think the program can call ffprobe when it has no console, whereas it gets the value when it runs with its own console.
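That assumption matches a known pitfall: in a windowed (pyinstaller -w) build there is no console, so the child process can inherit invalid standard handles and check_output fails. A sketch of a more defensive call (run_probe is a hypothetical helper name, and the command list is whatever ffprobe invocation the app builds):

```python
import subprocess

def run_probe(cmd):
    # Redirect all three standard streams explicitly so the call also works
    # in a windowed (no-console) build. CREATE_NO_WINDOW exists only on
    # Windows, hence the getattr fallback to 0 elsewhere; it keeps a console
    # window from flashing when the GUI app spawns ffprobe.
    flags = getattr(subprocess, "CREATE_NO_WINDOW", 0)
    result = subprocess.run(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.DEVNULL,
        stdin=subprocess.DEVNULL,
        universal_newlines=True,
        creationflags=flags,
    )
    return result.stdout
```

The check_output line in ffprobe_run could then be replaced by run = run_probe([...]), passing the command as a list rather than one string.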