
Media (1)
-
SWFUpload Process
6 September 2011, by
Updated: September 2011
Language: French
Type: Text
Other articles (111)
-
Automatic installation script for MediaSPIP
25 April 2011, by
To work around installation difficulties, mainly caused by server-side software dependencies, an all-in-one bash installation script was created to make this step easier on a server running a compatible Linux distribution.
To use it, you need SSH access to your server and a "root" account, which allows the dependencies to be installed. Contact your hosting provider if you do not have these.
The documentation for using the installation script (...) -
Requesting the creation of a channel
12 March 2010, by
Depending on how the platform is configured, the user may have two different ways to request the creation of a channel: the first at the moment of registration, the second after registration, by filling in a request form.
Both methods ask for the same information and work in much the same way: the prospective user must fill in a series of form fields that first of all give the administrators information about (...) -
Automatic backup of SPIP channels
1 April 2010, by
When setting up an open platform, it is important for hosting providers to have fairly regular backups in order to cope with any problem that might arise.
This task relies on two SPIP plugins: Saveauto, which performs regular backups of the database as a mysql dump (usable in phpmyadmin), and mes_fichiers_2, which creates a zip archive of the site's important data (the documents, the elements (...)
On other sites (6967)
-
Unable to pass parameters in seconds to FFMpeg afade audio filter
9 September 2020, by Anton Serov
I'm trying to use the afade FFmpeg filter and it does not work as expected. I'm not able to pass its start and duration parameters in seconds.
With this string:



afilter=afade=t=out:st=1:d=0:curve=par




my afade filter starts fading from the very first frame, so I don't have any audio on the first frame or on any of the following frames.
But if I set a magic number of 208 as the fade-out start time:



afilter=afade=t=out:st=208:d=0:curve=par




it starts working after 1 second (the RMS drops to -inf once the fade kicks in):



...
Frame=0.501 Samples=23543 RMS=-35.186275
Frame=0.535 Samples=25014 RMS=-37.393734
Frame=0.568 Samples=26486 RMS=-40.655666
Frame=0.602 Samples=27957 RMS=-38.321899
Frame=0.635 Samples=29429 RMS=-41.370567
Frame=0.669 Samples=30900 RMS=-39.316444
Frame=0.702 Samples=32372 RMS=-27.994545
Frame=0.735 Samples=33843 RMS=-23.577181
Frame=0.769 Samples=35315 RMS=-22.933538
Frame=0.802 Samples=36786 RMS=-25.900106
Frame=0.836 Samples=38258 RMS=-26.836918
Frame=0.869 Samples=39729 RMS=-29.685308
Frame=0.902 Samples=41201 RMS=-32.493404
Frame=0.936 Samples=42672 RMS=-32.552109
Frame=0.969 Samples=44144 RMS=-42.384045
Frame=1.003 Samples=45615 RMS=-inf
Frame=1.036 Samples=47087 RMS=-inf
Frame=1.070 Samples=48558 RMS=-inf
Frame=1.103 Samples=50029 RMS=-inf
Frame=1.136 Samples=51501 RMS=-inf
Frame=1.170 Samples=52972 RMS=-inf
Frame=1.203 Samples=54444 RMS=-inf
Frame=1.237 Samples=55915 RMS=-inf
Frame=1.270 Samples=57387 RMS=-inf
Frame=1.304 Samples=58858 RMS=-inf
Frame=1.337 Samples=60330 RMS=-inf
Frame=1.370 Samples=61801 RMS=-inf
Frame=1.404 Samples=63273 RMS=-inf
Frame=1.437 Samples=64744 RMS=-inf
Frame=1.471 Samples=66216 RMS=-inf
Frame=1.504 Samples=67687 RMS=-inf




It seems I have to multiply my start time in seconds by that strange coefficient of 208 (I found this value experimentally for a 44100 Hz sample rate). The same goes for the duration parameter: to set a duration of N seconds I have to pass N*208. For other sample rates this coefficient changes.
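For what it's worth, a quick way to confirm that afade itself takes plain seconds is to run the same fade through the ffmpeg command line; a minimal sanity-check sketch, assuming ffmpeg is on PATH and using hypothetical file names not taken from the original post:

import subprocess

# Run afade from the CLI as a reference: here st and d are plain seconds,
# so a correct fade from this command points at the custom filter-graph
# setup rather than at the filter itself.
# "input.wav" / "fade_test.wav" are placeholder file names.
subprocess.run([
    "ffmpeg", "-y", "-i", "input.wav",
    "-af", "afade=t=out:st=1:d=1:curve=par",
    "fade_test.wav",
], check=True)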



So maybe there is something wrong with my filter graph initialization?



This is my code for the filter graph initialization (almost copied from an example):



int _InitFilterGraph(const char *_pszFilterDesc, AVFrame* _pFrame, AVRational* _pTimeBase)
{
    const AVFilter *pBufferSrc = avfilter_get_by_name( "abuffer" );
    const AVFilter *pBufferSink = avfilter_get_by_name( "abuffersink" );
    AVFilterInOut *pOutputs = avfilter_inout_alloc();
    AVFilterInOut *pInputs = avfilter_inout_alloc();

    AVSampleFormat out_sample_fmts[] = { (AVSampleFormat)_pFrame->format, (AVSampleFormat)-1 };
    int64_t out_channel_layouts[] = { (int64_t)_pFrame->channel_layout, -1 };
    int out_sample_rates[] = { _pFrame->sample_rate, -1 };

    m_pFilterGraph = avfilter_graph_alloc();

    // Buffer audio source: the decoded frames from the decoder will be inserted here.
    char args[512] = {};
    snprintf( args, sizeof(args), "time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%llx",
              _pTimeBase->num, _pTimeBase->den,
              _pFrame->sample_rate, av_get_sample_fmt_name( (AVSampleFormat)_pFrame->format ),
              _pFrame->channel_layout );

    int nRet = avfilter_graph_create_filter( &m_pBufferSrcCtx, pBufferSrc, "in",
                                             args, NULL, m_pFilterGraph );
    if( nRet < 0 ) goto final;

    // Buffer audio sink: to terminate the filter chain.
    AVABufferSinkParams *pBufferSinkParams = av_abuffersink_params_alloc();
    pBufferSinkParams->all_channel_counts = _pFrame->channels;

    nRet = avfilter_graph_create_filter( &m_pBufferSinkCtx, pBufferSink, "out",
                                         NULL, pBufferSinkParams, m_pFilterGraph );
    av_free( pBufferSinkParams );
    if( nRet < 0 ) goto final;

    nRet = av_opt_set_int_list( m_pBufferSinkCtx, "sample_fmts", out_sample_fmts, -1,
                                AV_OPT_SEARCH_CHILDREN );
    if( nRet < 0 ) goto final;

    nRet = av_opt_set_int_list( m_pBufferSinkCtx, "channel_layouts", out_channel_layouts, -1,
                                AV_OPT_SEARCH_CHILDREN );
    if( nRet < 0 ) goto final;

    nRet = av_opt_set_int_list( m_pBufferSinkCtx, "sample_rates", out_sample_rates, -1,
                                AV_OPT_SEARCH_CHILDREN );
    if( nRet < 0 ) goto final;

    // Endpoints for the filter graph.
    pOutputs->name = av_strdup( "in" );
    pOutputs->filter_ctx = m_pBufferSrcCtx;
    pOutputs->pad_idx = 0;
    pOutputs->next = NULL;

    pInputs->name = av_strdup( "out" );
    pInputs->filter_ctx = m_pBufferSinkCtx;
    pInputs->pad_idx = 0;
    pInputs->next = NULL;

    nRet = avfilter_graph_parse_ptr( m_pFilterGraph, _pszFilterDesc, &pInputs, &pOutputs, NULL );
    if( nRet < 0 ) goto final;

    nRet = avfilter_graph_config( m_pFilterGraph, NULL );

final:
    avfilter_inout_free( &pInputs );
    avfilter_inout_free( &pOutputs );

    return nRet;
}



-
drawtext with ffmpeg python
22 May 2024, by Wolf Wolf
I am trying to add text to a video using ffmpeg and python.
I tried to do this in the following ways, but it didn't work.


first


(
    ffmpeg
    .input(in_video)
    .filter('drawtext',
            fontsize=30,
            fontfile=r"D:\projects\python\editor_bot\downloads\Candara.ttf",
            text='test test test.',
            x='if (eq(mod(t\\, 15)\\, 0)\\, rand(0\\, (w-text_w))\\, x)',
            y='if (eq(mod(t\\, 10)\\, 0)\\, rand(0\\, (h-text_h))\\, y)')
    .output(f'output-final.mp4')
    .run()
)



second


fil = fr"drawtext=text='test test test':fontsize=30:fontfile=':\projects\python\editor_bot\downloads\Candara.ttf':x='if (eq(mod(t\, 15)\, 0)\, rand(0\, (w-text_w))\, x)':y='if (eq(mod(t\, 10)\, 0)\, rand(0\, (h-text_h))\, y)'"
(
    ffmpeg
    .input(in_video)
    .output(f'output-final.mkv', filter_complex=fil)
    .run()
)



But by running this command


ffmpeg -i v1.mp4 -filter:v drawtext="fontsize=30:fontfile=candara.ttf:text='testtest test.':x=if(eq(mod(t\,15)\,0)\,rand(0\,(w-text_w))\,x):y=if(eq(mod(t\,10)\,0)\,rand(0\,(h-text_h))\,y)" -c:a copy -c:v libx264 -preset slow -crf 18 V13.mkv



In the terminal, this does exactly what I want.
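As a workaround, here is a minimal sketch that simply shells out to that exact working command from Python with subprocess (assuming ffmpeg is on PATH). Passing the arguments as a list skips the shell, so the filter string below is what the shell would have handed to ffmpeg anyway:

import subprocess

# The same working command, launched from Python; the list form avoids shell
# quoting, so the filter string matches what ffmpeg received from the terminal
# (escaped commas and the single-quoted text included).
drawtext = (
    "drawtext=fontsize=30:fontfile=candara.ttf:text='testtest test.':"
    "x=if(eq(mod(t\\,15)\\,0)\\,rand(0\\,(w-text_w))\\,x):"
    "y=if(eq(mod(t\\,10)\\,0)\\,rand(0\\,(h-text_h))\\,y)"
)
subprocess.run([
    "ffmpeg", "-i", "v1.mp4",
    "-filter:v", drawtext,
    "-c:a", "copy", "-c:v", "libx264", "-preset", "slow", "-crf", "18",
    "V13.mkv",
], check=True)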


thanks


-
How to combine all chunk video paths into a text file using ffmpeg
31 July 2017, by Megha CS
The task is to create a final output video by combining all the chunk videos recorded from the webcam using ffmpeg.
For that, I created a process, passing the ffmpeg command as an argument, and saved all the chunk videos to a local folder.
Code snippet:
process =new Process();
process.StartInfo.FileName = Directory.GetCurrentDirectory() + @"\ffmpeg.exe";
process.StartInfo.Arguments = "-re -rtbufsize 1000M -f dshow -i video=" + "\"" + vidDevName + "\"" + " -acodec libvo_aacenc -ab 48kb -ar 22050 -ac 2 -b:a 128k -vcodec libx264 -r 25 -s 480x360 -pix_fmt yuv420p -preset medium -segment_time 10 -f segment output%03d.mp4";
process.Start();
It's working fine. But now I have to create a text file listing all the chunk video paths, so that the final output video can be created using "-f concat -safe 0 -i mylist.txt -c copy output.mp4" as the argument.
I am stuck on creating the text file that lists all the chunk video paths in C#.
I have used (for %i in (*.wav) do @echo file '%i') > mylist.txt to create the text file. It works fine in the command prompt but not from the C# application.
So please suggest a way to do this.
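For reference, the list file that the concat demuxer expects is just one file 'name' line per segment. Below is a minimal sketch of generating it, written in Python for illustration since the question asks for C#; the glob pattern assumes the output%03d.mp4 naming from the command above, and the paths are placeholders.

from pathlib import Path

# Collect the segments produced by "-f segment ... output%03d.mp4" and write
# one "file 'name'" line per segment, which is the format the concat demuxer
# reads. Adjust the directory and pattern to your setup.
segments = sorted(Path(".").glob("output[0-9][0-9][0-9].mp4"))
with open("mylist.txt", "w", encoding="utf-8") as f:
    for seg in segments:
        f.write(f"file '{seg.name}'\n")

# Then concatenate without re-encoding:
#   ffmpeg -f concat -safe 0 -i mylist.txt -c copy output.mp4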