
Other articles (70)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...) -
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm" mode, you will also need to make other modifications (...) -
Improvements to the base version
13 September 2013
A nicer multiple selection
The Chosen plugin improves the ergonomics of multiple-select fields. See the two images below for a comparison.
Simply enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-select lists (...)
On other sites (12966)
-
Cut, concatenate, and re-encode to h265 with ffmpeg
30 March 2022, by Make42
I have two H.264 videos that I would like to cut (each of them), concatenate, and re-encode into H.265, all with ffmpeg. How can I achieve that, given that the following two approaches do not work?


First approach


I tried


ffmpeg -ss 00:00:05.500 -to 00:12:06.200 -i video1.mp4 \
 -ss 00:00:10.700 -to 01:43:47.000 -i video2.mp4 \
 -filter_complex "[0:v][0:a][1:v][1:a] concat=n=2:v=1:a=1 [outv] [outa]" \
 -map "[outv]" -map "[outa]" \
 -c:v libx265 -vtag hvc1 -c:a copy \
 final.mp4



but get the error message:




Streamcopy requested for output stream 0:1, which is fed from a complex filtergraph. Filtering and streamcopy cannot be used together.
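
For reference, this particular error can be avoided by encoding the audio instead of copying it, since any stream that comes out of a filtergraph has to be re-encoded. A sketch of the same command with that one change (assuming AAC audio is acceptable):

ffmpeg -ss 00:00:05.500 -to 00:12:06.200 -i video1.mp4 \
 -ss 00:00:10.700 -to 01:43:47.000 -i video2.mp4 \
 -filter_complex "[0:v][0:a][1:v][1:a] concat=n=2:v=1:a=1 [outv] [outa]" \
 -map "[outv]" -map "[outa]" \
 -c:v libx265 -vtag hvc1 -c:a aac \
 final.mp4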




Second approach


Alternatively, I created the file cutpoints.txt with the content

file video1.mp4
inpoint 5.5
outpoint 726.2
file video2.mp4
inpoint 600.7
outpoint 6227.0



and ran the command


ffmpeg -f concat -safe 0 -i cutpoints.txt -c:v libx265 -vtag hvc1 -c:a copy final.mp4



but then the video does not start exactly at 5.5 seconds, which is not surprising given what the concat demuxer documentation says about inpoint:




inpoint timestamp


This directive works best with intra frame codecs, because for non-intra frame ones you will usually get extra packets before the actual In point and the decoded content will most likely contain frames before In point too.
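
A workaround, sketched here and not verified against these exact files, is to make the cuts frame-accurate first by re-encoding each part separately, and only then concatenate the already-cut parts:

ffmpeg -ss 00:00:05.500 -to 00:12:06.200 -i video1.mp4 -c:v libx265 -vtag hvc1 -c:a aac part1.mp4
ffmpeg -ss 00:00:10.700 -to 01:43:47.000 -i video2.mp4 -c:v libx265 -vtag hvc1 -c:a aac part2.mp4
ffmpeg -f concat -safe 0 -i parts.txt -c copy final.mp4

where parts.txt contains just the lines file part1.mp4 and file part2.mp4. Because both parts are then encoded identically and cut exactly where requested, the final concat step can stream-copy.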




-
Unable to pass parameters in seconds to FFMpeg afade audio filter
9 September 2020, by Anton Serov
I'm trying to use the afade FFmpeg filter, but it does not work as expected: I'm unable to pass its start and duration parameters in seconds.
With this string:



afilter=afade=t=out:st=1:d=0:curve=par




my afade filter starts fading from the very first frame, so I have no audio on the first frame or on any later frame.
But if I set a magic number of 208 as the fade-out start time:



afilter=afade=t=out:st=208:d=0:curve=par




it starts working after 1 second (the RMS drops to negative infinity once the fade kicks in):



...
Frame=0.501 Samples=23543 RMS=-35.186275
Frame=0.535 Samples=25014 RMS=-37.393734
Frame=0.568 Samples=26486 RMS=-40.655666
Frame=0.602 Samples=27957 RMS=-38.321899
Frame=0.635 Samples=29429 RMS=-41.370567
Frame=0.669 Samples=30900 RMS=-39.316444
Frame=0.702 Samples=32372 RMS=-27.994545
Frame=0.735 Samples=33843 RMS=-23.577181
Frame=0.769 Samples=35315 RMS=-22.933538
Frame=0.802 Samples=36786 RMS=-25.900106
Frame=0.836 Samples=38258 RMS=-26.836918
Frame=0.869 Samples=39729 RMS=-29.685308
Frame=0.902 Samples=41201 RMS=-32.493404
Frame=0.936 Samples=42672 RMS=-32.552109
Frame=0.969 Samples=44144 RMS=-42.384045
Frame=1.003 Samples=45615 RMS=-inf
Frame=1.036 Samples=47087 RMS=-inf
Frame=1.070 Samples=48558 RMS=-inf
Frame=1.103 Samples=50029 RMS=-inf
Frame=1.136 Samples=51501 RMS=-inf
Frame=1.170 Samples=52972 RMS=-inf
Frame=1.203 Samples=54444 RMS=-inf
Frame=1.237 Samples=55915 RMS=-inf
Frame=1.270 Samples=57387 RMS=-inf
Frame=1.304 Samples=58858 RMS=-inf
Frame=1.337 Samples=60330 RMS=-inf
Frame=1.370 Samples=61801 RMS=-inf
Frame=1.404 Samples=63273 RMS=-inf
Frame=1.437 Samples=64744 RMS=-inf
Frame=1.471 Samples=66216 RMS=-inf
Frame=1.504 Samples=67687 RMS=-inf




It seems I have to multiply my start time in seconds by that strange coefficient of 208 (I found this value experimentally for a 44100 Hz sample rate). The same goes for the duration parameter: to set a duration of N seconds I have to pass N*208. For other sample rates the coefficient changes.



So maybe there is something wrong with my filter graph initialization?



This is my code for the filter graph initialization (almost copied from an example):



int _InitFilterGraph(const char *_pszFilterDesc, AVFrame* _pFrame, AVRational* _pTimeBase)
{
    const AVFilter *pBufferSrc = avfilter_get_by_name( "abuffer" );
    const AVFilter *pBufferSink = avfilter_get_by_name( "abuffersink" );
    AVFilterInOut *pOutputs = avfilter_inout_alloc();
    AVFilterInOut *pInputs = avfilter_inout_alloc();
    AVABufferSinkParams *pBufferSinkParams = NULL; // declared up front so goto does not cross its initialization

    AVSampleFormat out_sample_fmts[] = { (AVSampleFormat)_pFrame->format, (AVSampleFormat)-1 };
    int64_t out_channel_layouts[] = { (int64_t)_pFrame->channel_layout, -1 };
    int out_sample_rates[] = { _pFrame->sample_rate, -1 };

    m_pFilterGraph = avfilter_graph_alloc();

    // Buffer audio source: the decoded frames from the decoder will be inserted here.
    // The time_base declared here must match the unit of the PTS on the frames
    // that are later pushed into the graph.
    char args[512] = {};
    snprintf( args, sizeof(args), "time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%llx",
              _pTimeBase->num, _pTimeBase->den,
              _pFrame->sample_rate, av_get_sample_fmt_name( (AVSampleFormat)_pFrame->format ),
              _pFrame->channel_layout );

    int nRet = avfilter_graph_create_filter( &m_pBufferSrcCtx, pBufferSrc, "in",
                                             args, NULL, m_pFilterGraph );
    if( nRet < 0 ) goto final;

    // Buffer audio sink: to terminate the filter chain.
    pBufferSinkParams = av_abuffersink_params_alloc();
    // NB: all_channel_counts is a boolean flag, not a channel count;
    // any non-zero value makes the sink accept every channel count.
    pBufferSinkParams->all_channel_counts = _pFrame->channels;

    nRet = avfilter_graph_create_filter( &m_pBufferSinkCtx, pBufferSink, "out",
                                         NULL, pBufferSinkParams, m_pFilterGraph );
    av_free( pBufferSinkParams );
    if( nRet < 0 ) goto final;

    nRet = av_opt_set_int_list( m_pBufferSinkCtx, "sample_fmts", out_sample_fmts, -1,
                                AV_OPT_SEARCH_CHILDREN );
    if( nRet < 0 ) goto final;

    nRet = av_opt_set_int_list( m_pBufferSinkCtx, "channel_layouts", out_channel_layouts, -1,
                                AV_OPT_SEARCH_CHILDREN );
    if( nRet < 0 ) goto final;

    nRet = av_opt_set_int_list( m_pBufferSinkCtx, "sample_rates", out_sample_rates, -1,
                                AV_OPT_SEARCH_CHILDREN );
    if( nRet < 0 ) goto final;

    // Endpoints for the filter graph.
    pOutputs->name = av_strdup( "in" );
    pOutputs->filter_ctx = m_pBufferSrcCtx;
    pOutputs->pad_idx = 0;
    pOutputs->next = NULL;

    pInputs->name = av_strdup( "out" );
    pInputs->filter_ctx = m_pBufferSinkCtx;
    pInputs->pad_idx = 0;
    pInputs->next = NULL;

    nRet = avfilter_graph_parse_ptr( m_pFilterGraph, _pszFilterDesc, &pInputs, &pOutputs, NULL );
    if( nRet < 0 ) goto final;

    nRet = avfilter_graph_config( m_pFilterGraph, NULL );

final:
    avfilter_inout_free( &pInputs );
    avfilter_inout_free( &pOutputs );

    return nRet;
}
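
One thing worth checking (an assumption, since the code that pushes frames into the graph is not shown): afade converts st and d from seconds using the time_base declared for the abuffer source, so if the decoded frames carry PTS in a different unit, or no PTS at all, the fade triggers at the wrong position, which would look exactly like a mysterious constant scale factor. A minimal sketch of rescaling the PTS before each push, with placeholder names:

// Hedged sketch: pFrame is the decoded audio frame, decoderTimeBase is the
// decoder's time base, and srcTimeBase is the same AVRational that was passed
// to _InitFilterGraph as _pTimeBase. Rescale so that time-based filters such
// as afade see the intended seconds.
pFrame->pts = av_rescale_q( pFrame->pts, decoderTimeBase, srcTimeBase );
int nRet = av_buffersrc_add_frame( m_pBufferSrcCtx, pFrame );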



-
drawtext with ffmpeg python
22 May 2024, by Wolf Wolf
I am trying to add text to a video using ffmpeg and python.
I tried the two approaches below, but neither worked.


First


(
    ffmpeg
    .input(in_video)
    .filter('drawtext',
            fontsize=30,
            fontfile=r"D:\projects\python\editor_bot\downloads\Candara.ttf",
            text='test test test.',
            x='if (eq(mod(t\\, 15)\\, 0)\\, rand(0\\, (w-text_w))\\, x)',
            y='if (eq(mod(t\\, 10)\\, 0)\\, rand(0\\, (h-text_h))\\, y)')
    .output('output-final.mp4')
    .run()
)



Second


fil = fr"drawtext=text='test test test':fontsize=30:fontfile='D:\projects\python\editor_bot\downloads\Candara.ttf':x='if (eq(mod(t\, 15)\, 0)\, rand(0\, (w-text_w))\, x)':y='if (eq(mod(t\, 10)\, 0)\, rand(0\, (h-text_h))\, y)'"
(
    ffmpeg
    .input(in_video)
    .output('output-final.mkv', filter_complex=fil)
    .run()
)



But when I run this command directly in the terminal:


ffmpeg -i v1.mp4 -filter:v drawtext="fontsize=30:fontfile=candara.ttf:text='testtest test.':x=if(eq(mod(t\,15)\,0)\,rand(0\,(w-text_w))\,x):y=if(eq(mod(t\,10)\,0)\,rand(0\,(h-text_h))\,y)" -c:a copy -c:v libx264 -preset slow -crf 18 V13.mkv



it does exactly what I want.
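
One way to sidestep ffmpeg-python's own argument escaping entirely (a sketch, assuming the CLI line above is exactly what should run) is to invoke that command from Python with subprocess, passing the filter string as a single argument:

import subprocess

# The \, escapes belong to the filtergraph syntax and must stay; the outer
# double quotes from the terminal version are dropped because no shell is involved.
drawtext = ("drawtext=fontsize=30:fontfile=candara.ttf:text='testtest test.':"
            "x=if(eq(mod(t\\,15)\\,0)\\,rand(0\\,(w-text_w))\\,x):"
            "y=if(eq(mod(t\\,10)\\,0)\\,rand(0\\,(h-text_h))\\,y)")

subprocess.run(["ffmpeg", "-i", "v1.mp4", "-filter:v", drawtext,
                "-c:a", "copy", "-c:v", "libx264", "-preset", "slow", "-crf", "18",
                "V13.mkv"], check=True)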


Thanks.