
Other articles (96)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page
-
Contributing to its documentation
10 April 2011
Documentation is one of the most important and most demanding pieces of work when building a technical tool.
Any outside contribution in this area is essential: reviewing what already exists; helping to write articles aimed at users (MediaSPIP administrators or simply content producers) or at developers; creating explanatory screencasts; translating the documentation into a new language.
To do so, you can register on (...)
-
Contribute to a better visual interface
13 April 2011
MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
On other sites (6030)
-
Impossible to convert between the formats supported by the filter '...' - Error reinitializing filters
14 November 2023, by Fabien Biller
I am using this ffmpeg command (values removed for simplicity)


ffmpeg -hwaccel cuvid -c:v h264_cuvid -y -ss 1 -i "FILE0001.MOV" -ss 0 -i "GOPR0621.MP4" -filter_complex
  "[0:v][1:v]midequalizer[al];
   [al]yadif,lenscorrection,scale[vl];
   [1:v]lenscorrection,scale[vr];
   [vl][vr]hstack=shortest=1"
  -an -c:v h264_nvenc -preset slow "output.mp4"



on a machine with a CUDA-capable graphics card.


I get


ffmpeg version N-90979-g08032331ac Copyright (c) 2000-2018 the FFmpeg developers
 built with gcc 7.3.0 (GCC)
 configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth
 libavutil 56. 18.100 / 56. 18.100
 libavcodec 58. 19.100 / 58. 19.100
 libavformat 58. 13.101 / 58. 13.101
 libavdevice 58. 4.100 / 58. 4.100
 libavfilter 7. 21.100 / 7. 21.100
 libswscale 5. 2.100 / 5. 2.100
 libswresample 3. 2.100 / 3. 2.100
 libpostproc 55. 2.100 / 55. 2.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 00000254a8afc0c0] st: 0 edit list: 1 Missing key frame while searching for timestamp: 6006
[mov,mp4,m4a,3gp,3g2,mj2 @ 00000254a8afc0c0] st: 0 edit list 1 Cannot find an index entry before timestamp: 6006.
....
Stream mapping:
 Stream #0:0 (h264_cuvid) -> midequalizer:in0
 Stream #1:0 (h264) -> midequalizer:in1
 Stream #1:0 (h264) -> lenscorrection
 hstack -> Stream #0:0 (h264_nvenc)
 
Impossible to convert between the formats supported by the filter 'graph 0 input from stream 0:0' and the filter 'auto_scaler_0'
Error reinitializing filters!



The same command without CUDA works, i.e.


ffmpeg -y -ss 1 -i "FILE0001.MOV" -ss 0 -i "GOPR0621.MP4" -filter_complex
  "[0:v][1:v]midequalizer[al];
   [al]yadif,lenscorrection,scale[vl];
   [1:v]lenscorrection,scale[vr];
   [vl][vr]hstack=shortest=1"
  -an "output.mp4"



How do I make it work on a Windows 10 machine with CUDA?
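(Not part of the original question.) One possible reading of the failure: with -hwaccel cuvid the decoded frames stay in GPU (CUDA) memory, while midequalizer, yadif, lenscorrection and scale are software filters, so the filter graph cannot negotiate a common pixel format. A sketch of one workaround, assuming the filtering has to stay in software, is to let h264_cuvid decode into system memory (drop -hwaccel cuvid) while still encoding on the GPU with h264_nvenc:

ffmpeg -c:v h264_cuvid -y -ss 1 -i "FILE0001.MOV" -ss 0 -i "GOPR0621.MP4" \
  -filter_complex "[0:v][1:v]midequalizer[al];[al]yadif,lenscorrection,scale[vl];[1:v]lenscorrection,scale[vr];[vl][vr]hstack=shortest=1" \
  -an -c:v h264_nvenc -preset slow "output.mp4"

An alternative that may also work is to keep -hwaccel cuvid and start the graph with hwdownload,format=nv12 on the first input, so the frames are copied back to system memory before the software filters run; either way the filtering itself is done on the CPU.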


-
ffmpeg configuration difficulty with filter_complex and hls
4 February 2020, by akc42
I am trying to set up ffmpeg so that it will record from a microphone and, at the same time, encode the result into a .flac file for later syncing up with some video I will be making.
The microphone is plugged into a Raspberry Pi (4B), and I am currently trying it with a Blue Yeti mic, but I think I can do the same with a Focusrite Scarlett 2i2 plugged in instead. However, I was puzzling about how to start the server recording and decided I could do it from a web browser if I made a simple Node.js server that spawned ffmpeg as a child process.
But then I was inspired by this sample ffmpeg command, which displays a volume meter (on my desktop with a graphical interface):
ffmpeg -hide_banner -i 'http://distribution.bbb3d.renderfarming.net/video/mp4/bbb_sunflower_1080p_30fps_normal.mp4' -filter_complex "showvolume=rate=25:f=0.95:o=v:m=p:dm=3:h=80:w=480:ds=log:s=2" -c:v libx264 -c:a aac -f mpegts - | ffplay -window_title "Peak Volume" -i -
What if I could stream the video produced by the showvolume filter to the web browser that I am using to control the ffmpeg process (note: I don't want to send the audio with this)? So I tried to read up on HLS (since the control device will be an iPad - in fact, that is what I will record the video on), and came up with this command:
ffmpeg -hide_banner -f alsa -ac 2 -ar 48k -i hw:CARD=Microphone -filter_complex "asplit=2[main][vol],[vol]showvolume=rate=25:f=0.95:o=v:m=p:dm=3:h=80:w=480:ds=log:s=2[vid]" -map [main] -c:a:0 flac recordings/session_$(date +%a_%d_%b_%Y___%H_%M_%S).flac -map [vid] -preset veryfast -g 25 -an -sc_threshold 0 -c:v:1 libx264 -b:v:1 2000k -maxrate:v:1 2200k -bufsize:v:3000k -f hls -hls_time 4 -hls_flags independent_segments delete_segments -strftime 1 -hls_segment_filename recordings/volume-%Y%m%d-%s.ts recordings/volume.m3u8
The problem is that I am finding the documentation a bit opaque as to what happens once I have generated two streams - the main audio and a video stream - and this command throws both a warning and an error.
The warning is
Guessed Channel Layout for Input Stream #0.0 : stereo
and the error is
[NULL @ 0x1baa130] Unable to find a suitable output format for 'hls'
hls: Invalid argument
What I am trying to do is set up stream labels [main] and [vol] as I split the incoming audio into two parts, then pass [vol] through the "showvolume" filter and end up with stream [vid].
I think I then need to use -map to specify encoding the [main] stream down to FLAC and writing it out to a file (the file exists after I run the command, although it has zero length), and use another -map to pass [vid] through to the -f hls section. But I think I have something wrong at this stage. Can someone help me get this command right?
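(Not from the original thread.) One plausible cause of the error, assuming the command is pasted exactly as run: -bufsize:v:3000k has no space between the option and its value, so ffmpeg consumes the following -f token as the bufsize value and then treats hls as an output filename, which matches "Unable to find a suitable output format for 'hls'". The two -hls_flags values also need to be joined with +, and within each output the per-output specifiers can simply be -c:a and -c:v. A corrected sketch under those assumptions, keeping the original labels and paths, might look like:

ffmpeg -hide_banner -f alsa -ac 2 -ar 48k -i hw:CARD=Microphone \
  -filter_complex "asplit=2[main][vol],[vol]showvolume=rate=25:f=0.95:o=v:m=p:dm=3:h=80:w=480:ds=log:s=2[vid]" \
  -map "[main]" -c:a flac "recordings/session_$(date +%a_%d_%b_%Y___%H_%M_%S).flac" \
  -map "[vid]" -an -c:v libx264 -preset veryfast -g 25 -sc_threshold 0 \
  -b:v 2000k -maxrate:v 2200k -bufsize:v 3000k \
  -f hls -hls_time 4 -hls_flags independent_segments+delete_segments -strftime 1 \
  -hls_segment_filename recordings/volume-%Y%m%d-%s.ts recordings/volume.m3u8

The "Guessed Channel Layout" message is only a warning and is not what breaks the command.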
-
How to encode 3840 nb_samples to a codec that asks for 1024 using ffmpeg
26 July 2018, by Gabulit
FFmpeg has an example muxing code on https://ffmpeg.org/doxygen/4.0/muxing_8c-example.html
This code generates frame-by-frame video and audio. What I am trying to do is to change

ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, c->channel_layout,
                                   c->sample_rate, nb_samples);

to

ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, c->channel_layout,
                                   c->sample_rate, 3840);

so that it generates 3840 samples per channel instead of 1024 samples, which is the default for nb_samples with the AAC codec.
I tried to combine code from https://ffmpeg.org/doxygen/4.0/transcode_aac_8c-example.html, which has an example of buffering the frames.
My resulting program crashes when generating audio samples, after a couple of frames, when assigning *q++ a new value at the first iteration:
/* Prepare a 16 bit dummy audio frame of 'frame_size' samples and
 * 'nb_channels' channels. */
static AVFrame *get_audio_frame(OutputStream *ost)
{
    AVFrame *frame = ost->tmp_frame;
    int j, i, v;
    int16_t *q = (int16_t*)frame->data[0];

    /* check if we want to generate more frames */
    if (av_compare_ts(ost->next_pts, ost->enc->time_base,
                      STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)
        return NULL;

    for (j = 0; j < frame->nb_samples; j++) {
        v = (int)(sin(ost->t) * 10000);
        for (i = 0; i < ost->enc->channels; i++)
            *q++ = v;
        ost->t += ost->tincr;
        ost->tincr += ost->tincr2;
    }

    frame->pts = ost->next_pts;
    ost->next_pts += frame->nb_samples;

    return frame;
}

Maybe I don't get the logic behind encoding.
Here is the full source that I've come up with:
The reason I am trying to accomplish this task is that I have a capture card SDK that outputs 2-channel, 16-bit raw PCM at 48000 Hz with 3840 samples per channel, and I am trying to encode its output to AAC. So basically, if I get the muxing example to work with 3840 nb_samples, it will help me understand the concept.
I have already looked at How to encode resampled PCM-audio to AAC using ffmpeg-API when input pcm samples count not equal 1024, but that example uses "encodeFrame", which the examples in the ffmpeg documentation don't use, unless I am mistaken.
Any help is greatly appreciated.
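A minimal sketch (not from the post) of the buffering idea the transcode_aac example is built around: write each captured frame into an AVAudioFifo and read frame_size-sample frames back out for the AAC encoder, so the 3840-sample input never has to match the encoder's 1024-sample frames. The function name feed_and_encode is made up for illustration, and the samples are assumed to already be in the encoder's sample format (i.e. after the swr_convert step of the muxing example):

#include <libavcodec/avcodec.h>
#include <libavutil/audio_fifo.h>
#include <libavutil/error.h>
#include <libavutil/frame.h>

/* fifo is created once with:
 *     AVAudioFifo *fifo = av_audio_fifo_alloc(enc->sample_fmt, enc->channels, 1);
 * out is an AVFrame allocated with enc->frame_size samples in enc->sample_fmt. */
static int feed_and_encode(AVCodecContext *enc, AVAudioFifo *fifo,
                           AVFrame *in, AVFrame *out, int64_t *next_pts)
{
    /* push the whole captured frame (e.g. 3840 samples) into the FIFO */
    int ret = av_audio_fifo_write(fifo, (void **)in->data, in->nb_samples);
    if (ret < in->nb_samples)
        return AVERROR(ENOMEM);

    /* pull encoder-sized chunks (1024 for AAC) back out and encode them */
    while (av_audio_fifo_size(fifo) >= enc->frame_size) {
        ret = av_frame_make_writable(out);
        if (ret < 0)
            return ret;

        out->nb_samples = enc->frame_size;
        ret = av_audio_fifo_read(fifo, (void **)out->data, enc->frame_size);
        if (ret < enc->frame_size)
            return AVERROR(EINVAL);

        out->pts = *next_pts;              /* pts counted in samples */
        *next_pts += out->nb_samples;

        ret = avcodec_send_frame(enc, out);
        if (ret < 0)
            return ret;
        /* avcodec_receive_packet() and av_interleaved_write_frame() go here */
    }
    return 0;
}

At end of input, whatever is left in the FIFO (fewer than frame_size samples) can usually be sent as a final short frame, followed by avcodec_send_frame(enc, NULL) to flush the encoder.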