
Other articles (107)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable". The zip file provided here only contains the sources of MediaSPIP in its standalone version. To get a working installation, you must manually install all software dependencies on the server. If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)
-
Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized. Once activated, a preconfiguration is applied automatically by MediaSPIP init so that the new feature is immediately operational; no separate configuration step is therefore required.
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (20121)
-
How do I compose three overlapping videos w/audio in ffmpeg?
10 April 2021, by Idan Gazit
I have three videos: let's call them intro, recording and outro. My ultimate goal is to stitch them together like so:



Both intro and outro have alpha (ProRes 4444) and a "wipe" to transition, so when overlaying, they must be on top of the recording. The recording is h264, and ultimately I'm encoding out for YouTube with these recommended settings.

I've figured out how to make the thing work correctly for intro + recording:

$ ffmpeg \
 -i intro.mov \
 -i recording.mp4 \
 -filter_complex \
 "[1:v]tpad=start_duration=10:start_mode=add:color=black[rv]; \
 [1:a]adelay=delays=10s:all=1[ra]; \
 [rv][0:v]overlay[v];[0:a][ra]amix[a]" \
 -map "[a]" -map "[v]" \
 -movflags faststart -c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
 out.mp4 -y



However, I can't use the tpad trick for the outro because it would render black frames over everything.

I've tried various iterations with setpts/asetpts as well as passing -itsoffset for the input, but haven't come up with a solution that works correctly for both video and audio. This tries to start the outro at 16 seconds into the recording (10s start + 16s of recording is how I got to setpts=PTS+26/TB), but it doesn't work correctly: I get both intro and outro audio from the first frame, and the recording audio cuts out when the outro overlay begins:

$ ffmpeg \
 -i intro.mov \
 -i recording.mp4 \
 -i outro.mov \
 -filter_complex \
 "[1:v]tpad=start_duration=10:start_mode=add:color=black[rv]; \
 [1:a]adelay=delays=10s:all=1[ra]; \
 [2:v]setpts=PTS+26/TB[outv]; \
 [2:a]asetpts=PTS+26/TB[outa]; \
 [rv][0:v]overlay[v4]; \
 [0:a][ra]amix[a4]; \
 [v4][outv]overlay[v]; \
 [a4][outa]amix[a]" \
 -map "[a]" -map "[v]" \
 -movflags faststart -c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
 out.mp4 -y



I think the right solution lies in the direction of using setpts correctly, but I haven't been able to wrap my brain fully around it. Or maybe I'm making life complicated and there's an easier approach?
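
One possible direction (an untested sketch, reusing the hard-coded 26s offset from above): keep the setpts delay on the outro video, but delay the outro audio with adelay, which inserts real silence rather than only shifting timestamps the way asetpts does, and mix all three audio streams in a single amix:

$ ffmpeg \
 -i intro.mov \
 -i recording.mp4 \
 -i outro.mov \
 -filter_complex \
 "[1:v]tpad=start_duration=10:start_mode=add:color=black[rv]; \
 [1:a]adelay=delays=10s:all=1[ra]; \
 [2:v]setpts=PTS+26/TB[outv]; \
 [2:a]adelay=delays=26s:all=1[outa]; \
 [rv][0:v]overlay[v1]; \
 [v1][outv]overlay[v]; \
 [0:a][ra][outa]amix=inputs=3[a]" \
 -map "[a]" -map "[v]" \
 -movflags faststart -c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
 out.mp4 -y

Note that amix scales its inputs by default, so the overall loudness may need adjusting afterwards.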

In the nice-to-have realm, I'd love to be able to specify the start of the outro relative to the end of the recording. I will be doing this to a bunch of recordings of varying lengths. It would be nice to have one command to invoke on everything rather than figuring out a specific timestamp for each one.
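
For that nice-to-have, the offset doesn't have to be hard-coded: ffprobe can report the recording's duration, and a shell variable can be substituted into the filtergraph. A rough sketch (the file name and the 2-second overlap are only illustrative):

$ REC_DUR=$(ffprobe -v error -show_entries format=duration \
 -of default=noprint_wrappers=1:nokey=1 recording.mp4)
$ OFFSET=$(awk -v d="$REC_DUR" 'BEGIN { printf "%d", d + 10 - 2 }')

Here OFFSET is the 10s intro pad plus the recording's length minus a 2s overlap, rounded down; it could then replace the hard-coded 26 (setpts=PTS+${OFFSET}/TB, adelay=delays=${OFFSET}s:all=1), so a single command adapts to recordings of any length.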

Thank you!


-
avcodec/dfpwmenc: Correctly pad input
22 June, by Andreas Rheinhardt
Before this patch, the DFPWM1a encoder was marked as supporting
variable frame sizes. The DFPWM1a format converts eight bytes
of input into one output byte, and so it simply computed the
number of output bytes as
frame->nb_samples * frame->ch_layout.nb_channels / 8 +
(frame->nb_samples % 8 > 0 ? 1 : 0)
This has several bugs:
a) The additional byte leads to eight additional input bytes being read; this can read into the frame's padding, i.e. the data can be uninitialized.
b) The criterion for whether one should pad is wrong: nb_samples * nb_channels should be tested for divisibility by eight.
c) The created frames can be undecodable (at least with our decoder): our decoder requires the number of bits per frame to be divisible by the number of channels, yet the above approach does not guarantee this.
d) The padding will be added in the middle of the stream (potentially for every packet).
This commit fixes all of this by removing the variable frame size cap and using AVCodecInternal.pad_samples to pad the last frame so that nb_samples * nb_channels is always a multiple of eight.
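(Worked example for illustration, not part of the commit: with the old formula, a stereo frame with nb_samples = 4 needs exactly 4 * 2 / 8 = 1 output byte, but because nb_samples % 8 != 0 an extra byte was emitted, so the encoder consumed eight extra input bytes beyond the frame; testing nb_samples * nb_channels for divisibility by eight avoids this.)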
The lavf-dfpwm FATE-test was affected by a). The frames originated from lavfi and were part of an audio frame pool, so that the padding contained data from an earlier (bigger) frame. Now the last frame is properly filled with silence.
Reported-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
-
checkasm: Generalize crash handling
14 December 2023, by Martin Storsjö
This replaces the riscv specific handling from 7212466e735aa187d82f51dadbce957fe3da77f0 (which essentially is reverted) with a different implementation of the same (plus a bit more), based on the corresponding feature in dav1d's checkasm, supporting both Unix and Windows.
See in particular the dav1d commits 0b6ee30eab2400e4f85b735ad29a68a842c34e21, 0421f787ea592fd2cc74c887f20b8dc31393788b, 8501a4b20135f93a4c3b426468e2240e872949c5 and d23e87f7aee26ddcf5f7a2e185112031477599a7, authored by Henrik Gramner.
The overall approach compared to the existing implementation for riscv is the same: set up a signal handler, store the state with sigsetjmp, and jump out of the crashing function with siglongjmp.
The main difference is in what happens when the signal handler is invoked. In the previous implementation, it would resume from right before calling the crashing function, and then skip that call based on the setjmp return value. In the imported implementation from dav1d, we return to right before the check_func() call, which will skip testing the current function (as the pointer is the same as it was before).
Other differences are:
- Support for other signal handling mechanisms (Windows AddVectoredExceptionHandler)
- Using RtlCaptureContext/RtlRestoreContext instead of setjmp/longjmp on Windows with SEH
- Only catching signals once per function: if more than one signal is delivered before signal handling is re-enabled, any further signal is handled as it would be without our handler
- Not using an arch-specific signal handler written in assembly
Signed-off-by: Martin Storsjö <martin@martin.st>