
Media (2)
-
Core Media Video
4 April 2013
Updated: June 2013
Language: French
Type: Video
-
Bee video in portrait orientation
14 May 2011
Updated: February 2012
Language: French
Type: Video
Other articles (76)
-
Improving the base version
13 September 2013
Nicer multiple selection
The Chosen plugin improves the usability of multiple-selection fields. Compare the two following images.
To use it, enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
-
Contributing to its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so it can reach new linguistic communities.
This is done through SPIP's translation interface, where all of MediaSPIP's language modules are available. Just subscribe to the translators' mailing list to ask for more information.
Currently MediaSPIP is only available in French and (...)
-
Custom menus
14 November 2010
MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
This lets channel administrators configure those menus in detail.
Menus created when the site is initialized
By default, three menus are created automatically when the site is initialized: The main menu; Identifier: barrenav; This menu is usually inserted at the top of the page after the header block, and its identifier makes it compatible with Zpip-based templates; (...)
On other sites (8744)
-
checkasm/hevc_pel: Fix stack buffer overreads
28 September 2021, by Andreas Rheinhardt
This patch increases several stack buffers in order to fix
stack-buffer-overflows (e.g. in put_hevc_qpel_uni_hv_9 in
line 814 of hevcdsp_template.c) detected with ASAN in the hevc_pel
checkasm test.
The buffers are increased by the minimal amount necessary
in order not to mask potential future bugs.
Reviewed-by: Martin Storsjö <martin@martin.st>
Reviewed-by: "zhilizhao(赵志立)" <quinkblack@foxmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
-
Create a mkv file with colored background and containing a given audio and subtitle stream
25 May 2023, by rdrg109
Table of contents


- The context
- Minimal working example
- What I've tried
  - Create a mkv file with colored background and an audio stream
  - Create a mkv file with colored background, an audio stream and a subtitles stream
- The question

The context


I have a *.flac file and a *.srt file. I want to merge those files into an MKV file, but at the same time I want to add a video stream. I want the video stream to show a green background the entire time.



Minimal working example


For our experimentation, let's create two sample files: one *.flac file and one *.srt file.

The following command creates a *.flac file that lasts 60 seconds and contains a sine wave.

$ ffmpeg -y -f lavfi -i "sine=f=1000:d=60" input.flac



The following command creates a *.srt file. Note that our last subtitle lasts until the sixth second; this is intended.

$ cat << EOF > input.srt
1
00:00:00,000 --> 00:00:03,000
This is the first subtitle in a
SRT file.

2
00:00:03,000 --> 00:00:06,000
This is the second subtitle in a
SRT file.
EOF
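As a sanity check (a sketch of mine, not part of the original post), the SRT cue timestamps can be parsed to confirm that the last subtitle indeed ends at the sixth second:

```python
# Parse SRT "HH:MM:SS,mmm" timestamps and report the end of the last cue.
import re

srt_text = """1
00:00:00,000 --> 00:00:03,000
This is the first subtitle in a
SRT file.

2
00:00:03,000 --> 00:00:06,000
This is the second subtitle in a
SRT file.
"""

def to_seconds(ts: str) -> float:
    """Convert an SRT timestamp like '00:00:06,000' to seconds."""
    h, m, rest = ts.split(":")
    s, ms = rest.split(",")
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000

# Every cue timing line looks like "start --> end"; collect the end times.
ends = [to_seconds(m.group(2))
        for m in re.finditer(r"(\d\d:\d\d:\d\d,\d\d\d) --> (\d\d:\d\d:\d\d,\d\d\d)", srt_text)]
print(max(ends))  # 6.0
```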





What I've tried




Create a mkv file with colored background and an audio stream


I know how to create an MKV file containing a given audio stream and a colored background as the video stream.

The following command creates an MKV file containing input.flac as the audio stream and a green background as the video stream. The MKV file has the same duration as input.flac.

$ ffmpeg \
 -y \
 -f lavfi \
 -i color=c=green:s=2x2 \
 -i input.flac \
 -c:v libx264 \
 -c:a copy \
 -shortest \
 output.mkv



The following command shows the duration of the streams in the resulting file.


$ ffprobe -v error -print_format json -show_entries stream=codec_type:stream_tags=duration output.mkv | jq -r ''



{
 "programs": [],
 "streams": [
 {
 "codec_type": "video",
 "tags": {
 "DURATION": "00:00:58.200000000"
 }
 },
 {
 "codec_type": "audio",
 "tags": {
 "DURATION": "00:01:00.000000000"
 }
 }
 ]
}
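The DURATION tags use HH:MM:SS.nnnnnnnnn notation; a small helper (my own sketch, not from the original post) converts them to seconds, which makes the small video/audio mismatch easy to compare:

```python
def duration_to_seconds(tag: str) -> float:
    """Convert an ffprobe DURATION tag like '00:00:58.200000000' to seconds."""
    h, m, s = tag.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

video = duration_to_seconds("00:00:58.200000000")
audio = duration_to_seconds("00:01:00.000000000")
print(video, audio)  # 58.2 60.0
```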





Create a mkv file with colored background, an audio stream and a subtitles stream


To add a subtitles stream, I just need to specify the *.srt file. However, when I do this, the duration of the video is set to the time of the last subtitle in the *.srt file. This is expected because I have used -shortest. I would get the result I'm looking for if it were possible to specify the stream that -shortest gives top priority to. I haven't found this information on the Internet.

$ ffmpeg \
 -y \
 -f lavfi \
 -i color=c=green:s=2x2 \
 -i input.flac \
 -i input.srt \
 -c:v libx264 \
 -c:a copy \
 -shortest \
 output.mkv



The following command shows the duration of the streams in the resulting file. Note that the maximum duration of the resulting file is 6 seconds, while in the resulting file from the previous section it was 1 minute.


$ ffprobe -v error -print_format json -show_entries stream=codec_type:stream_tags=duration output.mkv | jq -r ''



{
 "programs": [],
 "streams": [
 {
 "codec_type": "video",
 "tags": {
 "DURATION": "00:00:01.160000000"
 }
 },
 {
 "codec_type": "audio",
 "tags": {
 "DURATION": "00:00:03.134000000"
 }
 },
 {
 "codec_type": "subtitle",
 "tags": {
 "DURATION": "00:00:06.000000000"
 }
 }
 ]
}





The question


Given a *.flac file and a *.srt file, how can I merge them into a *.mkv file so that it has the *.flac file as the audio stream, the *.srt file as the subtitles stream, and a green background as the video stream?
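The post leaves this open; one workaround (an assumption of mine, not something the question confirms) is to give the lavfi color source an explicit duration matching the audio, via its d option, so that -shortest is no longer needed at all. A sketch building that command line:

```python
# Sketch (assumption, not from the original post): set the color source's
# duration (the lavfi color source accepts d=<seconds>) to the audio length,
# so no stream depends on -shortest.
audio_duration = 60  # seconds; in practice, read this from ffprobe

cmd = [
    "ffmpeg", "-y",
    "-f", "lavfi", "-i", f"color=c=green:s=2x2:d={audio_duration}",
    "-i", "input.flac",
    "-i", "input.srt",
    "-c:v", "libx264",
    "-c:a", "copy",
    "output.mkv",
]
print(" ".join(cmd))
```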

-
How to sum audio from two streams in ffmpeg
9 February 2021, by user3188445
I have a video file with two audio streams, representing two people talking at different times. The two people never talk at the same time, so there is no danger of clipping when summing the audio. I would like to sum the audio into one stream without reducing the volume. The ffmpeg amix filter has an option that would seem to do what I want, but the option does not seem to work. Here are two minimal non-working examples (the audio tracks are [0:2] and [0:3]):


ffmpeg -i input.mkv -map 0:0 -c:v copy \
 -filter_complex '[0:2][0:3]amix' \
 output.m4v

ffmpeg -i input.mkv -map 0:0 -c:v copy \
 -filter_complex '[0:2][0:3]amix=sum=sum' \
 output.m4v



The first example diminishes the audio volume. The second example is a syntax error. I tried other variants like amix=sum and amix=sum=1, but despite the documentation I don't think the sum option exists any more. ffmpeg -h filter=amix does not mention the sum option (ffmpeg version n4.3.1).

My questions:

-
Can I sum two audio tracks with ffmpeg without losing resolution? (I'd rather not cut the volume in half and scale it back up, but if there's no other way I guess I'd accept an answer that sacrifices a bit.)

-
Is there an easy way to adjust the relative delay of one of the tracks by a few milliseconds?
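For context on why the first example gets quieter: amix by default scales each of its n inputs by roughly 1/n while all inputs are active, so mixing two tracks halves the level; a common workaround is appending a volume=2 filter (amix,volume=2), and later FFmpeg releases also added a normalize option to amix. A small numeric sketch of mine illustrating that scaling, under those assumptions:

```python
# Illustration (my sketch, not from the post): with two inputs, amix's default
# scaling divides the sum by 2, so non-overlapping speech comes out at half
# volume; a following volume=2 filter restores the original level exactly.
a = [0.4, 0.0, 0.3]   # samples where only speaker A talks
b = [0.0, 0.5, 0.0]   # samples where only speaker B talks

mixed = [(x + y) / 2 for x, y in zip(a, b)]   # what default amix does
restored = [2 * s for s in mixed]             # effect of appending volume=2

print(mixed)     # [0.2, 0.25, 0.15]
print(restored)  # [0.4, 0.5, 0.3]
```

For the second question, FFmpeg's adelay filter takes per-channel delays in milliseconds (e.g. adelay=150|150 for a stereo track), which should cover millisecond-scale alignment.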







