
Other articles (67)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects or individuals, rapid deployment of multiple unique sites, and creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
-
Regular Cron tasks for the farm
1 December 2010. Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of every instance in the farm on a regular basis. Combined with a system Cron on the farm's central site, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)
On other sites (8462)
-
How to automatically add date to output file generated in ffmpeg?
4 April 2019, by Maxime Richard. I am using Terminal to run ffmpeg commands (macOS) in order to record radio shows streamed online. The stream is in m3u8 and I want to output it as mp3. So far so good, I am able to achieve that. However, I'd like the output file to read
YYYYMMDD-fm93-segal.mp3
where YYYYMMDD
is the date the recording was made. I am not able to achieve this using
-strftime 1
for some reason. When using my code, the output file reads %Y%m&d-fm93-segal.mp3
instead of replacing the placeholders with the real date. Here is the line I'm using:
ffmpeg -i "https://cogecomedia.leanstream.co/cogecomedia/CJMFFM.stream/playlist.m3u8" -acodec mp3 -strftime 1 "%Y%m%d-fm93-segal.mp3"
Does anyone know why, and could you help me with that?
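For context (not part of the original question): as far as I know, the -strftime option is only honoured by a few muxers such as segment, hls and image2, not by the mp3 muxer, so a common workaround is to let the shell build the dated filename before ffmpeg ever sees it. A minimal sketch, assuming the same stream URL:
# the shell expands $(date +%Y%m%d) to e.g. 20190404; ffmpeg just receives a literal filename
ffmpeg -i "https://cogecomedia.leanstream.co/cogecomedia/CJMFFM.stream/playlist.m3u8" -acodec mp3 "$(date +%Y%m%d)-fm93-segal.mp3"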
-
vsync flag usage in ffmpeg while filtering
13 October 2022, by antortjim. I am trying to apply a threshold to an input video with ffmpeg, but I observe the following warning emitted for every processed frame:


[mp4 @ 0x56360181a200] Non-monotonous DTS in output stream 0:0; previous: 182272, current: 182272; changing to 182273. This may result in incorrect timestamps in the output file.



where the previous and current values are always 1 less than the value to which the DTS (decoding timestamp) is changed.


I have noticed this warning is emitted only if I set
-vsync passthrough
(which I changed from the original -vsync 0 seen in many online examples).

# input.mp4 has resolution 790x790
ffmpeg -vsync passthrough -i input.mp4 -f lavfi -i color=808080:s=790x790 -f lavfi -i color=black:s=790x790 -f lavfi -i color=white:s=790x790 -filter_complex '[0:v][1:v][2:v][3:v]threshold' -an -c:v h264_nvenc threshold.mp4
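
To narrow down whether the duplicate DTS values are already present in the source or are introduced by the filter graph, inspecting the input packets can help. This is a diagnostic sketch that is not part of the original post; it assumes the same input.mp4:

# prints pts/dts/duration of every video packet in the source, one line per packet
ffprobe -v error -select_streams v:0 -show_entries packet=pts,dts,duration -of csv input.mp4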



Shall I leave the vsync flag set to the default (auto, or -1), or is
-vsync passthrough
essential to guarantee the frames are displayed in the right order? In that case, how do I handle this warning? Some other online examples I found of users experiencing this warning are different from mine, because in their case they are concatenating videos (see 1, 2).

From the documentation on the -vsync flag, at the end, I see:



With -map you can select from which stream the timestamps should be taken. You can leave either video or audio unchanged and sync the remaining stream(s) to the unchanged one




Maybe this warning should be handled with -map? But I don't know how.

Side note: I keep getting the deprecation warning telling me to change -vsync to -fps_mode; however, doing so breaks the command.
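
For reference (this is my own guess, not part of the original post): -fps_mode is an output, per-stream option, whereas -vsync was also tolerated as a global option placed before the input, so a literal one-for-one swap at the same position can fail. A sketch of the same command with the flag moved to the output side, untested:

# -fps_mode passthrough is placed after the inputs, as an option of the output file
ffmpeg -i input.mp4 -f lavfi -i color=808080:s=790x790 -f lavfi -i color=black:s=790x790 -f lavfi -i color=white:s=790x790 -filter_complex '[0:v][1:v][2:v][3:v]threshold' -an -c:v h264_nvenc -fps_mode passthrough threshold.mp4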

FFmpeg version:


commit 28ac2279adb860ea8b90d3073603912bf3eb6a83 from the ffmpeg master branch

ffmpeg version N-108625-g28ac2279ad Copyright (c) 2000-2022 the FFmpeg developers
built with gcc 9 (Ubuntu 9.4.0-1ubuntu1~20.04.1)
configuration: --enable-nonfree --enable-cuda-nvcc --enable-libnpp --enable-gpl --extra-cflags=-I/usr/local/cuda/include --extra-ldflags=-L/usr/local/cuda/lib64 --disable-static --enable-shared
libavutil 57. 39.101 / 57. 39.101
libavcodec 59. 50.100 / 59. 50.100
libavformat 59. 34.101 / 59. 34.101
libavdevice 59. 8.101 / 59. 8.101
libavfilter 8. 49.101 / 8. 49.101
libswscale 6. 8.112 / 6. 8.112
libswresample 4. 9.100 / 4. 9.100
libpostproc 56. 7.100 / 56. 7.100



OS: Ubuntu 20.04.4 LTS


-
ffmpeg libswresample channel mapping
21 March 2024, by Joseph Katrinka. I'm currently working on some C++ that uses ffmpeg to take audio from a stereo file and make a mono audio file using only the samples from one channel.
I haven't been able to find any examples of people using the swr_set_channel_mapping call online, so I'm wondering if anyone knows the correct usage.
Right now I'm doing something like this:


swr_alloc_set_opts2(&swrCtx, &out_channel_layout, AV_SAMPLE_FMT_S16, codecCtx->sample_rate,
                    &codecCtx->ch_layout, codecCtx->sample_fmt, codecCtx->sample_rate, 0, NULL);

int* channel_mapping = (int*)av_mallocz(2 * sizeof(int));
if (useLeft)
{
    channel_mapping[0] = AV_CH_FRONT_LEFT;
}
else
{
    channel_mapping[0] = AV_CH_FRONT_RIGHT;
}
swr_set_channel_mapping(swrCtx, channel_mapping);

if (swr_init(swrCtx) < 0)
{
    DBG("failed to init resampler");
    ffmpegCleanup();
    return false;
}




Is this the correct way to do this? It's been giving me some problems and I'm worried I could be doing something wrong.
Thanks.


I've tried different ways of defining channel_mapping with no success. I'm not sure what the correct way is, and an example would be pretty useful.
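
For what it's worth, here is a hedged sketch of the usage as I understand the libswresample headers: swr_set_channel_mapping() expects an array of input channel indices, one entry per output channel (-1 mutes a channel), rather than AV_CH_* layout masks. The helper name setupMonoResampler and the surrounding structure are made up for illustration and untested against the poster's build:

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>
#include <libswresample/swresample.h>
}

// Stereo input -> mono output, keeping only the left or the right channel.
static bool setupMonoResampler(SwrContext **swrCtx, const AVCodecContext *codecCtx, bool useLeft)
{
    AVChannelLayout outLayout;
    av_channel_layout_default(&outLayout, 1);  // 1 channel = mono

    if (swr_alloc_set_opts2(swrCtx, &outLayout, AV_SAMPLE_FMT_S16, codecCtx->sample_rate,
                            &codecCtx->ch_layout, codecCtx->sample_fmt, codecCtx->sample_rate,
                            0, NULL) < 0)
        return false;

    // One entry per output channel: input index 0 is the left channel, 1 the right.
    const int channel_map[1] = { useLeft ? 0 : 1 };
    if (swr_set_channel_mapping(*swrCtx, channel_map) < 0)
        return false;

    return swr_init(*swrCtx) >= 0;
}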