
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
Other articles (80)
-
Configuring language support
15 November 2010, by
Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administrer" section of the site.
From there, in the navigation menu, you can access a "Gestion des langues" (language management) section that lets you enable support for new languages.
Each newly added language can still be deactivated as long as no object has been created in that language; once one has, it becomes greyed out in the configuration and (...)
-
User profiles
12 April 2011, by
Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
Users can edit their profile from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)
-
Use, discuss, criticize
13 April 2011, by
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
On other sites (7144)
-
Playing RTSP in WPF application with low latency using FFMPEG / FFMediaElement (FFME)
22 March 2019, by Paboka
I’m trying to use the FFMediaElement (FFME, a WPF MediaElement replacement based on FFmpeg) component to play RTSP live video in my WPF application.
I have a good connection to my camera and I want to play it with the minimum available latency.
I’ve reduced the latency by changing ProbeSize to its minimal value:

private void Media_OnMediaInitializing(object Sender, MediaInitializingRoutedEventArgs e)
{
    e.Configuration.GlobalOptions.ProbeSize = 32;
}

But I still have about 1 second of latency from the very beginning of the stream. I mean, when I start playing, I have to wait for 1 second until the video appears, and then I have 1 s of latency.
I’ve also tried changing the following parameters:

e.Configuration.GlobalOptions.EnableReducedBuffering = true;
e.Configuration.GlobalOptions.FlagNoBuffer = true;
e.Configuration.GlobalOptions.MaxAnalyzeDuration = TimeSpan.Zero;

but it gave no result.
I measured the time interval between FFmpeg output lines (the number in the first column is the time elapsed since the previous line, in ms):
---- OpenCommand: Entered
39 FFInterop.Initialize: FFmpeg v4.0
0 EVENT START: MediaInitializing
0 EVENT DONE : MediaInitializing
379 EVENT START: MediaOpening
0 EVENT DONE : MediaOpening
0 COMP VIDEO: Start Offset: 0,000; Duration: N/A
41 SYNC-BUFFER: Started.
609 SYNC-BUFFER: Finished. Clock set to 1534932751,634
0 EVENT START: MediaOpened
0 EVENT DONE : MediaOpened
0 EVENT START: BufferingStarted
0 EVENT DONE : BufferingStarted
0 OpenCommand: Completed
0 V BLK: 1534932751,634 | CLK: 1534932751,634 | DFT: 0 | IX: 0 | PQ: 0,0k | TQ: 0,0k
0 Command Queue (1 commands): Before ProcessNext
0 Play - ID: 404 Canceled: False; Completed: False; Status: WaitingForActivation; State:
94 V BLK: 1534932751,675 | CLK: 1534932751,699 | DFT: 24 | IX: 1 | PQ: 0,0k | TQ: 0,0k

So, the "sync-buffering" step takes most of the time.
Is there any FFmpeg parameter that allows reducing the size of this buffer?
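
For context, the GlobalOptions above appear to map onto libavformat’s demuxer options (probesize, analyzeduration, the nobuffer fflag), so the same settings can be exercised directly against the C API. The following is a minimal, hedged C sketch, not FFME code: it assumes FFmpeg 4.x headers, a reachable rtsp:// URL (the address below is a placeholder), and that forcing rtsp_transport=tcp is acceptable for the camera.

#include <libavformat/avformat.h>
#include <libavutil/dict.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    /* Placeholder URL; pass your own camera address as argv[1]. */
    const char *url = argc > 1 ? argv[1] : "rtsp://ip-camera/stream";
    AVFormatContext *fmt = NULL;
    AVDictionary *opts = NULL;
    int ret;

    avformat_network_init();

    /* Rough counterparts of ProbeSize, MaxAnalyzeDuration and FlagNoBuffer */
    av_dict_set(&opts, "probesize", "32", 0);
    av_dict_set(&opts, "analyzeduration", "0", 0);
    av_dict_set(&opts, "fflags", "nobuffer", 0);
    /* Optional assumption: force TCP to avoid UDP reordering delays */
    av_dict_set(&opts, "rtsp_transport", "tcp", 0);

    ret = avformat_open_input(&fmt, url, NULL, &opts);
    av_dict_free(&opts);
    if (ret < 0) {
        fprintf(stderr, "open failed: %s\n", av_err2str(ret));
        return 1;
    }

    ret = avformat_find_stream_info(fmt, NULL);
    if (ret < 0)
        fprintf(stderr, "find_stream_info: %s\n", av_err2str(ret));

    printf("streams: %u\n", fmt->nb_streams);
    avformat_close_input(&fmt);
    return 0;
}

Timing avformat_open_input() plus avformat_find_stream_info() with and without these options gives a lower bound on the start-up delay any wrapper (FFME included) can achieve for that camera.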
-
Use ffmpeg libraries to convert stream formats
17 September 2021, by Syrinx
I'm attempting to write a small program and link it to a minimal set of ffmpeg libraries, like libavformat and whatever other libraries I need.


I am looking for documentation to get me started, or maybe a quick fix to the example program I am using.


I know ffmpeg (the project) provides example programs to help developers get started. I'm using the transcoding example, as it's close to my end goal, but it exits during init with an error about an audio issue.


Here I am using the transcoding example program that comes with ffmpeg v4.4, on Ubuntu 18.04. My input source has one video channel (h264) and one audio channel (pcm_mulaw).


$ LD_LIBRARY_PATH=../../dist/lib ./transcoding rtsp://ip-camera/stream out.flv
...
 Stream #0:0: Video: h264, yuv420p, 1280x720, q=2-31, 20 tbn
 Stream #0:1: Audio: pcm_mulaw, 8000 Hz, 0 channels, s16
[auto_resampler_0 @ 0x55da787fa140] [SWR @ 0x55da787fa5c0] Rematrix is needed between mono and 0 channels but there is not enough information to do it
[auto_resampler_0 @ 0x55da787fa140] Failed to configure output pad on auto_resampler_0



In libswresample/swresample.c, around line 320:

if ((!s->out_ch_layout || !s->in_ch_layout) && s->used_ch_count != s->out.ch_count && !s->rematrix_custom) {
    av_log(s, AV_LOG_ERROR, "Rematrix is needed between %s and %s "
           "but there is not enough information to do it\n", l1, l2);
    ret = AVERROR(EINVAL);
    goto fail;
}
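
The check quoted above fires when one side of the conversion has a channel count but no channel layout. A hedged workaround sketch, assuming FFmpeg 4.4's pre-ch_layout API and a decoder context like the codec_ctx the transcoding example opens in open_input_file(); whether this is the right place to intervene depends on which side of the resampler is missing the layout.

#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>

/* Hypothetical helper: give an audio decoder a default channel layout when
 * the demuxer reports a channel count but no layout, as pcm_mulaw over RTSP
 * often does. Call it after avcodec_open2() and before building filters. */
static void fixup_channel_layout(AVCodecContext *dec_ctx)
{
    if (dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO &&
        dec_ctx->channel_layout == 0 && dec_ctx->channels > 0)
        dec_ctx->channel_layout = av_get_default_channel_layout(dec_ctx->channels);
}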



I'd really like it if I could make the transcoding example program work (fix it, or maybe use it appropriately if I am misunderstanding something). But short of that, where should I look for documentation about using the ffmpeg libraries ?


I don't even care about the audio. If I can just disable the audio, I would be happy with that solution. I tried tracking the "-an" option to ffmpeg (the program) to see how it does that in source code, but the options handling is a mess and I can't distinguish the parts of the code that I need from all the noise.


ffmpeg has web pages like this that aren't useful at all. There is documentation in the source code that looks like it should be viewed as HTML, but I don't see it exported anywhere. "make doc" generates a very small set of man pages that are insufficient to get me started.


-
build: add support for building CUDA files with clang
30 July 2019, by Rodger Combs
build: add support for building CUDA files with clang
This avoids using the CUDA SDK at all; instead, we provide a minimal
reimplementation of the basic functionality that lavfi actually uses.
It generates very similar code to what NVCC produces.

The header contains no implementation code derived from the SDK.
The function and type declarations are derived from the SDK only to the
extent required to build a compatible implementation. This is generally
accepted to qualify as fair use.

Because this option does not require the proprietary SDK, it does not require
the "--enable-nonfree" flag in configure.

Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>