
Other articles (24)
-
Permissions overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page
-
Prerequisites for installation
31 January 2010
Preamble
This article is not meant to detail the installation of these programs, but rather to give information about their specific configuration.
First of all, SPIPMotion, like MediaSPIP, is designed to run on Debian-type Linux distributions or derivatives (Ubuntu, etc.). The documentation on this site therefore refers to these distributions. It can also be used on other Linux distributions, but correct operation cannot be guaranteed.
It (...)
-
The plugin: Podcasts
14 July 2010
Podcasting is once again a problem that highlights the state of standardization of data transport on the Internet.
Two interesting formats exist: the one developed by Apple, strongly geared toward the use of iTunes, whose spec is here; and the "Media RSS Module" format, which is more "open" and is notably supported by Yahoo and the Miro software.
File types supported in the feeds
Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg, .m4a audio/x-m4a, .mp4 (...)
On other sites (8146)
-
Why does ffmpeg have bigger latency on dark images?
19 November 2017, by doodoroma
I have a C# application that streams real camera images using ffmpeg. The input images are in raw, 8-bit grayscale format. I created an ffmpeg stream that takes the images on standard input and sends the output packets to websocket clients.
I start an external ffmpeg process using this configuration:
-f rawvideo -pixel_format gray -video_size " + camera.Width.ToString() + "x" + camera.Height.ToString() + " -framerate 25 -i - -f mpeg1video -b:v 512k -s 320x240 -
Typical image size is 1040*1392 pixels.
I display the stream in the browser using the jsmpeg library.
This works with reasonable latency (about 500 ms on localhost), but when the camera image is really dark (a black image), the latency is extremely large (2-3 seconds on localhost). When something bright appears again after a black period, it takes 2-3 seconds to "synchronize".
My thinking was that black images are very easy to compress and generate very small packets, so jsmpeg has almost no information to display and waits until a complete data packet arrives, but I couldn't prove this theory.
I played with ffmpeg parameters like bitrate and fps, but nothing changed.
Are there any settings I could try?
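A few existing ffmpeg options are worth experimenting with here, under the assumption (unconfirmed) that the stall comes from the tiny packets produced by black frames being buffered before they reach the websocket clients: pin the bitrate so black frames still produce reasonably sized packets, keep the GOP short, disable B-frames, and ask the muxer to flush every packet. This is a sketch only; WIDTH and HEIGHT stand in for camera.Width and camera.Height from the C# code:

ffmpeg -f rawvideo -pixel_format gray -video_size "${WIDTH}x${HEIGHT}" -framerate 25 -i - \
       -f mpeg1video -b:v 512k -minrate 512k -maxrate 512k -bufsize 512k \
       -g 25 -bf 0 -flush_packets 1 -s 320x240 -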
-
How to capture a layered window with a transparent background properly? (using BitBlt)
25 October 2016, by Mitra M
I want to capture a WPF window (a WPF layered window) with a transparent background.
To do that I tried FFmpeg, but:
1 - If I set AllowTransparency (a property of the WPF window) to false, I can capture the window with gdigrab (an ffmpeg device), but the output has a black background (I don't want a black background).
2 - If I set AllowTransparency to true, then gdigrab won't work (I get black frames only).
I have read David's nice article, where he says:
if you use BitBlt to do this, you could "or in" the CAPTUREBLT flag if you wanted to capture windows that are layered
gdigrab uses BitBlt; this is a snippet from gdigrab.c:
/* Blit screen grab */
if (!BitBlt(dest_hdc, 0, 0,
            clip_rect.right - clip_rect.left,
            clip_rect.bottom - clip_rect.top,
            source_hdc,
            clip_rect.left, clip_rect.top, SRCCOPY | CAPTUREBLT)) {
    WIN32_API_ERROR("Failed to capture image");
    return AVERROR(EIO);
}
You can see the flags (SRCCOPY | CAPTUREBLT).
Please tell me:
1 - Why can't gdigrab capture a WPF window properly?
2 - What changes should be made in this code to do this?
(Sorry for my English, I used translate.google)
Thanks
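For reference, this is roughly how the gdigrab device mentioned in the question is driven from the command line (the window title is a placeholder); it is this code path that ends in the BitBlt call quoted above:

ffmpeg -f gdigrab -framerate 25 -i title="My WPF Window" -c:v libx264 output.mp4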
-
Using ffmpeg to print the actual time and duration from the original clip onto a video clip
24 May 2023, by Kes
I am using Arch Linux, bash and ffmpeg, all up to date and at the latest versions.
I am clipping a video that is 30 seconds long and wish to clip from 5 seconds to 10 seconds into a new file, from the original.
In the bottom right hand corner of the clip I wish to show timestamps from the original video as follows:
- in the 5th second "00:00:05 / 00:00:30"
- in the 6th second "00:00:06 / 00:00:30"
- etc.
- in the 10th second "00:00:10 / 00:00:30"
This is an apparently simple question(?) but the syntax of the command is not at all obvious, and I am hoping an expert may shed some light on this.
All I have so far is the drawtext part, which does not do what I want: it only counts the elapsed time from t=0 of the clip, whereas I want it to show the timestamp and total duration of the original clip.
The drawtext I started with:

"drawtext=text='%{pts\:gmtime\:0\:%M\\\\\:%S}':fontsize=24:fontcolor=black:x=(w-text_w-10):y=(h-text_h-10)"

The ffmpeg line with drawtext I have started with:
ffmpeg -ss 00:00:05 -i "$in_file" -filter_complex "drawtext=fontfile=font.ttf:text='sample text':x=10:y=10:fontsize=12:fontcolor=white:box=1:boxcolor=black@0.5:boxborderw=5,drawtext=text='%{duration\:hms}':fontsize=12:fontcolor=black:x=(w-text_w-10):y=(h-text_h-10)" -t 5 -c:a copy -c:v libx264 out_file.mp4
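One possible approach (a sketch, not a verified answer): drawtext's pts expansion accepts an offset as its second argument, so adding the clip's start time (5 seconds here) makes the burned-in timestamp track the original video, while the total duration has to be written as literal text (00:00:30 in this example). Note that the hms format also prints milliseconds, and the backslash escaping of ':' may need adjusting for your shell:

ffmpeg -ss 00:00:05 -i "$in_file" \
  -vf "drawtext=fontfile=font.ttf:fontsize=24:fontcolor=white:box=1:boxcolor=black@0.5:x=w-text_w-10:y=h-text_h-10:text='%{pts\:hms\:5} / 00\:00\:30'" \
  -t 5 -c:a copy -c:v libx264 out_file.mp4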