
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (65)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.
-
The plugin: Podcasts.
14 July 2010
The problem of podcasting is, once again, a problem that reveals the state of standardization of data transport on the Internet.
Two interesting formats exist: the one developed by Apple, strongly geared towards the use of iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "free", notably supported by Yahoo and the Miro software;
File types supported in the feeds
Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)
-
MediaSPIP Core: Configuration
9 November 2010
By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin): a page dedicated to the general configuration of the templates; a page dedicated to the configuration of the site's home page; a page dedicated to the configuration of the sectors;
It also provides an additional page, which only appears when certain plugins are enabled, to control their display and specific features (...)
On other sites (8307)
-
Gstreamer: Hauppauge HD PVR and Multi-video file output
7 June 2014, by user3716978
I have very specific requirements for a Gstreamer pipeline that I can't seem to create. I'm running Linux Mint Mate 14 (Nadia).
I have an HD PVR, which records in MPEG TS. It presents, as its interface, a V4L2 device at /dev/video0. What I need is to somehow have it output the captured video to multiple files. That is, like dvgrab’s autosplit, it would output, say, 1800 frames, then create a new output file, then capture another 1800, and on and on.
I’ve tried numerous methods. First, using multifilesink with the keyframe next-file option does what I want, but it doesn’t seem to add stream headers to the segment files, so that they cannot play properly and/or are missing their initial keyframe.
I’ve tried limiting each individual capture length using num-buffers, and just restarting the capture after the previous one ends. This works for maybe 30 or 40 files but all the switching on and off eventually locks up the HD PVR, and it has to be power-cycled.
I could also have it dump images to the disk and work with the individual frames, but this is very slow with MPEG TS since it has to demux, decode, and reencode every frame. It eats up 100% CPU and drops about 60% of the frames on my computer.
ffmpeg doesn't work, because the HD PVR driver doesn't support ioctl. I can't seem to get mencoder to stream it this way either, but maybe it's possible?
What I need is to:
- Have a single capture stream, to avoid pissing off the HD PVR
- Have it split the stream into multiple files which can be individually analyzed
- Have those multiple files be valid videos
- Not eat up 100% of my CPU (although high utilization is OK, it needs to run at full speed). Since the stream is 1920x1080 at 60 fps, anything involving reencoding won't work; it pretty much needs to be a stream copy.
Thank you
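A possible direction, sketched here only and not tested against an HD PVR: recent GStreamer 1.x releases include a splitmuxsink element that cuts files at keyframes and writes complete headers into every segment, which is the part multifilesink was missing above. Assuming the HD PVR delivers H.264 inside MPEG-TS on /dev/video0 and that your GStreamer build actually ships splitmuxsink, something along these lines keeps a single capture running, stream-copies the video, and rolls over to a new self-contained .ts file every 60 seconds (audio handling omitted for brevity; the output path is just an example):
gst-launch-1.0 -e v4l2src device=/dev/video0 ! tsdemux ! h264parse ! splitmuxsink muxer=mpegtsmux location=/tmp/capture-%05d.ts max-size-time=60000000000
The -e flag makes an interrupt send EOS so the last segment is finalized properly, and max-size-time is given in nanoseconds.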
-
Scaling nginx-rtmp livestreaming with ffmpeg transcoding
31 March 2020, by hoodsy
I currently have a functional livestreaming setup using the prolific nginx-rtmp library, and I'm using ffmpeg to provide various resolutions of my stream. The only problem is, ffmpeg with only 2 outputs eats up 50% of my CPU. I'd like to be able to support up to 20 streamers at once – with the current demand, that would mean I need 10x the CPU power that I currently have! How can I scale my transcoding setup with nginx-rtmp and ffmpeg?

rtmp {
server {
listen 1935;
application src {
live on;
exec_push ffmpeg -i rtmp://localhost/src/$name
-c:v copy -preset:v ultrafast -b:v 512K -c:a copy -tune zerolatency -f flv rtmp://localhost/hls/$name_hi
-c:v libx264 -preset:v ultrafast -s 852x480 -b:v 128K -c:a copy -tune zerolatency -f flv rtmp://localhost/hls/$name_low;
# -c:v libx264 -s 852x480 -b:v 128K -c:a copy -tune zerolatency -f flv rtmp://localhost/hls/$name_low;
# -c:v libx264 -s 1280x720 -b:v 256k -c:a copy -tune zerolatency -f flv rtmp://localhost/hls/$name_mid;
}
application hls {
live on;
hls on;
hls_path /tmp/hls;
# hls_fragment 1s;
# hls_playlist_length 4s;
hls_fragment 4s;
hls_playlist_length 12s;
hls_nested on;
hls_variant _low BANDWIDTH=160000;
# hls_variant _mid BANDWIDTH=320000;
hls_variant _hi BANDWIDTH=640000;
}
    }
}
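One commonly suggested way to scale this kind of setup, sketched here purely as an illustration: keep nginx-rtmp as a thin ingest and HLS server, and move the transcoding onto one or more dedicated hosts that pull each live stream from the src application and push the renditions back into hls, so capacity grows by adding transcode machines rather than by upgrading the origin. Assuming a hypothetical origin named origin.example.com and a stream key held in a shell variable $STREAM (application names follow the config above), each transcode host would run roughly:
ffmpeg -i rtmp://origin.example.com/src/$STREAM -c:v copy -c:a copy -f flv rtmp://origin.example.com/hls/${STREAM}_hi -c:v libx264 -preset veryfast -s 852x480 -b:v 800k -c:a copy -f flv rtmp://origin.example.com/hls/${STREAM}_low
The exec_push directive would then be dropped from the origin, with a small supervisor (or nginx-rtmp's on_publish callback) starting one such process per incoming stream.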
-
Repair mpeg files using ffmpeg
17 August 2018, by rsdrsd
I have a bunch of MPEG files which are somehow invalid or incorrect. I can play the files in different media players, but when I upload them they should automatically be converted. That conversion takes a very long time to create screenshots and it creates about 10,000 screenshots instead of the expected 50. The command is part of an automatic conversion app. With mp4 and other files it works great, but with MPEG it doesn't work as expected. The creation of screenshots eats up all memory and processor power.
For creating screenshots I have tried the following:
ffmpeg -y -i /input/file.mpeg -f image2 -aspect 16:9 -bt 20M -vsync passthrough -vf select='isnan(prev_selected_t)+gte(t-prev_selected_t\,10)' /output/file-%05d.jpg
This just creates 2 screenshots, while I expect 50 or so. The following command:
ffmpeg -y -i /input/file.mpeg -f image2 -vf fps=fps=1/10 -aspect 16:9 -vsync passthrough -bt 20M /output/file-%05d.jpg
gave me errors about buffers:
ffmpeg version N-39361-g1524b0f Copyright (c) 2000-2014 the FFmpeg developers
built on Feb 26 2014 23:46:40 with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-4)
configuration: --prefix=/home/example/ffmpeg_build --extra-cflags=-I/home/example/ffmpeg_build/include --extra-ldflags=-L/home/example/ffmpeg_build/lib --bindir=/home/example/bin --extra-libs=-ldl --enable-gpl --enable-nonfree --enable-libfdk_aac --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libfreetype --enable-libspeex --enable-libtheora
libavutil 52. 66.100 / 52. 66.100
libavcodec 55. 52.102 / 55. 52.102
libavformat 55. 33.100 / 55. 33.100
libavdevice 55. 10.100 / 55. 10.100
libavfilter 4. 2.100 / 4. 2.100
libswscale 2. 5.101 / 2. 5.101
libswresample 0. 18.100 / 0. 18.100
libpostproc 52. 3.100 / 52. 3.100
[mp3 @ 0x200d7c0] Header missing
[mpegts @ 0x2008a60] DTS discontinuity in stream 0: packet 6 with DTS 34185, packet 7 with DTS 8589926735
[mpegts @ 0x2008a60] Invalid timestamps stream=0, pts=7157, dts=8589932741, size=150851
Input #0, mpegts, from '/home/example/app/uploads/21.mpeg':
Duration: 00:03:14.75, start: 0.213000, bitrate: 26112 kb/s
Program 1
Stream #0:0[0x3e9]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p(tv), 1440x1080 [SAR 4:3 DAR 16:9], max. 25000 kb/s, 29.97 fps, 60 tbr, 90k tbn, 59.94 tbc
Stream #0:1[0x3ea]: Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, s16p, 384 kb/s
[swscaler @ 0x1ff9860] deprecated pixel format used, make sure you did set range correctly
Output #0, image2, to '/home/example/app/uploads/21-%05d.jpg':
Metadata:
encoder : Lavf55.33.100
Stream #0:0: Video: mjpeg, yuvj420p, 1440x1080 [SAR 4:3 DAR 16:9], q=2-31, 200 kb/s, 90k tbn, 0.10 tbc
Stream mapping:
Stream #0:0 -> #0:0 (mpeg2video -> mjpeg)
Press [q] to stop, [?] for help
[mpegts @ 0x2008a60] Invalid timestamps stream=0, pts=7157, dts=8589932741, size=150851
[output stream 0:0 @ 0x1ff2ba0] 100 buffers queued in output stream 0:0, something may be wrong.
[output stream 0:0 @ 0x1ff2ba0] 1000 buffers queued in output stream 0:0, something may be wrong.

It also creates about 10,000 screenshots while I expect 50.
Now, I have read somewhere how to repair some broken files. For this I have the following command:
ffmpeg -y -i input.mpeg -codec:v copy -codec:a copy output.mpeg
This, however, creates a somewhat smaller file. If I run the same command on that output again, I would expect it to produce the same file, but the following command
ffmpeg -y -i output.mpeg -codec:v copy -codec:a copy output2.mpeg
returns a file which is much smaller and runs for only a few seconds while the original was about 3 minutes.
If I run the "repair" command on an MPEG that is not broken, it also results in a much smaller file the first time I run it. With ffprobe I checked what changed, but the only thing that changes is MPEG-TS to MPEG-PS.
If I run the command over an mp4 file it results in exactly the same file, as expected. Does someone have a clue what is going wrong? It has been boggling me for about two days now and I really have no clue. Or does someone have a good suggestion on how to extract screenshots every 10 seconds without creating too many screenshots and eating up all the memory and processor power?
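For what it is worth, the log above shows a DTS jump from 34185 to 8589926735, which looks like a 33-bit MPEG-TS timestamp wrap; broken timestamps like that are a plausible reason the fps filter keeps duplicating frames and flooding the output queue. A hedged variant to try, not verified against these particular files, is to let ffmpeg regenerate presentation timestamps on input and keep the output in variable-frame-rate mode:
ffmpeg -y -fflags +genpts -i /input/file.mpeg -vf fps=1/10 -vsync vfr -q:v 2 /output/file-%05d.jpg
If that still produces a flood of frames, remuxing first with the stream-copy command above (which rewrites the container and its timestamps) and then running the screenshot pass on the remuxed file is another combination worth testing.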