
Other articles (34)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: it is fully customizable graphically to match a chosen theme.
These technologies make it possible to deliver video and sound to conventional computers as well as (...) -
Configurable image and logo sizes
9 February 2011, by
In many places on the site, logos and images are resized to fit the slots defined by the themes. All of these sizes, which can vary from one theme to another, can be defined directly in the theme, sparing the user from having to reconfigure them manually after changing the site's appearance.
These image sizes are also available in the MediaSPIP Core-specific configuration. The maximum size of the site logo in pixels (...) -
Farm management
2 March 2010, by
The farm as a whole is managed by "super admins".
Certain settings can be made in order to regulate the needs of the different channels.
Initially, it uses the "Gestion de mutualisation" plugin
On other websites (5717)
-
How to improve quality and latency in FFmpeg
3 August 2020, by Andrew
I'm trying to stream FFmpeg over the new SRT protocol that's supported as an output. For now, though, I'm sending it in the form of FFmpeg -> UDP -> SRT -> SRT -> UDP -> MPV/FFmpeg.


Somewhere along the line, the quality degrades sharply, and latency increases by quite a bit. This seemed to happen when adding audio. Meaning, if streaming just video, quality is decent and latency is low. If streaming just audio, quality is great but latency is high.


Not sure where I'm going wrong with this, so any help would be appreciated. The main focus is high quality and low latency.



Recording video through:


./ffmpeg.exe -rtbufsize 2147M -f dshow -i video="":audio="" -flush_packets 0 -preset medium -tune zerolatency -f mp4 -b:v 6M -g 30 -f mpegts udp://127.0.0.1:9001?pkt_size=1316



or


./ffmpeg.exe -f dshow -analyzeduration 200k -probesize 6M -i video="":audio="" -flush_packets 0 -preset fast -tune zerolatency -b:v 5M -b:a 384K -c:a libopus -g 25 -f mpegts udp://127.0.0.1:9001?pkt_size=1316




Then, on the receiver side, I use one of:


./ffmpeg.exe -i udp://127.0.0.1:9001 -c copy -bufsize 32M -f mpegts udp://127.0.0.1:9002?bitrate=26214400
./mpv.exe udp://127.0.0.1:9002



or


./mpv.exe --no-cache --untimed --no-demuxer-thread --video-sync=audio --vd-lavc-threads=1 udp://127.0.0.1:9002




What am I missing? As it is, quality is generally terrible and latency can reach up to 10 seconds. Is it just not possible with this setup?
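For what it's worth, one direction that might help (a sketch only, not a verified fix): if the ffmpeg build includes libsrt, it can speak SRT directly, which removes the intermediate UDP hops, and pinning the codecs explicitly avoids surprises, since -tune zerolatency only applies to libx264. The device names, bitrates, and SRT latency value below are placeholders, not tested settings.

```shell
# Sketch, assuming an ffmpeg build with libsrt; device names and the
# SRT latency value (in microseconds) are placeholders to tune.
ffmpeg -f dshow -i video="CamName":audio="MicName" \
       -c:v libx264 -preset veryfast -tune zerolatency -b:v 5M -g 30 \
       -c:a libopus -b:a 128k \
       -f mpegts "srt://127.0.0.1:9001?mode=caller&latency=120000"

# Receiver: if mpv's bundled FFmpeg has libsrt, it can read SRT directly.
mpv --no-cache "srt://127.0.0.1:9001?mode=listener"
```

This removes two UDP relays from the pipeline, so any remaining latency would come from the encoder and the SRT latency window rather than the transport chain.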


-
Merge commit ’715f139c9bd407ef7f4d1f564ad683140ec61e6d’
23 March 2017, by Clément Bœsch
* commit ’715f139c9bd407ef7f4d1f564ad683140ec61e6d’: (23 commits)
vp9lpf/x86: make filter_16_h work on 32-bit.
vp9lpf/x86: make filter_48/84/88_h work on 32-bit.
vp9lpf/x86: make filter_44_h work on 32-bit.
vp9lpf/x86: make filter_16_v work on 32-bit.
vp9lpf/x86: make filter_48/84_v work on 32-bit.
vp9lpf/x86: make filter_88_v work on 32-bit.
vp9lpf/x86: make filter_44_v work on 32-bit.
vp9lpf/x86: save one register in SIGN_ADD/SUB.
vp9lpf/x86: store unpacked intermediates for filter6/14 on stack.
vp9lpf/x86: move variable assigned inside macro branch.
vp9lpf/x86: simplify ABSSUM_CMP by inverting the comparison meaning.
vp9lpf/x86: remove unused register from ABSSUB_CMP macro.
vp9lpf/x86: slightly simplify 44/48/84/88 h stores.
vp9lpf/x86: make cglobal statement more conservative in register allocation.
vp9lpf/x86: save one register in loopfilter surface coverage.
vp9lpf/x86: add ff_vp9_loop_filter_[vh]_44_16_sse2,ssse3,avx.
vp9lpf/x86: add ff_vp9_loop_filter_h_48,84_16_sse2,ssse3,avx().
vp9lpf/x86: add an SSE2 version of vp9_loop_filter_[vh]_88_16
vp9lpf/x86: add ff_vp9_loop_filter_[vh]_88_16_ssse3,avx.
vp9lpf/x86: add ff_vp9_loop_filter_[vh]_16_16_sse2().
...All these commits are cherry-picks from FFmpeg. Maybe some slight
differences sneaked in but the Libav codebase still differs too much
with our own to make a proper diff. This merge is a noop.
Merged-by: Clément Bœsch <u@pkh.me>
-
setting bit rates in creating video from images in ffmpeg not working
2 May 2014, by mast kalandar
I have an HQ video of one second.
Some information about this video is below:
Dimensions : 1920 x 1080
Codec : H.264
Framerate : 30 frames per second
Size : 684.7 kB (684,673 bytes)
Bitrate : 5458 kbps
I have extracted frames from the video:
ffmpeg -i f1.mp4 f%d.jpg
All images are 1920 x 1020 pixels by default; 30 frames are generated (f7_1.jpg, f7_2.jpg,.....,f7_30.jpg).
I have added some text and objects to these images (without changing the dimensions of any image; all 30 images are still 1920 x 1020 pixels).
Now I am trying to merge all these images to create a single video (of 1 second).
I referred to the official documentation and ran the command below:
ffmpeg -f image2 -i f7_%d.jpg -r 30 -b:v 5458k foo_5458_2.mp4
The video created is also one second long; the thing is, its bitrate is higher than the original's. The new video has a 6091 kbps bitrate, while I expected only 5458 kbps.
Because of the higher bitrate, it finishes very quickly compared to the original video in a video player.
Is there anything I'm missing?
I also don't know the exact meaning and job of the
-f image2
option; when I run the command without this option, I get the same video.
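Two observations worth checking (a sketch, not a verified answer): -f image2 simply forces FFmpeg's image-sequence demuxer, and FFmpeg auto-detects it from the %d pattern anyway, which is why omitting it yields the same video. The bitrate drift may come from where -r sits: placed after -i it is an output option, while the input sequence is read at the image2 demuxer's default rate, so input and output timing can disagree. Declaring the frame rate on the input side and capping the encoder may land closer to the target; the -maxrate/-bufsize values below are illustrative guesses.

```shell
# Sketch: declare the input rate before -i so the 30 images span exactly
# one second, and cap libx264 so the average bitrate stays near the
# 5458 kbps target. -maxrate/-bufsize values are guesses, not verified.
ffmpeg -framerate 30 -f image2 -i f7_%d.jpg \
       -c:v libx264 -b:v 5458k -maxrate 5458k -bufsize 10916k \
       -pix_fmt yuv420p foo_5458_capped.mp4
```

Note that for very short clips (a single second here), the rate-control average has little room to converge, so some deviation from the requested bitrate is expected regardless.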