
Media (2)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
-
Carte de Schillerkiez
13 May 2011, by
Updated: September 2011
Language: English
Type: Text
Other articles (88)
-
Mediabox: opening images in the maximum space available to the user
8 February 2011, by
Image display is restricted by the width allowed by the site's design (which depends on the theme used), so images are shown at a reduced size. To take advantage of all the space available on the user's screen, a feature can be added that displays the image in a media box appearing above the rest of the content.
To do this, the "Mediabox" plugin must be installed.
Configuring the media box
As soon as (...) -
Authorizations overridden by plugins
27 April 2010, by
Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page -
Other interesting software
12 April 2011, by
We don't claim to be the only ones doing what we do... and we certainly don't claim to be the best either... We just try to do it well, and better and better...
The following list corresponds to software that more or less tries to do what MediaSPIP does, or that MediaSPIP more or less tries to do likewise, whichever...
We don't know them and haven't tried them, but you might want to take a look at them.
Videopress
Website: (...)
On other sites (9242)
-
How to interpret ffmpeg recording options available for a webcam (directshow)?
5 January 2023, by Jones659
I am trying to create a GUI for personal use that allows someone to customise the recording and converting options of ffmpeg without directly using the command line. At the moment, I am learning about the different parameters and flags in ffmpeg.


Apologies in advance if I end up asking some stupid questions; I am on a learning journey at the moment, and unfortunately not all of this info is available online in an easily understandable way.


I have a USB webcam which reported having the following options available to it:
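(For reference, a listing like the one below can be produced with ffmpeg's dshow option query; "USB Camera" here is a placeholder for the actual device name, which the first command lists.)

ffmpeg -list_devices true -f dshow -i dummy
ffmpeg -list_options true -f dshow -i video="USB Camera"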


[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=640x480 fps=5 max s=640x480 fps=30
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=640x480 fps=5 max s=640x480 fps=30 (tv, bt470bg/bt709/unknown, topleft) chroma_location=topleft
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=352x288 fps=5 max s=352x288 fps=30
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=352x288 fps=5 max s=352x288 fps=30 (tv, bt470bg/bt709/unknown, topleft)
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=320x240 fps=5 max s=320x240 fps=30
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=320x240 fps=5 max s=320x240 fps=30 (tv, bt470bg/bt709/unknown, topleft)
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=176x144 fps=5 max s=176x144 fps=30
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=176x144 fps=5 max s=176x144 fps=30 (tv, bt470bg/bt709/unknown, topleft)
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=160x120 fps=5 max s=160x120 fps=30
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=160x120 fps=5 max s=160x120 fps=30 (tv, bt470bg/bt709/unknown, topleft)
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=1280x1024 fps=5 max s=1280x1024 fps=9
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=1280x1024 fps=5 max s=1280x1024 fps=9 (tv, bt470bg/bt709/unknown, topleft)



I just want to get to the bottom of how I should interpret this; apologies that I will ask multiple questions:


-
The fact that both resolution and fps have a min and max value (for every option) seems to imply that these two parameters are supposedly variable outside my control, right? In practice, the fps has varied depending on brightness, but the resolution has not. Is it safe to assume that video imaging devices (especially webcams) do not have variable resolution? (See the capture sketch right after this list.)

-
Secondly, why is every option listed twice, except that half of them specify extra info such as color_range, color_space, and chroma_location? Is this just a quirk? Surely those extra parameter options should not be discarded?

-
It's hard to know how to make sense of this, but for example: the fact that only "tv" is ever shown, does that imply that the webcam can only ever do limited color range, and that there is no point trying to get full 0-255 out of it? I read somewhere that "pc" implies the full range of 0-255, whereas "tv" implies a range of 16-235.

-
With regard to color space, is it acceptable to record the webcam as raw (un-encoded) and then convert to a different color space later down the line? Which approach to dealing with the color space loses the least color? My only previous experience with color spaces is in the realm of images, where, for example, it makes no sense to convert sRGB to ROMM16 RGB: you're going to a color space with wider coverage, and extra colors won't be created out of thin air. You'd want to go once from raw to a color space and avoid converting between color spaces afterwards. Also, what does "unknown" mean in the color space options?

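As a minimal sketch for the first question (the device name "USB Camera" and file name are placeholders), each reported mode is a discrete combination that can be requested explicitly when opening the device:

ffmpeg -f dshow -video_size 640x480 -framerate 30 -pixel_format yuyv422 -i video="USB Camera" -c:v rawvideo -t 10 capture.avi

With the size and rate pinned like this, the resolution stays fixed for the whole capture; some webcams will still deliver fewer frames than requested in low light because the exposure time goes up, which would match the brightness-dependent fps described above.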

Here's the culmination of some research/testing I've done; is there anything correct, or seriously wrong, in the conclusions and assumptions I've made below?


My understanding of pixel_format is as follows: when you're recording (even to raw), you specify the pixel format using something like "-pixel_format yuyv422"; this is a "packed", not "planar", format, which is what the webcam produces. When you convert from raw to something like mkv using libx264, you can't specify a "packed" pixel format such as "yuyv422", but must instead use an appropriate planar counterpart, such as "yuv422p", which would be specified using "-pix_fmt yuv422p".
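As a hedged sketch of that two-step workflow (file names are placeholders, and capture.avi is the raw recording from the sketch above), the packed capture can then be encoded to a planar format like this:

ffmpeg -i capture.avi -c:v libx264 -pix_fmt yuv422p -crf 18 output.mkv

ffmpeg converts the packed yuyv422 to planar yuv422p automatically for the encoder; choosing yuv420p instead would additionally halve the vertical chroma resolution.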


I did a raw recording of the webcam (in which I recorded a bright light in the dark); I didn't set any of the options shown in the brackets above. I then converted this video using libx264 with the flags "-dst_range 1 -color_range 2", which I saw elsewhere on the internet.


Taking a screenshot of this video using VLC and putting it through imagemagick's identify -verbose shows that the color range of the screenshot is 0-255. As for the video itself, MediaInfo reports "Color range: Full" and VLC's codec info says "Decoded format: Planar 4:2:2 YUV full scale" - is this info worth anything, or is it just metadata that the video got tagged with?


At first I was happy about imagemagick's color range reporting, but I am now thinking that the 0-255 range could be the result of "overshoot" values produced by the camera, which aren't actually supposed to be mapped linearly.
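One way to separate what the file is tagged as from what the samples actually contain (a sketch; the file name is a placeholder) is to measure the per-frame luma extremes with the signalstats filter instead of trusting container metadata:

ffmpeg -i capture.avi -vf signalstats,metadata=mode=print -f null -

If the printed lavfi.signalstats.YMIN/YMAX values hover around 16 and 235, the content is effectively limited range regardless of a "Full" tag; values spanning roughly 0-255 point to either genuine full range or the kind of camera overshoot mentioned above.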


I appreciate that this probably feels like some school-kiddy offloading their homework assignment to avoid doing work, but I hope it can be seen that I've looked into these things prior to putting this post together.


Thanks in advance, if anyone takes the time to answer anything.


-
ffmpeg how to choose the best bit rate for HLS streaming?
7 February 2023, by Josef Kranz
I'm building a video streaming white-label product, but I've run into a scenario where I'm not sure what the optimal way is to get the best video quality. At the moment I'm using CRF-based encoding, which is a very bad idea from a streaming point of view, even though from a quality point of view it definitely gives the best quality and efficiency. Why is CRF-based encoding bad for streaming? First, you have a non-fixed file size, meaning your bit rate might vary between 20 kbps and 20 Mbit/s depending on the codec and the picture motion of the current frames...
When the bit rate varies that heavily, automatic quality selection in the player might not function correctly, and the player automatically switches from 1080p to 480p for no reason (using VideoJS here).


To fix this issue, it's fine if I set minrate and maxrate with ffmpeg, but this comes at the cost that some frames might look pixelated, which I absolutely do not want.


Currently, my encoding command looks something like this:


/usr/bin/ffmpeg -i "/tmp/VODProcessing/Tester 2160p UHD-HDR.mp4" -map 0:0 -c:v libsvtav1 -crf 19 -vf zscale=width=3840:height=2160 -svtav1-params "profile=0:enable-force-key-frames=1:superres-mode=1:enable-tf=0:tune=0:enable-overlays=1:scd=1:scm=2:enable-mfmv=1:enable-cdef=1:enable-dlf=1:fast-decode=1:color-primaries=9:transfer-characteristics=16:matrix-coefficients=9:input-depth=10:mastering-display=G(0.265,0.69)B(0.15,0.06)R(0.68,0.32)WP(0.3127,0.329)L(4000.0,0.005):content-light=368,226:enable-hdr=1:color-range=1" -pix_fmt yuv420p10le -color_trc smpte2084 -color_primaries bt2020 -colorspace bt2020nc -chroma_sample_location:v topleft -color_range:v pc -max_muxing_queue_size 1024 -preset 7 -bf 0 -force_key_frames "expr:gte(t,n_forced*4.004)" -keyint_min 48 -sc_threshold 0 -use_timeline 1 -use_template 1 -map_metadata -1 -map_chapters -1 -f hls -seg_duration 4.004 -hls_time 4.004 -streaming 1 -hls_list_size 0 -hls_segment_filename "/tmp/VODProcessing/output/Tester 2160p UHD-HDR/v-av01-2160p-av01.0.12M.10_PQ/f-%04d.m4s" -hls_fmp4_init_filename "init-v-av01-2160p-av01.0.12M.10_PQ.m4s" -hls_segment_type fmp4 -hls_playlist_type vod -movflags frag_keyframe+frag_every_frame+write_colr+prefer_icc+skip_trailer+faststart -hls_flags independent_segments -strict experimental "/tmp/VODProcessing/output/Tester 2160p UHD-HDR/v-av01-2160p-av01.0.12M.10_PQ/master.m3u8"



This forms a stream made up of independent m4s segments. As you can see, I'm using libsvtav1 in CRF mode to output the result in AV1.


Now my question is: how can I get the same nice quality output as with CRF mode while having a static/fixed bit rate? Will two-pass encoding solve this problem by distributing the data rate differently?
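Not an authoritative answer, just a sketch of the two usual compromises, shown with libx264 because I'm certain of its option mapping (file names and rates are placeholders; the same capped-CRF idea applies to any encoder that honours -maxrate/-bufsize). Capped CRF keeps quality-driven rate control but bounds the peak with a VBV buffer, which is usually enough to keep ABR players from switching down:

ffmpeg -i input.mp4 -c:v libx264 -crf 19 -maxrate 8M -bufsize 16M out.mp4

Classic two-pass ABR instead targets a fixed average bit rate, with the first pass telling the second where to spend the bits:

ffmpeg -y -i input.mp4 -c:v libx264 -b:v 8M -pass 1 -an -f null /dev/null
ffmpeg -i input.mp4 -c:v libx264 -b:v 8M -pass 2 out.mp4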


Thanks in advance


-
How to transcode 4K HDR using libx265 or libsvtav1 with ffmpeg?
28 October 2022, by Asav Mali
I'm currently trying to transcode an mp4 file containing an HEVC 10-bit HDR stream with proper grading information to HLS (fragmented mp4, aka fmp4) in two different codecs, HEVC and AV1. The problem is that the x265 encoder prints the following message when transcoding starts:


Output #0, hls, to '/tmp/output/testclip/v-hevc-2160p-hvc1.2.4.L150.B0/master.m3u8':
 Metadata:
 encoder : Lavf59.27.100
 Stream #0:0: Video: hevc (hvc1 / 0x31637668), yuv420p10le(tv, bt2020nc/bt2020/smpte2084, progressive), 3840x1600 [SAR 1:1 DAR 12:5], q=2-31, 7864 kb/s, 23.98 fps, 24k tbn (default)
 Metadata:
 encoder : Lavc59.37.100 libx265
 Side data:
 cpb: bitrate max/min/avg: 8257000/0/0 buffer size: 31457000 vbv_delay: N/A
 Mastering Display Metadata, has_primaries:1 has_luminance:1 r(0.6800,0.3200) g(0.2650,0.6900) b(0.1500 0.0600) wp(0.3127, 0.3290) min_luminance=0.005000, max_luminance=4000.000000
 Content Light Level Metadata, MaxCLL=368, MaxFALL=226
x265 [warning]: unable to parse mastering display color volume infoed= 0x 
x265 [warning]: unable to parse mastering display color volume infoeed=0.758x 
x265 [warning]: unable to parse mastering display color volume infoeed=1.06x 



Why is the message "unable to parse mastering display color volume info" coming up if I explicitly define the grading info using the following command?


/usr/bin/ffmpeg -i "/tmp/testclip.m4v" -map 0:0 -c:v libx265 -b:v 7864320 -maxrate:v 8257536 -bufsize 31457280 -vf zscale=width=3840:height=1600 -x265-params colorprim=bt2020:transfer=smpte-2084:colormatrix=bt2020nc:master-display="G(13250.0,34500.0)B(7500.0,3000.0)R(34000.0,16000.0)WP(15635.0,16450.0)L(40000000.0,50.0):max-cll="368.0,226.0"" -pix_fmt yuv420p10le -profile:v main10 -bf 0 -crf 16 -preset fast -keyint_min 48 -g 48 -sc_threshold 0 -use_timeline 1 -use_template 1 -seg_duration 6 -tune fastdecode -vtag hvc1 -map_metadata -1 -map_chapters -1 -f hls -hls_time 6 -streaming 1 -hls_list_size 0 -hls_segment_filename "/tmp/output/testclip/v-hevc-2160p-hvc1.2.4.L150.B0/f-%04d.m4s" -hls_fmp4_init_filename "init-v-hevc-2160p-hvc1.2.4.L150.B0.m4s" -hls_segment_type fmp4 -movflags frag_every_frame+delay_moov+skip_trailer+faststart -hls_flags independent_segments "/tmp/output/testclip/v-hevc-2160p-hvc1.2.4.L150.B0/master.m3u8"



Whether I define "master-display" or not, the image colors stay the same; if I don't define "master-display", I don't get the annoying message from the x265 encoder for some reason. Can somebody explain how the x265 encoder works with ffmpeg and HDR?
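Not an authoritative answer, but one hedged observation about the command above: x265 expects the master-display values as plain unsigned integers (primaries and white point in units of 0.00002, luminance in units of 0.0001 cd/m²), so the decimal points in "G(13250.0,34500.0)" most likely make the parse fail, and the nested double quotes around max-cll are fragile on top of that. A corrected fragment, using the same values as the Mastering Display Metadata in the log above and quoting the whole parameter once, might look like this:

-x265-params 'colorprim=bt2020:transfer=smpte-2084:colormatrix=bt2020nc:master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(40000000,50):max-cll=368,226'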


Besides the HEVC stream, I also create an AV1 stream using libsvtav1. Here I'm using quite a similar command, but the image looks different compared with the HEVC version, much colder; why is that? I define the grading information explicitly for the libsvtav1 encoder, the same way as I do for x265, using mastering-display:


/usr/bin/ffmpeg -i "/tmp/testclip.m4v" -map 0:0 -c:v libsvtav1 -b:v 6291456 -maxrate:v 6606028 -bufsize 25165824 -vf zscale=width=3840:height=1600 -svtav1-params input-depth=10:fast-decode=1:color-primaries=9:transfer-characteristics=16:matrix-coefficients=9:mastering-display="G(0.265,0.69)B(0.15,0.06)R(0.68,0.32)WP(0.3127,0.329)L(4000.0,0.005):content-light="368.0,226.0"":enable-hdr=1 -pix_fmt yuv420p10le -max_muxing_queue_size 1024 -profile:v main -preset 7 -crf 20 -keyint_min 48 -g 48 -use_timeline 1 -use_template 1 -seg_duration 6 -strict experimental -map_metadata -1 -map_chapters -1 -f hls -hls_time 6 -streaming 1 -hls_list_size 0 -hls_segment_filename "/tmp/output/testclip/v-av01-2160p-av01.0.12H.10/f-%04d.m4s" -hls_fmp4_init_filename "init-v-av01-2160p-av01.0.12H.10.m4s" -hls_segment_type fmp4 -movflags frag_every_frame+delay_moov+skip_trailer+faststart -hls_flags independent_segments "/tmp/output/testclip/v-av01-2160p-av01.0.12H.10/master.m3u8"



I would expect the grading information to be applied properly by both encoders.
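Two hedged observations on the libsvtav1 command, not a definitive diagnosis of the colder look: SVT-AV1's mastering-display does take the normalized decimal coordinates used here (unlike x265's integer units), but the nested double quotes around content-light are fragile, and neither command sets the container-level color flags, so a player that reads container tags rather than the bitstream metadata may treat the two streams differently. A safer-quoted fragment, with the container flags added, might look like this:

-svtav1-params 'input-depth=10:fast-decode=1:color-primaries=9:transfer-characteristics=16:matrix-coefficients=9:mastering-display=G(0.265,0.69)B(0.15,0.06)R(0.68,0.32)WP(0.3127,0.329)L(4000.0,0.005):content-light=368,226:enable-hdr=1' -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc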


Thanks in advance.