
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011
Updated: October 2011
Language: English
Type: Text
Other articles (47)
-
Participate in translating it
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so it can reach new linguistic communities.
To do so, we use the SPIP translation interface, where all of MediaSPIP's language modules are available. Simply subscribe to the translators' mailing list to request more information.
At present, MediaSPIP is only available in French and (...)
-
Support for all media types
10 April 2011
Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp, and others); audio (MP3, Ogg, Wav, and others); video (Avi, MP4, Ogv, mpg, mov, wmv, and others); or textual content, code, and more (OpenOffice, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)
-
MediaSPIP Player: potential problems
22 February 2011
The player does not work on Internet Explorer
On Internet Explorer (8 and 7 at least), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
If this Apache module's configuration contains a line resembling the following, try removing it or commenting it out to see whether the player then works correctly: (...)
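The configuration line itself is truncated in the post and cannot be recovered; for illustration only, a typical mod_deflate directive of the kind being described might look like this (an assumption, not the post's exact line):

```apache
# Illustrative example only (the post's exact line is truncated).
# Compressing responses can interfere with Flash players fetching media;
# try commenting out a directive like this one, then retest the player.
AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript
```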
On other sites (10669)
-
Delphi FireMonkey + FFmpeg: fill Image/TBitmap with data of AVFrame -> pixelformat -> YUV420P
9 February 2020, by coban
I have managed to create a simple video player using the SDL2 + FFmpeg libraries with Delphi VCL. It is about the same as ffplay.exe, but not a console app.
I have noticed that FFmpeg (I might be wrong) converts/scales (sws_scale) from any source pixel format to YUV420P faster than to any other format. What I want to achieve is some kind of (video) surface over which I can place other components, for example a TProgressBar. SDL has a function, SDL_CreateWindowFrom, which can turn a TPanel into a video surface/window on which any component can be placed, but this function is Windows-only.
Maybe I am looking in the wrong direction to achieve what I want; if so, any hint is welcome.
I was thinking of drawing the data retrieved in pixel format YUV420P onto the TBitmap of a TImage. That way I would not need the SDL2 library, and I would be able to put any other component on top of the TImage (or another component that might be faster). It seems I need to convert the YUV420P data into BGRA format, because TBitmap does not appear to support any YUV format; worse, the FireMonkey TBitmap is always in BGRA format, and changing it to another format is not possible.
In the first case, I need a function to convert YUV420P to BGRA. Can anyone help with this? Is there a component/package/function I could use? Or is it somehow possible to use the YUV420P format directly, without converting?
I tried to port some SDL2 functions from the SDL2 source (C/C++) to Delphi, but it is too complicated for me, especially with my knowledge of C/C++. SDL2 does implement functions for converting RGB <-> YUV. (Why did I ever start Delphi programming? My mistake.) By the way, I already tried TMediaPlayer; it draws the video (picture) above everything, so nothing other than the video is visible.
I have made an attempt; what I do not understand is where to get, or what is meant by, "y_stride, uv_stride and rgb_stride".
Some variable declarations and/or assignments may be incorrect; I need to debug the values, but first I need to know what to pass for those variables.
procedure STD_FUNCTION_NAME(width, height: Cardinal; Y, U, V: PByte;
  Y_stride, UV_stride: Cardinal;
  RGB: PByte; RGB_stride: Cardinal;
  yuv_type: YCbCrType;
  YUV_FORMAT, RGB_FORMAT: Word);
var
  param: PYUV2RGBParam;
  y_pixel_stride, uv_pixel_stride: Word;
  uv_x_sample_interval, uv_y_sample_interval: Word;
  x, ys: Cardinal;
  y_ptr1, y_ptr2, u_ptr, v_ptr: PByte;
  rgb_ptr1, rgb_ptr2: PByte;
  // Signed: the chroma terms (value - 128) and the factor products can be negative,
  // so these must not be Cardinal.
  u_tmp, v_tmp, r_tmp, g_tmp, b_tmp: Integer;
  y_tmp: Integer;
begin
  param := @(YUV2RGB[Integer(yuv_type)]);
  // Default to the 4:2:0 layout, so the variables are always initialized;
  // override for the other supported formats.
  y_pixel_stride := 1;
  uv_pixel_stride := 1;
  uv_x_sample_interval := 2;
  uv_y_sample_interval := 2;
  if YUV_FORMAT = YUV_FORMAT_422 then
  begin
    y_pixel_stride := 2;
    uv_pixel_stride := 4;
    uv_x_sample_interval := 2;
    uv_y_sample_interval := 1;
  end
  else if YUV_FORMAT = YUV_FORMAT_NV12 then
  begin
    y_pixel_stride := 1;
    uv_pixel_stride := 2;
    uv_x_sample_interval := 2;
    uv_y_sample_interval := 2;
  end;
  // Walk two source rows at a time (one row when chroma is not vertically subsampled).
  ys := 0;
  while ys < height - (uv_y_sample_interval - 1) do
  begin
    y_ptr1 := Y + ys * Y_stride;
    y_ptr2 := Y + (ys + 1) * Y_stride;
    u_ptr := U + (ys div uv_y_sample_interval) * UV_stride;
    v_ptr := V + (ys div uv_y_sample_interval) * UV_stride;
    rgb_ptr1 := RGB + ys * RGB_stride;
    if uv_y_sample_interval > 1 then
      rgb_ptr2 := RGB + (ys + 1) * RGB_stride;
    x := 0;
    while x < width - (uv_x_sample_interval - 1) do
    begin
      // Compute the U and V contributions, shared by the 2x2 (or 2x1) pixel group
      u_tmp := u_ptr^ - 128;
      v_tmp := v_ptr^ - 128;
      r_tmp := v_tmp * param.v_r_factor;
      g_tmp := u_tmp * param.u_g_factor + v_tmp * param.v_g_factor;
      b_tmp := u_tmp * param.u_b_factor;
      // Compute the Y contribution for each pixel
      y_tmp := (y_ptr1[0] - param.y_shift) * param.y_factor;
      PACK_PIXEL(RGB_FORMAT, y_tmp, r_tmp, g_tmp, b_tmp, rgb_ptr1);
      y_tmp := (y_ptr1[y_pixel_stride] - param.y_shift) * param.y_factor;
      PACK_PIXEL(RGB_FORMAT, y_tmp, r_tmp, g_tmp, b_tmp, rgb_ptr1);
      if uv_y_sample_interval > 1 then
      begin
        y_tmp := (y_ptr2[0] - param.y_shift) * param.y_factor;
        PACK_PIXEL(RGB_FORMAT, y_tmp, r_tmp, g_tmp, b_tmp, rgb_ptr2);
        y_tmp := (y_ptr2[y_pixel_stride] - param.y_shift) * param.y_factor;
        PACK_PIXEL(RGB_FORMAT, y_tmp, r_tmp, g_tmp, b_tmp, rgb_ptr2);
      end;
      y_ptr1 := y_ptr1 + 2 * y_pixel_stride;
      if uv_y_sample_interval > 1 then
        y_ptr2 := y_ptr2 + 2 * y_pixel_stride;
      u_ptr := u_ptr + 2 * uv_pixel_stride div uv_x_sample_interval;
      v_ptr := v_ptr + 2 * uv_pixel_stride div uv_x_sample_interval;
      x := x + uv_x_sample_interval;
    end;
    // Catch the last pixel of the row, if the width is odd
    if (uv_x_sample_interval = 2) and (x = width - 1) then
    begin
      u_tmp := u_ptr^ - 128;
      v_tmp := v_ptr^ - 128;
      r_tmp := v_tmp * param.v_r_factor;
      g_tmp := u_tmp * param.u_g_factor + v_tmp * param.v_g_factor;
      b_tmp := u_tmp * param.u_b_factor;
      y_tmp := (y_ptr1[0] - param.y_shift) * param.y_factor;
      PACK_PIXEL(RGB_FORMAT, y_tmp, r_tmp, g_tmp, b_tmp, rgb_ptr1);
      if uv_y_sample_interval > 1 then
      begin
        y_tmp := (y_ptr2[0] - param.y_shift) * param.y_factor;
        PACK_PIXEL(RGB_FORMAT, y_tmp, r_tmp, g_tmp, b_tmp, rgb_ptr2);
      end;
    end;
    ys := ys + uv_y_sample_interval;
  end;
  // Catch the last line, if the height is odd
  if (uv_y_sample_interval = 2) and (ys = height - 1) then
  begin
    y_ptr1 := Y + ys * Y_stride;
    u_ptr := U + (ys div uv_y_sample_interval) * UV_stride;
    v_ptr := V + (ys div uv_y_sample_interval) * UV_stride;
    rgb_ptr1 := RGB + ys * RGB_stride;
    x := 0;
    while x < width - (uv_x_sample_interval - 1) do
    begin
      // Compute the U and V contributions, shared by the pixel pair
      u_tmp := u_ptr^ - 128;
      v_tmp := v_ptr^ - 128;
      r_tmp := v_tmp * param.v_r_factor;
      g_tmp := u_tmp * param.u_g_factor + v_tmp * param.v_g_factor;
      b_tmp := u_tmp * param.u_b_factor;
      // Compute the Y contribution for each pixel
      y_tmp := (y_ptr1[0] - param.y_shift) * param.y_factor;
      PACK_PIXEL(RGB_FORMAT, y_tmp, r_tmp, g_tmp, b_tmp, rgb_ptr1);
      y_tmp := (y_ptr1[y_pixel_stride] - param.y_shift) * param.y_factor;
      PACK_PIXEL(RGB_FORMAT, y_tmp, r_tmp, g_tmp, b_tmp, rgb_ptr1);
      y_ptr1 := y_ptr1 + 2 * y_pixel_stride;
      u_ptr := u_ptr + 2 * uv_pixel_stride div uv_x_sample_interval;
      v_ptr := v_ptr + 2 * uv_pixel_stride div uv_x_sample_interval;
      x := x + uv_x_sample_interval;
    end;
    // Catch the last pixel, if needed
    if (uv_x_sample_interval = 2) and (x = width - 1) then
    begin
      u_tmp := u_ptr^ - 128;
      v_tmp := v_ptr^ - 128;
      r_tmp := v_tmp * param.v_r_factor;
      g_tmp := u_tmp * param.u_g_factor + v_tmp * param.v_g_factor;
      b_tmp := u_tmp * param.u_b_factor;
      y_tmp := (y_ptr1[0] - param.y_shift) * param.y_factor;
      PACK_PIXEL(RGB_FORMAT, y_tmp, r_tmp, g_tmp, b_tmp, rgb_ptr1);
    end;
  end;
end;
-
Revision 6d15132742: Change dx_time data type in vpxdec.c
22 February 2014, by James Yu
Changed paths:
Modify /vpxdec.c
Change dx_time data type to int64_t to prevent test time overflow when decoding long video.
Change-Id: I3dd5e324a246843e07e635fd25c50e71e385ed70
Signed-off-by: James Yu <james.yu@linaro.org>
-
ffmpeg, ffprobe: don't "merge" side data into packet data by default
9 March 2017, by wm4
Preparation for potentially disabling merged side data by default in the libs. Do this in particular because it affects fate tests.
The changed tests either reflect added packet side data, or the changed packet size due to merged side data removal reducing the packet size.
- [DH] ffmpeg_opt.c
- [DH] ffprobe.c
- [DH] libavformat/tests/seek.c
- [DH] tests/ref/fate/gaplessenc-itunes-to-ipod-aac
- [DH] tests/ref/fate/gaplessenc-pcm-to-mov-aac
- [DH] tests/ref/fate/gaplessinfo-itunes1
- [DH] tests/ref/fate/gaplessinfo-itunes2
- [DH] tests/ref/fate/mov-aac-2048-priming
- [DH] tests/ref/seek/cache-pipe
- [DH] tests/ref/seek/extra-mp3
- [DH] tests/ref/seek/lavf-ts
- [DH] tests/ref/seek/mkv-codec-delay