
Other articles (24)
-
Support for all types of media
10 April 2011 — Unlike many modern document-sharing platforms and other software, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)
-
HTML5 audio and video support
13 April 2011 — MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
HTML5 audio and video support
10 April 2011 — MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used as a fallback.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and audio both to conventional computers (...)
On other sites (5496)
-
Revision df037b615f: Adding API to read/write uncompressed frame header bits. The API is not final yet
22 May 2013, by Dmitry Kovalev — Changed Paths:
Modify /vp9/common/vp9_common.h
Delete /vp9/common/vp9_header.h
Modify /vp9/common/vp9_onyxc_int.h
Modify /vp9/decoder/vp9_decodframe.c
Add /vp9/decoder/vp9_read_bit_buffer.h
Modify /vp9/encoder/vp9_bitstream.c
Modify /vp9/encoder/vp9_onyx_if.c
Add /vp9/encoder/vp9_write_bit_buffer.h
Modify /vp9/vp9_common.mk
Modify /vp9/vp9_dx_iface.c
Modify /vp9/vp9cx.mk
Modify /vp9/vp9dx.mk
Adding API to read/write uncompressed frame header bits. The API is not final yet and can be changed. Actual layout of uncompressed frame part will be finalized later. Right now moving clr_type, error_resilient_mode, refresh_frame_context, frame_parallel_decoding_mode from first compressed partition to uncompressed frame part.
Change-Id: I3afc5d4ea92c5a114f4c3d88f96858cccc15b76e
-
YUV to RGB conversion using OpenGL ES2.0 in Android
8 October 2013, by 101110101100111111101101 — I have two questions about YUV to RGB conversion using OpenGL ES2.0 in Android.
The first one needs a little background.
------- BACKGROUND -------
When I feed the renderer random YUV data, it renders something that looks plausible (I can't be sure, since the data is random). However, when I feed it real data, the output is wrong; the important point is that it still renders something.
I checked my code everywhere (renderer, data parser, etc.) and found nothing suspicious, except perhaps the GL_RGB parameters passed to glRenderbufferStorage and glTexImage2D.
I changed the glRenderbufferStorage parameter many times (GL_RGBA4, GL_RGB565, etc.; currently GL_RGBA4), but nothing changed (some parameters cause an error and nothing renders at all).
Same for the glTexImage2D parameters (currently GL_LUMINANCE).
Before conversion: YUV420P (ffmpeg PIX_FMT_YUV420P) (12 bpp; each 2x2 block has 4 Y, 1 Cb, 1 Cr).
After conversion: RGB. (I don't know the difference between the RGB variants... RGBA, RGB565, etc.)
Decoded data: linesize is 736, height is 480 (fixed), so 12 * 736 * 480 / 8 -> array size;
Y is 736 * 480;
U is array size * 1/4;
V is array size * 1/4.
------- BACKGROUND END -------
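The buffer arithmetic above can be checked directly. In YUV420P every 2x2 block of pixels carries 4 Y samples plus one Cb and one Cr, i.e. 12 bits per pixel; the Y plane is linesize x height and each chroma plane is a quarter of that. A plain-C sketch of the same numbers:

```c
#include <assert.h>
#include <stddef.h>

/* Plane sizes for a YUV420P frame with the question's dimensions:
 * decoded linesize 736, height 480. */
static size_t y_size(size_t w, size_t h)  { return w * h; }
static size_t uv_size(size_t w, size_t h) { return (w / 2) * (h / 2); }
static size_t total_size(size_t w, size_t h) {
    return y_size(w, h) + 2 * uv_size(w, h);   /* == 12 * w * h / 8 */
}
```

So for 736x480 the Y plane is 353280 bytes, each chroma plane is 88320 bytes (a quarter of the Y plane, not of the whole array), and the total is 529920 bytes.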
I'm wondering whether the choice of GL_RGB, GL_RGBA4, GL_RGB565, etc. affects whether anything renders at all (not the quality, just rendering or not).
The second part is about the fragment shader.
My rendering engine uses three textures in the fragment shader (source attached below):
"precision mediump float; \n"
"varying vec2 v_vTexCoord; \n"
"uniform sampler2D yTexture; \n"
"uniform sampler2D uTexture; \n"
"uniform sampler2D vTexture; \n"
"void main() { \n"
"float y=texture2D(yTexture, v_vTexCoord).r;\n"
"float u=texture2D(uTexture, v_vTexCoord).r;\n"
"float v=texture2D(vTexture, v_vTexCoord).r;\n"
"y=1.1643 * (y - 0.0625);\n"
"u=u - 0.5;\n"
"v=v - 0.5;\n"
"float r=y + 1.5958 * v;\n"
"float g=y - 0.39173 * u - 0.81290 * v;\n"
"float b=y + 2.017 * u;\n"
"gl_FragColor = vec4(r, g, b, 1.0);\n"
"}\n";
As you know, there are three textures: Ytex, Utex and Vtex.
It then converts them and sets gl_FragColor = vec4(r, g, b, 1.0);
I don't understand what gl_FragColor = vec4(r, g, b, 1.0) means.
Of course, I know gl_FragColor is being assigned, but how can I get the actual r, g, b values?
Does it get rendered into the texture automatically?
-
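gl_FragColor is the fragment shader's output: whatever vec4 it is assigned becomes the color of the pixel being rendered, written automatically to the bound framebuffer (or attached texture); you never read r, g, b back inside the shader. To see what values the shader produces, the same BT.601 math can be reproduced on the CPU; a small C sketch using the shader's exact constants (inputs are normalized samples in [0, 1], as texture2D(...).r returns):

```c
#include <assert.h>
#include <math.h>

/* Same limited-range BT.601 conversion as the fragment shader above. */
static void yuv_to_rgb(float y, float u, float v,
                       float *r, float *g, float *b) {
    y = 1.1643f * (y - 0.0625f);   /* expand the 16..235 luma range */
    u -= 0.5f;                     /* center chroma around zero */
    v -= 0.5f;
    *r = y + 1.5958f * v;
    *g = y - 0.39173f * u - 0.81290f * v;
    *b = y + 2.017f * u;
}
```

Feeding it video black (Y = 16/255, U = V = 0.5) should give r, g, b near 0, and video white (Y = 235/255) near 1, which is a quick way to sanity-check the texture data being uploaded.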
ffmpeg YUV420 to RGB24 converts only one row
10 August 2016, by Aleksey — I'm trying to convert my YUV420p image to RGB24 in C++ and create a bitmap from the byte array in C#.
My image size is 1920 w * 1020 h and the ffmpeg decoder gives me 3 planes of data with linesizes 1920, 960 and 960. But after sws_scale I get an RGB picture with only one plane with linesize = 5760.
That does not look correct: I should get 5760 * h, not just one row of data. What am I doing wrong?
// c++ part
if (avcodec_receive_frame(m_decoderContext, pFrame) == 0)
{
//RGB
sws_ctx = sws_getContext(m_decoderContext->width,
m_decoderContext->height,
m_decoderContext->pix_fmt,
m_decoderContext->width,
m_decoderContext->height,
AV_PIX_FMT_RGB24,
SWS_BILINEAR,
NULL,
NULL,
NULL
);
sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data, pFrame->linesize,
0, pFrame->height,
pFrameRGB->data, pFrameRGB->linesize);
// c# part (I'm reading data from a pipe and it matches the c++ part)
byte[] rgbch = new byte[frameLen];
for (int i = 0; i < frameLen; i++)
    rgbch[i] = /* read next byte from the pipe */;
if (frameLen > 0)
{
var arrayHandle = System.Runtime.InteropServices.GCHandle.Alloc(rgbch,
System.Runtime.InteropServices.GCHandleType.Pinned);
var bmp = new Bitmap(1920, 1080,
3,
System.Drawing.Imaging.PixelFormat.Format24bppRgb,
arrayHandle.AddrOfPinnedObject()
);
pictureBox1.Image = bmp;
}
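A linesize of 5760 is actually the expected result here: for packed formats like AV_PIX_FMT_RGB24, sws_scale fills a single plane, and linesize[0] is the byte stride of one row, not the size of the whole buffer. At width 1920 that is 1920 x 3 = 5760 bytes per row, and the full image occupies linesize x height bytes. A plain-C sketch of the arithmetic:

```c
#include <assert.h>
#include <stddef.h>

/* RGB24 is packed: one plane, 3 bytes per pixel.
 * The linesize reported by FFmpeg is the stride of a single row. */
static size_t rgb24_stride(size_t width) { return width * 3; }
static size_t rgb24_buffer_size(size_t width, size_t height) {
    return rgb24_stride(width) * height;   /* assuming no row padding */
}
```

Note also that the stride argument of the System.Drawing Bitmap constructor used above expects this per-row byte count (5760 for 1920-pixel rows), not 3.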