
Other articles (22)
-
HTML5 audio and video support
10 April 2011 — MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match the chosen theme.
These technologies make it possible to deliver video and audio both to conventional computers (...) -
Configurable image and logo sizes
9 February 2011 — In many places on the site, logos and images are resized to fit the slots defined by the themes. Since these sizes can change from one theme to another, they can be defined directly in the theme, sparing the user from having to configure them manually after changing the site's appearance.
These image sizes are also available in the MediaSPIP Core configuration. The maximum size of the site logo in pixels, allowing (...) -
Farm management
2 March 2010 — The farm as a whole is managed by "super admins".
Some settings can be adjusted to regulate the needs of the different channels.
Initially it relies on the "Gestion de mutualisation" plugin.
On other sites (4972)
-
YUV4:2:0 conversion to RGB outputs overly green image
27 February 2023, by luckybroma — I'm decoding video and getting YUV 4:2:0 frames. In order to render them with D3D11, they need to be converted to RGB (or at least I assume the render target view cannot be YUV itself).


The YUV frames are all in planar format, i.e. separate U and V planes rather than interleaved UV. I'm creating three textures and ShaderResourceViews of type DXGI_FORMAT_R8_UNORM, copying each plane from the frame into its own ShaderResourceView, and then relying on the sampler to account for the difference in size between the Y and UV planes. Luma-only (black and white) looks great. As soon as I add color, though, I get an overly green picture:


I'm at a loss as to what I could be doing wrong. I've tried swapping the U and V planes, and I've also tried tweaking the conversion coefficients. I'm following Microsoft's guide on picture conversion.


Here is my shader:


min16float4 main(PixelShaderInput input) : SV_TARGET
{
    // Sample each plane; U and V are centered around 0.5
    float y = YChannel.Sample(defaultSampler, input.texCoord).r;
    float u = UChannel.Sample(defaultSampler, input.texCoord).r - 0.5;
    float v = VChannel.Sample(defaultSampler, input.texCoord).r - 0.5;

    // BT.601-derived YUV -> RGB conversion
    float r = y + 1.13983 * v;
    float g = y - 0.39465 * u - 0.58060 * v;
    float b = y + 2.03211 * u;

    return min16float4(r, g, b, 1.f);
}
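As a sanity check on the shader math (a sketch added here, not part of the original question), the same conversion can be run on the CPU for a single pixel, using the same coefficients and the same 0.5 centering of U and V. For example, a neutral gray input should come out with R = G = B:

```cpp
#include <cassert>
#include <cmath>

struct Rgb { float r, g, b; };

// CPU mirror of the pixel shader: BT.601-derived YUV -> RGB,
// with U and V in [0,1] and centered around 0.5.
static Rgb YuvToRgb(float y, float u, float v)
{
    u -= 0.5f;
    v -= 0.5f;
    return {
        y + 1.13983f * v,
        y - 0.39465f * u - 0.58060f * v,
        y + 2.03211f * u,
    };
}
```

Feeding a few known samples through this function and through the pixel shader should produce matching colors; if the CPU version looks right but the rendered image is green, the problem is more likely in the plane upload or sampling than in the coefficients.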



Creating my ShaderResourceViews:


D3D11_TEXTURE2D_DESC texDesc;
ZeroMemory(&texDesc, sizeof(texDesc));
texDesc.Width = 1670;
texDesc.Height = 626;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_R8_UNORM;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D11_USAGE_DYNAMIC;
texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
texDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

dev->CreateTexture2D(&texDesc, NULL, &pYPictureTexture);
dev->CreateTexture2D(&texDesc, NULL, &pUPictureTexture);
dev->CreateTexture2D(&texDesc, NULL, &pVPictureTexture);

D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
shaderResourceViewDesc.Format = DXGI_FORMAT_R8_UNORM;
shaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
shaderResourceViewDesc.Texture2D.MipLevels = 1;

dev->CreateShaderResourceView(pYPictureTexture, &shaderResourceViewDesc, &pYPictureTextureResourceView);
dev->CreateShaderResourceView(pUPictureTexture, &shaderResourceViewDesc, &pUPictureTextureResourceView);
dev->CreateShaderResourceView(pVPictureTexture, &shaderResourceViewDesc, &pVPictureTextureResourceView);




And here is how I'm copying the decoded FFmpeg AVFrames:


int height = 626;
int width = 1670;

D3D11_MAPPED_SUBRESOURCE msY;
D3D11_MAPPED_SUBRESOURCE msU;
D3D11_MAPPED_SUBRESOURCE msV;

// Y plane: full resolution
devcon->Map(pYPictureTexture, 0, D3D11_MAP_WRITE_DISCARD, 0, &msY);
memcpy(msY.pData, frame->data[0], height * width);
devcon->Unmap(pYPictureTexture, 0);

// U and V planes: quarter the pixel count (half width, half height)
devcon->Map(pUPictureTexture, 0, D3D11_MAP_WRITE_DISCARD, 0, &msU);
memcpy(msU.pData, frame->data[1], (height * width) / 4);
devcon->Unmap(pUPictureTexture, 0);

devcon->Map(pVPictureTexture, 0, D3D11_MAP_WRITE_DISCARD, 0, &msV);
memcpy(msV.pData, frame->data[2], (height * width) / 4);
devcon->Unmap(pVPictureTexture, 0);
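One thing worth checking (an editorial note, not confirmed by the poster): Map() on a dynamic texture returns a RowPitch that is often larger than the texture width, and FFmpeg's frame->linesize can likewise exceed the visible width. A single memcpy of width*height bytes then smears padding bytes into the image. A pitch-aware, row-by-row copy would look roughly like this:

```cpp
#include <cstdint>
#include <cstring>

// Copy one plane row by row, honoring both source and destination pitches.
// srcPitch corresponds to frame->linesize[i]; dstPitch to
// D3D11_MAPPED_SUBRESOURCE::RowPitch.
static void CopyPlane(uint8_t* dst, size_t dstPitch,
                      const uint8_t* src, size_t srcPitch,
                      size_t widthBytes, size_t height)
{
    for (size_t row = 0; row < height; ++row) {
        std::memcpy(dst + row * dstPitch, src + row * srcPitch, widthBytes);
    }
}

// Hypothetical usage against the mapped subresources above
// (names taken from the question):
//   CopyPlane(static_cast<uint8_t*>(msY.pData), msY.RowPitch,
//             frame->data[0], frame->linesize[0], width, height);
//   CopyPlane(static_cast<uint8_t*>(msU.pData), msU.RowPitch,
//             frame->data[1], frame->linesize[1], width / 2, height / 2);
//   CopyPlane(static_cast<uint8_t*>(msV.pData), msV.RowPitch,
//             frame->data[2], frame->linesize[2], width / 2, height / 2);
```

Note also that with three separate R8 planes, the U and V textures would normally be created at width/2 × height/2 rather than at the full Y size as in the snippet above.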



PS: Happy to provide any additional code on request! I just wanted to be as concise as possible.


-
Problems of using MediaCodec.getOutputFormat() for an encoder in Android 4.1/4.2 devices
13 March 2014, by Mark — I'm trying to use MediaCodec to encode frames (from either the camera or a decoder) into a video.
When processing the encoder output with dequeueOutputBuffer(), I expect to receive the return index MediaCodec.INFO_OUTPUT_FORMAT_CHANGED, so I can call getOutputFormat() to obtain the encoder output format as input for the ffmpeg muxer I'm currently using. I have tested several tablet/phone devices running Android 4.1 to 4.3; all of them have at least one hardware AVC video encoder, which is used in the test. On the 4.3 devices, the encoder signals MediaCodec.INFO_OUTPUT_FORMAT_CHANGED before writing the encoded data, as expected, and the output format returned from getOutputFormat() can be used by the muxer correctly. On devices with 4.2.2 or lower, the encoder never signals MediaCodec.INFO_OUTPUT_FORMAT_CHANGED even though it still outputs the encoded elementary stream, so the muxer cannot know the exact output format.
I want to ask the following questions:
- Does the behavior of the encoder (whether it signals MediaCodec.INFO_OUTPUT_FORMAT_CHANGED before outputting encoded data) depend on the Android API level or on the chipset of the individual device?
- If the encoder writes data before MediaCodec.INFO_OUTPUT_FORMAT_CHANGED appears, is there any way to get the output format of the encoded data?
- On these devices the encoder still outputs the codec config data (flagged MediaCodec.BUFFER_FLAG_CODEC_CONFIG) before the encoded data. It is mostly used to configure a decoder, but can I derive the output format from the codec config data?
I have tried these solutions to get the output format, but both failed:
- Calling getOutputFormat() repeatedly during the whole encode process. Every call throws IllegalStateException, since MediaCodec.INFO_OUTPUT_FORMAT_CHANGED never appears.
-
Using the initial MediaFormat that was used to configure the encoder at the beginning, as in this example:
m_init_encode_format = MediaFormat.createVideoFormat(m_encode_video_mime, m_frame_width, m_frame_height);
int encode_bit_rate = 3000000;
int encode_frame_rate = 15;
int encode_iframe_interval = 2;
m_init_encode_format.setInteger(MediaFormat.KEY_COLOR_FORMAT, m_encode_color_format);
m_init_encode_format.setInteger(MediaFormat.KEY_BIT_RATE, encode_bit_rate);
m_init_encode_format.setInteger(MediaFormat.KEY_FRAME_RATE, encode_frame_rate);
m_init_encode_format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, encode_iframe_interval);
m_encoder = MediaCodec.createByCodecName(m_video_encoder_codec_info.getName());
m_encoder.configure(m_init_encode_format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
// Assume m_init_encode_format is the output format of the encoder
However, this fails, since the actual output format of the encoder still "changes" from the initial one.
Please help me understand the behavior of the encoder, and whether there is any way to query the output format when the expected MediaCodec.INFO_OUTPUT_FORMAT_CHANGED is missing.
By comparing the output format and the codec config data, the missing fields are csd-0, csd-1, and a "what" field with value 1869968451.
(I do not understand the "what" field. It seems to be a constant and is not required. Can anyone tell me its meaning?)
If I parse the codec config data as the csd-1 field (the last 8 bytes) and the csd-0 field (the remaining bytes), the muxer seems to work correctly and outputs a video playable on all of the test devices.
(But I have to ask: is this 8-byte assumption correct, or is there a more reliable way to parse the data?)
However, I then hit another problem: if I decode the video with Android MediaCodec again, the BufferInfo.presentationTimeUs obtained from dequeueOutputBuffer() is 0 for most of the decoded frames; only the last few frames have correct times. The sample time returned by MediaExtractor.getSampleTime() is correct and exactly the value I set on the encoder/muxer, but the decoded frame time is not. This issue only happens on devices running 4.2.2 or lower.
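Rather than assuming the PPS is always the last 8 bytes, one more robust approach (an editorial sketch, assuming the encoder emits 4-byte Annex-B start codes) is to scan the BUFFER_FLAG_CODEC_CONFIG buffer for 00 00 00 01 start codes and split it into NAL units at those boundaries, so that for AVC the first unit (SPS) becomes csd-0 and the second (PPS) becomes csd-1:

```java
import java.util.ArrayList;
import java.util.List;

public class CsdSplitter {
    /** Splits an Annex-B codec-config buffer into individual NAL units
     *  (for AVC, typically SPS then PPS) by locating 00 00 00 01 start codes.
     *  Each returned unit keeps its start code, as csd-0/csd-1 expect it. */
    public static List<byte[]> splitNalUnits(byte[] config) {
        List<Integer> starts = new ArrayList<>();
        for (int i = 0; i + 4 <= config.length; i++) {
            if (config[i] == 0 && config[i + 1] == 0
                    && config[i + 2] == 0 && config[i + 3] == 1) {
                starts.add(i);
            }
        }
        List<byte[]> units = new ArrayList<>();
        for (int n = 0; n < starts.size(); n++) {
            int from = starts.get(n);
            int to = (n + 1 < starts.size()) ? starts.get(n + 1) : config.length;
            byte[] unit = new byte[to - from];
            System.arraycopy(config, from, unit, 0, to - from);
            units.add(unit);
        }
        return units;
    }
}
```

This avoids hard-coding the PPS length; an 8-byte PPS happens to be common, which would explain why the last-8-bytes heuristic works on the tested devices.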
It is strange that the frame time is incorrect but the video can be playback in correct speed on the device. (Most of the devices with 4.2.2 or lower I've tested has only 1 Video AVC decoder.) Do I need to set other fields that may affect the presentation time ?
-
lavc : Make AVPacket.duration int64, and deprecate convergence_duration
26 September 2015, by wm4 — lavc: Make AVPacket.duration int64, and deprecate convergence_duration
Note that convergence_duration had another meaning, one which was in practice never used. The only real use for it was as a 64-bit replacement for the duration field. It's better just to make duration 64 bits and get rid of it.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
- [DBH] doc/APIchanges
- [DBH] libavcodec/audio_frame_queue.c
- [DBH] libavcodec/audio_frame_queue.h
- [DBH] libavcodec/avcodec.h
- [DBH] libavcodec/avpacket.c
- [DBH] libavcodec/parser.c
- [DBH] libavcodec/version.h
- [DBH] libavformat/framecrcenc.c
- [DBH] libavformat/matroskadec.c
- [DBH] libavformat/matroskaenc.c
- [DBH] libavformat/md5enc.c
- [DBH] libavformat/mov.c
- [DBH] libavformat/r3d.c
- [DBH] libavformat/utils.c