
Media (91)
-
Spoon - Revenge!
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
My Morning Jacket - One Big Holiday
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Zap Mama - Wadidyusay?
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
David Byrne - My Fair Lady
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Beastie Boys - Now Get Busy
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Granite de l’Aber Ildut
9 September 2011, by
Updated: September 2011
Language: French
Type: Text
Other articles (67)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page -
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out -
Submit bugs and patches
13 April 2011
Unfortunately, no software is ever perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps that led to the problem; and a link to the site / page in question.
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...)
On other sites (9433)
-
How to encode Planar 4:2:0 (fourcc P010)
20 July 2021, by DennisFleurbaaij
I'm trying to recode fourcc V210 (which is a packed YUV 4:2:2 format) into P010 (planar YUV 4:2:0). I think I've implemented it according to the spec, but the renderer shows a wrong image, so something is off. ffmpeg has a decent example of decoding V210 (the defines below are modified from their solution), but I can't find a P010 encoder to compare against to see what I'm doing wrong.


(Yes, I've tried ffmpeg and that works, but it's too slow for this: it takes 30 ms per frame on an Intel Gen11 i7.)


Clarification (after @Frank's question): the frames being processed are 4K (3840 px wide), so there is no code for handling the 128-byte line alignment.


This runs on Intel, so little-endian conversions apply.


Try 1 - all-green image:


The following code


#define V210_READ_PACK_BLOCK(a, b, c) \
 do { \
 val = *src++; \
 a = val & 0x3FF; \
 b = (val >> 10) & 0x3FF; \
 c = (val >> 20) & 0x3FF; \
 } while (0)

#define PIXELS_PER_PACK 6
#define BYTES_PER_PACK (4*4)

void MyClass::FormatVideoFrame(
 BYTE* inFrame,
 BYTE* outBuffer)
{
 const uint32_t pixels = m_height * m_width;

 const uint32_t* src = (const uint32_t *)inFrame;

 uint16_t* dstY = (uint16_t *)outBuffer;

 uint16_t* dstUVStart = (uint16_t*)(outBuffer + ((ptrdiff_t)pixels * sizeof(uint16_t)));
 uint16_t* dstUV = dstUVStart;

 const uint32_t packsPerLine = m_width / PIXELS_PER_PACK;

 for (uint32_t line = 0; line < m_height; line++)
 {
 for (uint32_t pack = 0; pack < packsPerLine; pack++)
 {
 uint32_t val;
 uint16_t u, y1, y2, v;

 if (pack % 2 == 0)
 {
 V210_READ_PACK_BLOCK(u, y1, v);
 *dstUV++ = u;
 *dstY++ = y1;
 *dstUV++ = v;

 V210_READ_PACK_BLOCK(y1, u, y2);
 *dstY++ = y1;
 *dstUV++ = u;
 *dstY++ = y2;

 V210_READ_PACK_BLOCK(v, y1, u);
 *dstUV++ = v;
 *dstY++ = y1;
 *dstUV++ = u;

 V210_READ_PACK_BLOCK(y1, v, y2);
 *dstY++ = y1;
 *dstUV++ = v;
 *dstY++ = y2;
 }
 else
 {
 V210_READ_PACK_BLOCK(u, y1, v);
 *dstY++ = y1;

 V210_READ_PACK_BLOCK(y1, u, y2);
 *dstY++ = y1;
 *dstY++ = y2;

 V210_READ_PACK_BLOCK(v, y1, u);
 *dstY++ = y1;

 V210_READ_PACK_BLOCK(y1, v, y2);
 *dstY++ = y1;
 *dstY++ = y2;
 }
 }
 }

#ifdef _DEBUG

 // Fully written Y space
 assert(dstY == dstUVStart);

 // Fully written UV space
 const BYTE* expectedCurrentUVPtr = outBuffer + (ptrdiff_t)GetOutFrameSize();
 assert(expectedCurrentUVPtr == (BYTE *)dstUV);

#endif
}

// This is called to determine outBuffer size
LONG MyClass::GetOutFrameSize() const
{
 const LONG pixels = m_height * m_width;

 return
 (pixels * sizeof(uint16_t)) + // one 16-bit Y sample per pixel
 (pixels / 2 / 2 * (2 * sizeof(uint16_t))); // one 16-bit U and one 16-bit V per 2x2 pixel block
}



This leads to an all-green image. It turned out to be a missing bit shift: the 10 bits must be placed in the upper bits of the 16-bit value, as per the P010 spec.


Try 2 - Y works, UV doubled?


Updated the code so that it properly (or so I think) shifts the YUV values into the correct position in their 16-bit space.


#define V210_READ_PACK_BLOCK(a, b, c) \
 do { \
 val = *src++; \
 a = val & 0x3FF; \
 b = (val >> 10) & 0x3FF; \
 c = (val >> 20) & 0x3FF; \
 } while (0)


#define P010_WRITE_VALUE(d, v) (*(d)++ = ((v) << 6))

#define PIXELS_PER_PACK 6
#define BYTES_PER_PACK (4 * sizeof(uint32_t))

// Snipped constructor here which guarantees that we're processing
// something which does not violate alignment.

void MyClass::FormatVideoFrame(
 const BYTE* inBuffer,
 BYTE* outBuffer)
{ 
 const uint32_t pixels = m_height * m_width;
 const uint32_t aligned_width = ((m_width + 47) / 48) * 48;
 const uint32_t stride = aligned_width * 8 / 3;

 uint16_t* dstY = (uint16_t *)outBuffer;

 uint16_t* dstUVStart = (uint16_t*)(outBuffer + ((ptrdiff_t)pixels * sizeof(uint16_t)));
 uint16_t* dstUV = dstUVStart;

 const uint32_t packsPerLine = m_width / PIXELS_PER_PACK;

 for (uint32_t line = 0; line < m_height; line++)
 {
 // Lines start at 128 byte alignment
 const uint32_t* src = (const uint32_t*)(inBuffer + (ptrdiff_t)(line * stride));

 for (uint32_t pack = 0; pack < packsPerLine; pack++)
 {
 uint32_t val;
 uint16_t u, y1, y2, v;

 if (pack % 2 == 0)
 {
 V210_READ_PACK_BLOCK(u, y1, v);
 P010_WRITE_VALUE(dstUV, u);
 P010_WRITE_VALUE(dstY, y1);
 P010_WRITE_VALUE(dstUV, v);

 V210_READ_PACK_BLOCK(y1, u, y2);
 P010_WRITE_VALUE(dstY, y1);
 P010_WRITE_VALUE(dstUV, u);
 P010_WRITE_VALUE(dstY, y2);

 V210_READ_PACK_BLOCK(v, y1, u);
 P010_WRITE_VALUE(dstUV, v);
 P010_WRITE_VALUE(dstY, y1);
 P010_WRITE_VALUE(dstUV, u);

 V210_READ_PACK_BLOCK(y1, v, y2);
 P010_WRITE_VALUE(dstY, y1);
 P010_WRITE_VALUE(dstUV, v);
 P010_WRITE_VALUE(dstY, y2);
 }
 else
 {
 V210_READ_PACK_BLOCK(u, y1, v);
 P010_WRITE_VALUE(dstY, y1);

 V210_READ_PACK_BLOCK(y1, u, y2);
 P010_WRITE_VALUE(dstY, y1);
 P010_WRITE_VALUE(dstY, y2);

 V210_READ_PACK_BLOCK(v, y1, u);
 P010_WRITE_VALUE(dstY, y1);

 V210_READ_PACK_BLOCK(y1, v, y2);
 P010_WRITE_VALUE(dstY, y1);
 P010_WRITE_VALUE(dstY, y2);
 }
 }
 }

#ifdef _DEBUG

 // Fully written Y space
 assert(dstY == dstUVStart);

 // Fully written UV space
 const BYTE* expectedCurrentUVPtr = outBuffer + (ptrdiff_t)GetOutFrameSize();
 assert(expectedCurrentUVPtr == (BYTE *)dstUV);

#endif
}



This leads to the Y plane being correct, and the number of U and V lines as well, but somehow U and V are not overlaid properly: there are two copies of them, seemingly mirrored through the vertical center. Something similar but less visible happens when zeroing out V. So both of them are being rendered at half the width? Any tips appreciated :)


Fix:
Found the bug: I was alternating the UV writes per pack instead of per line.


if (pack % 2 == 0)



Should be


if (line % 2 == 0)



-
ffmpeg merge video and audio + fade out
1 May 2016, by jacky brown
I have one video file and one audio file.
I want to merge them so that the final output video has the length of the video and contains the audio in the background. I did:
ffmpeg -i output.avi -i bgmusic.mp3 -filter_complex " [1:0] apad " -shortest out.avi
but the background audio is cut off at the end of the final merged movie.
I want it to fade out nicely. How can I do it?
Thanks.
-
Clips from a Live Video
12 May 2022, by Jorge Borreguero Sanmartin
I am writing a program in Python that extracts clips from a live video. To do so, I get the current_frame_position with OpenCV and, using the fps, compute the start and end times to run this command:


ffmpeg -i live_video.ts -ss 00:01:30 -to 00:02:00 -c:v copy output.ts


To improve this, I want to start dumping the live video to the new file as soon as I know the start time, and finish once I reach the end time. My intention is to make this process faster and get the resulting file sooner. It doesn't have to be ffmpeg, but that is the tool I have been using. How can I do this?