
Other articles (58)
-
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded as MP4, OGV and WebM (supported by HTML5), and as MP4 (supported by Flash).
Audio files are encoded as MP3 and OGG (supported by HTML5), and as MP3 (supported by Flash).
Where possible, text is analyzed to extract the data needed for search-engine indexing, and is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...) -
Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users. -
MediaSPIP Player: potential problems
22 February 2011
The player does not work on Internet Explorer
On Internet Explorer (at least versions 7 and 8), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache’s mod_deflate module.
If the configuration of that Apache module contains a line that looks like the following, try removing it or commenting it out to see whether the player then works correctly: (...)
On other sites (4971)
-
avformat/matroskaenc: Remove special code for writing subtitles
29 December 2019, by Andreas Rheinhardt
avformat/matroskaenc: Remove special code for writing subtitles
Once upon a time, mkv_write_block() only wrote a (Simple)Block,
not a BlockGroup which is needed for subtitles to convey
the duration. But with the introduction of support for writing
BlockAdditions and DiscardPadding (both of which require a BlockGroup),
mkv_write_block() can also open and close a BlockGroup of its own. This
naturally led to some code duplication which is removed in this commit.

This new code leads to one regression: It always uses eight bytes for
the BlockGroup's length field, whereas the earlier code usually used the
lowest amount of bytes needed. This will be fixed in a future commit.

This temporary regression is also the reason for changes to the
binsub-mksenc and matroska-zero-length-block fate tests.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
-
FFMpeg DVB Subtitles memory leak
13 January 2015, by WLGfx
When decoding the subtitles track from an MPEG-TS UDP multicast stream I am getting a memory leak using avcodec_decode_subtitle2. The audio and video streams are fine, and all three streams are manually memory-managed by pre-allocating all buffers.
There’s very little information about this, but I do believe there’s a patch somewhere.
I’m currently using ffmpeg 2.0.4 compiled for armv7-a on Android.
In the process I’ve found that the video streams come in different resolutions, i.e. 720x576 or 576x576, which doesn’t matter now as I am rendering the subtitles separately as an overlay on the video. My original decoding function (which is changing to render a separate overlay) is:
void ffProcessSubtitlePacket( AVPacket *pkt )
{
    //LOGI("NATIVE FFMPEG SUBTITLE - Decoding subtitle packet");
    int got = 0;
    avcodec_decode_subtitle2(ffSubtitleContext, &ffSubtitleFrame, &got, pkt);

    if ( got )
    {
        //LOGI("NATIVE FFMPEG SUBTITLE - Got subtitle frame");
        //LOGI("NATIVE FFMPEG SUBTITLE - Format = %d, Start = %d, End = %d, Rects = %d, PTS = %llu, AudioPTS = %llu, PacketPTS = %llu",
        //    ffSubtitleFrame.format, ffSubtitleFrame.start_display_time,
        //    ffSubtitleFrame.end_display_time, ffSubtitleFrame.num_rects,
        //    ffSubtitleFrame.pts, ffAudioGetPTS(), pkt->pts);

        // now add the subtitle data to the list ready
        for ( int s = 0; s < ffSubtitleFrame.num_rects; s++ )
        {
            ffSubtitle *sub = (ffSubtitle*)mmAlloc(sizeof(ffSubtitle)); //new ffSubtitle;

            if ( sub )
            {
                AVSubtitleRect *r = ffSubtitleFrame.rects[s];
                AVPicture *p = &r->pict;

                // set main data
                sub->startPTS = pkt->pts + (uint64_t)ffSubtitleFrame.start_display_time;
                sub->endPTS = pkt->pts + (uint64_t)ffSubtitleFrame.end_display_time * (uint64_t)500;
                sub->nb_colors = r->nb_colors;
                sub->xpos = r->x;
                sub->ypos = r->y;
                sub->width = r->w;
                sub->height = r->h;

                // allocate space for CLUT and image all in one chunk
                sub->data = mmAlloc(r->nb_colors * 4 + r->w * r->h); //new char[r->nb_colors * 4 + r->w * r->h];

                if ( sub->data )
                {
                    // copy the CLUT data
                    memcpy(sub->data, p->data[1], r->nb_colors * 4);
                    // copy the bitmap onto the end
                    memcpy(sub->data + r->nb_colors * 4, p->data[0], r->w * r->h);

                    // check for duplicate subtitles and remove them as this
                    // one replaces it with a new bitmap data
                    int pos = ffSubtitles.size();
                    while ( pos-- )
                    {
                        ffSubtitle *s = ffSubtitles[pos];

                        if ( s->xpos == sub->xpos &&
                             s->ypos == sub->ypos &&
                             s->width == sub->width &&
                             s->height == sub->height )
                        {
                            //delete s;
                            ffSubtitles.erase( ffSubtitles.begin() + pos );
                            //LOGI("NATIVE FFMPEG SUBTITLE - Removed old duplicate subtitle, size %d", ffSubtitles.size());
                        }
                    }

                    // append to subtitles list
                    ffSubtitles.push_back( sub );

                    char *dat; // data pointer used for the CLUT table

                    //LOGI("NATIVE FFMPEG SUBTITLE - Added %d,%d - %d,%d, Queue %d, Length = %d",
                    //    r->x, r->y, r->w, r->h, ffSubtitles.size(), ffSubtitleFrame.end_display_time);

                    // convert the CLUT (RGB) to YUV values
                    dat = sub->data;
                    for ( int c = 0; c < r->nb_colors; c++ )
                    {
                        int r = dat[0];
                        int g = dat[1];
                        int b = dat[2];

                        int y = ( (  65 * r + 128 * g +  24 * b + 128) >> 8) +  16;
                        int u = ( ( -37 * r -  74 * g + 112 * b + 128) >> 8) + 128;
                        int v = ( ( 112 * r -  93 * g -  18 * b + 128) >> 8) + 128;

                        *dat++ = (char)y;
                        *dat++ = (char)u;
                        *dat++ = (char)v;
                        dat++; // skip the alpha channel
                    }
                }
                else
                {
                    //delete sub;
                    sub = 0;
                    LOGI("NATIVE FFMPEG SUBTITLE - Memory allocation error CLUT and BITMAP");
                }
            }
            else
            {
                LOGI("NATIVE FFMPEG SUBTITLE - Memory allocation error ffSubtitle struct");
                mmGarbageCollect();
                ffSubtitles.clear();
            }
        }
    }
}
void ffSubtitleRenderCheck(int bpos)
{
    if ( ffSubtitleID == -1 || !usingSubtitles )
    {
        // empty the list in case of memory leaks
        ffSubtitles.clear();
        mmGarbageCollect();
        return;
    }

    uint64_t audioPTS = ffAudioGetPTS();
    int pos = 0;

    // draw the subtitle list to the YUV frames
    char *yframe = ffVideoBuffers[bpos].yFrame;
    char *uframe = ffVideoBuffers[bpos].uFrame;
    char *vframe = ffVideoBuffers[bpos].vFrame;

    int ywidth = fv.frameActualWidth; // actual width with padding
    int uvwidth = fv.frameAWidthHalf; // and for uv frames

    while ( pos < ffSubtitles.size() )
    {
        ffSubtitle *sub = ffSubtitles[pos];

        if ( sub->startPTS >= audioPTS ) // okay to draw this one?
        {
            //LOGI("NATIVE FFMPEG SUBTITLE - Rendering subtitle bitmap %d", pos);
            char *clut = sub->data; // colour table
            char *dat = clut + sub->nb_colors * 4; // start of bitmap data
            int w = sub->width;
            int h = sub->height;
            int x = sub->xpos;
            int y = sub->ypos;

            for ( int xpos = 0; xpos < w; xpos++ )
            {
                for ( int ypos = 0; ypos < h; ypos++ )
                {
                    // get colour for pixel
                    char bcol = dat[ypos * w + xpos];

                    if ( bcol != 0 ) // ignore 0 pixels
                    {
                        char cluty = clut[bcol * 4 + 0]; // get colours from CLUT
                        char clutu = clut[bcol * 4 + 1];
                        char clutv = clut[bcol * 4 + 2];

                        // draw to Y frame
                        int newx = x + xpos;
                        int newy = y + ypos;
                        yframe[newy * ywidth + newx] = cluty;

                        // draw to uv frames if we have a quarter pixel only
                        if ( ( newy & 1 ) && ( newx & 1 ) )
                        {
                            uframe[(newy >> 1) * uvwidth + (newx >> 1)] = clutu;
                            vframe[(newy >> 1) * uvwidth + (newx >> 1)] = clutv;
                        }
                    }
                }
            }
        }

        pos++;
    }

    // Last thing is to erase timed out subtitles
    pos = ffSubtitles.size();
    while ( pos-- )
    {
        ffSubtitle *sub = ffSubtitles[pos];

        if ( sub->endPTS < audioPTS )
        {
            //delete sub;
            ffSubtitles.erase( ffSubtitles.begin() + pos );
            //LOGI("NATIVE FFMPEG SUBTITLE - Removed timed out subtitle");
        }
    }

    if ( ffSubtitles.size() == 0 )
    {
        // garbage collect the custom memory pool
        mmGarbageCollect();
    }

    //LOGI("NATIVE FFMPEG SUBTITLE - Size of subtitle list = %d", ffSubtitles.size());
}

Any information would be appreciated, or would I have to upgrade to a later version of ffmpeg?
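For reference, the libavcodec contract around this call is that the AVSubtitle filled in by avcodec_decode_subtitle2 owns its rects, bitmaps and CLUTs, and is meant to be released with avsubtitle_free() once its contents have been copied out. Below is a minimal sketch of that decode-and-release pattern; decode_one_subtitle() and copy_rect_somewhere() are illustrative names, not part of the code above.

/* Sketch only: decode one subtitle packet, copy what is needed, then release
   the decoder-owned AVSubtitle. Function names are illustrative placeholders. */
#include <libavcodec/avcodec.h>

static void decode_one_subtitle(AVCodecContext *ctx, AVPacket *pkt)
{
    AVSubtitle sub;
    int got = 0;

    if (avcodec_decode_subtitle2(ctx, &sub, &got, pkt) >= 0 && got)
    {
        for (unsigned i = 0; i < sub.num_rects; i++)
        {
            /* copy the CLUT and bitmap into caller-owned storage here,
               as the code above does with mmAlloc()/memcpy() */
            /* copy_rect_somewhere(sub.rects[i]); */
        }

        /* free the rects, bitmaps and CLUTs allocated by the decoder */
        avsubtitle_free(&sub);
    }
}

The posted code keeps ffSubtitleFrame alive between calls and never performs this release step, so it is at least one place worth checking.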
-
ffmpeg burned-in subtitles render in the wrong font
21 August 2024, by dv151
Trying to burn subtitles into a video with FFmpeg using the GothamProBold font. No matter what I do, it keeps reverting to Helvetica. From the console I see that FFmpeg seems to load the font without error, then switches over to the font provider "coretext":


[Parsed_subtitles_0 @ 0x7fed054048c0] Loading font file '/Projects/Fonts/GothaProBol.otf'
[Parsed_subtitles_0 @ 0x7fed054048c0] Using font provider coretext
[Parsed_subtitles_0 @ 0x7fed054048c0] fontselect: (GothaProBol.otf, 400, 0) -> /System/Library/Fonts/Helvetica.ttc, -1, Helvetica



It seems like it has my font loaded, but then it falls back to what is likely the system default, Helvetica. My guess is that my chosen font isn't actually loading after all.


The FFmpeg command (called from Python) is as follows:


ffmpeg_cmd = ["ffmpeg", 
 "-i", self.source_video_uri, 
 "-y",
 "-c:v", "prores", "-profile:v", "1", 
 "-c:a", "pcm_s16be", 
 "-vf", f"subtitles={srt_uri}:fontsdir=/Projects/Fonts:force_style='Fontname=GothaProBol.otf'",
 f"{self.source_video_uri}_render.mov"]

subprocess.call(ffmpeg_cmd)



Any ideas?


UPDATE: I found this setting in the libass header file "ass.h", which ffmpeg uses for the subtitles filter. I don't know how to actually set this variable when ffmpeg calls libass, but here it is (line 182):


/**
 * \brief Default Font provider to load fonts in libass' database
 *
 * NONE don't use any default font provider for font lookup
 * AUTODETECT use the first available font provider
 * CORETEXT force a CoreText based font provider (OS X only)
 * FONTCONFIG force a Fontconfig based font provider
 *
 * libass uses the best shaper available by default.
 */
typedef enum {
    ASS_FONTPROVIDER_NONE = 0,
    ASS_FONTPROVIDER_AUTODETECT = 1,
    ASS_FONTPROVIDER_CORETEXT,
    ASS_FONTPROVIDER_FONTCONFIG,
    ASS_FONTPROVIDER_DIRECTWRITE,
} ASS_DefaultFontProvider;
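When an application links libass directly (rather than going through ffmpeg's subtitles filter), this enum is what gets passed as the dfp argument of ass_set_fonts(). A hedged sketch of that call follows; the setup is reduced to the minimum, the directory path is only an example, and nothing here claims the ffmpeg command line exposes the same switch.

/* Sketch only: choosing libass' default font provider via its C API.
   Illustrative setup; not an option exposed by the ffmpeg CLI itself. */
#include <ass/ass.h>

static ASS_Renderer *init_renderer_with_fontconfig(void)
{
    ASS_Library *lib = ass_library_init();
    ASS_Renderer *renderer = ass_renderer_init(lib);

    /* scan a private fonts directory, comparable to fontsdir=... */
    ass_set_fonts_dir(lib, "/Projects/Fonts");

    /* the 4th argument takes an ASS_DefaultFontProvider value; request
       Fontconfig instead of CoreText (ASS_FONTPROVIDER_AUTODETECT keeps
       the platform default) */
    ass_set_fonts(renderer, NULL, "sans-serif",
                  ASS_FONTPROVIDER_FONTCONFIG, NULL, 1);

    return renderer;
}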



RE: ANSWER BELOW: For the most part, it seems that if your font is installed in /System/Fonts or /Library/Fonts then CoreText can find it, though in some cases the naming conventions can be quite particular and non-intuitive. It also can't seem to find all fonts, necessarily.


For example: Gotham Pro Bold, installed in the /Library/Fonts folder on my system as a file named "GothaProBol.otf", is correctly passed to Fontname as GothamPro-Bold or just Gotham Pro. Gotham Pro Bold, GothamPro, Gotham Pro-Bold, GothaProBol, and GothaProBol.otf do NOT work.


For most fonts it seems the preferred convention is FontName-Style/Weight as displayed in macOS's Font Book, not the filename.


That said, I have a novelty 'Game of Thrones.ttf' font in the same folder as Gotham Pro, and I can't get CoreText to connect to it under any of the above naming conventions.
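One way to see which face CoreText actually resolves for a given string, outside of ffmpeg entirely, is to ask CoreText itself for the PostScript name it maps a candidate to. The sketch below is purely diagnostic; probe_font_name() is a made-up name for illustration.

/* Sketch only: print the PostScript name CoreText resolves for a candidate
   font name, to help find the value Fontname should be set to. */
#include <CoreText/CoreText.h>
#include <stdio.h>

static void probe_font_name(const char *candidate)
{
    CFStringRef name = CFStringCreateWithCString(NULL, candidate,
                                                 kCFStringEncodingUTF8);
    CTFontRef font = CTFontCreateWithName(name, 12.0, NULL);
    CFStringRef ps = CTFontCopyPostScriptName(font);
    char resolved[256];

    /* if the lookup fell back to the system default, this prints
       Helvetica (or similar) instead of the expected family */
    if (CFStringGetCString(ps, resolved, sizeof(resolved),
                           kCFStringEncodingUTF8))
        printf("'%s' resolves to '%s'\n", candidate, resolved);

    CFRelease(ps);
    CFRelease(font);
    CFRelease(name);
}

Calling it with "GothamPro-Bold" versus "GothaProBol" should make the naming difference described above visible without re-running the encode.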