
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
Other articles (40)
-
Submit bugs and patches
13 April 2011. Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. To help us fix it, please provide the following information:
the browser you are using, including the exact version
as precise an explanation of the problem as possible
if possible, the steps that led to the problem
a link to the site/page in question
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...) -
Supported formats
28 January 2010. The following commands provide information about the formats and codecs supported by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Supported input video formats
This list is not exhaustive; it highlights the main formats in use:
h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
m4v: raw MPEG-4 video format
flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
Theora
wmv:
Possible output video formats
To begin with, we (...) -
MediaSPIP initialization (preconfiguration)
20 February 2010. During installation, MediaSPIP is preconfigured for the most common uses.
This preconfiguration is performed by a plugin called MediaSPIP Init, which is enabled by default and cannot be disabled.
This plugin correctly preconfigures every MediaSPIP instance. It must therefore be placed in the plugins-dist/ directory of the site or farm, so that it is installed by default, before the site can be used.
First of all, it enables or disables the SPIP options that (...)
On other sites (9254)
-
FFMpeg C Lib - Transpose causes corrupt image
16 December 2016, by Victor.dMdB
I'm trying to set up a transcoding pipeline with the ffmpeg C library, but if I transpose the video, it is corrupted as shown below.
If I don't transpose, the video is fine, i.e. the rest of the pipeline is correctly set up.
I need to convert the AVFrame to another datatype to use it with other software. I believe the corruption happens during the copy, but I'm not sure why. Possibly something to do with rotating YUV420P pixels?
The constructor (code was taken from here)
MyFilter::MyFilter(const std::string filter_desc, AVCodecContext *data_ctx){
    avfilter_register_all();

    buffersrc_ctx = NULL;
    buffersink_ctx = NULL;
    filter_graph = avfilter_graph_alloc();

    AVFilter *buffersink = avfilter_get_by_name("buffersink");
    if (!buffersink) {
        throw error("filtering sink element not found\n");
    }
    if (avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out", NULL, NULL, filter_graph) < 0) {
        throw error("Cannot create buffer sink\n");
    }

    filterInputs = avfilter_inout_alloc();
    filterInputs->name = av_strdup("out");
    filterInputs->filter_ctx = buffersink_ctx;
    filterInputs->pad_idx = 0;
    filterInputs->next = NULL;

    AVFilter *buffersrc = avfilter_get_by_name("buffer");
    if (!buffersrc) {
        throw error("filtering source element not found\n");
    }

    char args[512];
    snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
             data_ctx->width, data_ctx->height, data_ctx->pix_fmt,
             data_ctx->time_base.num, data_ctx->time_base.den,
             data_ctx->sample_aspect_ratio.num, data_ctx->sample_aspect_ratio.den);
    log(Info, "Setting filter input with %s", args);

    if (avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in", args, NULL, filter_graph) < 0) {
        throw error("Cannot create buffer source\n");
    }

    filterOutputs = avfilter_inout_alloc();
    filterOutputs->name = av_strdup("in");
    filterOutputs->filter_ctx = buffersrc_ctx;
    filterOutputs->pad_idx = 0;
    filterOutputs->next = NULL;

    if ((avfilter_graph_parse(filter_graph, filter_desc.c_str(), filterInputs, filterOutputs, NULL)) < 0)
        log(Warning, "Could not parse input filters");
    if ((avfilter_graph_config(filter_graph, NULL)) < 0)
        log(Warning, "Could not configure filter graph");
}

And the process:
AVFrame * MyFilter::process(AVFrame *inFrame){
    if (av_buffersrc_add_frame_flags(buffersrc_ctx, inFrame->get(), AV_BUFFERSRC_FLAG_PUSH | AV_BUFFERSRC_FLAG_KEEP_REF) < 0) {
        throw error("Error while feeding the filtergraph\n");
    }

    int i = 0;
    AVFrame* outFrame = av_frame_alloc();
    if (av_buffersink_get_frame(buffersink_ctx, outFrame) < 0) {
        throw error("Couldnt find a frame\n");
    }
    return outFrame;
}

And the filter I'm using is:
std::string filter_desc = "transpose=cclock"
As an extra note, it seems like the top bar (visible in the screen capture above) is actually composed of properly rotated pixels, and this works for the whole video. It just degrades for the remaining 99% of pixels.
Using this works:
std::string filter_desc = "rotate=PI/2"
, but then the resolution is not properly shifted. If I try
std::string filter_desc = "rotate='PI/2:ow=ih:oh=iw'"
the same issue as before starts appearing again. It seems to be associated with the change in resolution.
I think the corruption might come from a copy that's made afterwards (for compatibility with something else I'm using):
void copyToPicture(AVFrame const* avFrame, DataPicture* pic) {
    for (size_t comp = 0; comp < pic->getNumPlanes(); ++comp) {
        auto const subsampling = comp == 0 ? 1 : 2;
        auto const bytePerPixel = pic->getFormat().format == YUYV422 ? 2 : 1;
        // std::cout << "Pixel format is " << pic->getFormat().format << std::endl;
        auto src = avFrame->data[comp];
        auto const srcPitch = avFrame->linesize[comp];
        auto dst = pic->getPlane(comp);
        auto const dstPitch = pic->getPitch(comp);
        auto const w = avFrame->width * bytePerPixel / subsampling;
        auto const h = avFrame->height / subsampling;
        for (int y = 0; y < h; ++y) {
            memcpy(dst, src, w);
            src += srcPitch;
            dst += dstPitch;
        }
    }
}
-
Watson NarrowBand Speech to Text not accepting ogg file
19 January 2017, by Bob Dill
NodeJS app using ffmpeg to create ogg files from mp3 and mp4 sources. If the source file is broadband, Watson Speech to Text accepts the file with no issues. If the source file is narrowband, Watson Speech to Text fails to read the ogg file. I've tested the output from ffmpeg, and the narrowband ogg file has the same audio content (e.g. I can listen to it and hear the same people) as the mp3 file. Yes, in advance: I am changing the call to Watson to correctly specify the model and content_type. Code follows:
exports.createTranscript = function(req, res, next) {
    var _name = getNameBase(req.body.movie);
    var _type = getType(req.body.movie);
    var _voice = (_type == "mp4") ? "en-US_BroadbandModel" : "en-US_NarrowbandModel";
    var _contentType = (_type == "mp4") ? "audio/ogg" : "audio/basic";
    var _audio = process.cwd()+"/HTML/movies/"+_name+'ogg';
    var transcriptFile = process.cwd()+"/HTML/movies/"+_name+'json';

    speech_to_text.createSession({model: _voice}, function(error, session) {
        if (error) { console.log('error:', error); }
        else {
            var params = {
                content_type: _contentType,
                continuous: true,
                audio: fs.createReadStream(_audio),
                session_id: session.session_id
            };
            speech_to_text.recognize(params, function(error, transcript) {
                if (error) { console.log('error:', error); }
                else {
                    fs.writeFile(transcriptFile, JSON.stringify(transcript), function(err) {
                        if (err) { console.log(err); }
                    });
                    res.send(transcript);
                }
            });
        }
    });
}

_type
is either mp3 (narrowband from phone recording) or mp4 (broadband)
model: _voice
has been traced to ensure correct setting
content_type: _contentType
has been traced to ensure correct settingAny ogg file submitted to Speech to Text with narrowband settings fails with
Error: No speech detected for 30s.
Tested with both real narrowband files and by asking Watson to read a broadband ogg file (created from mp4) as narrowband. Same error message. What am I missing? -
ffmpeg transpose corrupts video [on hold]
15 December 2016, by Victor.dMdB
I'm trying to set up a transcoding pipeline with the ffmpeg C library, but if I transpose the video, it is corrupted as shown below.
If I don't transpose, the video is fine, i.e. the rest of the pipeline is correctly set up.
I'm not actually sure what the issue is. Is it a problem with the pixel format? Why is the transpose corrupting the video stream? Is there something wrong with my code (added below)?
The constructor (code was taken from here)
MyFilter::MyFilter(const std::string filter_desc, AVCodecContext *data_ctx){
    avfilter_register_all();

    buffersrc_ctx = NULL;
    buffersink_ctx = NULL;
    filter_graph = avfilter_graph_alloc();

    AVFilter *buffersink = avfilter_get_by_name("buffersink");
    if (!buffersink) {
        throw error("filtering sink element not found\n");
    }
    if (avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out", NULL, NULL, filter_graph) < 0) {
        throw error("Cannot create buffer sink\n");
    }

    filterInputs = avfilter_inout_alloc();
    filterInputs->name = av_strdup("out");
    filterInputs->filter_ctx = buffersink_ctx;
    filterInputs->pad_idx = 0;
    filterInputs->next = NULL;

    AVFilter *buffersrc = avfilter_get_by_name("buffer");
    if (!buffersrc) {
        throw error("filtering source element not found\n");
    }

    char args[512];
    snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
             data_ctx->width, data_ctx->height, data_ctx->pix_fmt,
             data_ctx->time_base.num, data_ctx->time_base.den,
             data_ctx->sample_aspect_ratio.num, data_ctx->sample_aspect_ratio.den);
    log(Info, "Setting filter input with %s", args);

    if (avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in", args, NULL, filter_graph) < 0) {
        throw error("Cannot create buffer source\n");
    }

    filterOutputs = avfilter_inout_alloc();
    filterOutputs->name = av_strdup("in");
    filterOutputs->filter_ctx = buffersrc_ctx;
    filterOutputs->pad_idx = 0;
    filterOutputs->next = NULL;

    if ((avfilter_graph_parse(filter_graph, filter_desc.c_str(), filterInputs, filterOutputs, NULL)) < 0)
        log(Warning, "Could not parse input filters");
    if ((avfilter_graph_config(filter_graph, NULL)) < 0)
        log(Warning, "Could not configure filter graph");
}

And the process:
AVFrame * MyFilter::process(AVFrame *inFrame){
    if (av_buffersrc_add_frame_flags(buffersrc_ctx, inFrame->get(), AV_BUFFERSRC_FLAG_PUSH | AV_BUFFERSRC_FLAG_KEEP_REF) < 0) {
        throw error("Error while feeding the filtergraph\n");
    }

    int i = 0;
    AVFrame* outFrame = av_frame_alloc();
    if (av_buffersink_get_frame(buffersink_ctx, outFrame) < 0) {
        throw error("Couldnt find a frame\n");
    }
    return outFrame;
}

And the filter I'm using is:
std::string filter_desc = "transpose=cclock"
As an extra note, it seems like the top bar (visible in the screen capture above) is actually composed of properly rotated pixels, and this works for the whole video. It just degrades for the remaining 99% of pixels.
EDIT:
Using this works:
std::string filter_desc = "rotate=1.58"
, but then the resolution is not properly shifted.