Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (72)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    To get a working installation, you need to install all the software dependencies manually on the server.
    If you want to use this archive for an installation in "farm" mode, you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Customize by adding your own logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

On other sites (10335)

  • FFMpeg C Lib - Transpose causes corrupt image

    16 December 2016, by Victor.dMdB

    I’m trying to set up a transcoding pipeline with the ffmpeg C libraries, but if I transpose the video it comes out corrupted, as shown below.

    If I don’t transpose, the video is fine, i.e. the rest of the pipeline is correctly set up.

    I need to convert the AVFrame to another data type to use it with other software. I believe the corruption happens on the copy, but I’m not sure why. Possibly something to do with rotating YUV420P pixels?

    (screenshot: video is rotated but corrupted)

    The constructor (code was taken from here)

    MyFilter::MyFilter(const std::string filter_desc, AVCodecContext *data_ctx) {
        avfilter_register_all();
        buffersrc_ctx = NULL;
        buffersink_ctx = NULL;

        filter_graph = avfilter_graph_alloc();

        AVFilter *buffersink = avfilter_get_by_name("buffersink");
        if (!buffersink) {
            throw error("filtering sink element not found\n");
        }

        if (avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out", NULL, NULL, filter_graph) < 0) {
            throw error("Cannot create buffer sink\n");
        }

        filterInputs = avfilter_inout_alloc();
        filterInputs->name       = av_strdup("out");
        filterInputs->filter_ctx = buffersink_ctx;
        filterInputs->pad_idx    = 0;
        filterInputs->next       = NULL;

        AVFilter *buffersrc = avfilter_get_by_name("buffer");
        if (!buffersrc) {
            throw error("filtering source element not found\n");
        }

        char args[512];
        snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
                 data_ctx->width, data_ctx->height, data_ctx->pix_fmt,
                 data_ctx->time_base.num, data_ctx->time_base.den,
                 data_ctx->sample_aspect_ratio.num, data_ctx->sample_aspect_ratio.den);

        log(Info, "Setting filter input with %s", args);

        if (avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in", args, NULL, filter_graph) < 0) {
            throw error("Cannot create buffer source\n");
        }

        filterOutputs = avfilter_inout_alloc();
        filterOutputs->name       = av_strdup("in");
        filterOutputs->filter_ctx = buffersrc_ctx;
        filterOutputs->pad_idx    = 0;
        filterOutputs->next       = NULL;

        if (avfilter_graph_parse(filter_graph, filter_desc.c_str(), filterInputs, filterOutputs, NULL) < 0)
            log(Warning, "Could not parse input filters");

        if (avfilter_graph_config(filter_graph, NULL) < 0)
            log(Warning, "Could not configure filter graph");
    }

    And the process

    AVFrame *MyFilter::process(AVFrame *inFrame) {
        if (av_buffersrc_add_frame_flags(buffersrc_ctx, inFrame, AV_BUFFERSRC_FLAG_PUSH | AV_BUFFERSRC_FLAG_KEEP_REF) < 0) {
            throw error("Error while feeding the filtergraph\n");
        }

        AVFrame *outFrame = av_frame_alloc();
        if (av_buffersink_get_frame(buffersink_ctx, outFrame) < 0) {
            throw error("Couldn't find a frame\n");
        }
        return outFrame;
    }
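
    As a side note, here is a minimal hedged sketch of how the sink is usually drained in FFmpeg’s own examples: av_buffersink_get_frame can return AVERROR(EAGAIN) or AVERROR_EOF, which are not fatal, so callers typically loop until one of those codes comes back. buffersink_ctx is the context created above; the onFrame callback is an assumption standing in for whatever the next pipeline stage is.

    #include <stdexcept>
    extern "C" {
    #include <libavutil/frame.h>
    #include <libavutil/error.h>
    #include <libavfilter/buffersink.h>
    }

    // Hedged sketch: pull every frame currently buffered in the sink.
    template <typename F>
    void drainSink(AVFilterContext *buffersink_ctx, F onFrame) {
        AVFrame *frame = av_frame_alloc();
        while (true) {
            int ret = av_buffersink_get_frame(buffersink_ctx, frame);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                break;                      // nothing more to pull right now: not an error
            if (ret < 0)
                throw std::runtime_error("Error while pulling from the filtergraph");
            onFrame(frame);                 // hand the frame to the next stage
            av_frame_unref(frame);          // release the payload before reusing the AVFrame
        }
        av_frame_free(&frame);
    }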

    And the filter I’m using is:

    std::string filter_desc = "transpose=cclock";

    As an extra note, it seems like the top bar (visible in the screen capture above) is actually composed of properly rotated pixels, and this holds for the whole video. It just degrades for the remaining 99% of the pixels.

    Using std::string filter_desc = "rotate=PI/2" works, but then the resolution is not properly swapped. If I try
    std::string filter_desc = "rotate='PI/2:ow=ih:oh=iw'"
    the same issue as before appears again. It seems to be associated with the change in resolution.

    I think the corruption might come from a copy that’s made afterwards (for compatibility with something else I’m using):

    void copyToPicture(AVFrame const* avFrame, DataPicture* pic) {
        for (size_t comp = 0; comp < pic->getNumPlanes(); ++comp) {
            auto const subsampling = comp == 0 ? 1 : 2;
            auto const bytePerPixel = pic->getFormat().format == YUYV422 ? 2 : 1;
            // std::cout << "Pixel format is " << pic->getFormat().format << std::endl;
            auto src = avFrame->data[comp];
            auto const srcPitch = avFrame->linesize[comp];

            auto dst = pic->getPlane(comp);
            auto const dstPitch = pic->getPitch(comp);

            auto const w = avFrame->width * bytePerPixel / subsampling;
            auto const h = avFrame->height / subsampling;

            for (int y = 0; y < h; ++y) (...)
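
    For comparison, here is a hedged alternative sketch of the per-plane copy using libavutil’s av_image_copy_plane, which copies row by row while honouring both pitches. DataPicture, getNumPlanes, getPlane and getPitch are the asker’s own helpers quoted above; the plane widths assume a YUV420P frame with one byte per sample.

    #include <cstddef>
    extern "C" {
    #include <libavutil/frame.h>
    #include <libavutil/imgutils.h>
    }

    // Hedged sketch, not the original code: copy each plane with av_image_copy_plane so
    // that padding bytes (linesize > visible width) are never interpreted as pixels.
    void copyToPictureSketch(AVFrame const *avFrame, DataPicture *pic) {
        for (size_t comp = 0; comp < pic->getNumPlanes(); ++comp) {
            // In YUV420P the chroma planes (comp 1 and 2) are halved in both directions.
            int const subsampling = (comp == 0) ? 1 : 2;
            int const byteWidth   = avFrame->width  / subsampling;   // 1 byte per sample
            int const height      = avFrame->height / subsampling;

            av_image_copy_plane(pic->getPlane(comp), pic->getPitch(comp),
                                avFrame->data[comp], avFrame->linesize[comp],
                                byteWidth, height);
        }
    }
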
  • Watson NarrowBand Speech to Text not accepting ogg file

    19 January 2017, by Bob Dill

    NodeJS app using ffmpeg to create ogg files from mp3 & mp4. If the source file is broadband, Watson Speech to Text accepts the file with no issues. If the source file is narrowband, Watson Speech to Text fails to read the ogg file. I’ve tested the output from ffmpeg and the narrowband ogg file has the same audio content (e.g. I can listen to it and hear the same people) as the mp3 file. And yes, I am already changing the call to Watson to correctly specify the model and content_type. Code follows:

    exports.createTranscript = function(req, res, next) {
      var _name = getNameBase(req.body.movie);
      var _type = getType(req.body.movie);
      var _voice = (_type == "mp4") ? "en-US_BroadbandModel" : "en-US_NarrowbandModel";
      var _contentType = (_type == "mp4") ? "audio/ogg" : "audio/basic";
      var _audio = process.cwd() + "/HTML/movies/" + _name + 'ogg';
      var transcriptFile = process.cwd() + "/HTML/movies/" + _name + 'json';

      speech_to_text.createSession({model: _voice}, function(error, session) {
        if (error) { console.log('error:', error); }
        else {
          var params = {
            content_type: _contentType,
            continuous: true,
            audio: fs.createReadStream(_audio),
            session_id: session.session_id
          };
          speech_to_text.recognize(params, function(error, transcript) {
            if (error) { console.log('error:', error); }
            else {
              fs.writeFile(transcriptFile, JSON.stringify(transcript), function(err) { if (err) { console.log(err); } });
              res.send(transcript);
            }
          });
        }
      });
    }

    _type is either mp3 (narrowband from phone recording) or mp4 (broadband)
    model: _voice has been traced to ensure correct setting
    content_type: _contentType has been traced to ensure correct setting

    Any ogg file submitted to Speech to Text with narrowband settings fails with Error: No speech detected for 30s. Tested both with real narrowband files and by asking Watson to read a broadband ogg file (created from mp4) as narrowband. Same error message. What am I missing?

  • ffmpeg transpose corrupts video [on hold]

    15 December 2016, by Victor.dMdB

    I’m trying to set up a transcoding pipeline with the ffmpeg C libraries, but if I transpose the video it comes out corrupted, as shown below.

    If I don’t transpose, the video is fine, i.e. the rest of the pipeline is correctly set up.

    I’m not actually sure what the issue is. Is it a problem with the pixel format? Why is the transpose corrupting the video stream? Is there something wrong with my code (added below)?

    (screenshot: video is rotated but corrupted)

    The constructor (code was taken from here)

    MyFilter::MyFilter(const std::string filter_desc, AVCodecContext *data_ctx) {
        avfilter_register_all();
        buffersrc_ctx = NULL;
        buffersink_ctx = NULL;

        filter_graph = avfilter_graph_alloc();

        AVFilter *buffersink = avfilter_get_by_name("buffersink");
        if (!buffersink) {
            throw error("filtering sink element not found\n");
        }

        if (avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out", NULL, NULL, filter_graph) < 0) {
            throw error("Cannot create buffer sink\n");
        }

        filterInputs = avfilter_inout_alloc();
        filterInputs->name       = av_strdup("out");
        filterInputs->filter_ctx = buffersink_ctx;
        filterInputs->pad_idx    = 0;
        filterInputs->next       = NULL;

        AVFilter *buffersrc = avfilter_get_by_name("buffer");
        if (!buffersrc) {
            throw error("filtering source element not found\n");
        }

        char args[512];
        snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
                 data_ctx->width, data_ctx->height, data_ctx->pix_fmt,
                 data_ctx->time_base.num, data_ctx->time_base.den,
                 data_ctx->sample_aspect_ratio.num, data_ctx->sample_aspect_ratio.den);

        log(Info, "Setting filter input with %s", args);

        if (avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in", args, NULL, filter_graph) < 0) {
            throw error("Cannot create buffer source\n");
        }

        filterOutputs = avfilter_inout_alloc();
        filterOutputs->name       = av_strdup("in");
        filterOutputs->filter_ctx = buffersrc_ctx;
        filterOutputs->pad_idx    = 0;
        filterOutputs->next       = NULL;

        if (avfilter_graph_parse(filter_graph, filter_desc.c_str(), filterInputs, filterOutputs, NULL) < 0)
            log(Warning, "Could not parse input filters");

        if (avfilter_graph_config(filter_graph, NULL) < 0)
            log(Warning, "Could not configure filter graph");
    }

    And the process

    AVFrame *MyFilter::process(AVFrame *inFrame) {
        if (av_buffersrc_add_frame_flags(buffersrc_ctx, inFrame, AV_BUFFERSRC_FLAG_PUSH | AV_BUFFERSRC_FLAG_KEEP_REF) < 0) {
            throw error("Error while feeding the filtergraph\n");
        }

        AVFrame *outFrame = av_frame_alloc();
        if (av_buffersink_get_frame(buffersink_ctx, outFrame) < 0) {
            throw error("Couldn't find a frame\n");
        }
        return outFrame;
    }

    And the filter I’m using is:

    std::string filter_desc = "transpose=cclock";

    As an extra note, it seems like the top bar (visible in the screen capture above) is actually composed of properly rotated pixels, and this holds for the whole video. It just degrades for the remaining 99% of the pixels.

    EDIT:

    Using std::string filter_desc = "rotate=1.58" works, but then the resolution is not properly swapped.
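
    As a hedged footnote: FFmpeg’s rotate filter also accepts output-size options (ow/oh), so the output canvas can be swapped to match a 90° rotation, whereas transpose swaps the dimensions on its own. A minimal sketch of the expression form already tried earlier on this page:

    #include <string>

    // Hedged sketch: rotate by PI/2 and swap the output canvas via the documented ow/oh options.
    std::string filter_desc = "rotate='PI/2:ow=ih:oh=iw'";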