Other articles (49)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database, named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)

  • Other interesting software

    13 April 2011, by

    We don’t claim to be the only ones doing what we do ... and we certainly don’t claim to be the best either ... We simply try to do what we do well, and to keep getting better ...
    The following list presents software that is more or less similar to MediaSPIP, or whose goals MediaSPIP more or less shares.
    We don’t know these projects and we haven’t tried them, but you can take a peek.
    Videopress
    Website: http://videopress.com/
    License: GNU/GPL v2
    Source code: (...)

  • Customising by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

On other sites (4675)

  • Revision 106740: Cf r106739: new capability of the easy mutualisation plugin (see ...

    11 October 2017, by real3t@… — Log

    Cf r106739: new capability of the easy mutualisation plugin (see the next commit): display the value of a meta. Either:
    - nommeta
    - nomcasier/nommeta (widely used with CFG)
    Declared in mes_options.php
    Example:
    $GLOBALS['mutualisation_afficher_config'] = (isset($GLOBALS['mutualisation_afficher_config']) ? $GLOBALS['mutualisation_afficher_config'].',' : '').'soyezcreateurs/mode_affichage,slogan_site';

  • Converting uint8_t data to AVFrame with FFmpeg

    30 October 2017, by J.Lefebvre

    I am currently working in C++ with the Autodesk 3DStudio Max 2014 SDK (toolset 100) and the FFmpeg library in Visual Studio 2015, trying to convert a DIB (Device Independent Bitmap) to a uint8_t pointer array and then to convert that data to an AVFrame.

    I don’t have any errors, but my video is still black and has no metadata (no time display, etc.).

    I did approximately the same thing in a Visual Studio console application to convert a JPEG image sequence from disk, and that works fine.
    (The only difference is that instead of converting JPEGs to AVFrames with the FFmpeg library, I try to convert raw data to an AVFrame.)

    So I think the problem is either in the DIB-to-uint8_t conversion or in the uint8_t-to-AVFrame conversion.
    (The second is more plausible, because I used the SFML library to display a window with my RGB uint8_t* data for debugging, and that works fine.)
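
    For comparison, here is a minimal standalone sketch of what I understand a packed-RGBA-to-YUV420P conversion should look like with libswscale; the key point is that the source stride for packed RGBA is 4 bytes per pixel. The function and variable names are my own, not from my project:

    extern "C" {
    #include <libswscale/swscale.h>
    }
    #include <cstdint>

    // Sketch only: convert one packed RGBA frame into pre-allocated
    // YUV420P planes (dst/dstLinesize as filled by av_image_alloc).
    bool RgbaToYuv420p(const uint8_t *rgba, int width, int height,
                       uint8_t *dst[4], int dstLinesize[4])
    {
       SwsContext *ctx = sws_getContext(width, height, AV_PIX_FMT_RGBA,
                                        width, height, AV_PIX_FMT_YUV420P,
                                        SWS_BICUBIC, NULL, NULL, NULL);
       if (!ctx)
           return false;

       // Packed RGBA is a single plane whose stride is 4 * width,
       // not 3 * width as it would be for RGB24.
       const uint8_t *srcSlice[1] = { rgba };
       const int srcStride[1] = { 4 * width };

       sws_scale(ctx, srcSlice, srcStride, 0, height, dst, dstLinesize);
       sws_freeContext(ctx);
       return true;
    }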

    I first initialize the FFmpeg library:

    This function is called once at the beginning.

    int Converter::Initialize(AVCodecID codec_id, int width, int height, int fps, const char *filename)
    {
       avcodec_register_all();
       av_register_all();

       AVCodec *codec;
       inputFrame = NULL;
       codecContext = NULL;
       pkt = NULL;
       file = NULL;
       outputFilename = new char[strlen(filename) + 1]; // +1 for the terminating null byte
       strcpy(outputFilename, filename);

       int ret;

       //Initializing AVCodecContext and getting PixelFormat supported by encoder
       codec = avcodec_find_encoder(codec_id);
       if (!codec)
           return 1;

       AVPixelFormat pixFormat = codec->pix_fmts[0];
       codecContext = avcodec_alloc_context3(codec);
       if (!codecContext)
           return 1;

       codecContext->bit_rate = 400000;
       codecContext->width = width;
       codecContext->height = height;
       codecContext->time_base.num = 1;
       codecContext->time_base.den = fps;
       codecContext->gop_size = 10;
       codecContext->max_b_frames = 1;
       codecContext->pix_fmt = pixFormat;

       if (codec_id == AV_CODEC_ID_H264)
           av_opt_set(codecContext->priv_data, "preset", "slow", 0);

       //Actually opening the encoder
       if (avcodec_open2(codecContext, codec, NULL) < 0)
           return 1;

       file = fopen(outputFilename, "wb");
       if (!file)
           return 1;

       inputFrame = av_frame_alloc();
       inputFrame->format = codecContext->pix_fmt;
       inputFrame->width = codecContext->width;
       inputFrame->height = codecContext->height;

       ret = av_image_alloc(inputFrame->data, inputFrame->linesize, codecContext->width, codecContext->height, codecContext->pix_fmt, 32);

       if (ret < 0)
           return 1;

       return 0;
    }

    Then, for each frame, I get the DIB and convert it to a uint8_t* with this function:

    uint8_t* Util::ToUint8_t(RGBQUAD *data, int width, int height)
    {
       uint8_t* buf = (uint8_t*)data;

       int imageSize = width * height;
       size_t rgbquad_size = sizeof(RGBQUAD);
       size_t total_bytes = imageSize * rgbquad_size;
       uint8_t * pCopyBuffer = new uint8_t[total_bytes];

       // DIBs are stored bottom-up, so invertIndex walks the source rows in
       // reverse to flip the image right side up while copying.
       for (int x = 0; x < width; x++)
       {
           for (int y = 0; y < height; y++)
           {
               int index = (x + width * y) * rgbquad_size;
               int invertIndex = (x + width* (height - y - 1)) * rgbquad_size;

               //BGRA to RGBA
               pCopyBuffer[index] = buf[invertIndex + 2];
               pCopyBuffer[index + 1] = buf[invertIndex + 1];
               pCopyBuffer[index + 2] = buf[invertIndex];
               pCopyBuffer[index + 3] = 0xFF;
           }
       }

       return pCopyBuffer;
    }

    void GetDIBBuffer(Interface* ip, BITMAPINFO *bmi, uint8_t** outBuffer)
    {
       int size;

       ViewExp& view = ip->GetActiveViewExp();

       view.getGW()->getDIB(NULL, &size);

       bmi = (BITMAPINFO *)malloc(size);
       BITMAPINFOHEADER *bmih = (BITMAPINFOHEADER *)bmi;
       view.getGW()->getDIB(bmi, &size);

       uint8_t * pCopyBuffer = Util::ToUint8_t(bmi->bmiColors, bmih->biWidth, bmih->biHeight);

       *outBuffer = pCopyBuffer;
    }

    This function is used to get the DIB:

    void GetViewportDIB(Interface* ip, BITMAPINFO *bmi, BITMAPINFOHEADER *bmih, BitmapInfo biFile, Bitmap *map)
    {
       int size;

       if (!biFile.Name()[0])
           return;

       ViewExp& view = ip->GetActiveViewExp();

       view.getGW()->getDIB(NULL, &size);

       bmi = (BITMAPINFO *)malloc(size);
       bmih = (BITMAPINFOHEADER *)bmi;

       view.getGW()->getDIB(bmi, &size);

       biFile.SetWidth((WORD)bmih->biWidth);
       biFile.SetHeight((WORD)bmih->biHeight);
       biFile.SetType(BMM_TRUE_32);

       map = TheManager->Create(&biFile);
       map->OpenOutput(&biFile);
       map->FromDib(bmi);
       map->Write(&biFile);
       map->Close(&biFile);
    }

    And then comes the conversion to an AVFrame and the video encoding:

    The EncodeFromMem function is called for each frame.

    int Converter::EncodeFromMem(const char *outputDir, int frameNumber, uint8_t* data)
    {
       int ret;

       inputFrame->pts = frameNumber;
       EncodeFrame(data, codecContext, inputFrame, &pkt, file);

       return 0;
    }

    static void RgbToYuv(uint8_t *rgb, AVCodecContext *c, AVFrame *frame)
    {
       struct SwsContext *swsCtx = NULL;
       const int in_linesize[1] = { 3 * c->width };// RGB stride
       swsCtx = sws_getCachedContext(swsCtx, c->width, c->height, AV_PIX_FMT_RGB24, c->width, c->height, AV_PIX_FMT_YUV420P, 0, 0, 0, 0);
       sws_scale(swsCtx, (const uint8_t * const *)&rgb, in_linesize, 0, c->height, frame->data, frame->linesize);
    }

    static void EncodeFrame(uint8_t *rgb, AVCodecContext *c, AVFrame *frame, AVPacket **pkt, FILE *file)
    {
       int ret, got_output;

       RgbToYuv(rgb, c, frame);

       *pkt = av_packet_alloc();
       av_init_packet(*pkt);
       (*pkt)->data = NULL;
       (*pkt)->size = 0;

       ret = avcodec_encode_video2(c, *pkt, frame, &got_output);
       if (ret < 0)
       {
           fprintf(stderr, "Error encoding frame/n");
           exit(1);
       }
       if (got_output)
       {
           fwrite((*pkt)->data, 1, (*pkt)->size, file);
           av_packet_unref(*pkt);
       }
    }

    To finish, I have a function that writes the delayed packets and frees the memory.
    This function is called once, at the end of the time range.

    int Converter::Finalize()
    {
       int ret, got_output;
       uint8_t endcode[] = { 0, 0, 1, 0xb7 }; // MPEG sequence end code

       /* get the delayed frames */
       do
       {
           fflush(stdout);
           ret = avcodec_encode_video2(codecContext, pkt, NULL, &got_output);
           if (ret < 0)
           {
               fprintf(stderr, "Error encoding frame/n");
               return 1;
           }
           if (got_output)
           {
               fwrite(pkt->data, 1, pkt->size, file);
               av_packet_unref(pkt);
           }
       } while (got_output);

       fwrite(endcode, 1, sizeof(endcode), file);
       fclose(file);

       avcodec_close(codecContext);
       av_free(codecContext);

       av_frame_unref(inputFrame);
       av_frame_free(&inputFrame);
       //av_freep(&inputFrame->data[0]); //Crash

       delete[] outputFilename; // allocated with new[]
       outputFilename = 0;

       return 0;
    }
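
    Putting it together, the converter is driven per frame roughly like this (a sketch of the calling code; the codec, dimensions, frame rate and output path are made-up values, and the 3DStudio Max plumbing around GetDIBBuffer is simplified):

    int RenderToVideo(Interface *ip, int frameCount)
    {
       Converter converter;
       if (converter.Initialize(AV_CODEC_ID_H264, 1920, 1080, 25, "out.h264") != 0)
           return 1;

       for (int frame = 0; frame < frameCount; frame++)
       {
           uint8_t *rgba = NULL;
           GetDIBBuffer(ip, NULL, &rgba);            // bmi is reassigned inside; buffer comes back as RGBA
           converter.EncodeFromMem("", frame, rgba); // outputDir is unused here
           delete[] rgba;                            // buffer allocated by Util::ToUint8_t
       }

       return converter.Finalize();
    }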

    EDIT:

    I modified my RgbToYuv function and created another one to convert the YUV frame back to an RGB one.

    This doesn’t really solve the problem, but it may narrow it down to the YUV-to-RGB conversion.

    This is the result of the YUV-to-RGB conversion: https://img42.com/kHqpt+

    static void YuvToRgb(AVCodecContext *c, AVFrame *frame)
    {
       struct SwsContext *img_convert_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P, c->width, c->height, AV_PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL);
       AVFrame * rgbPictInfo = av_frame_alloc();
       avpicture_fill((AVPicture*)rgbPictInfo, *(frame)->data, AV_PIX_FMT_RGB24, c->width, c->height);
       sws_scale(img_convert_ctx, frame->data, frame->linesize, 0, c->height, rgbPictInfo->data, rgbPictInfo->linesize);

       Util::DebugWindow(c->width, c->height, rgbPictInfo->data[0]);
    }
    static void RgbToYuv(uint8_t *rgb, AVCodecContext *c, AVFrame *frame)
    {
       AVFrame * rgbPictInfo = av_frame_alloc();
       avpicture_fill((AVPicture*)rgbPictInfo, rgb, AV_PIX_FMT_RGBA, c->width, c->height);

       struct SwsContext *swsCtx = sws_getContext(c->width, c->height, AV_PIX_FMT_RGBA, c->width, c->height, AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);
       avpicture_fill((AVPicture*)frame, rgb, AV_PIX_FMT_YUV420P, c->width, c->height);    
       sws_scale(swsCtx, rgbPictInfo->data, rgbPictInfo->linesize, 0, c->height, frame->data, frame->linesize);

       YuvToRgb(c, frame);
    }

  • Combining more than 32 input files in FFmpeg

    17 May 2016, by thunderblaster

    I am using FFmpeg through Node.js via fluent-ffmpeg to combine a number of small audio files into one. Each of the audio files I am combining has a delayed start time (so it’s neither merging everything at the same start, nor simple concatenation). I can do this successfully by concatenating each input onto an aevalsrc=0 silence of the right length and then amixing everything down. However, although I couldn’t find any reference to a maximum number of inputs in the documentation, I got [amix @ 0x3fcd920] Value 78.000000 for parameter 'inputs' out of range [1 - 32] when trying to amix 78 files. Clearly there is a limit of 32 inputs.

    Given this limitation, I am unsure of the best way to proceed. I understand that amerge exists, but it stops at the shortest input’s length, so I would need to apad everything; I also just tested it and found that amerge has a limit of 64 inputs, which won’t always suit my needs (I have an arbitrary number of inputs).

    I could amix 32 files, store the result somewhere, amix 32 more, and so on, and then amix the intermediate results, as sketched below. I would prefer not to write temp files to disk that I then have to clean up. I considered writing the "temp" outputs to duplex Node streams and reading from them in my final mixdown, but I fear that may be rather inefficient.
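
    In principle, that cascade can even live inside a single filtergraph, so nothing touches disk: mix the concat outputs in chunks of at most 32, then amix the chunk results. Here is a sketch that generates such a filter_complex string (written in C++ for illustration; the logic ports directly to JavaScript, and the ac0, ac1, ... labels are assumed to be the concat outputs from a graph like mine):

    #include <algorithm>
    #include <string>
    #include <vector>

    // Sketch: build a cascaded amix graph so that no single amix stage
    // has more than 32 inputs. The final mix ends up in the last label
    // produced ("m<stage>_0").
    std::string BuildCascadedAmix(int inputCount, int maxPerStage = 32)
    {
       std::vector<std::string> labels;
       for (int i = 0; i < inputCount; i++)
           labels.push_back("ac" + std::to_string(i));

       std::string graph;
       int stage = 0;
       while (labels.size() > 1)
       {
           std::vector<std::string> next;
           for (size_t i = 0; i < labels.size(); i += maxPerStage)
           {
               size_t n = std::min<size_t>(maxPerStage, labels.size() - i);
               if (n == 1) { next.push_back(labels[i]); continue; } // lone leftover passes through
               std::string out = "m" + std::to_string(stage) + "_" + std::to_string(i / maxPerStage);
               for (size_t j = 0; j < n; j++)
                   graph += "[" + labels[i + j] + "]";
               graph += "amix=inputs=" + std::to_string(n) + ":duration=longest[" + out + "];";
               next.push_back(out);
           }
           labels.swap(next);
           stage++;
       }
       if (!graph.empty())
           graph.pop_back(); // drop the trailing ';'
       return graph;
    }

    One caveat I am aware of: amix scales its inputs down to avoid clipping, so cascading stages changes the relative levels compared to a single 78-input mix; a volume filter after each stage could compensate.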

    Below is what I am currently doing. If this is an XY problem and this is a dumb way to accomplish what I want, please let me know.

    ffmpeg()
       //INPUTS
       .input('/drummachine/www/audio/bd/bd5025.wav')
       .input('/drummachine/www/audio/bd/bd5025.wav')
       .input('/drummachine/www/audio/bd/bd5025.wav')
       .input('/drummachine/www/audio/sd/sd5025.wav')
       .input('/drummachine/www/audio/sd/sd5025.wav')
       .input('/drummachine/www/audio/rs/rs.wav')
       .input('/drummachine/www/audio/rs/rs.wav')
       .input('/drummachine/www/audio/rs/rs.wav')
       .input('/drummachine/www/audio/cp/cp.wav')
       .input('/drummachine/www/audio/cp/cp.wav')
       .input('/drummachine/www/audio/cp/cp.wav')
       .input('/drummachine/www/audio/oh/oh25.wav')
       .input('/drummachine/www/audio/oh/oh25.wav')
       .input('/drummachine/www/audio/oh/oh25.wav')
       .input('/drummachine/www/audio/ch/ch.wav')
       .input('/drummachine/www/audio/ch/ch.wav')
       .input('/drummachine/www/audio/ch/ch.wav')
       // ...
       // you get the picture
       // ...
       .audioCodec('libmp3lame').format('mp3')
       .complexFilter([
           //GENERATE SILENCE TO PREPEND TO INPUTS
           'aevalsrc=0:d=6.857142857142857[s78]',
           'aevalsrc=0:d=0[s0]',
           'aevalsrc=0:d=0.857[s1]',
           'aevalsrc=0:d=1.714[s2]',
           'aevalsrc=0:d=2.571[s3]',
           'aevalsrc=0:d=3.429[s4]',
           'aevalsrc=0:d=4.286[s5]',
           'aevalsrc=0:d=5.143[s6]',
           'aevalsrc=0:d=6[s7]',
           'aevalsrc=0:d=0.429[s8]',
           'aevalsrc=0:d=1.286[s9]',
           'aevalsrc=0:d=2.143[s10]',
           'aevalsrc=0:d=3[s11]',
           'aevalsrc=0:d=3.857[s12]',
           'aevalsrc=0:d=4.714[s13]',
           'aevalsrc=0:d=5.571[s14]',
           'aevalsrc=0:d=6.429[s15]',
           'aevalsrc=0:d=0.536[s16]',
           // ...
           //CONCAT SILENCE AND AUDIO
           {filter: 'concat', options: {v: 0, a: 1}, inputs: ['s0', '0:a'], outputs: 'ac0'},
           {filter: 'concat', options: {v: 0, a: 1}, inputs: ['s1', '1:a'], outputs: 'ac1'},
           {filter: 'concat', options: {v: 0, a: 1}, inputs: ['s2', '2:a'], outputs: 'ac2'},
           {filter: 'concat', options: {v: 0, a: 1}, inputs: ['s3', '3:a'], outputs: 'ac3'},
           {filter: 'concat', options: {v: 0, a: 1}, inputs: ['s4', '4:a'], outputs: 'ac4'},
           {filter: 'concat', options: {v: 0, a: 1}, inputs: ['s5', '5:a'], outputs: 'ac5'},
           {filter: 'concat', options: {v: 0, a: 1}, inputs: ['s6', '6:a'], outputs: 'ac6'},
           {filter: 'concat', options: {v: 0, a: 1}, inputs: ['s7', '7:a'], outputs: 'ac7'},
           {filter: 'concat', options: {v: 0, a: 1}, inputs: ['s8', '8:a'], outputs: 'ac8'},
           {filter: 'concat', options: {v: 0, a: 1}, inputs: ['s9', '9:a'], outputs: 'ac9'},
           {filter: 'concat', options: {v: 0, a: 1}, inputs: ['s10', '10:a'], outputs: 'ac10'},
           // ...
           // again, this goes on for a while
           // ...
           //MIX IT ALL
           {filter: 'amix', options: {inputs: 78, duration: 'longest'},
               inputs: ['s78', 'ac0', 'ac1', 'ac2', 'ac3', 'ac4', 'ac5',
               'ac6', 'ac7', 'ac8', 'ac9', 'ac10', 'ac11', 'ac12', 'ac13',
               'ac14', 'ac15', 'ac16', 'ac17', 'ac18', 'ac19', 'ac20',    
               // ...
               'ac74', 'ac75', 'ac76'], outputs: 'out'}], 'out')
       //ERROR
       .on('error', function (err, stdout, stderr) {
           console.log('an error happened: ' + err.message);
           console.log('ffmpeg stdout: ' + stdout);
           console.log('ffmpeg stderr: ' + stderr);
       //SUCCESS
       }).on('end', function () {
           console.log('Processing finished !');
           res.end();
       }).pipe(res);