Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (75)

  • Automatic backup of SPIP channels

    1 April 2010, by

    When setting up an open platform, it is important for hosts to have fairly regular backups available to guard against any potential problem.
    This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (the documents, the elements (...)

  • Encoding and transformation into formats readable on the Internet

    10 April 2011

    MediaSPIP converts and re-encodes uploaded documents to make them readable on the Internet and automatically usable without any intervention from the content creator.
    Videos are automatically encoded into the formats supported by HTML5: MP4, Ogv and WebM. The "MP4" version is also used for the fallback Flash player needed by older browsers.
    Audio documents are likewise re-encoded into the two formats usable with HTML5: MP3 and Ogg. The "MP3" version (...)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

On other sites (6072)

  • Converting uint8_t data to AVFrame with FFmpeg

    30 October 2017, by J.Lefebvre

    I am currently working in C++ with the Autodesk 3DStudio Max 2014 SDK (toolset 100) and the FFmpeg library in Visual Studio 2015, trying to convert a DIB (Device Independent Bitmap) to a uint8_t pointer array and then convert that data to an AVFrame.

    I don't have any errors, but my video is still black and has no metadata (no time display, etc.).

    I did approximately the same thing in a Visual Studio console application to convert a JPEG image sequence from disk, and that works fine.
    (The only difference is that instead of converting JPEG to AVFrame with the FFmpeg library, I try to convert raw data to an AVFrame.)

    So I think the problem is either in the conversion from the DIB to the uint8_t data, or from the uint8_t data to the AVFrame.
    (The second is more plausible, because I used the SFML library to display a window with my RGB uint8_t* data for debugging, and that works fine.)

    I first initialize the FFmpeg library:

    This function is called once at the beginning.

    int Converter::Initialize(AVCodecID codec_id, int width, int height, int fps, const char *filename)
    {
       avcodec_register_all();
       av_register_all();

       AVCodec *codec;
       inputFrame = NULL;
       codecContext = NULL;
       pkt = NULL;
       file = NULL;
       outputFilename = new char[strlen(filename) + 1](); //+1 for the terminating null byte
       strcpy(outputFilename, filename);

       int ret;

       //Initializing AVCodecContext and getting PixelFormat supported by encoder
       codec = avcodec_find_encoder(codec_id);
       if (!codec)
           return 1;

       AVPixelFormat pixFormat = codec->pix_fmts[0];
       codecContext = avcodec_alloc_context3(codec);
       if (!codecContext)
           return 1;

       codecContext->bit_rate = 400000;
       codecContext->width = width;
       codecContext->height = height;
       codecContext->time_base.num = 1;
       codecContext->time_base.den = fps;
       codecContext->gop_size = 10;
       codecContext->max_b_frames = 1;
       codecContext->pix_fmt = pixFormat;

       if (codec_id == AV_CODEC_ID_H264)
           av_opt_set(codecContext->priv_data, "preset", "slow", 0);

       //Actually opening the encoder
       if (avcodec_open2(codecContext, codec, NULL) < 0)
           return 1;

       file = fopen(outputFilename, "wb");
       if (!file)
           return 1;

       inputFrame = av_frame_alloc();
       inputFrame->format = codecContext->pix_fmt;
       inputFrame->width = codecContext->width;
       inputFrame->height = codecContext->height;

       ret = av_image_alloc(inputFrame->data, inputFrame->linesize, codecContext->width, codecContext->height, codecContext->pix_fmt, 32);

       if (ret < 0)
           return 1;

       return 0;
    }
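
    For orientation, a minimal driver for this class, following the call order described in this question (Initialize once, EncodeFromMem per frame, Finalize at the end), might look like the sketch below; the frame count, the Interface pointer and the output name are placeholder assumptions, not part of the original code:

    Converter converter;
    if (converter.Initialize(AV_CODEC_ID_H264, 1280, 720, 25, "out.h264") != 0)
        return 1; //hypothetical resolution, fps and file name

    for (int frameNumber = 0; frameNumber < totalFrames; ++frameNumber)
    {
       uint8_t *rgba = NULL;
       GetDIBBuffer(ip, NULL, &rgba);                    //viewport DIB as top-down RGBA
       converter.EncodeFromMem(NULL, frameNumber, rgba);
       delete[] rgba;                                    //allocated with new[] in ToUint8_t
    }

    converter.Finalize();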

    Then, for each frame, I get the DIB and convert it to a uint8_t* with this function:

    uint8_t* Util::ToUint8_t(RGBQUAD *data, int width, int height)
    {
       uint8_t* buf = (uint8_t*)data;

       int imageSize = width * height;
       size_t rgbquad_size = sizeof(RGBQUAD);
       size_t total_bytes = imageSize * rgbquad_size;
       uint8_t * pCopyBuffer = new uint8_t[total_bytes];

       for (int x = 0; x < width; x++)
       {
           for (int y = 0; y < height; y++)
           {
            int index = (x + width * y) * rgbquad_size;
            //the DIB is stored bottom-up, so read the source rows in reverse
            int invertIndex = (x + width * (height - y - 1)) * rgbquad_size;

               //BGRA to RGBA
               pCopyBuffer[index] = buf[invertIndex + 2];
               pCopyBuffer[index + 1] = buf[invertIndex + 1];
               pCopyBuffer[index + 2] = buf[invertIndex];
               pCopyBuffer[index + 3] = 0xFF;
           }
       }

       return pCopyBuffer;
    }

    void GetDIBBuffer(Interface* ip, BITMAPINFO *bmi, uint8_t** outBuffer)
    {
       int size;

       ViewExp& view = ip->GetActiveViewExp();

       //the first call with a NULL buffer only queries the required size
       view.getGW()->getDIB(NULL, &size);

       //note: bmi is passed by value, so this allocation never reaches the
       //caller and is leaked when the function returns
       bmi = (BITMAPINFO *)malloc(size);
       BITMAPINFOHEADER *bmih = (BITMAPINFOHEADER *)bmi;
       view.getGW()->getDIB(bmi, &size);

       uint8_t * pCopyBuffer = Util::ToUint8_t(bmi->bmiColors, bmih->biWidth, bmih->biHeight);

       *outBuffer = pCopyBuffer;
    }

    This function is used to get the DIB:

    void GetViewportDIB(Interface* ip, BITMAPINFO *bmi, BITMAPINFOHEADER *bmih, BitmapInfo biFile, Bitmap *map)
    {
       int size;

       if (!biFile.Name()[0])
           return;

       ViewExp& view = ip->GetActiveViewExp();

       view.getGW()->getDIB(NULL, &size);

       bmi = (BITMAPINFO *)malloc(size);
       bmih = (BITMAPINFOHEADER *)bmi;

       view.getGW()->getDIB(bmi, &size);

       biFile.SetWidth((WORD)bmih->biWidth);
       biFile.SetHeight((WORD)bmih->biHeight);
       biFile.SetType(BMM_TRUE_32);

       map = TheManager->Create(&biFile);
       map->OpenOutput(&biFile);
       map->FromDib(bmi);
       map->Write(&biFile);
       map->Close(&biFile);
    }

    And after the conversion to an AVFrame comes the video encoding:

    The EncodeFromMem function is called for each frame.

    int Converter::EncodeFromMem(const char *outputDir, int frameNumber, uint8_t* data)
    {
       int ret;

       inputFrame->pts = frameNumber;
       EncodeFrame(data, codecContext, inputFrame, &pkt, file);

       return 0;
    }

    static void RgbToYuv(uint8_t *rgb, AVCodecContext *c, AVFrame *frame)
    {
       struct SwsContext *swsCtx = NULL;
       const int in_linesize[1] = { 3 * c->width };// RGB stride
       swsCtx = sws_getCachedContext(swsCtx, c->width, c->height, AV_PIX_FMT_RGB24, c->width, c->height, AV_PIX_FMT_YUV420P, 0, 0, 0, 0);
       sws_scale(swsCtx, (const uint8_t * const *)&rgb, in_linesize, 0, c->height, frame->data, frame->linesize);
    }

    static void EncodeFrame(uint8_t *rgb, AVCodecContext *c, AVFrame *frame, AVPacket **pkt, FILE *file)
    {
       int ret, got_output;

       RgbToYuv(rgb, c, frame);

       *pkt = av_packet_alloc();
       av_init_packet(*pkt);
       (*pkt)->data = NULL;
       (*pkt)->size = 0;

       ret = avcodec_encode_video2(c, *pkt, frame, &got_output);
       if (ret < 0)
       {
           fprintf(stderr, "Error encoding frame/n");
           exit(1);
       }
       if (got_output)
       {
           fwrite((*pkt)->data, 1, (*pkt)->size, file);
           av_packet_unref(*pkt);
       }
    }
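
    (avcodec_encode_video2 was already deprecated in the FFmpeg 3.x releases current at the time of this question; purely as an illustration, an EncodeFrame built on the send/receive API could look like this sketch, with error handling reduced to the essentials:)

    static void EncodeFrame(uint8_t *rgb, AVCodecContext *c, AVFrame *frame, FILE *file)
    {
       RgbToYuv(rgb, c, frame);

       if (avcodec_send_frame(c, frame) < 0)
           exit(1);

       //drain every packet the encoder has ready for this frame
       AVPacket *packet = av_packet_alloc();
       while (avcodec_receive_packet(c, packet) == 0)
       {
           fwrite(packet->data, 1, packet->size, file);
           av_packet_unref(packet);
       }
       av_packet_free(&packet);
    }

    (Flushing in Finalize would then send NULL once and drain until avcodec_receive_packet returns AVERROR_EOF.)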

    To finish, I have a function that writes the delayed packets and frees the memory:
    This function is called once at the end of the time range.

    int Converter::Finalize()
    {
       int ret, got_output;
       uint8_t endcode[] = { 0, 0, 1, 0xb7 };

       /* get the delayed frames */
       do
       {
           fflush(stdout);
           ret = avcodec_encode_video2(codecContext, pkt, NULL, &got_output);
           if (ret < 0)
           {
               fprintf(stderr, "Error encoding frame\n");
               return 1;
           }
           if (got_output)
           {
               fwrite(pkt->data, 1, pkt->size, file);
               av_packet_unref(pkt);
           }
       } while (got_output);

       fwrite(endcode, 1, sizeof(endcode), file);
       fclose(file);

       avcodec_close(codecContext);
       av_free(codecContext);

       av_frame_unref(inputFrame);
       av_frame_free(&inputFrame);
       //av_freep(&inputFrame->data[0]); //Crash: inputFrame is already freed at
       //this point, so the av_freep would have to come before av_frame_free

       delete[] outputFilename; //allocated with new[], so delete[] is required
       outputFilename = 0;

       return 0;
    }

    EDIT:

    I modified my RgbToYuv function and created another one to convert the YUV frame back to an RGB one.

    This doesn't really solve the problem, but it may narrow the problem down to the YuvToRgb conversion.

    This is the result of the conversion from YUV to RGB:

    [YuvToRgb result]: https://img42.com/kHqpt+

    static void YuvToRgb(AVCodecContext *c, AVFrame *frame)
    {
       struct SwsContext *img_convert_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P, c->width, c->height, AV_PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL);
       AVFrame * rgbPictInfo = av_frame_alloc();
       avpicture_fill((AVPicture*)rgbPictInfo, *(frame)->data, AV_PIX_FMT_RGB24, c->width, c->height);
       sws_scale(img_convert_ctx, frame->data, frame->linesize, 0, c->height, rgbPictInfo->data, rgbPictInfo->linesize);

       Util::DebugWindow(c->width, c->height, rgbPictInfo->data[0]);
    }
    static void RgbToYuv(uint8_t *rgb, AVCodecContext *c, AVFrame *frame)
    {
       AVFrame * rgbPictInfo = av_frame_alloc();
       avpicture_fill((AVPicture*)rgbPictInfo, rgb, AV_PIX_FMT_RGBA, c->width, c->height);

       struct SwsContext *swsCtx = sws_getContext(c->width, c->height, AV_PIX_FMT_RGBA, c->width, c->height, AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);
       avpicture_fill((AVPicture*)frame, rgb, AV_PIX_FMT_YUV420P, c->width, c->height);    
       sws_scale(swsCtx, rgbPictInfo->data, rgbPictInfo->linesize, 0, c->height, frame->data, frame->linesize);

       YuvToRgb(c, frame);
    }
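
    (One observation on the edited version: the second avpicture_fill repoints frame->data at the RGBA buffer, so sws_scale no longer writes into the planes allocated by av_image_alloc in Initialize. A minimal conversion that leaves the frame's own buffers intact, assuming the input really is tightly packed RGBA, could look like this sketch:)

    static void RgbToYuv(uint8_t *rgba, AVCodecContext *c, AVFrame *frame)
    {
       //packed RGBA is 4 bytes per pixel, so the source stride is 4 * width
       const uint8_t *in_data[1] = { rgba };
       const int in_linesize[1] = { 4 * c->width };

       struct SwsContext *swsCtx = sws_getContext(c->width, c->height, AV_PIX_FMT_RGBA,
                                                  c->width, c->height, AV_PIX_FMT_YUV420P,
                                                  SWS_BICUBIC, NULL, NULL, NULL);
       //write directly into the YUV planes allocated by av_image_alloc
       sws_scale(swsCtx, in_data, in_linesize, 0, c->height, frame->data, frame->linesize);
       sws_freeContext(swsCtx);
    }
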
  • video generation using ffmpeg

    10 May 2012, by Jack

    I am trying to generate a video from a set of images using the FFmpeg library. With the following code I am able to generate the video in the simulator, but when I run it on a device it produces a strange green effect in the video. I followed the "Encoding images to video with ffmpeg" link. Can anybody help me out, or provide a code snippet? It would be really appreciated.

    //here is the code..

    - (void)createVideoFromImages
    {
     NSString *fileName2 = [Utilities documentsPath:[NSString stringWithFormat:@"test.mov"]];
     NSLog(@"filename: %@",fileName2);

     //Register all formats and codecs

     AVCodec *codec;

     //avcodec_register_all();
     //avdevice_register_all();

     av_register_all();


     AVCodecContext *c= NULL;
     int i, out_size, size, outbuf_size;
     FILE *f;
     AVFrame *picture;
     uint8_t *outbuf;

     printf("Video encoding\n");

     /* find the mpeg video encoder */
     codec = avcodec_find_encoder(CODEC_ID_MPEG2VIDEO);
     if (!codec)
     {
       fprintf(stderr, "codec not found\n");
       exit(1);
     }

     c= avcodec_alloc_context();
     picture= avcodec_alloc_frame();

     /* put sample parameters */
     c->bit_rate = 400000;
     /* resolution must be a multiple of two */
     c->width = 256;
     c->height = 256;//258;

     /* frames per second */
     c->time_base= (AVRational){1,25};
     c->gop_size = 10; /* emit one intra frame every ten frames */
     c->max_b_frames=1;
     c->pix_fmt =  PIX_FMT_YUV420P;//PIX_FMT_YUV420P;

     /* open it */
     if (avcodec_open(c, codec) < 0) {
       fprintf(stderr, "could not open codec\n");
       exit(1);
     }

     const char* filename_cstr = [fileName2 cStringUsingEncoding:NSUTF8StringEncoding];
     f = fopen(filename_cstr, "wb");
     if (!f) {
       fprintf(stderr, "could not open %s\n", fileName2);
       exit(1);
     }

     /* alloc image and output buffer */
     outbuf_size = 100000;
     outbuf = malloc(outbuf_size);
     size = c->width * c->height;

     #pragma mark -
     AVFrame* outpic = avcodec_alloc_frame();
     int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);

     NSLog(@"bytes: %d",nbytes);

     //create buffer for the output image
     uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);

     for(i=100;i<104;i++)
     {
       fflush(stdout);

       int numBytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
       NSLog(@"numBytes: %d",numBytes);
       uint8_t *buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

       UIImage *image;

       image = [UIImage imageWithContentsOfFile:[Utilities bundlePath:[NSString stringWithFormat:@"%d.png",i]]];


       /*
       if(i>=98)//for video images
       {
          NSLog(@"i: %d",i);
          image = [UIImage imageWithContentsOfFile:[Utilities documentsPath:[NSString stringWithFormat:@"image0098.png"]]]; ///@"image0098.png"]];
          //[NSString stringWithFormat:@"%d.png", i]];
       }
       else //for custom image
       {
          image = [UIImage imageWithContentsOfFile:[Utilities bundlePath:[NSString stringWithFormat:@"image%04d.png", i]]];
          //[UIImage imageNamed:[NSString stringWithFormat:@"%d.png", i]];//@"image%04d.png",i]];
       }*/

       CGImageRef newCgImage = [image CGImage];

       NSLog(@"No. of Bits per component: %d",CGImageGetBitsPerComponent([image CGImage]));
       NSLog(@"No. of Bits per pixel: %d",CGImageGetBitsPerPixel([image CGImage]));
       NSLog(@"No. of Bytes per row: %d",CGImageGetBytesPerRow([image CGImage]));


       CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
       CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
       //this overwrites the av_malloc'd pointer above (leaking it); buffer now
       //points into bitmapData rather than at memory this code owns
       buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);

       struct SwsContext* fooContext;

       avpicture_fill((AVPicture*)picture, buffer, PIX_FMT_RGBA, c->width, c->height);

       avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);

       fooContext= sws_getContext(c->width, c->height,
          PIX_FMT_RGBA,
           c->width, c->height,
           PIX_FMT_YUV420P,
           SWS_FAST_BILINEAR , NULL, NULL, NULL);

    //}

     //perform the conversion

     NSLog(@"linesize: %d", picture->linesize[0]);

     sws_scale(fooContext, picture->data, picture->linesize, 0, c->height, outpic->data, outpic->linesize);

     // Here is where I try to convert to YUV

     /* encode the image */
     out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
     printf("encoding frame %d (size=%d)\n", i, out_size);
     fwrite(outbuf, 1, out_size, f);

     NSLog(@"%d",sizeof(f));

     //buffer points into bitmapData at this point, so free() releases memory
     //this code does not own; the CFDataRef itself is never CFRelease'd
     free(buffer);
     buffer = NULL;

     }

     /* get the delayed frames */
     for( ; out_size; i++)
     {
       fflush(stdout);
       out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
       printf("write frame %3d (size=%5d)\n", i, out_size);
       fwrite(outbuf, 1, out_size, f); //write only the encoded bytes, not the whole buffer
     }

     /* add sequence end code to have a real mpeg file */
     outbuf[0] = 0x00;
     outbuf[1] = 0x00;
     outbuf[2] = 0x01;
     outbuf[3] = 0xb7;
     fwrite(outbuf, 1, 4, f);

     fclose(f);
     free(outbuf);

     avcodec_close(c);
     av_free(c);
     av_free(picture);
     printf("\n");

    }
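
    (A frequent cause of green output in this kind of pipeline is a stride mismatch: avpicture_fill assumes the RGBA rows are exactly 4 * width bytes, while CGImage rows can be padded, which is what the CGImageGetBytesPerRow log above would reveal. A hedged sketch of passing the real row stride to sws_scale, assuming the PNG really decodes to RGBA at the expected size:)

     const uint8_t *srcData[1] = { buffer };
     const int srcLinesize[1] = { (int)CGImageGetBytesPerRow(newCgImage) }; //may exceed 4 * width

     sws_scale(fooContext, srcData, srcLinesize, 0, c->height,
               outpic->data, outpic->linesize);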
  • FFmpeg poor quality encoded MPEG-4 video

    10 October 2017, by Frank Natoli

    Have adapted FFmpeg sample muxing.c under Windows 7 to write MP4 files from video camera RGB data.
    Using muxing.c default bit_rate=400000.
    Am not setting global_quality.
    Resultant MP4 is poor quality, highly pixelated.
    Original raw images in video camera are sharp.
    What values should I use for bit_rate? Do I have to also set rc_max_rate?
    What values should I use for global_quality? Do I have to set any flags to enable use of global_quality?
    Is bit_rate versus global_quality an either/or situation? Or can both be useful in adjusting quality?
    Thanks for your time.
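
    (For reference, the two mechanisms are not mutually exclusive; a rough sketch of the relevant AVCodecContext fields, with purely illustrative values, might be:)

    //average target in bits per second; 400000 is very low for anything much
    //larger than a small frame size
    c->bit_rate       = 4000000;
    //optional ceiling; rc_buffer_size must also be set for it to take effect
    c->rc_max_rate    = 6000000;
    c->rc_buffer_size = 8000000;

    //alternatively, fixed-quantizer mode where the codec honours it (for the
    //MPEG-4 encoder, global_quality maps to the quantizer via FF_QP2LAMBDA)
    c->flags |= AV_CODEC_FLAG_QSCALE;
    c->global_quality = FF_QP2LAMBDA * 5; //lower means better quality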