
Other articles (23)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or later. If needed, contact the administrator of your MediaSPIP to find out.

  • MediaSPIP Core: Configuration

    9 November 2010

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin to work): a page specific to the general configuration of the template; a page specific to the configuration of the site's home page; a page specific to the configuration of sections.
    It also provides an additional page, which only appears when certain plugins are enabled, for controlling the display and the specific features of (...)

  • Specific configuration for PHP5

    4 February 2011

    PHP5 is required; you can install it by following this specific tutorial.
    It is recommended to disable safe_mode at first; however, if it is correctly configured and the necessary binaries are accessible, MediaSPIP should work correctly with safe_mode enabled.
    Specific modules
    Certain specific PHP modules must be installed, via your distribution's package manager or manually: php5-mysql for connectivity with the (...)

On other websites (6424)

  • Incorrect duration of an MP4 file generated with the FFmpeg SDK

    28 April 2016, by LavenderSs

    The purpose of the following code is to merge some pictures into an MP4 file with the FFmpeg SDK and the x265 encoder. The MP4 file is generated and plays normally, but its duration is longer than it should be. For example, I have 8 pictures; if the fps of the MP4 is 2, the duration should be 4 seconds, but the actual duration is 11 seconds. When I play this MP4 file, playback is over after 4 s.

    I am new to the field of ffmpeg. I think the root cause of the issue may be an incorrect bitrate, but I have tried updating the encoder parameters and it doesn't work. Could anyone familiar with ffmpeg programming help check this issue? Really appreciated!

    #define IMAGE_FILE_NUMBER    8

    #define MP4_FILE     "test_sample_1.mp4"

    char* image_file[IMAGE_FILE_NUMBER] =
    {
       "test_sample_1_1.bmp",
       "test_sample_1_2.bmp",
       "test_sample_1_3.bmp",
       "test_sample_1_4.bmp",
       "test_sample_1_5.bmp",
       "test_sample_1_6.bmp",
       "test_sample_1_7.bmp",
       "test_sample_1_8.bmp"
    };

    void test_image2mp4(const char* output_filename)
    {
       printf(" Enter %s!!!\n", __FUNCTION__);

       FILE *file[IMAGE_FILE_NUMBER];    /*image file handler*/  
       char *szTxt[IMAGE_FILE_NUMBER];  /*image data*/  
       int nDataLen[IMAGE_FILE_NUMBER]={0};   /*image data length (exclude header)*/

       int nWidth = 0;    
       int nHeight= 0;    
       int nLen;  
       int nHeadLen;

       int file_index;
       int ret;

       BITMAPFILEHEADER bmpFHeader = {0};    
       BITMAPINFOHEADER bmpIHeader = {0};

       //Abstract image data from bmp files
       for (file_index = 0; file_index < IMAGE_FILE_NUMBER; file_index ++)    
       {
           file[file_index] = fopen(image_file[file_index], "rb");  

           av_assert0(file[file_index] != NULL);

           /*Read image file*/
           fseek(file[file_index],0,SEEK_END);  
           nLen = ftell(file[file_index]);  
           szTxt[file_index] = (char *)malloc(nLen);

           fseek(file[file_index],0,SEEK_SET);
           nLen = fread(szTxt[file_index],1,nLen,file[file_index]);  
           fclose(file[file_index]);    

           memcpy(&bmpFHeader, szTxt[file_index], sizeof(BITMAPFILEHEADER));  

           nHeadLen = bmpFHeader.bfOffBits - sizeof(BITMAPFILEHEADER);  

           memcpy(&bmpIHeader,szTxt[file_index]+sizeof(BITMAPFILEHEADER),nHeadLen);  

           nWidth = bmpIHeader.biWidth;  
           nHeight = bmpIHeader.biHeight;  

           szTxt[file_index] += bmpFHeader.bfOffBits;    
           nDataLen[file_index] = nLen-bmpFHeader.bfOffBits;    

           printf(" Read [%s] width:%d, height:%d, data len:%d.\n", image_file[file_index], nWidth, nHeight, nDataLen[file_index]);
       }    

       av_register_all();    
       avcodec_register_all();    

       AVFormatContext *pFmtCtx = NULL;
       AVCodec *pCodec;
       AVStream *pVideoStream;
       AVCodecContext *pCodecCtx;
       AVFrame *pFrame;
       AVFrame *pRGBFrame;
       AVPacket pPkt;
       unsigned char *yuv_buff;
       unsigned char *rgb_buff;
       struct SwsContext * pSwsCtx;
       int size;
       AVRational sRate;

       sRate.num = 2000;
       sRate.den = 1000;

       pFmtCtx = avformat_alloc_context();
       pFmtCtx->oformat = av_guess_format(NULL, output_filename, NULL);
       avio_open(&pFmtCtx->pb, output_filename, AVIO_FLAG_WRITE);

       pCodec = avcodec_find_encoder(AV_CODEC_ID_HEVC);
       printf(" Found h265 codec...\n");

       pVideoStream = avformat_new_stream(pFmtCtx, pCodec);

       pVideoStream->time_base = sRate;
       pVideoStream->avg_frame_rate = sRate;
       pCodecCtx = pVideoStream->codec;

       if (pFmtCtx->oformat->flags | AVFMT_GLOBALHEADER)
       {
           pCodecCtx->flags |= CODEC_FLAG_GLOBAL_HEADER;
       }

       av_opt_set(pCodecCtx->priv_data, "preset", "ultrafast", 0);
       av_opt_set(pCodecCtx->priv_data, "tune", "zerolatency", 0);
       av_opt_set(pCodecCtx->priv_data, "x265-params", "qp=20", 0);
       av_opt_set(pCodecCtx->priv_data, "crf", "18", 0);

       pCodecCtx->codec_id = AV_CODEC_ID_HEVC;//from lxh's case
       pCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
       pCodecCtx->ticks_per_frame = 2;
       pCodecCtx->max_b_frames = 1;//0
       pCodecCtx->bit_rate_tolerance = 1;
       pCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
       pCodecCtx->width = nWidth;
       pCodecCtx->height = nHeight;
       pCodecCtx->thread_count = 1;

       pCodecCtx->time_base.num = pVideoStream->avg_frame_rate.num;
       pCodecCtx->time_base.den = pVideoStream->avg_frame_rate.den;

       pCodecCtx->gop_size = 10;

       pCodecCtx->bit_rate = 3000000;//no use
       pCodecCtx->qmin = 1;
       pCodecCtx->qmax = 5;

       pFrame = av_frame_alloc();
       pRGBFrame = av_frame_alloc();

       pFrame->width = pCodecCtx->width;
       pFrame->height = pCodecCtx->height;
       pFrame->format = pCodecCtx->pix_fmt;

       ret = avcodec_open2(pCodecCtx, pCodec, NULL);
       if (ret < 0)
       {
         char msg[128];
         av_strerror(ret, msg, 128);
         printf("err: %s\n", msg);
       }

       printf(" Create and open codec...\n");    

       ret = avformat_write_header(pFmtCtx, NULL);
       if (ret < 0)
       {
           static char msg[128];
           av_strerror(ret, msg, 128);
           av_assert0(ret >= 0);
       }

       size = avpicture_get_size(pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height);  
       yuv_buff = (unsigned char*)av_malloc(size);

       pSwsCtx= sws_getContext(pCodecCtx->width,pCodecCtx->height,AV_PIX_FMT_BGR24,
                       pCodecCtx->width,pCodecCtx->height,AV_PIX_FMT_YUV420P,SWS_POINT,NULL,NULL,NULL);    

       printf(" Begin encode...\n");

       pFrame->pts = 0;

       for (file_index = 0; file_index < IMAGE_FILE_NUMBER; file_index ++)    
       {
           rgb_buff = (unsigned char*)av_malloc(nDataLen[file_index]);  
           memcpy(rgb_buff, szTxt[file_index], nDataLen[file_index]);    

           avpicture_fill((AVPicture*)pRGBFrame, (unsigned char*)rgb_buff, AV_PIX_FMT_RGB24, nWidth, nHeight);    
           //        av_image_fill_arrays
           avpicture_fill((AVPicture*)pFrame, (unsigned char*)yuv_buff, AV_PIX_FMT_YUV420P, nWidth, nHeight);    

           // rotate image
           pRGBFrame->data[0]  += pRGBFrame->linesize[0] * (nHeight - 1);    
           pRGBFrame->linesize[0] *= -1;                      
           pRGBFrame->data[1]  += pRGBFrame->linesize[1] * (nHeight / 2 - 1);    
           pRGBFrame->linesize[1] *= -1;    
           pRGBFrame->data[2]  += pRGBFrame->linesize[2] * (nHeight / 2 - 1);    
           pRGBFrame->linesize[2] *= -1;    

           //rgb -> yuv    
           sws_scale(pSwsCtx,pRGBFrame->data,pRGBFrame->linesize,0,pCodecCtx->height,pFrame->data,pFrame->linesize);    

           av_init_packet(&pPkt);
           pPkt.data = NULL;    // packet data will be allocated by the encoder
           pPkt.size = 0;

           pFrame->pts = (int64_t)file_index * ((pVideoStream->time_base.den * pVideoStream->avg_frame_rate.den)
             / (pVideoStream->time_base.num * pVideoStream->avg_frame_rate.num));

           /* encode the image */
           int out;
           ret = avcodec_encode_video2(pVideoStream->codec, &pPkt, pFrame, &out);

           if (ret < 0)
           {
               static char msg[128];
               av_strerror(ret, msg, 128);
               av_assert0(ret >= 0);
           }
           else
           {
               printf(" [%d]encoding ...\n", file_index);        
           }

           pPkt.stream_index = 0;
           ret = av_interleaved_write_frame(pFmtCtx, &pPkt);
           if (ret < 0)
           {
               static char msg[128];
               av_strerror(ret, msg, 128);
               av_assert0(ret >= 0);
           }

           av_free_packet(&pPkt);
           av_free(rgb_buff);
       }

       av_write_trailer(pFmtCtx);

       printf(" Finish encode... \n");

       avcodec_close(pCodecCtx);
       av_frame_free(&pFrame);
       av_frame_free(&pRGBFrame);
       avio_closep(&pFmtCtx->pb);
       sws_freeContext(pSwsCtx);
       avformat_free_context(pFmtCtx);

       return;
    }

    This is the encoding information of the generated MP4 file, showing the incorrect duration (screenshot omitted).

  • SegmentIndexBox (SIDX) not generated when using WebM over DASH

    11 July 2014, by Flock Dawson

    I’m trying to get the Industry Format DASH player to work with WebM audio/video files. However, I keep running into the same error, and Google doesn’t seem to be much help.

    To start with, I created different streams of the same file (different resolutions and bitrates) using this tutorial: https://developer.mozilla.org/en-US/docs/Web/HTML/DASH_Adaptive_Streaming_for_HTML_5_Video

    Then, I downloaded the Industry Format DASH player (http://dashif.org/software/) and pointed it to the DASH manifest I created. When I try to play the video in Chrome, I get the following log:

    Parsing complete: ( xml2json: 3ms, objectiron: 2ms, total: 0.005s) dash.all.js:3
    Manifest has loaded. dash.all.js:3
    MediaSource is open! dash.all.js:3
    Event {clipboardData: undefined, path: NodeList[0], cancelBubble: false, returnValue: true, srcElement: MediaSource…}
    dash.all.js:3
    Video codec: video/webm;codecs="vp8" dash.all.js:3
    No text tracks. dash.all.js:3
    Audio codec: audio/webm;codecs="vorbis" dash.all.js:3
    Duration successfully set to: 27.2 dash.all.js:3
    Perform SIDX load: https://*****/street_orig_125k_final.webm dash.all.js:3
    Perform SIDX load: https://*****/street_audio_final.webm dash.all.js:3
    Uncaught RangeError: Offset is outside the bounds of the DataView

    From this log, I gathered that the manifest is fetched and processed correctly, but something goes wrong when trying to process the SIDX (SegmentIndexBox). I tried another (third-party) source, which works perfectly. I analysed the response returned by the server when trying to fetch the SIDX, and when converted to a readable representation, the text ’Dsidx’ can be found in this response. So, I analysed the WebM file I provide (hexdump and grep), but I cannot find such a SIDX. My conclusion is that the SIDX is never added to the WebM file.

    From the tutorial I used, I guess the generation of the SIDX is handled by the samplemuxer command, which does not offer any additional parameters. Does anyone have more experience in generating this SIDX?

  • Dealing with long conversion times on nginx, ffmpeg and Ruby on Rails

    19 April 2013, by Graeme

    I have developed a Ruby on Rails-based app which allows users to upload videos to one of our local servers (Ubuntu 10.04 LTS). The server uses nginx.

    Through the paperclip-ffmpeg gem, videos are converted to mp4 format using the ffmpeg library.

    Everything appears to be working fine in production, except Rails' own 500 page (not the customised version I have provided - but that's a different issue) is displayed whenever certain videos are uploaded. Otherwise, videos are being converted as expected.

    Having done a bit of investigation, I think the default 500 page is being displayed because a 502 error has occurred. Having uploaded the videos locally, I think what is happening is that some videos take an extensive amount of time to convert, and that an interruption occurs on the server (I'm not a server expert by any means).

    Using the excellent Railscasts episode on deployment, I use Capistrano to deploy the app. Here's the unicorn.rb file:

    root = "XXXXXXX"
    working_directory root
    pid "#{root}/tmp/pids/unicorn.pid"
    stderr_path "#{root}/log/unicorn.log"
    stdout_path "#{root}/log/unicorn.log"

    listen "/tmp/unicorn.XXXXXXXXX.sock"
    worker_processes 2
    timeout 200

    And here's the nginx.conf file. Note that client_max_body_size has been set to a fairly hefty 4 GB!:

    upstream unicorn {
     server unix:/tmp/unicorn.XXXXXXXXX.sock fail_timeout=0;
    }

    server {
     listen 80 default deferred;
     root XXXXXXXXX;


     location ^~ /assets/ {
       gzip_static on;
       expires max;
       add_header Cache-Control public;
     }

     try_files $uri/index.html $uri @unicorn;
     location @unicorn {
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header Host $http_host;
       proxy_read_timeout 600;
       proxy_redirect off;
       proxy_pass http://unicorn;
     }

     error_page 500 502 503 504 /500.html;
     client_max_body_size 4G;
     keepalive_timeout 10;

    }

    So, my question is: how could I edit (either of) the above two files to deal with the extensive time that certain videos take to convert through ffmpeg, possibly up to an hour, 2 hours or even more?

    Should I extend timeout in the former and/or keepalive_timeout in the latter - or is there a more efficient way (given that I've no idea how long certain videos will take to convert)?

    Or, is there possibly a more significant issue I should consider - e.g. the amount of memory in the server?

    I'm not an nginx/server expert, so any advice would be useful (particularly where to put extra lines of code) - however, as the rest of the app just "works", I'm not keen to make a huge amount of changes!
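    For what it's worth, the usual quick mitigation is to raise the timeouts on both sides so they cover the longest expected conversion; the values below are illustrative placeholders, not recommendations:

```nginx
# nginx.conf — inside the existing "location @unicorn" block:
# wait up to 2 hours for the upstream (unicorn) to produce a response
proxy_read_timeout 7200;
proxy_send_timeout 7200;
```

    The unicorn timeout 200 would need a matching increase (e.g. timeout 7200), since unicorn kills workers that exceed it. That said, a conversion running inside the request cycle ties up a worker for its whole duration, so with worker_processes 2 two slow uploads would block the entire site. The more robust pattern, if the app grows, is to run ffmpeg from a background job and return the HTTP response immediately, which lets the timeouts stay small.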