

Other articles (91)

  • Customize by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)

  • User profiles

    12 April 2011

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in to the site.
    The user can also edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

On other sites (10839)

  • why is ffmpeg so fast

    8 July 2017, by jx.xu

    I have written an FFmpeg-based C++ program that converts YUV to RGB using libswscale, similar to the official example.
    It is basically a copy of the official example with small modifications, built in Visual Studio 2017 on Windows 10. However, it is much slower than the ffmpeg.exe executable: about 36 seconds versus 12 seconds.
    I already know that ffmpeg uses optimization techniques such as SIMD instructions, but in my profiling the bottleneck is disk I/O writing, which takes at least two thirds of the time.
    I then developed a concurrent version in which a dedicated thread handles all the I/O, but the situation did not improve. It is worth noting that I use the Boost C++ libraries for multi-threading and asynchronous events.

    So I would like to know how I can modify the program using the FFmpeg libraries, or whether the performance gap with ffmpeg.exe simply cannot be closed.

    As requested in the answers, I am posting my code. The compiler is MSVC in VS2017, with full optimization (/Ox) turned on.

    To supplement my main question: I ran another plain disk I/O test that merely copies a file of the same size. Surprisingly, that plain sequential disk I/O alone takes 28 seconds, while the code above takes 36 seconds in total... Does anyone know how ffmpeg finishes the same job in only 12 seconds? It must use some optimization technique, such as random disk I/O or memory buffer reuse?

    #include "stdafx.h"
    #define __STDC_CONSTANT_MACROS
    extern "C" {
    #include <libavutil/imgutils.h>
    #include <libavutil/parseutils.h>
    #include <libswscale/swscale.h>
    }

    #ifdef _WIN64
    #pragma comment(lib, "avformat.lib")
    #pragma comment(lib, "avcodec.lib")
    #pragma comment(lib, "avutil.lib")
    #pragma comment(lib, "swscale.lib")
    #endif

    #include <common/cite.hpp>   // just includes headers of C++ STL/Boost

    int main(int argc, char **argv)
    {
       chrono::duration<double> period;
       auto pIn = fopen("G:/Panorama/UHD/originalVideos/DrivingInCountry_3840x1920_30fps_8bit_420_erp.yuv", "rb");
       auto time_mark = chrono::steady_clock::now();

       int src_w = 3840, src_h = 1920, dst_w, dst_h;
       enum AVPixelFormat src_pix_fmt = AV_PIX_FMT_YUV420P, dst_pix_fmt = AV_PIX_FMT_RGB24;
       const char *dst_filename = "G:/Panorama/UHD/originalVideos/out.rgb";
       const char *dst_size = "3840x1920";
       FILE *dst_file;
       int dst_bufsize;
       struct SwsContext *sws_ctx;
       int i, ret;
       if (av_parse_video_size(&dst_w, &dst_h, dst_size) < 0) {
           fprintf(stderr,
               "Invalid size '%s', must be in the form WxH or a valid size abbreviation\n",
               dst_size);
           exit(1);
       }
       dst_file = fopen(dst_filename, "wb");
       if (!dst_file) {
           fprintf(stderr, "Could not open destination file %s\n", dst_filename);
           exit(1);
       }
       /* create scaling context */
       sws_ctx = sws_getContext(src_w, src_h, src_pix_fmt,
           dst_w, dst_h, dst_pix_fmt,
           SWS_BILINEAR, NULL, NULL, NULL);
       if (!sws_ctx) {
           fprintf(stderr,
               "Impossible to create scale context for the conversion "
               "fmt:%s s:%dx%d -> fmt:%s s:%dx%d\n",
               av_get_pix_fmt_name(src_pix_fmt), src_w, src_h,
               av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h);
           ret = AVERROR(EINVAL);
           exit(1);
       }
       io_service srv;   // Boost.Asio I/O service
       auto work = make_shared<io_service::work>(srv);  // keeps the service running
       thread t{ bind(&io_service::run, &srv) }; // I/O worker thread
       vector<unique_future<void>> result;
       /* use a function object so that the lambda can capture and call itself */
       function<void(int, unique_future<bool>)> recursion;
       recursion = [&](int left, unique_future<bool> writable)
       {
           if (left <= 0)
           {
               writable.wait();
               return;
           }
           uint8_t *src_data[4], *dst_data[4];
           int src_linesize[4], dst_linesize[4];
           /* promise/future pair used for synchronization so that the file parts are written in the correct order */
           promise<bool> sync;
           /* allocate source and destination image buffers */
           if ((ret = av_image_alloc(src_data, src_linesize,
               src_w, src_h, src_pix_fmt, 16)) < 0) {
               fprintf(stderr, "Could not allocate source image\n");
           }
           /* buffer is going to be written to rawvideo file, no alignment */
           if ((ret = av_image_alloc(dst_data, dst_linesize,
               dst_w, dst_h, dst_pix_fmt, 1)) < 0) {
               fprintf(stderr, "Could not allocate destination image\n");
           }
           dst_bufsize = ret;
           fread(src_data[0], src_h*src_w, 1, pIn);
           fread(src_data[1], src_h*src_w / 4, 1, pIn);
           fread(src_data[2], src_h*src_w / 4, 1, pIn);
           result.push_back(async([&amp;] {
               /* convert to destination format */
               sws_scale(sws_ctx, (const uint8_t * const*)src_data,
                   src_linesize, 0, src_h, dst_data, dst_linesize);
               if (left>0)
               {
                   assert(writable.get() == true);
                   srv.post([=]
                   {
                       /* write scaled image to file */
                       fwrite(dst_data[0], 1, dst_bufsize, dst_file);
                       av_freep((void*)&dst_data[0]);
                   });
               }
               sync.set_value(true);
               av_freep(&src_data[0]);
           }));
           recursion(left - 1, sync.get_future());
       };

       promise<bool> root;
       root.set_value(true);
       recursion(300, root.get_future());  // .yuv file only has 300 frames
       wait_for_all(result.begin(), result.end()); // wait for all futures to complete
       work.reset();    // release the io_service::work guard
       srv.stop();      // io_service stops
       t.join();        // I/O thread joins
       period = chrono::steady_clock::now() - time_mark;  // measure elapsed time
       fprintf(stderr, "Scaling succeeded. Play the output file with the command:\n"
           "ffplay -f rawvideo -pix_fmt %s -video_size %dx%d %s\n",
           av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h, dst_filename);
       cout &lt;&lt; "period" &lt;&lt; et &lt;&lt; period &lt;&lt; en;
    end:
       fclose(dst_file);
       //  av_freep(&src_data[0]);
       //  av_freep(&dst_data[0]);
       sws_freeContext(sws_ctx);
       return ret < 0;
    }
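    For what it is worth, here is a minimal single-threaded sketch of the two ideas raised above: reusing the image buffers across frames and giving stdio a large write buffer so fwrite() issues fewer, bigger disk writes. The file names, the 16 MB buffer size and the 300-frame count are assumptions for illustration, not measured choices, and this is not claimed to be what ffmpeg.exe itself does.

    #define __STDC_CONSTANT_MACROS
    extern "C" {
    #include <libavutil/imgutils.h>
    #include <libswscale/swscale.h>
    }
    #include <cstdio>
    #include <vector>

    int main()
    {
       const int w = 3840, h = 1920, frames = 300;   // assumed input geometry
       FILE *in  = fopen("in.yuv",  "rb");           // placeholder paths
       FILE *out = fopen("out.rgb", "wb");
       if (!in || !out) return 1;

       /* large stdio buffer: fewer, bigger writes to disk */
       std::vector<char> iobuf(16 << 20);
       setvbuf(out, iobuf.data(), _IOFBF, iobuf.size());

       SwsContext *sws = sws_getContext(w, h, AV_PIX_FMT_YUV420P,
                                        w, h, AV_PIX_FMT_RGB24,
                                        SWS_BILINEAR, NULL, NULL, NULL);
       if (!sws) return 1;

       /* allocate source and destination images once, reuse them for every frame */
       uint8_t *src[4], *dst[4];
       int src_ls[4], dst_ls[4];
       if (av_image_alloc(src, src_ls, w, h, AV_PIX_FMT_YUV420P, 16) < 0) return 1;
       int dst_size = av_image_alloc(dst, dst_ls, w, h, AV_PIX_FMT_RGB24, 1);
       if (dst_size < 0) return 1;

       for (int i = 0; i < frames; i++) {
           /* read one yuv420p frame: full-size Y plane, quarter-size U and V planes */
           if (fread(src[0], 1, (size_t)w * h, in) != (size_t)w * h) break;
           fread(src[1], 1, (size_t)w * h / 4, in);
           fread(src[2], 1, (size_t)w * h / 4, in);
           sws_scale(sws, (const uint8_t * const *)src, src_ls, 0, h, dst, dst_ls);
           fwrite(dst[0], 1, dst_size, out);
       }

       av_freep(&src[0]);
       av_freep(&dst[0]);
       sws_freeContext(sws);
       fclose(in);
       fclose(out);
       return 0;
    }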
  • Transcode to opus by Fluent-ffmpeg or ffmpeg from command line

    14 July 2017, by shamaleyte

    My purpose is to transcode a webm file into an opus file.
    That works just fine with the following command:

    ffmpeg -i input.webm -vn -c:a copy output.opus

    But the generated opus file always starts at the 4th or 5th second when I play it. It seems that the first few seconds are lost. Any idea why this happens?

    >ffmpeg -i x.webm -vn -c:a copy x1.opus
    ffmpeg version N-86175-g64ea4d1 Copyright (c) 2000-2017 the FFmpeg
    developers
    built with gcc 6.3.0 (GCC)
     configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
     libavutil      55. 63.100 / 55. 63.100
     libavcodec     57. 96.101 / 57. 96.101
     libavformat    57. 72.101 / 57. 72.101
     libavdevice    57.  7.100 / 57.  7.100
     libavfilter     6. 90.100 /  6. 90.100
     libswscale      4.  7.101 /  4.  7.101
     libswresample   2.  8.100 /  2.  8.100
     libpostproc    54.  6.100 / 54.  6.100
    Input #0, matroska,webm, from 'x.webm':
     Metadata:
     encoder         : libwebm-0.2.1.0
     creation_time   : 2017-06-19T20:50:21.722000Z
    Duration: 00:00:32.33, start: 0.000000, bitrate: 134 kb/s
    Stream #0:0(eng): Audio: opus, 48000 Hz, mono, fltp (default)
     Stream #0:1(eng): Video: vp8, yuv420p(progressive), 640x480, SAR 1:1 DAR 4:3, 16.67 fps, 16.67 tbr, 1k tbn, 1k tbc (default)
    Output #0, opus, to 'x1.opus':
    Metadata:
    encoder         : Lavf57.72.101
    Stream #0:0(eng): Audio: opus, 48000 Hz, mono, fltp (default)
    Metadata:
     encoder         : Lavf57.72.101
    Stream mapping:
    Stream #0:0 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    size=     114kB time=00:00:32.33 bitrate=  28.8kbits/s speed=3.22e+003x
    video:0kB audio:111kB subtitle:0kB other streams:0kB global headers:0kB
    muxing overhead: 2.152229%

    It jumps straight from 0 to the 4th second.
    Please take a look at this screencast.
    https://www.screenmailer.com/v/52IXnpAarHavwJE

    This is the sample video file that I tried to transcode : https://drive.google.com/open?id=0B2sa3oV_Y3X_ZmVWX3MzTlRPSmc

    So I guess the transcoding starts right at the point where the voice comes in. Why is that?
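    One way to check where the audio actually starts (a suggestion, assuming the ffprobe tool from the same FFmpeg build is available) is to dump the packet timestamps of the input and of the stream-copied output:

    ffprobe -v error -select_streams a:0 -show_entries packet=pts_time -of csv=p=0 x.webm
    ffprobe -v error -select_streams a:0 -show_entries packet=pts_time -of csv=p=0 x1.opus

    If the first timestamps are already around 4 seconds in the source, the leading silence was never in the audio stream to begin with; if they start at 0 in the source but not in the output, something is lost during the copy.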

  • Video encoder & segmenter for HLS VoD poor quality

    10 July 2017, by Murilo

    I am trying to encode and segment video for HLS on demand (VoD).
    I am using the following command for this:

    ffmpeg -i 20170706_174314.mp4 -c 24 \
           -vcodec libx264 -acodec aac -ac 1 -strict -2 -b:v 128k \
           -profile:v baseline -maxrate 400k -bufsize 1835k \
           -hls_time 10 -hls_playlist_type vod -vsync 1 \
           video_chunks/index1.m3u8 \
           -c 24 -vcodec libx264 -acodec aac -ac 1 -strict -2 -b:v 128k \
           -profile:v baseline -maxrate 700k -bufsize 1835k \
           -hls_time 10 -hls_playlist_type vod -vsync 1 \
           video_chunks/index2.m3u8

    I also tried this other command, just for the segmenting, but had exactly the same problem:

    ffmpeg -i 20170706_174314.mp4 \
    -c:a libmp3lame -ar 48000 -ab 64k  -c:v libx264 -b:v 128k -flags \
    -global_header -map 0 -f segment \
    -segment_list video_chunks/test.m3u8 -segment_time 10 -segment_format mpegts \
    video_chunks/segment_%05d.ts

    Later on, I create another playlist with bandwidth separators that references the two playlists generated by the commands above.

    This worked great on some videos, but yesterday I recorded a test video with my Samsung J7 Prime phone (the videos will be generated by phones), and that video was encoded poorly: the quality is very bad and some parts of the video turned black and white.

    Another thing I noticed with this video is that the following message kept appearing in a loop until the end of the encoding and segmenting process:

    Past duration X too large

    Where X is a decimal really close to

    0.675316

    The link to the video is below :

    Dropbox Link

    My FFmpeg version :

    ffmpeg --version
    ffmpeg version N-86482-gbc40674 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 7.1.0 (GCC)
     configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
     libavutil      55. 66.100 / 55. 66.100
     libavcodec     57. 99.100 / 57. 99.100
     libavformat    57. 73.100 / 57. 73.100
     libavdevice    57.  7.100 / 57.  7.100
     libavfilter     6. 92.100 /  6. 92.100
     libswscale      4.  7.101 /  4.  7.101
     libswresample   2.  8.100 /  2.  8.100
     libpostproc    54.  6.100 / 54.  6.100

    OS: Windows 10

    EDIT 1: Link to the output
    Looking at the output, it may also be worth mentioning that I keep seeing the message

    VBV underflow(Frame X, -Y bits)
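
    For reference, a hedged variant of the first command with a higher video bitrate and buffer, in case the artifacts and the VBV underflow warnings simply come from starving x264 at 128 kb/s for a phone-resolution source (the numbers below are guesses to experiment with, not tested values):

    ffmpeg -i 20170706_174314.mp4 \
           -vcodec libx264 -acodec aac -ac 1 -strict -2 \
           -b:v 800k -maxrate 1200k -bufsize 2400k -profile:v baseline \
           -hls_time 10 -hls_playlist_type vod -vsync 1 \
           video_chunks/index1.m3u8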