

Other articles (82)

  • Customizing the site by adding a logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • User profiles

    12 April 2011

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
    The user can reach the profile editor from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MédiaSpip installation is at version 0.2 or later. If in doubt, contact the administrator of your MédiaSpip to find out.

On other sites (9411)

  • How do I initialize multiple libffmpeg contexts in multiple threads and convert the video streams of multiple IP cameras into JPEG images?

    10 July 2017, by hipitt

    I have access to multiple IP cameras, and each camera's resolution may differ. I need to decode each camera's video stream with libffmpeg and convert it into JPG pictures (the video streams are H.264-encoded). Can I use multithreading, with each thread initializing (instantiating) its own libffmpeg context for decoding? Or what should I do: use multiple processes?

    When two threads each initialize their own ffmpeg context, the program crashes:

    [h264 @ 0x7f1a08000a60] no frame!
    [2017-07-10 09:52:58,443 WARN ] H264 Error while decoding for send frame: 0 0 0 1
    [h264 @ 0x7f1a08000a60] non-existing PPS 0 referenced
    [h264 @ 0x7f1a08000a60] [2017-07-10 09:52:58,444 ERROR] Signal caught:11, dumping backtrace...
    decode_slice_header error
    [h264 @ 0x7f1a08000a60] no frame!
    [2017-07-10 09:52:58,444 WARN ] H264 Error while decoding for send frame: 0 0 0 1
    [h264 @ 0x7f1a08000a60] non-existing PPS 0 referenced
    [h264 @ 0x7f1a08000a60] decode_slice_header error
    [h264 @ 0x7f1a08000a60] no frame!
    [2017-07-10 09:52:58,444 WARN ] H264 Error while decoding for send frame: 0 0 0 1
    [h264 @ 0x7f1a08000a60] non-existing PPS 0 referenced
    [h264 @ 0x7f1a08000a60] decode_slice_header error
    [h264 @ 0x7f1a08000a60] no frame!
    [2017-07-10 09:52:58,445 WARN ] H264 Error while decoding for send frame: 0 0 0 1
    [h264 @ 0x7f1a08000a60] non-existing PPS 0 referenced
    [h264 @ 0x7f1a08000a60] decode_slice_header error
    [h264 @ 0x7f1a08000a60] no frame!
    [2017-07-10 09:52:58,445 WARN ] H264 Error while decoding for send frame: 0 0 0 1
    [h264 @ 0x7f1a08000a60] non-existing PPS 0 referenced
    [h264 @ 0x7f1a08000a60] decode_slice_header error
    [h264 @ 0x7f1a08000a60] no frame!
    [2017-07-10 09:52:58,445 WARN ] H264 Error while decoding for send frame: 0 0 0 1
    [h264 @ 0x7f1a08000a60] non-existing PPS 0 referenced
    [h264 @ 0x7f1a08000a60] decode_slice_header error
    [h264 @ 0x7f1a08000a60] no frame!
    [2017-07-10 09:52:58,446 WARN ] H264 Error while decoding for send frame: 0 0 0 1
    [h264 @ 0x7f1a08000a60] non-existing PPS 0 referenced
    [h264 @ 0x7f1a08000a60] decode_slice_header error
    [h264 @ 0x7f1a08000a60] no frame!
    [2017-07-10 09:52:58,446 WARN ] H264 Error while decoding for send frame: 0 0 0 1
    [h264 @ 0x7f1a08000a60] non-existing PPS 0 referenced
    [h264 @ 0x7f1a08000a60] decode_slice_header error
    [h264 @ 0x7f1a08000a60] no frame!
    [2017-07-10 09:52:58,446 WARN ] H264 Error while decoding for send frame: 0 0 0 1
    *** Error in `./camera-stream': corrupted size vs. prev_size: 0x00007f1a0801cbc0 ***
    ======= Backtrace: =========
    /lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f1a4a6877e5]
    /lib/x86_64-linux-gnu/libc.so.6(+0x82aec)[0x7f1a4a692aec]
    /lib/x86_64-linux-gnu/libc.so.6(__libc_malloc+0x54)[0x7f1a4a694184]
    /usr/lib/x86_64-linux-gnu/libstdc++.so.6(_Znwm+0x18)[0x7f1a4af86e78]
    /compile/lib/liblog4cxx.so.10(+0x14f34d)[0x7f1a4c99a34d]
    /compile/lib/liblog4cxx.so.10(_ZN7log4cxx7rolling27RollingFileAppenderSkeleton9subAppendERKNS_7helpers10ObjectPtrTINS_3spi12LoggingEventEEERNS2_4PoolE+0x67)[0x7f1a4c99b707]
    /compile/lib/liblog4cxx.so.10(_ZN7log4cxx16AppenderSkeleton8doAppendERKNS_7helpers10ObjectPtrTINS_3spi12LoggingEventEEERNS1_4PoolE+0x222)[0x7f1a4c92a692]
    /compile/lib/liblog4cxx.so.10(_ZN7log4cxx7helpers22AppenderAttachableImpl21appendLoopOnAppendersERKNS0_10ObjectPtrTINS_3spi12LoggingEventEEERNS0_4PoolE+0x3f)[0x7f1a4c92838f]
    /compile/lib/liblog4cxx.so.10(_ZNK7log4cxx6Logger13callAppendersERKNS_7helpers10ObjectPtrTINS_3spi12LoggingEventEEERNS1_4PoolE+0xe8)[0x7f1a4c970058]
    /compile/lib/liblog4cxx.so.10(_ZNK7log4cxx6Logger9forcedLogERKNS_7helpers10ObjectPtrTINS_5LevelEEERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_3spi12LocationInfoE+0xbe)[0x7f1a4c9702ae]
    ./camera-stream(_Z10handleCorei+0x2ca)[0x62fa17]
    /lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7f1a4b4ae390]
    /lib/x86_64-linux-gnu/libc.so.6(cfree+0x22)[0x7f1a4a694512]
    ./camera-stream(av_fast_padded_malloc+0x78)[0xa692d8]
    ./camera-stream(ff_h2645_extract_rbsp+0x131)[0xd68591]
    ./camera-stream(ff_h2645_packet_split+0x204)[0xd689d4]
    ./camera-stream[0x7dd357]
    ./camera-stream(avcodec_decode_video2+0x184)[0xa6bf74]
    ./camera-stream[0xa6cd50]
    ./camera-stream(avcodec_send_packet+0xb8)[0xa71a38]
    ./camera-stream(_ZN11H264Capture6decodeEPhiRN2cv3MatE+0x98)[0x637fe6]
    ./camera-stream(_ZN7Capture13face_work_funEv+0x532)[0x630944]
    ./camera-stream(_ZN7Capture15face_thread_funEPv+0x20)[0x63040a]
    /lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba)[0x7f1a4b4a46ba]
    /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f1a4a7173dd]
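
    For reference, here is a minimal sketch of the per-thread approach the question describes (an editor's illustration, not a verified fix): each thread allocates and owns its own AVCodecContext, which FFmpeg permits as long as no context or packet is shared across threads. The backtrace above dies inside malloc and liblog4cxx with "corrupted size vs. prev_size", which points to heap corruption, so shared mutable state between the two decoders is a prime suspect. Camera I/O and the JPEG encoding step are elided here, and the FFmpeg 3.x API is assumed.

    // Per-thread H.264 decoding sketch (illustrative only).
    extern "C" {
    #include <libavcodec/avcodec.h>
    }
    #include <thread>
    #include <vector>

    static void decode_camera(int camera_id) {
        (void)camera_id; // placeholder for per-camera stream setup
        AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
        AVCodecContext *ctx = avcodec_alloc_context3(codec); // private to this thread
        if (!ctx || avcodec_open2(ctx, codec, nullptr) < 0)
            return;
        // ... read packets from this camera only, then decode each one with
        // avcodec_send_packet(ctx, pkt) / avcodec_receive_frame(ctx, frame)
        // and hand the frame to a per-thread JPEG encoder ...
        avcodec_free_context(&ctx);
    }

    int main() {
        avcodec_register_all();            // required once on FFmpeg < 4.0
        std::vector<std::thread> workers;
        for (int id = 0; id < 2; ++id)     // one worker thread per camera
            workers.emplace_back(decode_camera, id);
        for (auto &t : workers) t.join();
    }
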
  • why is ffmpeg so fast

    8 July 2017, by jx.xu

    I have written an ffmpeg-based C++ program that converts YUV to RGB using libswscale, similar to the official example.
    I simply copied and modified the official example before building it in Visual Studio 2017 on Windows 10. However, it is much slower than the ffmpeg.exe executable: 36 seconds versus 12 seconds.
    I already know that ffmpeg uses optimization techniques such as SIMD instructions. But in my profiling, the bottleneck is disk I/O writing, which takes at least 2/3 of the time.
    I then developed a concurrent version in which a dedicated thread handles all I/O, but the situation did not seem to improve. It is worth noting that I use the Boost C++ libraries for multithreading and asynchronous events.

    So I want to know how I can modify the program using the ffmpeg libraries, or whether the performance gap with ffmpeg.exe simply cannot be closed.

    As requested in the answers, I am posting my code. The compiler is MSVC in VS2017, and I turned on full optimization (/Ox).

    To supplement my main question: I ran another plain disk-I/O test that merely copies a file of the same size. Surprisingly, plain sequential disk I/O costs 28 seconds, while the code above costs 36 seconds in total... Does anyone know how ffmpeg finishes the same job in only 12 seconds? It must use some optimization technique, such as random disk I/O or memory-buffer reuse?

    #include "stdafx.h"
    #define __STDC_CONSTANT_MACROS
    extern "C" {
    #include <libavutil></libavutil>imgutils.h>
    #include <libavutil></libavutil>parseutils.h>
    #include <libswscale></libswscale>swscale.h>
    }

    #ifdef _WIN64
    #pragma comment(lib, "avformat.lib")
    #pragma comment(lib, "avcodec.lib")
    #pragma comment(lib, "avutil.lib")
    #pragma comment(lib, "swscale.lib")
    #endif

    #include <common></common>cite.hpp>   // just include headers of c++ STL/Boost

    int main(int argc, char **argv)
    {
       chrono::duration<double> period;
       auto pIn = fopen("G:/Panorama/UHD/originalVideos/DrivingInCountry_3840x1920_30fps_8bit_420_erp.yuv", "rb");
       auto time_mark = chrono::steady_clock::now();

       int src_w = 3840, src_h = 1920, dst_w, dst_h;
       enum AVPixelFormat src_pix_fmt = AV_PIX_FMT_YUV420P, dst_pix_fmt = AV_PIX_FMT_RGB24;
       const char *dst_filename = "G:/Panorama/UHD/originalVideos/out.rgb";
       const char *dst_size = "3840x1920";
       FILE *dst_file;
       int dst_bufsize;
       struct SwsContext *sws_ctx;
       int i, ret;
       if (av_parse_video_size(&amp;dst_w, &amp;dst_h, dst_size) &lt; 0) {
           fprintf(stderr,
               "Invalid size '%s', must be in the form WxH or a valid size abbreviation\n",
               dst_size);
           exit(1);
       }
       dst_file = fopen(dst_filename, "wb");
       if (!dst_file) {
           fprintf(stderr, "Could not open destination file %s\n", dst_filename);
           exit(1);
       }
       /* create scaling context */
       sws_ctx = sws_getContext(src_w, src_h, src_pix_fmt,
           dst_w, dst_h, dst_pix_fmt,
           SWS_BILINEAR, NULL, NULL, NULL);
       if (!sws_ctx) {
           fprintf(stderr,
               "Impossible to create scale context for the conversion "
               "fmt:%s s:%dx%d -> fmt:%s s:%dx%d\n",
               av_get_pix_fmt_name(src_pix_fmt), src_w, src_h,
               av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h);
           ret = AVERROR(EINVAL);
           exit(1);
       }
       io_service srv;   // Boost.Asio class
       auto work = make_shared(srv);
       thread t{ bind(&amp;io_service::run,&amp;srv) }; // I/O worker thread
       vector> result;
       /* utilize function class so that lambda can capture itself */
       function)> recursion;  
       recursion = [&amp;](int left, unique_future<bool> writable)  
       {
           if (left &lt;= 0)
           {
               writable.wait();
               return;
           }
           uint8_t *src_data[4], *dst_data[4];
           int src_linesize[4], dst_linesize[4];
           /* promise-future pair used for thread synchronizing so that the file part is written in the correct sequence  */
           promise<bool> sync;
           /* allocate source and destination image buffers */
           if ((ret = av_image_alloc(src_data, src_linesize,
               src_w, src_h, src_pix_fmt, 16)) &lt; 0) {
               fprintf(stderr, "Could not allocate source image\n");
           }
           /* buffer is going to be written to rawvideo file, no alignment */
           if ((ret = av_image_alloc(dst_data, dst_linesize,
               dst_w, dst_h, dst_pix_fmt, 1)) &lt; 0) {
               fprintf(stderr, "Could not allocate destination image\n");
           }
           dst_bufsize = ret;
           fread(src_data[0], src_h*src_w, 1, pIn);
           fread(src_data[1], src_h*src_w / 4, 1, pIn);
           fread(src_data[2], src_h*src_w / 4, 1, pIn);
           result.push_back(async([&amp;] {
               /* convert to destination format */
               sws_scale(sws_ctx, (const uint8_t * const*)src_data,
                   src_linesize, 0, src_h, dst_data, dst_linesize);
               if (left>0)
               {
                   assert(writable.get() == true);
                   srv.post([=]
                   {
                       /* write scaled image to file */
                       fwrite(dst_data[0], 1, dst_bufsize, dst_file);
                       av_freep((void*)&amp;dst_data[0]);
                   });
               }
               sync.set_value(true);
               av_freep(&amp;src_data[0]);
           }));
           recursion(left - 1, sync.get_future());
       };

       promise<bool> root;
       root.set_value(true);
       recursion(300, root.get_future());  // .yuv file only has 300 frames
       wait_for_all(result.begin(), result.end()); // wait for all unique_future to callback
       work.reset();    // io_service::work releses
       srv.stop();      // io_service stops
       t.join();        // I/O thread joins
       period = steady_clock::now() - time_mark;  // calculate valid time
       fprintf(stderr, "Scaling succeeded. Play the output file with the command:\n"
           "ffplay -f rawvideo -pix_fmt %s -video_size %dx%d %s\n",
           av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h, dst_filename);
       cout &lt;&lt; "period" &lt;&lt; et &lt;&lt; period &lt;&lt; en;
    end:
       fclose(dst_file);
       //  av_freep(&amp;src_data[0]);
       //  av_freep(&amp;dst_data[0]);
       sws_freeContext(sws_ctx);
       return ret &lt; 0;
    }
    </bool></bool></bool></double>
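
    One hedged experiment on the I/O side (an editor's sketch, not something from the original thread): each converted 3840x1920 RGB24 frame is about 22 MB, and stdio's default buffer is tiny by comparison, so giving the destination FILE a multi-megabyte buffer via setvbuf lets each frame flush in a few large kernel writes. The path and buffer size below are illustrative assumptions.

    #include <cstdio>
    #include <vector>

    int main() {
        // Hypothetical output path, standing in for dst_filename above.
        FILE *dst = std::fopen("out.rgb", "wb");
        if (!dst) return 1;
        // 4 MiB, fully buffered: each ~22 MB frame is written out in large
        // chunks instead of many small write() calls.
        std::vector<char> buf(4 << 20);
        std::setvbuf(dst, buf.data(), _IOFBF, buf.size());
        // ... fwrite() each converted frame exactly as in the code above ...
        std::fclose(dst); // flush while buf is still alive
        return 0;
    }
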
  • Transcode to opus with Fluent-ffmpeg or ffmpeg from the command line

    14 July 2017, by shamaleyte

    My purpose is to transcode a webm file into an opus file.
    That works just fine with the following:

    ffmpeg -i input.webm -vn -c:a copy output.opus

    But the generated opus file always starts from the 4th or 5th second when I play it. It seems that the first seconds are lost. Any idea why this happens?

    >ffmpeg -i x.webm -vn -c:a copy x1.opus
    ffmpeg version N-86175-g64ea4d1 Copyright (c) 2000-2017 the FFmpeg
    developers
    built with gcc 6.3.0 (GCC)
    configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid
      --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc
      --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r
      --enable-gnutls --enable-iconv --enable-libass --enable-libbluray
      --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme
      --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame
      --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264
      --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy
      --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame
      --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis
      --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264
      --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg
      --enable-lzma --enable-zlib
     libavutil      55. 63.100 / 55. 63.100
     libavcodec     57. 96.101 / 57. 96.101
     libavformat    57. 72.101 / 57. 72.101
     libavdevice    57.  7.100 / 57.  7.100
     libavfilter     6. 90.100 /  6. 90.100
     libswscale      4.  7.101 /  4.  7.101
     libswresample   2.  8.100 /  2.  8.100
     libpostproc    54.  6.100 / 54.  6.100
    Input #0, matroska,webm, from 'x.webm':
     Metadata:
     encoder         : libwebm-0.2.1.0
     creation_time   : 2017-06-19T20:50:21.722000Z
    Duration: 00:00:32.33, start: 0.000000, bitrate: 134 kb/s
    Stream #0:0(eng): Audio: opus, 48000 Hz, mono, fltp (default)
    Stream #0:1(eng): Video: vp8, yuv420p(progressive), 640x480, SAR 1:1 DAR
    4:3, 16.67 fps, 16.67 tbr, 1k tbn, 1k tbc (default)
    Output #0, opus, to 'x1.opus':
    Metadata:
    encoder         : Lavf57.72.101
    Stream #0:0(eng): Audio: opus, 48000 Hz, mono, fltp (default)
    Metadata:
     encoder         : Lavf57.72.101
    Stream mapping:
    Stream #0:0 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    size=     114kB time=00:00:32.33 bitrate=  28.8kbits/s speed=3.22e+003x
    video:0kB audio:111kB subtitle:0kB other streams:0kB global headers:0kB
    muxing overhead: 2.152229%

    Playback jumps from 0 straight to the 4th second.
    Please take a look at this screencast:
    https://www.screenmailer.com/v/52IXnpAarHavwJE

    This is the sample video file that I tried to transcode : https://drive.google.com/open?id=0B2sa3oV_Y3X_ZmVWX3MzTlRPSmc

    So I guess the output starts right at the point where the voice comes in. Why is that?
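
    A hedged experiment (an editor's note, not a confirmed fix): with -c:a copy, ffmpeg only remuxes the existing Opus packets, so any timestamp offset at the start of the webm survives into the .opus file. Re-encoding with libopus forces the audio timeline to be regenerated from zero; the bitrate below is an illustrative choice.

    ffmpeg -i input.webm -vn -c:a libopus -b:a 64k output.opus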