Other articles (66)

  • Writing a news item

    21 June 2013, by

    Present the changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of type "news item", the fields offered by default are: publication date (customise the publication date) (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that lead to the problem; and a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

On other sites (5915)

  • why is ffmpeg so fast

    8 July 2017, by jx.xu

    I have written an FFmpeg-based C++ program that converts YUV to RGB using libswscale, similar to the official example.
    I simply copied and modified the official example before building it in Visual Studio 2017 on Windows 10. However, it is much slower than the ffmpeg.exe executable: 36 seconds versus 12 seconds.
    I already know that FFmpeg uses optimization techniques such as SIMD instructions, yet in my profiling the bottleneck is disk I/O writing, which takes at least two thirds of the time.
    I then developed a concurrent version in which a dedicated thread handles all I/O tasks, but the situation does not seem to improve. It is worth noting that I use the Boost C++ libraries for multi-threading and asynchronous events.

    So I just want to know how I can modify the program using the FFmpeg libraries, or whether the performance gap with ffmpeg.exe simply cannot be closed.

    As requested by the friendly answers, I am posting my code. The compiler is MSVC in VS2017, and I turned on full optimization (/Ox).

    To supplement my main question: I ran another plain disk I/O test that merely copies a file of the same size. Surprisingly, plain sequential disk I/O costs 28 seconds, while my code costs 36 seconds in total... Does anyone know how ffmpeg finishes the same job in only 12 seconds? It must use some optimization technique, such as random disk I/O or memory buffer reuse? (A buffer-reuse sketch is given after the code below.)

    #include "stdafx.h"
    #define __STDC_CONSTANT_MACROS
    extern "C" {
    #include <libavutil></libavutil>imgutils.h>
    #include <libavutil></libavutil>parseutils.h>
    #include <libswscale></libswscale>swscale.h>
    }

    #ifdef _WIN64
    #pragma comment(lib, "avformat.lib")
    #pragma comment(lib, "avcodec.lib")
    #pragma comment(lib, "avutil.lib")
    #pragma comment(lib, "swscale.lib")
    #endif

    #include <common></common>cite.hpp>   // just include headers of c++ STL/Boost

    int main(int argc, char **argv)
    {
       chrono::duration<double> period;
       auto pIn = fopen("G:/Panorama/UHD/originalVideos/DrivingInCountry_3840x1920_30fps_8bit_420_erp.yuv", "rb");
       auto time_mark = chrono::steady_clock::now();

       int src_w = 3840, src_h = 1920, dst_w, dst_h;
       enum AVPixelFormat src_pix_fmt = AV_PIX_FMT_YUV420P, dst_pix_fmt = AV_PIX_FMT_RGB24;
       const char *dst_filename = "G:/Panorama/UHD/originalVideos/out.rgb";
       const char *dst_size = "3840x1920";
       FILE *dst_file;
       int dst_bufsize;
       struct SwsContext *sws_ctx;
       int i, ret;
       if (av_parse_video_size(&amp;dst_w, &amp;dst_h, dst_size) &lt; 0) {
           fprintf(stderr,
               "Invalid size '%s', must be in the form WxH or a valid size abbreviation\n",
               dst_size);
           exit(1);
       }
       dst_file = fopen(dst_filename, "wb");
       if (!dst_file) {
           fprintf(stderr, "Could not open destination file %s\n", dst_filename);
           exit(1);
       }
       /* create scaling context */
       sws_ctx = sws_getContext(src_w, src_h, src_pix_fmt,
           dst_w, dst_h, dst_pix_fmt,
           SWS_BILINEAR, NULL, NULL, NULL);
       if (!sws_ctx) {
           fprintf(stderr,
               "Impossible to create scale context for the conversion "
               "fmt:%s s:%dx%d -> fmt:%s s:%dx%d\n",
               av_get_pix_fmt_name(src_pix_fmt), src_w, src_h,
               av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h);
           ret = AVERROR(EINVAL);
           exit(1);
       }
       io_service srv;   // Boost.Asio class
       auto work = make_shared(srv);
       thread t{ bind(&amp;io_service::run,&amp;srv) }; // I/O worker thread
       vector> result;
       /* utilize function class so that lambda can capture itself */
       function)> recursion;  
       recursion = [&amp;](int left, unique_future<bool> writable)  
       {
           if (left &lt;= 0)
           {
               writable.wait();
               return;
           }
           uint8_t *src_data[4], *dst_data[4];
           int src_linesize[4], dst_linesize[4];
           /* promise-future pair used for thread synchronizing so that the file part is written in the correct sequence  */
           promise<bool> sync;
           /* allocate source and destination image buffers */
           if ((ret = av_image_alloc(src_data, src_linesize,
               src_w, src_h, src_pix_fmt, 16)) &lt; 0) {
               fprintf(stderr, "Could not allocate source image\n");
           }
           /* buffer is going to be written to rawvideo file, no alignment */
           if ((ret = av_image_alloc(dst_data, dst_linesize,
               dst_w, dst_h, dst_pix_fmt, 1)) &lt; 0) {
               fprintf(stderr, "Could not allocate destination image\n");
           }
           dst_bufsize = ret;
           fread(src_data[0], src_h*src_w, 1, pIn);
           fread(src_data[1], src_h*src_w / 4, 1, pIn);
           fread(src_data[2], src_h*src_w / 4, 1, pIn);
           result.push_back(async([&amp;] {
               /* convert to destination format */
               sws_scale(sws_ctx, (const uint8_t * const*)src_data,
                   src_linesize, 0, src_h, dst_data, dst_linesize);
               if (left>0)
               {
                   assert(writable.get() == true);
                   srv.post([=]
                   {
                       /* write scaled image to file */
                       fwrite(dst_data[0], 1, dst_bufsize, dst_file);
                       av_freep((void*)&amp;dst_data[0]);
                   });
               }
               sync.set_value(true);
               av_freep(&amp;src_data[0]);
           }));
           recursion(left - 1, sync.get_future());
       };

       promise<bool> root;
       root.set_value(true);
       recursion(300, root.get_future());  // .yuv file only has 300 frames
       wait_for_all(result.begin(), result.end()); // wait for all unique_future to callback
       work.reset();    // io_service::work releses
       srv.stop();      // io_service stops
       t.join();        // I/O thread joins
       period = steady_clock::now() - time_mark;  // calculate valid time
       fprintf(stderr, "Scaling succeeded. Play the output file with the command:\n"
           "ffplay -f rawvideo -pix_fmt %s -video_size %dx%d %s\n",
           av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h, dst_filename);
       cout &lt;&lt; "period" &lt;&lt; et &lt;&lt; period &lt;&lt; en;
    end:
       fclose(dst_file);
       //  av_freep(&amp;src_data[0]);
       //  av_freep(&amp;dst_data[0]);
       sws_freeContext(sws_ctx);
       return ret &lt; 0;
    }
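
    The question above guesses at "memory buffer reusing" as one of ffmpeg's tricks. For comparison, here is a minimal single-threaded sketch (not the poster's code) of the same 3840x1920 YUV420P-to-RGB24 loop against the plain FFmpeg C API: the source and destination images are allocated once with av_image_alloc and reused for all 300 frames instead of being reallocated and freed per frame, and setvbuf gives the output FILE a large buffer so fwrite issues fewer, bigger writes. The file names, frame count and buffer size are assumptions taken from the question.

        /* minimal sketch: reuse image buffers across frames, large stdio write buffer */
        #include <stdio.h>
        #include <stdlib.h>
        #include <libavutil/imgutils.h>
        #include <libswscale/swscale.h>

        int main(void)
        {
            const int src_w = 3840, src_h = 1920, frames = 300;   /* assumed from the question */
            enum AVPixelFormat src_fmt = AV_PIX_FMT_YUV420P, dst_fmt = AV_PIX_FMT_RGB24;

            FILE *in  = fopen("in.yuv",  "rb");                   /* hypothetical file names */
            FILE *out = fopen("out.rgb", "wb");
            if (!in || !out) { fprintf(stderr, "cannot open files\n"); return 1; }
            setvbuf(out, NULL, _IOFBF, 8 * 1024 * 1024);          /* 8 MiB write buffer */

            struct SwsContext *sws = sws_getContext(src_w, src_h, src_fmt,
                                                    src_w, src_h, dst_fmt,
                                                    SWS_BILINEAR, NULL, NULL, NULL);
            if (!sws) { fprintf(stderr, "cannot create SwsContext\n"); return 1; }

            uint8_t *src_data[4], *dst_data[4];
            int src_linesize[4], dst_linesize[4];
            /* allocate once, reuse for every frame */
            if (av_image_alloc(src_data, src_linesize, src_w, src_h, src_fmt, 16) < 0 ||
                av_image_alloc(dst_data, dst_linesize, src_w, src_h, dst_fmt, 1) < 0) {
                fprintf(stderr, "cannot allocate images\n");
                return 1;
            }
            int dst_bufsize = av_image_get_buffer_size(dst_fmt, src_w, src_h, 1);

            for (int i = 0; i < frames; i++) {
                /* read one planar YUV420P frame (Y, then U, then V plane) */
                if (fread(src_data[0], 1, (size_t)src_w * src_h, in) != (size_t)src_w * src_h)
                    break;
                fread(src_data[1], 1, (size_t)src_w * src_h / 4, in);
                fread(src_data[2], 1, (size_t)src_w * src_h / 4, in);

                sws_scale(sws, (const uint8_t * const *)src_data, src_linesize,
                          0, src_h, dst_data, dst_linesize);

                fwrite(dst_data[0], 1, dst_bufsize, out);         /* packed RGB24, one plane */
            }

            av_freep(&src_data[0]);
            av_freep(&dst_data[0]);
            sws_freeContext(sws);
            fclose(in);
            fclose(out);
            return 0;
        }

    Whether this alone closes the gap to ffmpeg.exe depends on the disk; ffmpeg additionally overlaps reading, scaling and writing, but dropping the per-frame allocate/free of an ~11 MB source and ~22 MB destination image is usually the first easy win.
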
  • FFMpeg Concatenation Filters : Stream specifier ':0' in filtergraph matches no streams

    7 February 2017, by Anthony Eden

    I am developing an application that relies heavily on FFMpeg to perform various transformations on audio files. I am currently testing my FFMpeg configuration on the command line.

    I am trying to concatenate multiple audio files which are in different formats (primarily MP3, MP2 and WAV). I have been using the official Trac documentation (https://trac.ffmpeg.org/wiki/How%20to%20concatenate%20(join%2C%20merge)%20media%20files#differentcodec) to help me with this, and have created the following command:

    ffmpeg -i OHIn.wav -i OHOut.wav -filter_complex '[0:0] [1:0] concat=n=2:a=1 [a]' -map '[a]' output.wav

    However, when I run this on Mac OS X using version 2.0.1 of FFmpeg, I get the following error message:

    Stream specifier ':0' in filtergraph description [0:0] [1:0] concat=n=2:a=1 [a] matches no streams.

    Here is my full output from the terminal:

    ~/ffmpeg -i OHIn.wav -i OHOut.wav -filter_complex '[0:0] [1:0] concat=n=2:a=1 [a]' -map '[a]' output.wav

    ffmpeg version 2.0.1 Copyright (c) 2000-2013 the FFmpeg developers
     built on Aug 15 2013 10:56:46 with llvm-gcc 4.2.1 (LLVM build 2336.11.00)
     configuration: --prefix=/Volumes/Ramdisk/sw --enable-gpl --enable-pthreads --enable-version3 --enable-libspeex --enable-libvpx --disable-decoder=libvpx --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-avfilter --enable-libopencore_amrwb --enable-libopencore_amrnb --enable-filters --enable-libgsm --arch=x86_64 --enable-runtime-cpudetect
     libavutil      52. 38.100 / 52. 38.100
     libavcodec     55. 18.102 / 55. 18.102
     libavformat    55. 12.100 / 55. 12.100
     libavdevice    55.  3.100 / 55.  3.100
     libavfilter     3. 79.101 /  3. 79.101
     libswscale      2.  3.100 /  2.  3.100
     libswresample   0. 17.102 /  0. 17.102
     libpostproc    52.  3.100 / 52.  3.100
    Guessed Channel Layout for  Input Stream #0.0 : stereo
    Input #0, wav, from 'OHIn.wav':
     Duration: 00:00:06.71, bitrate: 1411 kb/s
       Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
    Guessed Channel Layout for  Input Stream #1.0 : stereo
    Input #1, wav, from 'OHOut.wav':
     Duration: 00:00:07.19, bitrate: 1411 kb/s
       Stream #1:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
    Stream specifier ':0' in filtergraph description [0:0] [1:0] concat=n=2:a=1 [a] matches no streams.

    I do not understand why this does not work. FFmpeg shows that streams 0:0 and 1:0 exist in the source files. The only other similar problems I have found online concern the use of single quotes on Windows, but my testing confirms that this does not apply to my Mac command line.

    Any help would be much appreciated.

  • Adding a custom filter to FFmpeg

    16 July 2024, by Abhimanyu

    I am looking to configure and compile FFmpeg with custom filters. The custom filters are available in this git repository, which was compiled against an older FFmpeg version: https://github.com/numberwolf/FFmpeg-PlusPlus

    I am following the steps described in the official FFmpeg documentation: https://github.com/FFmpeg/FFmpeg/blob/master/doc/writing_filters.txt

    I am getting a warning that the plusglshader filter is unknown, and the same for all the other filters.

    Here are my steps (a rough skeleton of what each filter source file registers is sketched after these steps):

    First, create the filter source files in the libavfilter directory:

        libavfilter/vf_plusglshader.c
        libavfilter/vf_pipglshader.c
        libavfilter/vf_lutglshader.c
        libavfilter/vf_fadeglshader.c

    Update the FFmpeg build system by adding the new filter objects to libavfilter/Makefile:

        OBJS-$(CONFIG_PLUSGLSHADER_FILTER)    += vf_plusglshader.o
        OBJS-$(CONFIG_PIPGLSHADER_FILTER)     += vf_pipglshader.o
        OBJS-$(CONFIG_LUTGLSHADER_FILTER)     += vf_lutglshader.o
        OBJS-$(CONFIG_FADEGLSHADER_FILTER)    += vf_fadeglshader.o

    In libavfilter/allfilters.c, add the filter declarations:

        extern const AVFilter ff_vf_plusglshader;
        extern const AVFilter ff_vf_pipglshader;
        extern const AVFilter ff_vf_lutglshader;
        extern const AVFilter ff_vf_fadeglshader;

    In libavfilter/filter_list.c (not allfilters.c), add the filters to the filter_list array:

        static const AVFilter * const filter_list[] = {
            // ... existing filters ...
            &ff_vf_plusglshader,
            &ff_vf_pipglshader,
            &ff_vf_lutglshader,
            &ff_vf_fadeglshader,
            NULL
        };
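
    What the extern declarations above resolve to is the AVFilter definition inside each vf_*.c file. As a point of reference only, here is a hypothetical minimal skeleton of libavfilter/vf_plusglshader.c showing that registration boilerplate; the context struct, option table and filter_frame body are placeholders, and the field and macro names follow recent libavfilter, so they may differ on the older tree that FFmpeg-PlusPlus targets.

        /* hypothetical skeleton of libavfilter/vf_plusglshader.c; the real filter
         * from FFmpeg-PlusPlus does its OpenGL work inside filter_frame() */
        #include "libavutil/opt.h"
        #include "avfilter.h"
        #include "internal.h"

        typedef struct PlusGLShaderContext {
            const AVClass *class;
            /* filter options would live here */
        } PlusGLShaderContext;

        static int filter_frame(AVFilterLink *inlink, AVFrame *frame)
        {
            /* a real filter would process the frame here before passing it on */
            return ff_filter_frame(inlink->dst->outputs[0], frame);
        }

        static const AVOption plusglshader_options[] = { { NULL } };
        AVFILTER_DEFINE_CLASS(plusglshader);

        static const AVFilterPad plusglshader_inputs[] = {
            { .name = "default", .type = AVMEDIA_TYPE_VIDEO, .filter_frame = filter_frame },
        };

        static const AVFilterPad plusglshader_outputs[] = {
            { .name = "default", .type = AVMEDIA_TYPE_VIDEO },
        };

        /* this is the symbol the extern declaration in allfilters.c refers to */
        const AVFilter ff_vf_plusglshader = {
            .name        = "plusglshader",
            .description = "Apply a GL shader (skeleton).",
            .priv_size   = sizeof(PlusGLShaderContext),
            .priv_class  = &plusglshader_class,
            FILTER_INPUTS(plusglshader_inputs),
            FILTER_OUTPUTS(plusglshader_outputs),
        };

    If the filters are still reported as unknown after these edits, one common cause is that ./configure has not been re-run: in recent FFmpeg trees, filter_list.c is generated by configure from the extern declarations in allfilters.c, and the CONFIG_PLUSGLSHADER_FILTER flag used in the Makefile is only defined after that step.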