Other articles (18)

  • Possible deployments

    31 January 2010, by

    Two types of deployment are possible, depending on two aspects: the chosen installation method (standalone or as a farm); and the expected number of daily encodings and level of traffic.
    Encoding video is a heavy process that consumes a great deal of system resources (CPU and RAM), and this has to be taken into account. The system is therefore only feasible on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

  • Definable image and logo sizes

    9 February 2011, by

    In many places on the site, logos and images are resized to fit the slots defined by the themes. All of these sizes, which may vary from one theme to another, can be defined directly in the theme, sparing the user from having to configure them manually after changing the appearance of the site.
    These image sizes are also available in the MediaSPIP Core specific configuration. The maximum size of the site logo in pixels, which allows (...)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (6987)

  • fftools/ffmpeg: add thread-aware transcode scheduling infrastructure

    18 May 2023, by Anton Khirnov
    fftools/ffmpeg: add thread-aware transcode scheduling infrastructure
    

    See the comment block at the top of fftools/ffmpeg_sched.h for more
    details on what this scheduler is for.

    This commit adds the scheduling code itself, along with minimal
    integration with the rest of the program:
    * allocating and freeing the scheduler
    * passing it throughout the call stack in order to register the
    individual components (demuxers/decoders/filtergraphs/encoders/muxers)
    with the scheduler

    The scheduler is not actually used as of this commit, so it should not
    result in any change in behavior. That will change in future commits.
    (A hypothetical sketch of this registration pattern follows the file list below.)

    • [DH] fftools/Makefile
    • [DH] fftools/ffmpeg.c
    • [DH] fftools/ffmpeg.h
    • [DH] fftools/ffmpeg_dec.c
    • [DH] fftools/ffmpeg_demux.c
    • [DH] fftools/ffmpeg_enc.c
    • [DH] fftools/ffmpeg_filter.c
    • [DH] fftools/ffmpeg_mux.c
    • [DH] fftools/ffmpeg_mux.h
    • [DH] fftools/ffmpeg_mux_init.c
    • [DH] fftools/ffmpeg_opt.c
    • [DH] fftools/ffmpeg_sched.c
    • [DH] fftools/ffmpeg_sched.h
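
    The commit message above describes the shape of the integration rather than the scheduler's API; fftools/ffmpeg_sched.h holds the authoritative documentation. Purely as an illustration of the registration pattern being described, with hypothetical names that are not the actual ffmpeg_sched functions, a central scheduler that pipeline components attach to during setup can be sketched like this:

    /* Hypothetical illustration of the registration pattern described above;
     * none of these names come from fftools/ffmpeg_sched.h. */
    #include <stdlib.h>

    typedef struct Scheduler {
        int nb_nodes;   /* demuxers, decoders, filtergraphs, encoders, muxers */
        /* ... queues, worker threads, state ... */
    } Scheduler;

    /* Allocated once at startup, freed once at shutdown. */
    static Scheduler *sched_alloc(void)         { return calloc(1, sizeof(Scheduler)); }
    static void       sched_free(Scheduler **s) { free(*s); *s = NULL; }

    /* Each component registers itself while the call stack is being built,
     * receiving an index it will later use to exchange data. */
    static int sched_add_node(Scheduler *s)     { return s->nb_nodes++; }

    int main(void)
    {
        Scheduler *sched = sched_alloc();

        /* The real code threads the scheduler through demuxer/decoder/
         * filtergraph/encoder/muxer setup; dummy registrations stand in here. */
        int demuxer = sched_add_node(sched);
        int decoder = sched_add_node(sched);
        int encoder = sched_add_node(sched);
        (void)demuxer; (void)decoder; (void)encoder;

        /* As of the commit, nothing actually runs through the scheduler yet. */
        sched_free(&sched);
        return 0;
    }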
  • ValueError: I/O operation on closed file with ffmpeg

    22 March 2018, by AstroCoda

    I’m trying to get this (minimal working example) code to run in an Anaconda virtual environment which I’ve set up on a supercomputing cluster:

    import numpy as np
    import matplotlib
    matplotlib.use("Agg")
    import matplotlib.pyplot as plt
    import matplotlib.animation as manimation

    FFMpegWriter = manimation.writers['ffmpeg']
    metadata = dict(title='Movie Test', artist='Matplotlib',
               comment='Movie support!')
    writer = FFMpegWriter(fps=15, metadata=metadata)

    fig = plt.figure()
    l, = plt.plot([], [], 'k-o')

    plt.xlim(-5, 5)
    plt.ylim(-5, 5)

    x0, y0 = 0, 0

    with writer.saving(fig, "writer_test.mp4", 100):
        for i in range(100):
            x0 += 0.1 * np.random.randn()
            y0 += 0.1 * np.random.randn()
            l.set_data(x0, y0)
            writer.grab_frame()

    The thing is, this code works absolutely fine on my local machine (macOS): Anaconda distribution, Python 2.7, the same matplotlib and numpy versions, and ffmpeg installed through Anaconda. I have ffmpeg on the cluster as well, albeit at a different version from the one used by Python (this causes no issue on my local machine). When I run the code on the cluster, I get:

    Traceback (most recent call last):
     File "movie_test.py", line 25, in <module>
       writer.grab_frame()
     File "~/anaconda2/envs/test_movie/lib/python2.7/contextlib.py", line 35, in __exit__
       self.gen.throw(type, value, traceback)
     File "~/anaconda2/envs/test_movie/lib/python2.7/site-packages/matplotlib/animation.py", line 241, in saving
       self.finish()
     File "~/anaconda2/envs/test_movie/lib/python2.7/site-packages/matplotlib/animation.py", line 367, in finish
       self.cleanup()
     File "~/anaconda2/envs/test_movie/lib/python2.7/site-packages/matplotlib/animation.py", line 405, in cleanup
       out, err = self._proc.communicate()
     File "~/anaconda2/envs/test_movie/lib/python2.7/site-packages/subprocess32.py", line 927, in communicate
       stdout, stderr = self._communicate(input, endtime, timeout)
     File "~/anaconda2/envs/test_movie/lib/python2.7/site-packages/subprocess32.py", line 1713, in _communicate
       orig_timeout)
     File "~/anaconda2/envs/test_movie/lib/python2.7/site-packages/subprocess32.py", line 1769, in _communicate_with_poll
       register_and_append(self.stdout, select_POLLIN_POLLPRI)
     File "~/anaconda2/envs/test_movie/lib/python2.7/site-packages/subprocess32.py", line 1748, in register_and_append
       poller.register(file_obj.fileno(), eventmask)
    ValueError: I/O operation on closed file

    All the search results I’ve found relate to relatively simple text read/write operations, not to videos. Thanks in advance for the help!
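
    It is hard to say definitively without the cluster's own ffmpeg output, but the traceback shows the ffmpeg subprocess dying before matplotlib can clean up, which usually means the binary matplotlib launches on the cluster is not the one expected, or it exits immediately. A small diagnostic sketch along those lines; the ffmpeg path below is a placeholder, not a known-good value:

    import subprocess
    import matplotlib
    matplotlib.use("Agg")
    import matplotlib.animation as manimation

    # Point matplotlib at an explicit ffmpeg binary (placeholder path;
    # substitute the cluster's actual module or conda binary).
    matplotlib.rcParams['animation.ffmpeg_path'] = '/usr/bin/ffmpeg'

    # Ask matplotlib whether it considers the ffmpeg writer usable at all.
    print(manimation.FFMpegWriter.isAvailable())

    # Run that same binary directly: its stderr is exactly what gets lost
    # when grab_frame()/cleanup() fail inside the writer.
    print(subprocess.check_output(
        [matplotlib.rcParams['animation.ffmpeg_path'], '-version']))

    If isAvailable() returns False or the direct call fails, the problem is the cluster's ffmpeg installation (or the path matplotlib is configured with) rather than the plotting code itself.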

  • Saving frames as images using FFmpeg

    25 October 2014, by Mr Almighty

    There are some tutorials on the internet about this, but most of them use deprecated functions, and since the API tends to break between versions it all gets messy and I’m really confused.

    I’m following the tutorials, learning from the documentation and looking at the examples shipped with the current version (even then, some of the examples don’t work).

    What I’m trying to do is save frames as .png. Following the examples and the reading I’ve done, I wrote the code below, but I’m confused about converting the frame to RGB and saving it:

    #include <iostream>

    extern "C"
    {
        #include <libavcodec/avcodec.h>
        #include <libavformat/avformat.h>
        #include <libavutil/avutil.h>
    }

    int main(int argc, char ** argv)
    {
        if (argc < 2)
        {
            av_log(0, AV_LOG_FATAL, "Usage: %s <input>", argv[0]);
            return -1;
        }

        const char * filename = argv[1];

        // register all codecs and formats
        av_register_all();

        // open input file, and allocate format context
        AVFormatContext *avFormatContext = avformat_alloc_context();

        if (avformat_open_input(&avFormatContext, filename, 0, 0) < 0)
        {
            av_log(0, AV_LOG_FATAL, "Could not open source file %s", filename);
            return -1;
        }

        // retrieve stream information
        if (avformat_find_stream_info(avFormatContext, 0) < 0)
        {
            av_log(0, AV_LOG_FATAL, "Could not find stream information");
            return -1;
        }

        // dump information about file onto standard error
        av_dump_format(avFormatContext, 0, filename, 0);

        // find the "best" video stream in the file.
        int result = av_find_best_stream(avFormatContext, AVMEDIA_TYPE_VIDEO, -1, -1, 0, 0);

        if (result < 0)
        {
            av_log(0, AV_LOG_FATAL, "Could not find %s stream in input file '%s'", av_get_media_type_string(AVMEDIA_TYPE_VIDEO), filename);
            return -1;
        }

        int stream = result;
        AVStream *avStream = avFormatContext->streams[stream];
        AVCodecContext *avCodecContext = avStream->codec;

        // find decoder for the stream
        AVCodec *avCodec = avcodec_find_decoder(avCodecContext->codec_id);

        if (! avCodec)
        {
            av_log(0, AV_LOG_FATAL, "Failed to find %s codec", av_get_media_type_string(AVMEDIA_TYPE_VIDEO));
            return -1;
        }

        // init the decoders, with reference counting
        AVDictionary *avDictionary = 0;
        av_dict_set(&avDictionary, "refcounted_frames", "1", 0);

        if ((result = avcodec_open2(avCodecContext, avCodec, &avDictionary)) < 0)
        {
            av_log(0, AV_LOG_FATAL, "Failed to open %s codec", av_get_media_type_string(AVMEDIA_TYPE_VIDEO));
            return -1;
        }

        AVFrame *avFrame = av_frame_alloc();

        if (! avFrame)
        {
            av_log(0, AV_LOG_FATAL, "Could not allocate frame");
            return -1;
        }

        // initialize packet, set data to null, let the demuxer fill it
        AVPacket avPacket;
        av_init_packet(&avPacket);
        avPacket.data = 0;
        avPacket.size = 0;

        while (av_read_frame(avFormatContext, &avPacket) >= 0)
        {
            if (avPacket.stream_index == stream)
            {
                // got_frame is set by the decoder when a complete frame is available
                int got_frame = 0;
                int bytesUsed = avcodec_decode_video2(avCodecContext, avFrame, &got_frame, &avPacket);

                if (bytesUsed < 0)
                {
                    av_log(0, AV_LOG_FATAL, "Error decoding video frame");
                    return -1;
                }

                // ... saving...
            }
        }

        avcodec_close(avCodecContext);
        avformat_close_input(&avFormatContext);
        av_frame_free(&avFrame);

        return 0;
    }
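
    For the missing "// ... saving..." part, one straightforward route (not the only one) is to convert each decoded frame to RGB24 with libswscale and dump it as a binary PPM file; writing an actual .png would additionally require an encoder such as FFmpeg's built-in png codec. A hedged sketch, where saveFrameAsPPM is a name invented for illustration:

    extern "C"
    {
        #include <libavutil/imgutils.h>
        #include <libswscale/swscale.h>
    }
    #include <cstdio>

    // Convert a decoded frame to RGB24 and write it as a binary PPM file.
    // Meant to be called with the AVFrame filled by avcodec_decode_video2()
    // above whenever got_frame is non-zero. The function name is illustrative.
    static int saveFrameAsPPM(const AVFrame *frame, int width, int height,
                              enum AVPixelFormat srcFormat, const char *path)
    {
        uint8_t *rgbData[4] = { 0 };
        int rgbLinesize[4] = { 0 };

        // Allocate an RGB24 buffer with the same dimensions as the source frame.
        if (av_image_alloc(rgbData, rgbLinesize, width, height, AV_PIX_FMT_RGB24, 1) < 0)
            return -1;

        // Build a one-off conversion context; in a real loop it should be cached.
        struct SwsContext *sws = sws_getContext(width, height, srcFormat,
                                                width, height, AV_PIX_FMT_RGB24,
                                                SWS_BILINEAR, 0, 0, 0);
        if (! sws)
        {
            av_freep(&rgbData[0]);
            return -1;
        }

        // Convert from the decoder's pixel format (typically YUV420P) to RGB24.
        sws_scale(sws, frame->data, frame->linesize, 0, height, rgbData, rgbLinesize);

        // Write a binary PPM: header, then rows of packed RGB bytes.
        FILE *out = std::fopen(path, "wb");
        if (out)
        {
            std::fprintf(out, "P6\n%d %d\n255\n", width, height);
            for (int y = 0; y < height; ++y)
                std::fwrite(rgbData[0] + y * rgbLinesize[0], 1, width * 3, out);
            std::fclose(out);
        }

        sws_freeContext(sws);
        av_freep(&rgbData[0]);
        return out ? 0 : -1;
    }

    Inside the decode loop it would be called as something like saveFrameAsPPM(avFrame, avCodecContext->width, avCodecContext->height, avCodecContext->pix_fmt, "frame.ppm") whenever got_frame is set.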