Tag: net art

Other articles (74)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is version 0.2 or higher. If necessary, contact the administrator of your MediaSPIP to find out.

  • Submit bugs and patches

    13 April 2011

    Unfortunately, no software is ever perfect.
    If you think you have found a bug, report it using our ticket system. To help us fix it, please provide the following information:
    - the browser you are using, including its exact version
    - as precise a description of the problem as possible
    - if possible, the steps that led to the problem
    - a link to the site / page in question
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

On other sites (9523)

  • Can you force ffmpeg hls demuxing to start from the beginning in a live hls stream?

    13 October 2016, by jdramer

    When remuxing an hls stream into an mp4 file I use the following command.

    ffmpeg -i "http://example.com/master.m3u8" -c copy -bsf:a aac_adtstoasc output.mp4

    This works fine for VOD content, but if the stream is live it starts from the live position rather than from the very first segment in the m3u8 file. Does the applehttp demuxer have any parameter that would force it to start from the first segment?
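    One avenue worth checking (a sketch, not a verified answer for every build): recent FFmpeg versions expose a live_start_index input option on the hls demuxer, which selects the segment playback starts from. Setting it to 0 should begin at the first segment in the playlist:

    ```shell
    # Hedged sketch: requires an FFmpeg build whose hls demuxer supports live_start_index.
    # live_start_index 0 asks the demuxer to start at the first segment of the playlist
    # instead of the default live edge (a few segments before the end).
    ffmpeg -live_start_index 0 -i "http://example.com/master.m3u8" -c copy -bsf:a aac_adtstoasc output.mp4
    ```

    Note that, as an input option, -live_start_index must appear before the -i it applies to.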

  • Overlay filter in libav/FFmpeg returns a strange (tripled) frame in C

    28 July 2014, by gkuczera

    I tried to write a program that merges two frames. I use libav (libav-win32-20140428) on Windows 7 64-bit with Visual Studio 2013.
    But the result is quite odd.

    http://oi58.tinypic.com/rcobnm.jpg

    The filter used is overlay. When I change the graph to one that uses only one stream and adds a fade effect, everything works like a charm. But overlay and e.g. drawbox give me a strange distortion (three frames in one, plus a black-and-white effect). Here is the code:

    static int init_filter_graph(AVFilterGraph **pGraph, AVFilterContext **pSrc1, AVFilterContext **pSink)
    {
       AVFilterGraph* tFilterGraph;
       AVFilterContext* tBufferContext1;
       AVFilter* tBuffer1;
       AVFilterContext* tColorContext;
       AVFilter* tColor;
       AVFilterContext* tOverlayContext;
       AVFilter* tOverlay;
       AVFilterContext* tBufferSinkContext;
       AVFilter* tBufferSink;

       AVDictionary* tOptionsDict = NULL; /* options dictionary used below; declaration was missing from the listing */
       int tError;

       /* Create a new filtergraph, which will contain all the filters. */
       tFilterGraph = avfilter_graph_alloc();

       if (!tFilterGraph) {
           return -1;
       }

       { // BUFFER FILTER 1
           tBuffer1 = avfilter_get_by_name("buffer");
           if (!tBuffer1) {
               return -1;
           }
           tBufferContext1 = avfilter_graph_alloc_filter(tFilterGraph, tBuffer1, "src1");
           if (!tBufferContext1) {
               return -1;
           }

           av_dict_set(&tOptionsDict, "width", "320", 0);
           av_dict_set(&tOptionsDict, "height", "240", 0);
           av_dict_set(&tOptionsDict, "pix_fmt", "bgr24", 0);
           av_dict_set(&tOptionsDict, "time_base", "1/25", 0);
           av_dict_set(&tOptionsDict, "sar", "1", 0);
           tError = avfilter_init_dict(tBufferContext1, &tOptionsDict);
           av_dict_free(&tOptionsDict);
           if (tError < 0) {
               return tError;
           }
       }

       { // COLOR FILTER
           tColor = avfilter_get_by_name("color");
           if (!tColor) {
               return -1;
           }
           tColorContext = avfilter_graph_alloc_filter(tFilterGraph, tColor, "color");
           if (!tColorContext) {
               return -1;
           }

           av_dict_set(&tOptionsDict, "color", "white", 0);
           av_dict_set(&tOptionsDict, "size", "20x120", 0);
           av_dict_set(&tOptionsDict, "framerate", "1/25", 0);
           tError = avfilter_init_dict(tColorContext, &tOptionsDict);
           av_dict_free(&tOptionsDict);
           if (tError < 0) {
               return tError;
           }
       }

       { // OVERLAY FILTER
           tOverlay = avfilter_get_by_name("overlay");
           if (!tOverlay) {
               return -1;
           }
           tOverlayContext = avfilter_graph_alloc_filter(tFilterGraph, tOverlay, "overlay");
           if (!tOverlayContext) {
               return -1;
           }

           av_dict_set(&tOptionsDict, "x", "0", 0);
           av_dict_set(&tOptionsDict, "y", "0", 0);
           av_dict_set(&tOptionsDict, "main_w", "120", 0);
           av_dict_set(&tOptionsDict, "main_h", "140", 0);
           av_dict_set(&tOptionsDict, "overlay_w", "320", 0);
           av_dict_set(&tOptionsDict, "overlay_h", "240", 0);
           tError = avfilter_init_dict(tOverlayContext, &tOptionsDict);
           av_dict_free(&tOptionsDict);
           if (tError < 0) {
               return tError;
           }
       }

       { // BUFFERSINK FILTER
           tBufferSink = avfilter_get_by_name("buffersink");
           if (!tBufferSink) {
               return -1;
           }

           tBufferSinkContext = avfilter_graph_alloc_filter(tFilterGraph, tBufferSink, "sink");
           if (!tBufferSinkContext) {
               return -1;
           }

           tError = avfilter_init_str(tBufferSinkContext, NULL);
           if (tError < 0) {
               return tError;
           }
       }

       // Linking graph
       tError = avfilter_link(tBufferContext1, 0, tOverlayContext, 0);
       if (tError >= 0) {
           tError = avfilter_link(tColorContext, 0, tOverlayContext, 1);
       }
       if (tError >= 0) {
           tError = avfilter_link(tOverlayContext, 0, tBufferSinkContext, 0);
       }
       if (tError < 0) {
           return tError;
       }

       tError = avfilter_graph_config(tFilterGraph, NULL);
       if (tError < 0) {
           return tError;
       }

       *pGraph = tFilterGraph;
       *pSrc1 = tBufferContext1;
       *pSink = tBufferSinkContext;

       return 0;
    }

    What do you think is the reason?
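    A detail worth checking (an assumption based on how libavfilter negotiates formats, not a confirmed diagnosis): the sources are configured as bgr24, but the overlay filter may negotiate a planar YUV format internally, and reading the sink's planar YUV output as packed BGR would produce exactly this kind of tripled, washed-out image. One common way to pin the output format is buffersink's pix_fmts option, as in FFmpeg's own filtering examples:

    ```c
    /* Hedged sketch: constrain the buffersink to BGR24 so libavfilter inserts
     * any necessary conversions itself, instead of silently negotiating a
     * format the caller does not expect. Needs #include <libavutil/opt.h>,
     * and must run before avfilter_graph_config(). */
    enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_BGR24, AV_PIX_FMT_NONE };
    tError = av_opt_set_int_list(tBufferSinkContext, "pix_fmts", pix_fmts,
                                 AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
    if (tError < 0) {
        return tError;
    }
    ```

    If the negotiated format was indeed the problem, the frames pulled from the sink should then match what the rendering code expects.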

  • Live Transcoding & Streaming

    4 March 2016, by acohen

    My client has a requirement where he needs me to transcode a source file into a proxy with a unique burn-in on it per playback.

    For the proxy I will be using ffmpeg, nothing fancy, but ideally users can play back the file while it is still being transcoded, since transcoding may take several minutes to complete.

    Another restriction is that the player does not support HLS or other live-streaming options and can only accept MP4 as a source.

    Any ideas/suggestions would be great.
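    A direction that may fit these constraints (a sketch, assuming the player tolerates fragmented MP4 delivered progressively): a regular MP4 cannot be played while it is still being written, because the moov atom is only finalized at the end, but FFmpeg can produce a fragmented MP4 whose metadata is available up front:

    ```shell
    # Hedged sketch: source.mov, the drawtext string and the x264 settings are
    # placeholders, not the asker's actual pipeline. frag_keyframe+empty_moov
    # writes a fragmented MP4 that is playable while it grows, so a client can
    # start playback before the transcode finishes. Whether the target player
    # accepts fragmented MP4 has to be verified separately.
    ffmpeg -i source.mov \
      -vf "drawtext=text='viewer-1234':x=10:y=10:fontsize=24:fontcolor=white" \
      -c:v libx264 -preset veryfast -c:a aac \
      -movflags frag_keyframe+empty_moov \
      output.mp4
    ```

    The drawtext filter handles the per-playback burn-in; the unique text would be substituted for each request.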