
Other articles (79)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the MediaSPIP sources, in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; and the creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (11897)

  • Unable to create FFmpeg filters via a C++ application

    1 March 2019, by arun s

    I am trying to follow the FFmpeg filter_audio.c example.

    I am constructing the filter graph as shown below, but I am not able to get the volume level after processing; the silencedetect filter works well, but the volumedetect filter does not.

    int Filter::audio_filter_init(AVCodecContext *aCodecctx)
    {
        AVCodecContext *aCodecCtx = aCodecctx;
        char ch_layout[64];
        int err;

        //AVRational time_base = pFormatCtx->streams[audioindex]->time_base;
        AVRational time_base = aCodecctx->time_base; /* note: unused below */
       /* Create a new filtergraph, which will contain all the filters. */
       filter_graph = avfilter_graph_alloc();
       if (!filter_graph) {
           fprintf(stderr, "Unable to create filter graph.\n");
           return AVERROR(ENOMEM);
       }

       /* Create the abuffer filter;
        * it will be used for feeding the data into the graph. */
       abuffer = avfilter_get_by_name("abuffer");
       if (!abuffer) {
           fprintf(stderr, "Could not find the abuffer filter.\n");
           return AVERROR_FILTER_NOT_FOUND;
       }

       abuffer_ctx = avfilter_graph_alloc_filter(filter_graph, abuffer, "src");
       if (!abuffer_ctx) {
           fprintf(stderr, "Could not allocate the abuffer instance.\n");
           return AVERROR(ENOMEM);
       }


       if (!aCodecCtx->channel_layout)
           aCodecCtx->channel_layout = av_get_default_channel_layout(aCodecCtx->channels);


       /* Set the filter options through the AVOptions API. */
       av_get_channel_layout_string(ch_layout, sizeof(ch_layout), 0, aCodecCtx->channel_layout);    
       av_opt_set    (abuffer_ctx, "channel_layout",ch_layout , AV_OPT_SEARCH_CHILDREN);
       av_opt_set    (abuffer_ctx, "sample_fmt",     av_get_sample_fmt_name(aCodecCtx->sample_fmt), AV_OPT_SEARCH_CHILDREN);
       av_opt_set_q  (abuffer_ctx, "time_base",      (AVRational){ 1, aCodecCtx->sample_rate},  AV_OPT_SEARCH_CHILDREN);
       av_opt_set_int(abuffer_ctx, "sample_rate",    aCodecCtx->sample_rate, AV_OPT_SEARCH_CHILDREN);

       /* Now initialize the filter; we pass NULL options, since we have already
        * set all the options above. */
       err = avfilter_init_str(abuffer_ctx, NULL);
       if (err < 0) {
           fprintf(stderr, "Could not initialize the abuffer filter.\n");
           return err;
       }

       silence = avfilter_get_by_name("silencedetect");

       if (!silence) {
           fprintf(stderr, "Could not find the silencedetect filter.\n");
           return AVERROR_FILTER_NOT_FOUND;
       }
       silent_ctx = avfilter_graph_alloc_filter(filter_graph, silence, "silencedetect");
       if (!silent_ctx) {
           fprintf(stderr, "Could not allocate the silencedetect instance.\n");
           return AVERROR(ENOMEM);
       }

        /* silencedetect takes its options as strings; the original
         * av_opt_set_dict_val() calls cast string literals to
         * AVDictionary *, which is invalid. */
        av_opt_set(silent_ctx, "duration", "5",     AV_OPT_SEARCH_CHILDREN);
        av_opt_set(silent_ctx, "noise",    "-30dB", AV_OPT_SEARCH_CHILDREN);
       err = avfilter_init_str(silent_ctx, NULL);
       if (err < 0) {
           fprintf(stderr, "Could not initialize the silencedetect filter.\n");
           return err;
       }


       volumedet = avfilter_get_by_name("volumedetect");
       if (!volumedet) {
           fprintf(stderr, "Could not find the volumedetect filter.\n");
           return AVERROR_FILTER_NOT_FOUND;
       }
        volume_ctx = avfilter_graph_alloc_filter(filter_graph, volumedet, "volumedetect");
        if (!volume_ctx) {
            fprintf(stderr, "Could not allocate the volumedetect instance.\n");
            return AVERROR(ENOMEM);
        }
        /* volumedetect takes no options, but the instance still has to be
         * initialized; this call was missing in the original code. */
        err = avfilter_init_str(volume_ctx, NULL);
        if (err < 0) {
            fprintf(stderr, "Could not initialize the volumedetect filter.\n");
            return err;
        }
       /* Finally create the abuffersink filter;
        * it will be used to get the filtered data out of the graph. */
       abuffersink = avfilter_get_by_name("abuffersink");
       if (!abuffersink) {
           fprintf(stderr, "Could not find the abuffersink filter.\n");
           return AVERROR_FILTER_NOT_FOUND;
       }
       abuffersink_ctx = avfilter_graph_alloc_filter(filter_graph, abuffersink, "sink");
       if (!abuffersink_ctx) {
           fprintf(stderr, "Could not allocate the abuffersink instance.\n");
           return AVERROR(ENOMEM);
       }
       /* This filter takes no options. */
       err = avfilter_init_str(abuffersink_ctx, NULL);
       if (err < 0) {
           fprintf(stderr, "Could not initialize the abuffersink instance.\n");
           return err;
       }
       /* Connect the filters;
        * in this simple case the filters just form a linear chain. */
       err = avfilter_link(abuffer_ctx, 0, silent_ctx, 0);    
       if (err >= 0)
           err = avfilter_link(silent_ctx, 0, volume_ctx, 0);
       if (err >= 0)
           err = avfilter_link(volume_ctx, 0, abuffersink_ctx, 0);    
       if (err < 0) {
           fprintf(stderr, "Error connecting filters\n");
           return err;
       }
       /* Configure the graph. */
       err = avfilter_graph_config(filter_graph, NULL);
       if (err < 0) {
           av_log(NULL, AV_LOG_ERROR, "Error configuring the filter graph\n");
           return err;
       }

        return 0;
    }


    Below is the information I am getting from the log callback (vprintf):

    tb:1/44100 samplefmt:fltp samplerate:44100 chlayout:stereo

    auto-inserting filter 'auto_resampler_0' between the filter 'src' and the filter 'volumedetect'

    query_formats: 4 queried, 6 merged, 3 already done, 0 delayed

    Using fltp internally between filters

    ch:2 chl:stereo fmt:fltp r:44100Hz -> ch:2 chl:stereo fmt:s16 r:44100Hz

    Using fltp internally between filters

    Can someone help me detect the volume level?
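
    One likely reason no level ever appears: volumedetect computes its statistics as frames pass through, but it only prints them (mean_volume, max_volume and the histogram) via av_log() when the filter is uninitialized, i.e. after EOF has been pushed into the graph and the graph is freed. A minimal sketch of how the report could be captured for the graph above; the helper names are illustrative, not from the original post:

    extern "C" {
    #include <libavfilter/avfilter.h>
    #include <libavfilter/buffersrc.h>
    #include <libavfilter/buffersink.h>
    #include <libavutil/frame.h>
    #include <libavutil/log.h>
    }
    #include <cstdarg>
    #include <cstdio>
    #include <cstring>

    /* Capture volumedetect's report, which arrives only through av_log(). */
    static void log_capture(void *avcl, int level, const char *fmt, va_list vl)
    {
        char line[1024];
        va_list vl2;
        va_copy(vl2, vl);
        vsnprintf(line, sizeof(line), fmt, vl2);
        va_end(vl2);
        if (strstr(line, "mean_volume") || strstr(line, "max_volume"))
            fprintf(stderr, "captured: %s", line);
        av_log_default_callback(avcl, level, fmt, vl); /* keep normal output */
    }

    /* Call once the last audio frame has been fed into the graph. */
    static void drain_and_report(AVFilterGraph **graph,
                                 AVFilterContext *src, AVFilterContext *sink)
    {
        av_log_set_callback(log_capture);
        av_buffersrc_add_frame(src, NULL);      /* push EOF into the graph */
        AVFrame *frame = av_frame_alloc();
        while (av_buffersink_get_frame(sink, frame) >= 0)
            av_frame_unref(frame);              /* drain buffered frames */
        av_frame_free(&frame);
        avfilter_graph_free(graph);             /* uninit prints the stats */
    }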

  • libavfilter/vf_yadif : Make frame management logic and options shareable

    24 October 2018, by Philip Langdale
    libavfilter/vf_yadif : Make frame management logic and options shareable
    

    I'm writing a CUDA implementation of yadif, and while this
    obviously has a very different implementation of the actual
    filtering, all the frame management is unchanged. To avoid
    duplicating that logic, let's make it shareable.

    From the perspective of the existing filter, the only real change
    is introducing a function pointer for the filter() function so it
    can be specified for the specific filter.
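
    The pattern described is the usual way of sharing logic between filter back-ends: a common context owns the options and the frame-management state, and each implementation installs its own filter() callback. A rough illustration of the shape this takes, simplified rather than the literal contents of yadif.h:

    extern "C" {
    #include <libavfilter/avfilter.h>
    #include <libavutil/frame.h>
    }

    /* Illustrative only, not the real header: the shared context keeps the
     * options and frame-management state, while each back-end (C/SIMD,
     * CUDA, ...) supplies its own deinterlacing kernel. */
    typedef struct YadifSharedContext {
        int mode, parity, deint;             /* common options           */
        AVFrame *prev, *cur, *next;          /* common frame management  */
        void (*filter)(AVFilterContext *ctx, AVFrame *dst,
                       int parity, int tff); /* back-end specific kernel */
    } YadifSharedContext;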

    • [DH] libavfilter/Makefile
    • [DH] libavfilter/vf_yadif.c
    • [DH] libavfilter/yadif.h
    • [DH] libavfilter/yadif_common.c
  • Create I-frames out of P and B frames

    15 May 2018, by Eugene Alexeev

    I've written a C++ converter based on FFmpeg which can take a link to an HLS stream and convert it into a local .mp4 video. So far, so good; the converter works like a charm, no questions about that.

    PROBLEM: No matter what input source I provide to the converter, at the end of the conversion I need to obtain a video with key frames ONLY. I need such a video for perfect seeking, both forward and reverse.

    It's a well-known fact that dependent video frames (P and B) rely on their reference frame (the I frame), because that frame contains the full pixel map. Accordingly, we can recreate an I frame for each P and B frame by merging their data with that of their I frame. That's why a command such as ffmpeg -i video.mp4 output%4d.jpg works.

    QUESTION: How can I implement an algorithm that merges frames so as to recreate a keyframe-only video at the end? What quirks do I need to know about when merging the data of AVPackets?

    Thanks.
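
    A note on feasibility, not from the original question: AVPacket payloads are entropy-coded bitstream, so P and B packets cannot be merged into I packets at the packet level. The standard route is to decode every frame and re-encode with a one-picture GOP, so that every output frame is a keyframe (the command-line equivalent is roughly ffmpeg -i input.mp4 -c:v libx264 -g 1 output.mp4). A minimal sketch of such an encoder setup, with illustrative names:

    extern "C" {
    #include <libavcodec/avcodec.h>
    }

    /* Illustrative sketch: open an encoder whose every output frame is a
     * keyframe, so the re-encoded file seeks cleanly in both directions. */
    static AVCodecContext *open_intra_only_encoder(const AVCodec *codec,
                                                   int width, int height,
                                                   AVRational time_base)
    {
        AVCodecContext *enc = avcodec_alloc_context3(codec);
        if (!enc)
            return NULL;
        enc->width        = width;
        enc->height       = height;
        enc->time_base    = time_base;
        enc->pix_fmt      = AV_PIX_FMT_YUV420P;
        enc->gop_size     = 1;   /* one-picture GOP: every frame is an I-frame */
        enc->max_b_frames = 0;   /* no B-frames, nothing references the future */
        if (avcodec_open2(enc, codec, NULL) < 0) {
            avcodec_free_context(&enc);
            return NULL;
        }
        return enc;
    }

    Decoding the input with the existing converter, re-encoding each frame through such an encoder, and muxing the packets into the .mp4 yields a file in which every frame can be seeked to directly.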