Advanced search

Media (1)

Keyword: - Tags - / bug

Other articles (106)

  • Managing object creation and editing rights

    8 February 2011, by

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, in particular: writing content on the site, configurable in the form template management; adding notes to articles; adding captions and annotations to images;

  • General document management

    13 May 2011, by

    MédiaSPIP never modifies the original document that is uploaded.
    For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while keeping the original available for download in case it cannot be read in a web browser; and retrieving the metadata of the original document to describe the file textually;
    The tables below explain what MédiaSPIP can do (...)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010, by

    Compared with the individual channels, the central/master site of the farm needs several additional plugins to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared-hosting instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (13168)

  • ffmpeg Proper Command to Upscale Video

    19 May 2014, by user3651716

    We have a problem with the watermark upon ffmpeg conversion to .mp4.

    We use a PHP KVS tube script on CentOS, hopefully with the latest ffmpeg version.

    If the input video has a smaller resolution, the watermark in the output appears too big, because the video is not properly resized (upscaled).

    We are looking for a proper command that will upscale the video resolution if the width is smaller than 720 px. Resizing bigger videos works fine, but not smaller ones.

    So we would always like a 720 px width (with a dynamic height), no matter what the input video resolution is.

    Can you please provide the correct command to upscale videos to 720 px (width)?


    This is the command we used, but it did not work:

    -vf "resize=720:trunc(ow/a/2)*2" -vcodec libx264 -threads 0 -acodec libfaac -ar 44100 -ab 128k -f mp4
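
    As a hedged sketch, not taken from the thread: ffmpeg has no resize filter; the scaling filter is named scale, so the filter expression from the attempt above may work once renamed. A possible full command could look like the one below, with placeholder file names and the built-in aac encoder in place of libfaac (which recent ffmpeg builds no longer ship).

    # Placeholder file names; adjust to the KVS conversion template.
    # scale=720:trunc(ow/a/2)*2 forces the width to 720 px (upscaling smaller
    # inputs as well) and rounds the height down to an even value that keeps
    # the aspect ratio, as libx264 requires.
    ffmpeg -i input.mp4 \
      -vf "scale=720:trunc(ow/a/2)*2" \
      -c:v libx264 -threads 0 \
      -c:a aac -ar 44100 -b:a 128k \
      -f mp4 output.mp4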

  • ffmpeg - dts/pts values for mpegts relative to the input

    22 June 2015, by Ivo

    I’m trying to use ffmpeg to split an input video into mpegts segments.

    Currently, what I'm using to generate a segment (in this case, the 2nd segment) is:

    ffmpeg -y -t 15 -ss 15 -i INPUT -c:v libx264 -c:a aac -strict -2 -vbsf h264_mp4toannexb -preset veryfast -f mpegts pipe:1

    This works perfectly, except that in the resulting mpegts output the dts/pts values start from zero rather than being relative to the input, which breaks HLS continuity.

    The setpts filter could be a great option, but I cannot figure out how to use it.

    Please do not suggest the segment or the hls muxers, since I need to generate every segment only on-demand.

    Thanks in advance.
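
    Regarding the setpts idea mentioned above, here is a minimal sketch (an assumption, not an answer from the thread): setpts/asetpts can shift timestamps by the segment's start time, with PTS expressed in timebase units and 15/TB converting 15 seconds into those units. On recent ffmpeg builds the muxer option -output_ts_offset 15 may achieve a similar shift without filtering.

    # Shift video and audio timestamps forward by 15 s (this segment's start time)
    # so the mpegts output continues the input's timeline instead of starting at zero;
    # the rest of the command is kept as in the original question.
    ffmpeg -y -t 15 -ss 15 -i INPUT \
      -vf "setpts=PTS+15/TB" -af "asetpts=PTS+15/TB" \
      -c:v libx264 -c:a aac -strict -2 -vbsf h264_mp4toannexb \
      -preset veryfast -f mpegts pipe:1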

  • Implementing a multiple input filter graph with the Libavfilter library in Android NDK

    4 April 2014, by gookman

    I am trying to use the overlay filter with multiple input sources, for an Android app. Basically, I want to overlay multiple video sources on top of a static image.
    I have looked at the sample that comes with ffmpeg and implemented my code based on that, but things don't seem to be working as expected.

    In the ffmpeg filtering sample there seems to be a single video input. I have to handle multiple video inputs, and I am not sure that my solution is the correct one. I have tried to find other examples, but it looks like this is the only one.

    Here is my code:

    AVFilterContext **inputContexts;
    AVFilterContext *outputContext;
    AVFilterGraph *graph;

    int initFilters(AVFrame *bgFrame, int inputCount, AVCodecContext **codecContexts, char *filters)
    {
       int i;
       int returnCode;
       char args[512];
       char name[9];
       AVFilterInOut **graphInputs = NULL;
       AVFilterInOut *graphOutput = NULL;

       AVFilter *bufferSrc  = avfilter_get_by_name("buffer");
       AVFilter *bufferSink = avfilter_get_by_name("buffersink");

       graph = avfilter_graph_alloc();
       if(graph == NULL)
           return -1;

       //allocate inputs
       graphInputs = av_calloc(inputCount + 1, sizeof(AVFilterInOut *));
       for(i = 0; i <= inputCount; i++)
       {
           graphInputs[i] = avfilter_inout_alloc();
           if(graphInputs[i] == NULL)
               return -1;
       }

       //allocate input contexts
       inputContexts = av_calloc(inputCount + 1, sizeof(AVFilterContext *));
       //first is the background
       snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=1/1:pixel_aspect=0", bgFrame->width, bgFrame->height, bgFrame->format);
       returnCode = avfilter_graph_create_filter(&inputContexts[0], bufferSrc, "background", args, NULL, graph);
       if(returnCode < 0)
           return returnCode;
       graphInputs[0]->filter_ctx = inputContexts[0];
       graphInputs[0]->name = av_strdup("background");
       graphInputs[0]->next = graphInputs[1];

       //allocate the rest
       for(i = 1; i <= inputCount; i++)
       {
           AVCodecContext *codecCtx = codecContexts[i - 1];
           snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
                       codecCtx->width, codecCtx->height, codecCtx->pix_fmt,
                       codecCtx->time_base.num, codecCtx->time_base.den,
                       codecCtx->sample_aspect_ratio.num, codecCtx->sample_aspect_ratio.den);
           snprintf(name, sizeof(name), "video_%d", i);

           returnCode = avfilter_graph_create_filter(&inputContexts[i], bufferSrc, name, args, NULL, graph);
           if(returnCode < 0)
               return returnCode;

           graphInputs[i]->filter_ctx = inputContexts[i];
           graphInputs[i]->name = av_strdup(name);
           graphInputs[i]->pad_idx = 0;
           if(i < inputCount)
           {
               graphInputs[i]->next = graphInputs[i + 1];
           }
           else
           {
               graphInputs[i]->next = NULL;
           }
       }

       //allocate outputs
       graphOutput = avfilter_inout_alloc();  
       returnCode = avfilter_graph_create_filter(&outputContext, bufferSink, "out", NULL, NULL, graph);
       if(returnCode < 0)
           return returnCode;
       graphOutput->filter_ctx = outputContext;
       graphOutput->name = av_strdup("out");
       graphOutput->next = NULL;
       graphOutput->pad_idx = 0;

       returnCode = avfilter_graph_parse_ptr(graph, filters, graphInputs, &graphOutput, NULL);
       if(returnCode < 0)
           return returnCode;

       returnCode = avfilter_graph_config(graph, NULL);
       if(returnCode < 0)
           return returnCode;

       return 0;
    }

    The filters argument of the function is passed on to avfilter_graph_parse_ptr, and it can look like this: [background] scale=512x512 [base]; [video_1] scale=256x256 [tmp_1]; [base][tmp_1] overlay=0:0 [out]

    The call breaks after the call to avfilter_graph_config with the warning:
    Output pad "default" with type video of the filter instance "background" of buffer not connected to any destination, and the error Invalid argument.

    What is it that I am not doing correctly?

    EDIT: There are two issues that I have discovered:

    1. It looks like the description of avfilter_graph_parse_ptr is a bit vague. The outputs parameter represents a list of the current outputs of the graph, which in my case is the graphInputs variable, because these are the outputs from the buffer filters. The inputs parameter represents a list of the current inputs of the graph, which in this case is the graphOutput variable, because it represents the input to the buffersink filter.

    2. I did some testing with a scale filter and a single input. It seems that the name of the AVFilterInOut structure required by avfilter_graph_parse_ptr needs to be "in". I have tried different variants: in_1, in_link_1. None of them work, and I have not been able to find any documentation related to this.

    So the issue still remains. How do I implement a filter graph with multiple inputs ?
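
    As a hedged sketch of what point 1 implies, and not a verified fix from the thread: in ffmpeg's own filtering examples, the list describing the already-created buffer sources is passed as the outputs argument of avfilter_graph_parse_ptr, and the buffersink side as the inputs argument, with each AVFilterInOut name matching a label used in the filter string ([background], [video_1], ..., [out]). Reusing the variables from the code above, the parse and config calls could then look like this:

    //The buffer sources' open output pads go into the "outputs" slot of
    //avfilter_graph_parse_ptr; the buffersink's open input pad goes into
    //the "inputs" slot, mirroring ffmpeg's filtering examples.
    AVFilterInOut *openOutputs = graphInputs[0]; //linked list: background -> video_1 -> ...
    AVFilterInOut *openInputs  = graphOutput;    //single node named "out"

    returnCode = avfilter_graph_parse_ptr(graph, filters, &openInputs, &openOutputs, NULL);
    if(returnCode < 0)
        return returnCode;

    returnCode = avfilter_graph_config(graph, NULL);
    if(returnCode < 0)
        return returnCode;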