Advanced search

Media (0)

Word: - Tags -/xmlrpc

No media matching your criteria is available on this site.

Other articles (101)

  • Prerequisites for installation

    31 January 2010, by

    Preamble
    This article does not aim to detail how to install these programs, but rather to give information about their specific configuration.
    First of all, SPIPMotion, like MediaSPIP, is designed to run on Debian-type Linux distributions or derivatives (Ubuntu, etc.). The documentation on this site therefore refers to those distributions. It can also be used on other Linux distributions, but no guarantee of correct operation can be given.
    It (...)

  • Emballe Médias: putting documents online simply

    29 October 2010, by

    The emballe médias plugin was developed primarily for the MediaSPIP distribution, but it is also used in other related projects such as géodiversité. Required and compatible plugins
    To work, this plugin requires other plugins to be installed: CFG, Saisies, SPIP Bonux, Diogène, swfupload, jqueryui.
    Other plugins can be used alongside it to extend its capabilities: Ancres douces, Légendes, photo_infos, spipmotion (...)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add yours via the form at the bottom of the page.

On other sites (6823)

  • Piping input AND output of ffmpeg in python

    5 March 2019, by bluesummers

    I’m using ffmpeg to create a video, from a list of base64 encoded images that I pipe into ffmpeg.

    Outputting to a file (using the code below) works perfectly, but what I would like to achieve is to get the output into a Python variable instead, i.e. piping both the input and the output, but I can't seem to get it to work.

    My current code:

    import os
    import base64
    import subprocess as sp
    from io import BytesIO
    from PIL import Image

    output = os.path.join(screenshots_dir, 'video1.mp4')

    cmd_out = ['ffmpeg',
              '-y',  # (optional) overwrite output file if it exists
              '-f', 'image2pipe',
              '-vcodec', 'png',
              '-r', str(fps),  # frames per second
              '-i', '-',  # The input comes from a pipe
              '-vcodec', 'png',
              '-qscale', '0',
              output]

    pipe = sp.Popen(cmd_out, stdin=sp.PIPE)

    for screenshot in screenshot_list:
       im = Image.open(BytesIO(base64.b64decode(screenshot)))
       im.save(pipe.stdin, 'PNG')

    pipe.stdin.close()
    pipe.wait()

    This results in a working mp4, but I would like to avoid saving to a local file.

    Running the same code after changing output to '-' or 'pipe:1' and adding stdout=sp.PIPE results in an error:

    [NULL @ 0x2236000] Unable to find a suitable output format for 'pipe:'
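
    One direction that is commonly suggested (a sketch under assumptions, not a verified drop-in for this exact setup): the mp4 muxer normally needs a seekable output, so writing to stdout requires naming the container explicitly with -f mp4 and making it fragmented via -movflags frag_keyframe+empty_moov (or switching to a streamable container such as matroska). Also, feeding stdin from a loop while stdout fills up can deadlock, so all input is handed over at once with communicate(). The helper names below (build_pipe_cmd, encode_frames) are mine, not from the original post.

```python
import subprocess as sp

def build_pipe_cmd(fps):
    """ffmpeg command reading image2pipe PNGs on stdin, writing fragmented MP4 to stdout."""
    return ['ffmpeg', '-y',
            '-f', 'image2pipe',
            '-vcodec', 'png',
            '-r', str(fps),
            '-i', '-',                                # input comes from stdin
            '-f', 'mp4',                              # container must be explicit for a pipe
            '-movflags', 'frag_keyframe+empty_moov',  # fragmented MP4: no seeking needed
            'pipe:1']                                 # output goes to stdout

def encode_frames(png_frames, fps=25):
    """png_frames: iterable of PNG-encoded byte strings. Returns the MP4 as bytes."""
    proc = sp.Popen(build_pipe_cmd(fps), stdin=sp.PIPE, stdout=sp.PIPE)
    video, _ = proc.communicate(input=b''.join(png_frames))
    return video
```

    With the base64 screenshots from the post, the frames would be prepared as png_frames = [base64.b64decode(s) for s in screenshot_list] before calling encode_frames.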

  • ffmpeg debug experience consult

    28 October 2024, by AQ Q

    When using the avfilter_graph_create_filter function, I encountered a return value of -1414549496. I know this means invalid parameter.

    The current project environment does not support breakpoint debugging for the FFmpeg API.
    I would like to know what good debugging strategies to use with these functions.

    The code block is as follows:

AVFilterContext *bufferScr_ctx = NULL;
AVFilterContext *bufferSink_ctx = NULL;

char args[256] = {0}; // During testing args = "buffer=video_size=400x300:pix_fmt=0:time_base=1/90000:pixel_aspect=0/1"
snprintf(args, sizeof(args), "buffer=video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
         frame->width, frame->height, is->viddec.avctx->pix_fmt,
         is->video_st->time_base.num, is->video_st->time_base.den,
         is->video_st->sample_aspect_ratio.num, is->video_st->sample_aspect_ratio.den);

const AVFilter *bufferSrc = avfilter_get_by_name("buffer");
const AVFilter *bufferSink = avfilter_get_by_name("buffersink");
AVFilterInOut *input = avfilter_inout_alloc();
AVFilterInOut *output = avfilter_inout_alloc();
AVFilterGraph *filterGraph = avfilter_graph_alloc();
if (!output || !input || !filterGraph) {
    ret = -1;
    goto end;
}
// ret = -1414549496
if ((ret = avfilter_graph_create_filter(&bufferScr_ctx, bufferSrc, "in", args, NULL, filterGraph)) < 0) {
    goto end;
}
if ((ret = avfilter_graph_create_filter(&bufferSink_ctx, bufferSink, "out", NULL, NULL, filterGraph)) < 0) {
    goto end;
}

enum AVPixelFormat pix_fmt[] = {AV_PIX_FMT_RGB8, AV_PIX_FMT_NONE};
ret = av_opt_set_int_list(bufferSink_ctx, "pix_fmts", pix_fmt, AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
    goto end;
}
const char *filter_des = "fps=10,scale=320:-1:flags=lanczos";
output->name = av_strdup("in");
output->filter_ctx = bufferScr_ctx;
output->pad_idx = 0;
output->next = NULL;
input->next = NULL;
input->pad_idx = 0;
input->name = av_strdup("out");
input->filter_ctx = bufferSink_ctx;
if ((ret = avfilter_graph_parse_ptr(filterGraph, filter_des, &input, &output, NULL)) < 0) {
    goto end;
}
if ((ret = avfilter_graph_config(filterGraph, NULL)) < 0) {
    goto end;
}

    I've encountered similar issues with functions like avcodec_open2 and avio_alloc_context. It's frustrating because structures like AVCodecContext have many attributes, and I'm not sure which ones are set incorrectly. The function's return value only tells me that a parameter is invalid.
    I want to know how to troubleshoot similar errors.

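    One low-tech strategy that often helps with opaque return codes like this: FFmpeg errors are either a negated errno value or the negation of a four-byte FFERRTAG, and decoding the number frequently names the failing AVERROR_* constant outright (in C, av_strerror() does this lookup for you). A sketch in Python; the helper name decode_averror is mine:

```python
import errno
import os

def decode_averror(code):
    """Interpret a negative FFmpeg return value.

    FFmpeg error codes are either -errno or -MKTAG(a, b, c, d);
    recovering the four tag bytes usually identifies the AVERROR_* constant.
    """
    if -code in errno.errorcode:         # plain negated errno, e.g. AVERROR(EINVAL) == -22
        return os.strerror(-code)
    tag = (-code).to_bytes(4, 'little')  # undo MKTAG's little-endian packing
    return ''.join(chr(x) if 32 <= x < 127 else '0x%02x' % x for x in tag)
```

    Here decode_averror(-1414549496) gives '0xf8OPT', the tag of AVERROR_OPTION_NOT_FOUND in libavutil/error.h, which suggests the leading "buffer=" in args is being parsed as an unknown option name: avfilter_graph_create_filter expects only the option string itself (e.g. "video_size=400x300:pix_fmt=0:..."). Raising the log level with av_log_set_level(AV_LOG_DEBUG) before building the graph is another way to see which option is rejected.
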
  • Looping 2 videos simultaneously until audio file ends in FFMPEG

    20 September 2022, by Fin Cottle

    I'm very new to ffmpeg, learning quickly but struggling to find a solution to the following.

    I would like to be able to loop 2 videos until the end of an extra audio file.
One of the videos will be a base & the other will be an overlay of 50% opacity on top of the base.

    I've got the gist of how to execute these within other operations (e.g. the 50% opacity, or looping a single video until the end of an audio file; these don't need to be answered here), but looping both videos until the end of the separate audio is proving challenging.

    Here's what I've got so far:

    ffmpeg -stream_loop -1 -i base.mp4 -i overlay.mp4 -i audio.mp3 -filter_complex "[0:v]setpts=PTS-STARTPTS[top]; [1:v]setpts=PTS-STARTPTS, format=yuva420p,colorchannelmixer=aa=0.5[bottom]; [top][bottom]overlay=shortest=1[v1]" -map "[v1]" -map 2:a -vcodec libx264 -y out.mp4

    This loops the base until the end of the overlay, but then freezes the base & overlay until the end of the audio (as the audio is longer).

    One solution may be to loop [v1] until the end of the audio? How would I go about this?

    Either way, no matter the length of either video, the final output should be the length of the audio.

    My implementation could be pretty messy as my attempts are all amalgamations of internet answers & research without knowing the full meaning of each param, so please let me know if anything is wrong.

    Thanks in advance.
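
    One direction worth trying (a sketch, not verified against these exact files): give each video input its own -stream_loop -1 so both loop indefinitely, drop shortest=1 from the overlay (with both inputs looping forever it no longer ends anything), and let -shortest on the output trim everything to the mapped audio. Sketched via Python's subprocess purely to keep the quoting readable:

```python
import subprocess as sp

FILTERS = (
    "[0:v]setpts=PTS-STARTPTS[top];"
    "[1:v]setpts=PTS-STARTPTS,format=yuva420p,colorchannelmixer=aa=0.5[bottom];"
    "[top][bottom]overlay[v1]"  # no shortest=1: both video inputs now loop forever
)

cmd = ['ffmpeg',
       '-stream_loop', '-1', '-i', 'base.mp4',     # loop the base indefinitely
       '-stream_loop', '-1', '-i', 'overlay.mp4',  # loop the overlay indefinitely
       '-i', 'audio.mp3',
       '-filter_complex', FILTERS,
       '-map', '[v1]', '-map', '2:a',
       '-c:v', 'libx264',
       '-shortest',      # stop when the audio (the only finite stream) ends
       '-y', 'out.mp4']

# sp.run(cmd, check=True)  # uncomment to run
```

    If -shortest alone over-runs the audio when a filter graph is involved, adding -fflags +shortest (optionally with a larger -max_interleave_delta) is a commonly reported workaround.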