Advanced search

Media (0)

Keyword: - Tags - / interaction

No media matching your criteria is available on the site.

Other articles (112)

  • Customising by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (15150)

  • What is the right way to put data to abuffersrc and pull from abuffersink when I have multiple inputs with the FFmpeg "afir" filter?

    19 December 2022, by Evgenii

    I want to understand why av_buffersink_get_frame() always returns AVERROR(EAGAIN).
    I use this algorithm:

    1. Open the input file input.mp3 and create a decoder for it.
    2. Open the signal file signal.wav and create a decoder for it.
    3. Create the filter graph and its inputs/outputs with avfilter_graph_parse2(filter_graph, "[in0][in1]afir[out]", &inputs, &outputs).
    4. Create an abuffersrc for each input, named "in0" and "in1".
    5. Create one abuffersink for the single output, named "out".
    6. Read and decode samples with av_read_frame(), avcodec_send_packet() and avcodec_receive_frame(), put the samples into the abuffersrc contexts with av_buffersrc_add_frame(), and try to read samples from the abuffersink with av_buffersink_get_frame() - HERE I get AVERROR(EAGAIN) on every call (see the sketch right after this list).
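    A minimal sketch of what step 6 looks like with the buffersrc/buffersink API (not my actual code; the helper name feed_and_drain is made up, and it assumes the abuffersrc/abuffersink contexts are already created and the graph is configured):

#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
#include <libavutil/error.h>
#include <libavutil/frame.h>

/* Feed one decoded frame into an abuffersrc, then drain whatever the graph
 * has ready from the abuffersink. AVERROR(EAGAIN) from
 * av_buffersink_get_frame() just means the graph needs more input before it
 * can produce samples, so it is not treated as an error here. */
static int feed_and_drain(AVFilterContext *buffersrc, AVFilterContext *buffersink,
                          AVFrame *decoded, AVFrame *filtered)
{
    int ret = av_buffersrc_add_frame(buffersrc, decoded);
    if (ret < 0)
        return ret;

    for (;;) {
        ret = av_buffersink_get_frame(buffersink, filtered);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0;   /* nothing available yet / stream finished */
        if (ret < 0)
            return ret; /* real error */
        /* ... consume the filtered samples here ... */
        av_frame_unref(filtered);
    }
}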


    Please, help!

    I've tried to read all the samples and push them to the abuffersrc contexts for both the input and signal pipes in a first step, and then to call av_buffersink_get_frame() once in a second step. I got AVERROR(EAGAIN) again.
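    For completeness, a sketch of that "push everything, then drain" variant (again illustrative only; the helper name flush_and_drain is made up, and the EOF flush with a NULL frame is an extra step not described above):

#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
#include <libavutil/error.h>
#include <libavutil/frame.h>

/* After every input frame has been pushed, signal EOF on each source by
 * sending a NULL frame, then read the remaining frames from the sink until
 * it returns AVERROR_EOF. */
static int flush_and_drain(AVFilterContext *srcs[], int nb_srcs,
                           AVFilterContext *sink, AVFrame *filtered)
{
    int ret;

    for (int i = 0; i < nb_srcs; i++) {
        ret = av_buffersrc_add_frame(srcs[i], NULL); /* NULL frame = EOF */
        if (ret < 0)
            return ret;
    }

    while ((ret = av_buffersink_get_frame(sink, filtered)) >= 0) {
        /* ... consume the filtered samples here ... */
        av_frame_unref(filtered);
    }
    return ret == AVERROR_EOF ? 0 : ret;
}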

    I've tried to configure two outputs named "in0" and "in1" and one input named "out", and to call avfilter_graph_parse_ptr() as done here:
    static AVFilterContext *init_buffersrc(AVCodecContext *decoder, AVFilterGraph *filter_graph, const char *pad_name) {
    AVFilterContext *buffersrc = NULL;
    char args[1024];

    snprintf(args, sizeof(args),
        "time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64"",
        decoder->time_base.num, decoder->time_base.den, decoder->sample_rate, av_get_sample_fmt_name(decoder->sample_fmt), decoder->channel_layout);

    CHECK_AV(avfilter_graph_create_filter(&buffersrc, avfilter_get_by_name("abuffer"), pad_name, args, NULL, filter_graph));
    return buffersrc;
}

static AVFilterContext *init_buffersink(AVFilterGraph *filter_graph) {
    AVFilterContext *buffersink = NULL;

    CHECK_AV(avfilter_graph_create_filter(&buffersink, avfilter_get_by_name("abuffersink"), "out", NULL, NULL, filter_graph));
    return buffersink;
}

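/* Note on the AVFilterInOut convention used below: the "outputs" list
 * describes the free output pads of the pre-created abuffersrc filters
 * (labelled "in0", "in1", ...), while the "inputs" list describes the free
 * input pad of the abuffersink (labelled "out"), which is what
 * avfilter_graph_parse_ptr() expects when the endpoints already exist. */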
static void init_complex_graph(const char *desc, Context *ctx) {

    AVFilterGraph *filter_graph;
    AVFilterInOut *inputs = avfilter_inout_alloc();
    AVFilterInOut **outputs = (AVFilterInOut**) av_malloc(ctx->nb_input_files*sizeof(AVFilterInOut *));
    char *dump;
    char pad_name[64];

    filter_graph = ctx->filter_graph = avfilter_graph_alloc();
    filter_graph->nb_threads = 1;

    for (int i = 0; i< ctx->nb_input_files; ++i) {
        outputs[i] = avfilter_inout_alloc();
    }

    for (int i = 0; i < ctx->nb_input_files; ++i) {
        AVFilterContext *buffersrc;
        snprintf(pad_name, sizeof(pad_name), "in%d", i);        
        buffersrc = init_buffersrc(ctx->input_files[i]->decoder, filter_graph, pad_name);
        ctx->input_files[i]->buffersrc = buffersrc;
        outputs[i]->name = av_strdup(pad_name);
        outputs[i]->filter_ctx = buffersrc;
        outputs[i]->pad_idx = 0;
        if (i == ctx->nb_input_files - 1) {
            outputs[i]->next = NULL;
        }
        else {
            outputs[i]->next = outputs[i + 1];
        }
    }

    ctx->output_file->buffersink = init_buffersink(filter_graph);

    inputs->name = av_strdup("out");
    inputs->filter_ctx = ctx->output_file->buffersink;
    inputs->pad_idx = 0;
    inputs->next = NULL;

    CHECK_AV(avfilter_graph_parse_ptr(filter_graph, desc, &inputs, outputs, NULL));

    CHECK_AV(avfilter_graph_config(filter_graph, NULL));
}   

// Call init_complex_graph("[in0][in1]afir[out]", SOME_CONTEXT);


    and still nothing.

  • Python asyncio subprocess code returns "pipe closed by peer or os.write(pipe, data) raised exception."

    4 November 2022, by Duke Dougal

    I am trying to convert a synchronous Python process to asyncio. Any ideas what I am doing wrong?

    This is the synchronous code which successfully starts ffmpeg and converts a directory of webp files into a video.

    import subprocess
import shlex
from os import listdir
from os.path import isfile, join

output_filename = 'output.mp4'
process = subprocess.Popen(shlex.split(f'ffmpeg -y -framerate 60 -i pipe: -vcodec libx265 -pix_fmt yuv420p -crf 24 output.mp4'), stdin=subprocess.PIPE)

thepath = '/home/ubuntu/webpfiles/'
thefiles = [f for f in listdir(thepath) if isfile(join(thepath, f))]
for filename in thefiles:
    absolute_path = f'{thepath}{filename}'
    with open(absolute_path, 'rb') as f:
        process.stdin.write(f.read())

process.stdin.close()
process.wait()
process.terminate()


    This async code fails:

    from os import listdir
from os.path import isfile, join
import shlex
import asyncio

outputfilename = 'output.mp4'

async def write_stdin(proc):
    thepath = '/home/ubuntu/webpfiles/'
    thefiles = [f for f in listdir(thepath) if isfile(join(thepath, f))]
    thefiles.sort()
    for filename in thefiles:
        absolute_path = f'{thepath}{filename}'
        with open(absolute_path, 'rb') as f:
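            # NB: communicate() sends the data, then closes stdin and waits for the process to finish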
            await proc.communicate(input=f.read())

async def create_ffmpeg_subprocess():
    bin = f'/home/ubuntu/bin/ffmpeg'
    params = f'-y -framerate 60 -i pipe: -vcodec libx265 -pix_fmt yuv420p -crf 24 {outputfilename}'
    proc = await asyncio.create_subprocess_exec(
        bin,
        *shlex.split(params),
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    return proc

async def start():
    loop = asyncio.get_event_loop()
    proc = await create_ffmpeg_subprocess()
    task_stdout = loop.create_task(write_stdin(proc))
    await asyncio.gather(task_stdout)

if __name__ == '__main__':
    asyncio.run(start())


    The output of the async code is:

    pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.


    etc. - one line for each webp file

  • "No JPEG data found in image" when using ffmpeg concat

    7 July 2021, by bpgeck

    We are using FFmpeg for some image processing. We are attempting to concatenate a series of images into a single video file using ffmpeg concat, similar to what is described in this guide.

    Our full command is the following:

    ffmpeg -loglevel debug -y concat -safe 0 -i /tmp/infile.txt -pix_fmt yuvj420p -c:v libx264 -bsf:v h264_mp4toannexb /tmp/outfile.ts


    This infile.txt contains a list of the image file names and the duration each image should take up in the video:

    file /tmp/media/tmppjdk_2jd.jpg
duration 0.049911
file /tmp/media/tmptjuoz56b.jpg
duration 0.050015
file /tmp/media/tmpzywkxe16.jpg
duration 0.049960
...


    When running this command, however, I see the following error:

    [mjpeg @ 0x565013c4b800] No JPEG data found in image
Error while decoding stream #0:0: Invalid data found when processing input
[mjpeg @ 0x565013c4b800] No JPEG data found in image
Error while decoding stream #0:0: Invalid data found when processing input
[mjpeg @ 0x565013c4b800] No JPEG data found in image
Error while decoding stream #0:0: Invalid data found when processing input
[mjpeg @ 0x565013c4b800] No JPEG data found in image
Error while decoding stream #0:0: Invalid data found when processing input
...
opping st:0
DTS 243, next:3170350840000 st:0 invalid dropping
PTS 243, next:3170350840000 invalid dropping st:0
DTS 244, next:3170350880000 st:0 invalid dropping
PTS 244, next:3170350880000 invalid dropping st:0
DTS 245, next:3170350920000 st:0 invalid dropping
PTS 245, next:3170350920000 invalid dropping st:0
DTS 246, next:3170350960000 st:0 invalid dropping
PTS 246, next:3170350960000 invalid dropping st:0
DTS 248, next:3170351000000 st:0 invalid dropping
PTS 248, next:3170351000000 invalid dropping st:0


    Any ideas on how I can properly debug this?