
Media (1)

Keyword: - Tags -/belgique

Other articles (105)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as MP4, Ogv and WebM (played back via HTML5), with MP4 also used for the Flash fallback.
    Audio files are encoded as MP3 and Ogg (played back via HTML5), with MP3 also used for the Flash fallback.
    Where possible, text documents are analyzed to extract the data needed for search-engine indexing, and are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
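
    MediaSPIP performs these conversions internally on the server; purely as an illustration of the kind of ffmpeg invocation involved, here is a minimal Node.js sketch using the fluent-ffmpeg wrapper (the file paths and codec settings are placeholders, not MediaSPIP's actual configuration; an Ogv variant would be analogous, using libtheora):

    // Illustration only: encode one uploaded file into the HTML5-friendly formats
    // mentioned above. Paths and settings are placeholders.
    const ffmpeg = require('fluent-ffmpeg');

    const source = 'uploads/original.mov';   // hypothetical uploaded file

    // H.264/AAC in an MP4 container (HTML5 playback, also used for the Flash fallback)
    ffmpeg(source)
        .videoCodec('libx264')
        .audioCodec('aac')
        .outputOptions('-pix_fmt yuv420p')
        .save('cache/video.mp4');

    // VP8/Vorbis in a WebM container (HTML5 playback)
    ffmpeg(source)
        .videoCodec('libvpx')
        .audioCodec('libvorbis')
        .outputOptions('-b:v 1M')            // give libvpx an explicit bitrate; its default is very low
        .save('cache/video.webm');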

  • Customizable form

    21 June 2013, by

    This page presents the fields available in the form used to publish a media item and lists the additional fields that can be added. Media creation form
    For a media-type document, the fields offered by default are: Text; Enable/disable the forum (the invitation to comment can be disabled for each article); Licence; Add/remove authors; Tags.
    This form can be modified under:
    Administration > Configuration des masques de formulaire. (...)

  • What is a form mask?

    13 June 2013, by

    A form mask is a customization of the form used to publish media, sections, news items, editorials and links to other sites.
    Each object's publication form can therefore be customized.
    To customize the form fields, go to the administration area of your MediaSPIP and select "Configuration des masques de formulaires".
    Then select the form to modify by clicking on its object type. (...)

On other sites (9756)

  • Fluent FFMPEG complex filter to split file into multiple outputs

    26 October 2020, by lowcrawler

    It appears possible to have multiple outputs from a single FFMPEG command: ffmpeg overlay on multiple outputs

    


    I'd like to know how to do this in FFMPEG. I'm specifically using the complexFilter option in an attempt to split the video into 4 different sizes and place an overlay, and then save the 4 resulting files.

    


    The code below is my attempt to simply split the video into 4 and save it. It fails with the error Error: ffmpeg exited with code 1: Filter split:output3 has an unconnected output, and I'm unsure how to connect each split output to a file in fluent-ffmpeg.

    


        let ffmpegCommand = ffmpeg()
            .addInput(path.join(__dirname, PROCESSING_CACHE_DIRECTORY, "tempImage_%d.jpg"))
            .addOutput(outputPathFull)
            .addOutput(outputPathMed)
            .addOutput(outputPathSmall)
            .addOutput(outputPathThumb)
            .toFormat('mp4')
            .videoCodec('libx264')
            .outputOptions('-pix_fmt yuv420p')
            .complexFilter([
                {
                    filter: 'split', options: '4',
                    inputs: ['0:v'], outputs: [outputPathFull, outputPathMed, outputPathSmall, outputPathThumb]
                },
            ])


    


    When I flip the outputs and put them below the complexFilter, I do get 4 files: one at the appropriate quality (and 4x bigger than expected) and the others at very low quality.
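
    One possible way to wire this up (a sketch reusing the question's variable names, not a verified answer) is to give the split filter labelled output pads instead of file paths, then open each file with .output() and attach a -map option pointing at the corresponding label, so that every pad is connected:

        const ffmpeg = require('fluent-ffmpeg');
        const path = require('path');
        // outputPathFull/Med/Small/Thumb and PROCESSING_CACHE_DIRECTORY are assumed
        // to be defined exactly as in the question.

        ffmpeg()
            .addInput(path.join(__dirname, PROCESSING_CACHE_DIRECTORY, 'tempImage_%d.jpg'))
            .complexFilter([
                // four labelled pads instead of file paths; per-branch scale filters
                // could be chained here to produce the four different sizes
                { filter: 'split', options: '4', inputs: '0:v',
                  outputs: ['full', 'med', 'small', 'thumb'] }
            ])
            // each .output() opens a new output file; the options that follow it
            // (including -map) apply to that output only
            .output(outputPathFull)
            .outputOptions(['-map [full]', '-pix_fmt yuv420p'])
            .videoCodec('libx264')
            .format('mp4')
            .output(outputPathMed)
            .outputOptions(['-map [med]', '-pix_fmt yuv420p'])
            .videoCodec('libx264')
            .format('mp4')
            .output(outputPathSmall)
            .outputOptions(['-map [small]', '-pix_fmt yuv420p'])
            .videoCodec('libx264')
            .format('mp4')
            .output(outputPathThumb)
            .outputOptions(['-map [thumb]', '-pix_fmt yuv420p'])
            .videoCodec('libx264')
            .format('mp4')
            .run();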

    


  • How can I stream a canvas generated in Node.js through ffmpeg to YouTube or any other RTMP server?

    10 October 2020, by DDC

    I wanted to generate some images in Node.js, compile them into a video and stream them to YouTube. To generate the images I'm using the node-canvas module. This sounds simple enough, but I wanted to generate the images continuously and stream the result in real time. I'm very new to this whole thing, and after reading a bunch of resources on the internet, what I was thinking of doing was:

    


      

    1. Open ffmpeg with spawn('ffmpeg', ...args), setting the output to the destination RTMP server
    2. Generate the image in the canvas
    3. Convert the content of the canvas to a buffer, and write it to the ffmpeg process through stdin
    4. Enjoy the result on YouTube


    


    But it's not as simple as that, is it? I saw people sharing code that involves client-side JS running in the browser, but I wanted this to be a Node app so that I could run it from a remote VPS.
    Is there a way for me to do this without using something like p5 in my browser and capturing the window to restream it?
    Is my thought process even remotely adequate? For now I don't really care about performance or resource usage. Thanks in advance.

    


    EDIT:

    I worked on it for a bit, and I couldn't get it to work...
    This is my code:

    


const { spawn } = require('child_process');
const { createCanvas } = require('canvas');
const fs = require('fs');


const canvas = createCanvas(1920, 1080);
const ctx = canvas.getContext('2d');
const ffmpeg = spawn("ffmpeg",
    ["-re", "-f", "png_pipe", "-vcodec", "png", "-i", "pipe:0", "-vcodec", "h264", "-re", "-f", "flv", "rtmp://a.rtmp.youtube.com/live2/key-i-think"],
    { stdio: 'pipe' })

const randomColor = (depth) => Math.floor(Math.random() * depth)
const random = (min, max) => (Math.random() * (max - min)) + min;

let i = 0;
let drawSomething = function () {
    ctx.strokeStyle = `rgb(${randomColor(255)}, ${randomColor(255)}, ${randomColor(255)})`
    let x1 = random(0, canvas.width);
    let x2 = random(0, canvas.width);
    let y1 = random(0, canvas.height);
    let y2 = random(0, canvas.height);
    ctx.moveTo(x1, y1);
    ctx.lineTo(x2, y2);
    ctx.stroke();

    let out = canvas.toBuffer();
    ffmpeg.stdin.write(out)
    i++;
    if (i >= 30) {
        ffmpeg.stdin.end();
        clearInterval(int)
    };
}

drawSomething();
let int = setInterval(drawSomething, 1000);



    


    I'm not getting any errors, but I'm not getting any video data from it either. I have set up an RTMP server that I can connect to, and then get the stream with VLC, but I don't receive any video data. Am I doing something wrong? I looked around for a while, and I can't seem to find anyone who has tried this, so I don't really have a clue...

    


    EDIT 2:
    Apparently I was on the right track, but my approach only gave me about 2 seconds of "good" video before it started becoming blocky and messy. I think that, most likely, my method of generating images is just too slow. I'll try to use some GPU-accelerated code to generate the images instead of using the canvas, which means I'll be doing fractals all the time, since I don't know how to do anything else with that. Also, a bigger buffer in ffmpeg might help too.
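
    For what it's worth, one direction that might help (a sketch under assumptions, not a tested setup) is to send raw pixel data instead of PNG buffers, declare the pixel format, frame size and frame rate on ffmpeg's input, and pace the writes to that frame rate so the encoder receives a steady stream of frames:

    const { spawn } = require('child_process');
    const { createCanvas } = require('canvas');

    const WIDTH = 1280, HEIGHT = 720, FPS = 30;   // placeholder size and rate
    const canvas = createCanvas(WIDTH, HEIGHT);
    const ctx = canvas.getContext('2d');

    const ffmpeg = spawn('ffmpeg', [
        // raw frames in: no PNG decoding, just a fixed pixel layout and rate
        '-f', 'rawvideo', '-pix_fmt', 'bgra',
        '-s', `${WIDTH}x${HEIGHT}`, '-r', String(FPS),
        '-i', 'pipe:0',
        // encode and push to the RTMP ingest (STREAM_KEY is a placeholder)
        '-c:v', 'libx264', '-preset', 'veryfast', '-pix_fmt', 'yuv420p',
        '-g', String(FPS * 2),
        '-f', 'flv', 'rtmp://a.rtmp.youtube.com/live2/STREAM_KEY'
    ], { stdio: ['pipe', 'inherit', 'inherit'] });

    function drawFrame() {
        ctx.fillStyle = `rgb(${Math.floor(Math.random() * 256)}, 64, 128)`;
        ctx.fillRect(Math.random() * WIDTH, Math.random() * HEIGHT, 50, 50);
    }

    // one frame every 1000/FPS ms; toBuffer('raw') returns the canvas pixels
    // (BGRA on little-endian machines) without the cost of PNG encoding
    setInterval(() => {
        drawFrame();
        ffmpeg.stdin.write(canvas.toBuffer('raw'));
    }, 1000 / FPS);

    YouTube's ingest also normally expects an audio track, so a silent source (for example an anullsrc lavfi input) may need to be muxed in as well.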

    


  • avfilter/avf_concat: check for possible integer overflow

    13 September 2020, by Paul B Mahol

    Also check that segment delta pts is always bigger than input pts.

    There is nothing much currently that can be done to recover from
    this situation so just return AVERROR_INVALIDDATA error code.

    • [DH] libavfilter/avf_concat.c