
Media (91)

Other articles (99)

  • Apache-specific configuration

    4 February 2011, by

    Specific modules
    When configuring Apache, it is advisable to enable certain modules that are not specific to MediaSPIP but that improve performance: mod_deflate and mod_headers, so that Apache compresses pages automatically (see this tutorial); mod_expires, to manage the expiration of hits correctly (see this tutorial).
    It is also advisable to add Apache support for the MIME type of WebM files, as described in this tutorial.
    Creation of a (...)
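
    The modules and MIME type mentioned above can be sketched as Apache directives roughly like this (a hedged sketch: the content types and cache duration are illustrative assumptions, not MediaSPIP's documented values):

    ```apache
    # Sketch of the directives described above (illustrative values)
    <IfModule mod_deflate.c>
        # Compress text-based responses automatically
        AddOutputFilterByType DEFLATE text/html text/css application/javascript
    </IfModule>
    <IfModule mod_expires.c>
        # Control expiry headers so cached hits are handled correctly
        ExpiresActive On
        ExpiresByType image/png "access plus 1 month"
    </IfModule>
    # Register the WebM MIME type
    AddType video/webm .webm
    ```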

  • Installation in farm mode

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some knowledge of how SPIP works, unlike the standalone version, which requires no real specific knowledge, since SPIP's usual private area is no longer used.
    First of all, you must have installed the same files as the installation (...)

  • Definable image and logo sizes

    9 February 2011, by

    In many places on the site, logos and images are resized to fit the slots defined by the themes. Since all of these sizes can change from one theme to another, they can be defined directly in the theme, saving the user from having to configure them manually after changing the appearance of their site.
    These image sizes are also available in the MediaSPIP Core specific configuration. The maximum size of the site logo in pixels, one can (...)

On other sites (4853)

  • Overlaying a text stream on a video stream with ffmpeg in Node.js

    16 May 2023, by Tchoune

    I am creating a streaming system with Node.js that uses ffmpeg to send video and text streams to a local RTMP server, then combines those streams and sends them to Twitch.

    


    I'm using canvas to create a text image with a transparent background, and I need to change that text every time a new video in the playlist starts.

    


    Currently, on the stream I only see the video, not the text. But if I open each stream separately in VLC, I can see both of them.

    


    However, I'm running into a problem where the text stream doesn't appear in the final video stream on Twitch. In addition, I get the following error message:

    


    Combine stderr: [NULL @ 0x1407069f0] Unable to find a suitable output format for 'rtmp://live.twitch.tv/app/streamKey'
rtmp://live.twitch.tv/app/streamKey: Invalid argument


    


    Here is my current Node.js code:

    


    
// Assumed dependencies: node-canvas (createCanvas), fluent-ffmpeg (ffprobe),
// plus Node's built-in fs, path and child_process modules.
const { createCanvas } = require('canvas');
const fs = require('fs');
const path = require('path');
const childProcess = require('child_process');
const ffmpegLibrary = require('fluent-ffmpeg');

const createTextImage = (runner) => {
    return new Promise((resolve, reject) => {
        const canvas = createCanvas(1920, 1080);
        const context = canvas.getContext('2d');

        // Fill the background with transparency
        context.fillStyle = 'rgba(0,0,0,0)';
        context.fillRect(0, 0, canvas.width, canvas.height);

        // Set the text options
        context.fillStyle = '#ffffff';
        context.font = '24px Arial';
        context.textAlign = 'start';
        context.textBaseline = 'middle';

        // Draw the text
        context.fillText(`Speedrun by ${runner}`, canvas.width / 2, canvas.height / 2);

        // Define the images directory
        const imagesDir = path.join(__dirname, 'images', 'runners');

        // Ensure the images directory exists
        fs.mkdirSync(imagesDir, { recursive: true });

        // Define the file path
        const filePath = path.join(imagesDir, runner + '.png');

        // Create the write stream
        const out = fs.createWriteStream(filePath);

        // Create the PNG stream
        const stream = canvas.createPNGStream();

        // Pipe the PNG stream to the write stream
        stream.pipe(out);

        out.on('finish', () => {
            console.log('The PNG file was created.');
            resolve();
        });

        out.on('error', reject);
    });
}
const streamVideo = (video) => {
    ffmpegLibrary.ffprobe(video.video, function (err, metadata) {
        if (err) {
            console.error(err);
            return;
        }
        currentVideoDuration = metadata.format.duration;

        // Cancel the previous timeout before creating a new one
        if (nextVideoTimeoutId) {
            clearTimeout(nextVideoTimeoutId);
        }

        // Schedule the switch to the next video here
        nextVideoTimeoutId = setTimeout(() => {
            console.log('End of video, moving on to the next one...');
            nextVideo();
        }, currentVideoDuration * 1000 + 10000);
    })


    ffmpegVideo = childProcess.spawn('ffmpeg', [
        '-nostdin', '-re', '-f', 'concat', '-safe', '0', '-i', 'playlist.txt',
        '-vcodec', 'libx264',
        '-s', '1920x1080',
        '-r', '30',
        '-b:v', '5000k',
        '-acodec', 'aac',
        '-preset', 'veryfast',
        '-f', 'flv',
        `rtmp://localhost:1935/live/video` // sends the video stream to the local RTMP server
    ]);

    createTextImage(video.runner).then(() => {
        ffmpegText = childProcess.spawn('ffmpeg', [
            '-nostdin', '-re',
            '-loop', '1', '-i', `images/runners/${video.runner}.png`, // uses the image created by createTextImage
            '-vcodec', 'libx264rgb', // RGB encoding, intended to preserve the transparency
            '-s', '1920x1080',
            '-r', '30',
            '-b:v', '5000k',
            '-acodec', 'aac',
            '-preset', 'veryfast',
            '-f', 'flv',
            `rtmp://localhost:1935/live/text` // sends the text stream to the local RTMP server
        ]);

        ffmpegText.stdout.on('data', (data) => {
            console.log(`text stdout: ${data}`);
        });

        ffmpegText.stderr.on('data', (data) => {
            console.error(`text stderr: ${data}`);
        });
    }).catch(error => {
        console.error(`Error while creating the text image: ${error}`);
    });

    ffmpegCombine = childProcess.spawn('ffmpeg', [
        '-i', 'rtmp://localhost:1935/live/video',
        '-i', 'rtmp://localhost:1935/live/text',
        '-filter_complex', '[0:v][1:v]overlay=main_w-overlay_w:0',
        '-s', '1920x1080',
        '-r', '30',
        '-vcodec', 'libx264',
        '-b:v', '5000k',
        '-acodec', 'aac',
        '-preset', 'veryfast',
        '-f', 'flv',
        `rtmp://live.twitch.tv/app/${twitchStreamKey}` // sends the combined stream to Twitch
    ]);

    ffmpegVideo.stdout.on('data', (data) => {
        console.log(`video stdout: ${data}`);
    });

    ffmpegVideo.stderr.on('data', (data) => {
        console.error(`video stderr: ${data}`);
    });

    ffmpegCombine.stdout.on('data', (data) => {
        console.log(`Combine stdout: ${data}`);
    });

    ffmpegCombine.stderr.on('data', (data) => {
        console.error(`Combine stderr: ${data}`);
    });

    ffmpegCombine.on('close', (code) => {
        console.log(`ffmpeg exited with code ${code}`);
        if (currentIndex >= playlist.length) {
            console.log('End of playlist');
            currentIndex = 0;
        }
    });
}



    


    Locally, I use nginx with the RTMP module to manage the multiple streams and combine them into one to send to Twitch.

    


    Here is the RTMP section of my nginx.conf:

    


    rtmp {
    server {
        listen 1935; # the port for the RTMP protocol
        
        application live {
            live on; # enable live streaming
            record off; # disable recording of the stream
    
            # defines where the streams should be pushed
            push rtmp://live.twitch.tv/app/liveKey;
        }
    
        application text {
            live on; # enable live streaming
            record off; # disable recording of the stream
        }
    }
}


    


    I have checked that the codecs, resolution and frame rate are the same for both streams. I am also overlaying the text stream on top of the video stream with the -filter_complex command, but I am not sure if it works correctly.

    


    Does each stream have to have the same parameters?

    


    I would like to know if anyone has any idea what could be causing this problem and how to fix it. Should I use a different format for the output stream to Twitch? Or is there another approach I should consider for layering a dynamic text stream over a video stream?

    


    Also, I'm wondering if I'm handling updating the text stream correctly when the video changes. Currently, I create a new text image with Canvas every time the video changes, then create a new ffmpeg process for the text stream. Is this the right approach, or is there a better way to handle this?

    


    Thanks in advance for any help or advice.

    


  • Write base64 encoded string as Image with javascript

    21 April 2020, by Manoj Kumar

    I am using FFmpeg and html2canvas, trying to create an MP4 video from screenshots taken from a slider.

    



    Here is my worker initialization:

    



    const worker = createWorker({
    //logger: ({ message }) => console.log(message),
    progress: (p) => console.log(p),
});


    



    Then, on click, I take the screenshots and put them into the video:

    



    const image2video = async () => {
    displayLoader();
    var zip = new JSZip();
    let selectedbanners = $(".selected_templates:checked");
    await worker.load();
    let promise = new Promise((resolve, reject) => {
        let Processed = 0;
        selectedbanners.each(async function () {
            var dataIndex = $(this).data('index');
            let ad = adTemplates[dataIndex];
            var innercounter = 0;
            $(`.template-container-${ad.name}`).each(async function () {
                var imgData;
                await html2canvas($(`.template-container-${ad.name}`)[innercounter], {allowTaint: true, width: ad.width, height: ad.height}).then(canvas => {
                    imgData = canvas.toDataURL('image/jpeg', 1.0).split('base64,')[1];
                    //await worker.write(`tmp.${ad.name}.${innercounter}.png`, imgData);

                });
                await worker.write(`tmp.${ad.name}.${innercounter}.png`, imgData);
                //await worker.write(`tmp.0.png`, `static/triangle/tmp.0.png`);   This is working
            });
        });
    });
};


    



    I have set up a codepen here. It works if I pass the image path, but it doesn't work if I directly pass the base64 string. Here I found that it also supports base64 strings as well as URLs.
Thanks in advance.
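
    One way to sidestep the base64 issue is to hand the worker raw bytes instead of a data-URL string. This is only a sketch of the conversion step; whether `worker.write` accepts a Uint8Array depends on the ffmpeg.js version, so treat that part as an assumption:

    ```javascript
    // Decode a base64 payload (without the "data:...;base64," prefix)
    // into a Uint8Array of raw bytes.
    function base64ToUint8Array(b64) {
      const binary = atob(b64); // atob is available in browsers and Node 16+
      const bytes = new Uint8Array(binary.length);
      for (let i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);
      }
      return bytes;
    }

    // Hypothetical usage with the worker from the question:
    // const imgBytes = base64ToUint8Array(imgData);
    // await worker.write(`tmp.${ad.name}.${innercounter}.png`, imgBytes);
    ```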

    


  • What is the fastest way to load a local image using JavaScript and/or Node.js, and the fastest way to getImageData?

    4 October 2020, by Tom Lecoz

    I'm working on an online video-editing tool for a large audience.
Users can create "scenes" with multiple images, videos, text and sound, add a transition between 2 scenes, add some special effects, etc.

    


    When the users are happy with what they made, they can download the result as an mp4 file at a desired resolution and framerate. Let's say full-HD 60fps, for example (it can be bigger).

    


    I'm using nodejs & ffmpeg to build the mp4 from an HTMLCanvasElement.
Because it's impossible to seek perfectly frame-by-frame with an HTMLVideoElement, I start by converting the videos from each "scene" into a sequence of PNGs using ffmpeg.
Then, I read my scene frame by frame and, if there are videos, I replace the video elements with an image containing the right frame. Once every image is loaded, I launch the capture and go to the next frame.

    


    Everything works as expected, but it's too slow!
Even with a powerful computer (Ryzen 3900X, RTX 2080 Super, 32 GB of RAM, NVMe 970 Evo Plus), in the best case I can capture a basic full-HD movie (if it contains videos) at 40 FPS.

    


    It may sound good enough, but it's not.
Our company produces thousands of mp4s every day.
A slow encoding process means more servers at work, so it will be more expensive for us.

    


    Until now, my company used (and is still using) a tool based on Adobe Flash, because the whole video-editing tool was made with Flash. I was (and am) in charge of translating the whole thing into HTML. I reproduced every feature one by one over 4 years (it's by far my biggest project), and this is the very last step. But even though the HTML version of our player works very well, its encoding process is much slower than the Flash version, which is able to encode full-HD at 90-100 FPS.

    


    I put console.log everywhere in order to find what makes the encoding so slow, and there are 2 bottlenecks:

    


    As I said before, for each frame, if there are videos in the current scene, I replace the video elements with images representing the right frame at the right time. Since I'm using local files, I expected loading to be almost synchronous. That's not the case at all: it takes more than 10 ms in most cases.

    


    So my first question is: "what is the fastest way to handle local image loading with javascript for this kind of output?"

    


    I don't care about the technology involved; I have no preference. I just want to be able to load my local images faster than I do now.
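
    Regardless of how each file is read, part of that per-image latency can be hidden by loading the next frame's images while the current frame is being captured. A minimal double-buffering sketch, where `loadFrame` and `capture` are hypothetical stand-ins for the loading and capture steps described above:

    ```javascript
    // Overlap loading of frame i+1 with the capture of frame i,
    // so per-image load latency is hidden behind capture time.
    async function renderAll(frameCount, loadFrame, capture) {
      let pending = loadFrame(0); // start loading the first frame
      for (let i = 0; i < frameCount; i++) {
        const frame = await pending;            // wait for the current frame
        if (i + 1 < frameCount) {
          pending = loadFrame(i + 1);           // prefetch the next one
        }
        await capture(frame);                   // capture while the next loads
      }
    }
    ```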

    


    The second bottleneck is weird, and to be honest, I don't understand what's happening here.

    


    When the current frame is ready to be captured, I need to get its data using CanvasRenderingContext2D.getImageData in order to send it to ffmpeg, and this particular step is very slow.

    


    This single line

    


    let imageData = canvas.getContext("2d").getImageData(0,0,1920,1080);  


    


    takes something like 12-13 ms.
It's very slow!

    


    So I'm also searching for another way to extract the pixel data from my canvas.

    


    A few days ago, I found an alternative to getImageData using the new VideoFrame class, which was created to be used with the VideoEncoder & VideoDecoder classes coming in Chrome 86.
You can do something like this:

    


    let buffers: Uint8Array[] = [];
createImageBitmap(canvas).then((bmp) => {
   let videoFrame = new VideoFrame(bmp);
   for (let i = 0; i < 3; i++) {
      buffers[i] = new Uint8Array(videoFrame.planes[i].length);
      videoFrame.planes[i].readInto(buffers[i]);
   }
});


    


    It allows me to grab the pixel data about 25% more quickly than getImageData, but as you can see, I don't get a single RGBA buffer; I get 3 weird buffers matching the I420 format.

    


    In an ideal world, I would like to send them directly to ffmpeg, but I don't know how to deal with these 3 buffers (I have no experience with the I420 format).
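
    For what it's worth, the three buffers correspond to the Y, U and V planes of a 4:2:0 frame: one full-resolution luma plane plus two quarter-resolution chroma planes. A small sketch of the expected sizes (the actual plane strides reported by VideoFrame may differ, so this assumes a tightly packed frame):

    ```javascript
    // Plane sizes of a tightly packed I420 (yuv420p) frame:
    // Y is width*height bytes; U and V are each (width/2)*(height/2).
    function i420PlaneSizes(width, height) {
      const y = width * height;
      const chroma = (width / 2) * (height / 2);
      return { y, u: chroma, v: chroma, total: y + 2 * chroma };
    }

    // Concatenating Y, U and V in that order yields a raw frame that
    // ffmpeg can read with: -f rawvideo -pix_fmt yuv420p -s 1920x1080
    ```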

    


    I'm not at all sure the solution involving VideoFrame is a good one. If you know a faster way to transfer the data from a canvas to ffmpeg, please tell me.

    


    Thanks for reading this very long post.
Any help would be much appreciated.