Other articles (111)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First of all, you need to create a SPIP article and attach the "source" video document to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
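    The thumbnail step described above amounts to a single FFmpeg frame grab. As a minimal sketch, the seek offset (second 1) and JPEG quality (2) below are illustrative assumptions, not the exact parameters SPIPMotion uses:

```javascript
// Hypothetical sketch of the thumbnail-extraction step: build the argument
// list for an FFmpeg frame grab. The seek offset and JPEG quality are
// assumptions; SPIPMotion's exact parameters are not shown in the excerpt.
function thumbnailArgs(srcPath, outPath, atSeconds = 1) {
  return [
    '-ss', String(atSeconds), // seek before decoding: fast frame grab
    '-i', srcPath,
    '-frames:v', '1',         // extract exactly one frame
    '-q:v', '2',              // high JPEG quality (2 on ffmpeg's 2-31 scale)
    outPath,
  ];
}

console.log('ffmpeg ' + thumbnailArgs('source.mp4', 'vignette.jpg').join(' '));
```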

  • List of compatible distributions

    26 April 2011, by

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution   Version name       Version number
    Debian         Squeeze            6.x.x
    Debian         Wheezy             7.x.x
    Debian         Jessie             8.x.x
    Ubuntu         Precise Pangolin   12.04 LTS
    Ubuntu         Trusty Tahr        14.04

    If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send us the necessary fixes to add (...)

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Mandatory binaries: FFMpeg: the main encoder; it can transcode almost every type of video and audio file into formats readable on the Internet (see this tutorial for its installation); Oggz-tools: inspection tools for Ogg files; Mediainfo: retrieves information from most video and audio formats;
    Additional, optional binaries: flvtool2: (...)
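    As a concrete illustration of FFMpeg's role here, a transcode into one of the era's web-readable formats (Ogg Theora/Vorbis) could be assembled as below. The codec choices and quality values are assumptions for illustration, not SPIPmotion's actual settings:

```javascript
// Hypothetical sketch: argument list for transcoding an arbitrary source file
// into a browser-playable Ogg container (Theora video + Vorbis audio).
// The quality values are illustrative assumptions.
function transcodeToOggArgs(srcPath, outPath) {
  return [
    '-i', srcPath,
    '-c:v', 'libtheora', '-q:v', '6', // Theora video, mid-high quality
    '-c:a', 'libvorbis', '-q:a', '4', // Vorbis audio
    outPath,
  ];
}

console.log('ffmpeg ' + transcodeToOggArgs('source.avi', 'web.ogv').join(' '));
```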

On other websites (13128)

  • Using NReco FFMpegConverter in an Azure Function or in an Azure WebJob

    10 September 2020, by Manojb86

    Currently, I'm working on a POC that converts an H264 stream to an MP4 file. I'm using NReco FFMpegConverter for that purpose, and Azure Cloud Services to host my service.

    This cloud service captures the concurrent streams and converts each into its respective MP4 file. From testing the app so far, I realized it would be better to use an Azure Function or Azure WebJobs solely for the conversion of the H264 stream to an MP4 file; the cloud service would then have more room to handle concurrent streams.

    What I want to know: is it possible to use NReco FFMpegConverter in an Azure Function or Azure WebJobs without issues? Are there any special configurations to consider, given that FFmpeg runs in a separate thread with NReco FFMpegConverter?

  • Update jpg during live broadcast with ffmpeg

    13 March 2021, by FoxFr

    I've run into a problem with an ffmpeg stream: I've been searching for several hours for why my JPG isn't updated during my live broadcast. I want to broadcast a JPG, updated every 5 seconds, between other video inputs.

    • I create a new JPG every 5 seconds
    • I want to broadcast this new image every 5 seconds

    I can't find an option to refresh the input every second or so. Do you know of one?

    -loop 1  -i /home/pi/videopi/map/map.jpg

    I tried adding -update 1, without success.

    

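    One hedged workaround, assuming ffmpeg's image2 demuxer re-opens the file on each `-loop 1` iteration (behaviour that varies between ffmpeg versions): never overwrite `map.jpg` in place, but write the new frame under a temporary name and rename it over the target, so ffmpeg can never read a half-written JPEG. A minimal Node sketch of the atomic replacement:

```javascript
const { writeFileSync, renameSync, readFileSync } = require('fs');
const { join } = require('path');
const { tmpdir } = require('os');

// Write the fresh frame under a temporary name, then rename it over the
// target. rename() on the same filesystem is atomic, so a reader that
// re-opens the file between loop iterations never sees a partial JPEG.
function updateImageAtomically(targetPath, frameBytes) {
  const tmpPath = targetPath + '.tmp';
  writeFileSync(tmpPath, frameBytes);
  renameSync(tmpPath, targetPath);
}

const target = join(tmpdir(), 'map.jpg');
updateImageAtomically(target, Buffer.from('frame-1'));
console.log(readFileSync(target, 'utf8')); // prints "frame-1"
```
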

  • FFmpeg: concatenating videos with the same 25 fps results in an output file with 3.554 fps

    5 June 2024, by Kendra Broom

    I created an AWS Lambda function in Node.js 18 that uses a static, version 7 build of FFmpeg located in a Lambda layer. Unfortunately, it's just the ffmpeg build and doesn't include ffprobe.

    


    I have an mp4 video file in one S3 bucket and a wav audio file in a second S3 bucket. I'm uploading the output file to a third S3 bucket.

    Specs on the files (please let me know if any more info is needed):

    Audio: wav, 13 kbps, aac (LC), 6:28 duration

    Video: mp4, 1280x720 resolution, 25 fps, h264 codec, 3:27 duration

    Goal: create blank video to fill in the duration gaps so the full audio is covered before and after the mp4 video (using the timestamps and durations). Strip the mp4's audio and use the wav audio only. The output should be an mp4 with the wav audio playing over it: blank video for the first 27 seconds (based on the timestamps), then the mp4 video for 3:27, then blank video covering the rest of the audio until 6:28.

    Actual result: an mp4 file with a 3.554 frame rate and a 10:06 duration.

    import { S3Client, GetObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";
import { createWriteStream, createReadStream, promises as fsPromises } from 'fs';
import { exec } from 'child_process';
import { promisify } from 'util';
import { basename } from 'path';

const execAsync = promisify(exec);

const s3 = new S3Client({ region: 'us-east-1' });

async function downloadFileFromS3(bucket, key, downloadPath) {
    const getObjectParams = { Bucket: bucket, Key: key };
    const command = new GetObjectCommand(getObjectParams);
    const { Body } = await s3.send(command);
    return new Promise((resolve, reject) => {
        const fileStream = createWriteStream(downloadPath);
        Body.pipe(fileStream);
        Body.on('error', reject);
        fileStream.on('finish', resolve);
    });
}

async function uploadFileToS3(bucket, key, filePath) {
    const fileStream = createReadStream(filePath);
    const uploadParams = { Bucket: bucket, Key: key, Body: fileStream };
    try {
        await s3.send(new PutObjectCommand(uploadParams));
        console.log(`File uploaded successfully to ${bucket}/${key}`);
    } catch (err) {
        console.error("Error uploading file: ", err);
        throw new Error('Failed to upload file to S3');
    }
}

function parseDuration(durationStr) {
    const parts = durationStr.split(':');
    return parseInt(parts[0]) * 3600 + parseInt(parts[1]) * 60 + parseFloat(parts[2]);
}

export async function handler(event) {
    const videoBucket = "video-interaction-content";
    const videoKey = event.videoKey;
    const audioBucket = "audio-call-recordings";
    const audioKey = event.audioKey;
    const outputBucket = "synched-audio-video";
    const outputKey = `combined_${basename(videoKey, '.mp4')}.mp4`;

    const audioStartSeconds = new Date(event.audioStart).getTime() / 1000;
    const videoStartSeconds = new Date(event.videoStart).getTime() / 1000;
    const audioDurationSeconds = event.audioDuration / 1000;
    const timeDifference = audioStartSeconds - videoStartSeconds;

    try {
        const videoPath = `/tmp/${basename(videoKey)}`;
        const audioPath = `/tmp/${basename(audioKey)}`;
        await downloadFileFromS3(videoBucket, videoKey, videoPath);
        await downloadFileFromS3(audioBucket, audioKey, audioPath);

        //Initialize file list with video
        let filelist = [`file '${videoPath}'`];
        let totalVideoDuration = 0; // Initialize total video duration

        // Create first blank video if needed
        if (timeDifference < 0) {
            const blankVideoDuration = Math.abs(timeDifference);
            const blankVideoPath = `/tmp/blank_video.mp4`;
            await execAsync(`/opt/bin/ffmpeg -f lavfi -i color=c=black:s=1280x720:r=25 -c:v libx264 -t ${blankVideoDuration} ${blankVideoPath}`);
            //Add first blank video first in file list
            filelist.unshift(`file '${blankVideoPath}'`);
            totalVideoDuration += blankVideoDuration;
            console.log(`First blank video created with duration: ${blankVideoDuration} seconds`);
        }
        
        const videoInfo = await execAsync(`/opt/bin/ffmpeg -i ${videoPath} -f null -`);
        const videoDurationMatch = videoInfo.stderr.match(/Duration: ([\d:.]+)/);
        const videoDuration = videoDurationMatch ? parseDuration(videoDurationMatch[1]) : 0;
        totalVideoDuration += videoDuration;

        // Calculate additional blank video duration
        const additionalBlankVideoDuration = audioDurationSeconds - totalVideoDuration;
        if (additionalBlankVideoDuration > 0) {
            const additionalBlankVideoPath = `/tmp/additional_blank_video.mp4`;
            await execAsync(`/opt/bin/ffmpeg -f lavfi -i color=c=black:s=1280x720:r=25 -c:v libx264 -t ${additionalBlankVideoDuration} ${additionalBlankVideoPath}`);
            //Add to the end of the file list
            filelist.push(`file '${additionalBlankVideoPath}'`);
            console.log(`Additional blank video created with duration: ${additionalBlankVideoDuration} seconds`);
        }

        // Create and write the file list to disk
        const concatFilePath = '/tmp/filelist.txt';
        await fsPromises.writeFile(concatFilePath, filelist.join('\n'));

        const extendedVideoPath = `/tmp/extended_${basename(videoKey)}`;
        //await execAsync(`/opt/bin/ffmpeg -f concat -safe 0 -i /tmp/filelist.txt -c copy ${extendedVideoPath}`);
        
        // Use -vsync vfr to adjust frame timing without full re-encoding
        await execAsync(`/opt/bin/ffmpeg -f concat -safe 0 -i ${concatFilePath} -c copy -vsync vfr ${extendedVideoPath}`);

        const outputPath = `/tmp/output_${basename(videoKey, '.mp4')}.mp4`;
        //await execAsync(`/opt/bin/ffmpeg -i ${extendedVideoPath} -i ${audioPath} -map 0:v:0 -map 1:a:0 -c:v copy -c:a aac -b:a 192k -shortest ${outputPath}`);

        await execAsync(`/opt/bin/ffmpeg -i ${extendedVideoPath} -i ${audioPath} -map 0:v:0 -map 1:a:0 -c:v copy -c:a aac -b:a 192k -shortest -r 25 ${outputPath}`);
        console.log('Video and audio have been merged successfully');

        await uploadFileToS3(outputBucket, outputKey, outputPath);
        console.log('File upload complete.');

        return { statusCode: 200, body: JSON.stringify('Video and audio have been merged successfully.') };
    } catch (error) {
        console.error('Error in Lambda function:', error);
        return { statusCode: 500, body: JSON.stringify('Failed to process video and audio.') };
    }
}


    


    Attempts: I've tried re-encoding the concatenated file, but the Lambda function times out. I hoped that by creating the blank video at 25 fps and with all the other specs of the original mp4, I wouldn't have to re-encode the concatenated file. Obviously something is wrong, though. In the commented-out code you can see I tried specifying 25 fps or not, and also tried with and without -vsync. I'm new to FFmpeg, so all tips are appreciated!
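    One plausible cause, offered as a sketch rather than a confirmed diagnosis: `-f concat -c copy` only produces sane timestamps when every segment shares the same codec parameters and timebase, and lavfi-generated blank segments usually don't match the camera mp4. Pinning those parameters when creating the blank segments can keep the copy-concat consistent; the concrete values below are assumptions that should be replaced with whatever ffmpeg's `-i` banner reports for the real source file:

```javascript
// Hypothetical sketch: build the blank-segment arguments with the stream
// parameters pinned to match the real video. The concrete values (yuv420p,
// profile high, mp4 track timescale 90000) are assumptions to be replaced
// with what ffmpeg's "-i" banner reports for the source file.
function blankSegmentArgs(durationSeconds, outPath) {
  return [
    '-f', 'lavfi', '-i', 'color=c=black:s=1280x720:r=25',
    '-t', String(durationSeconds),
    '-c:v', 'libx264',
    '-pix_fmt', 'yuv420p',              // match the source pixel format
    '-profile:v', 'high',               // match the source H.264 profile
    '-video_track_timescale', '90000',  // match the source mp4 timebase
    outPath,
  ];
}

console.log('ffmpeg ' + blankSegmentArgs(27, '/tmp/blank.mp4').join(' '));
```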