
Media (3)

Keyword: - Tags - /pdf

Other articles (104)

  • Submit improvements and additional plugins

    10 April 2011

    If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the official distribution will be considered.
    You can use the development mailing list to announce it or to ask for help with building the plugin. Since MediaSPIP is based on SPIP, it is also possible to use SPIP's SPIP-zone mailing list to (...)

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites for publishing documents of all types.
    It creates "media", namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a so-called "media" article;

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, compared with the channel sites, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin to manage registrations and requests to create a shared-hosting instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin required by inscription3 (...)

On other sites (10117)

  • FFMPEG Concatenating videos with same 25fps results in output file with 3.554fps

    5 June 2024, by Kendra Broom

    I created an AWS Lambda function in Node.js 18 that uses a static version 7 build of FFmpeg located in a Lambda layer. Unfortunately it is just the ffmpeg build and doesn't include ffprobe.

    


    I have an mp4 video file in one S3 bucket and a wav audio file in a second S3 bucket. I'm uploading the output file to a third S3 bucket.

    


    Specs on the files (please let me know if any more info is needed):

    Audio:
    wav, 13 kbps, aac (LC), 6:28 duration

    Video:
    mp4, 1280x720 resolution, 25 fps frame rate, h264 codec, 3:27 duration

    


    Goal:
    Create blank video to fill the duration gaps so the full audio is covered before and after the mp4 video (using the timestamps and durations). Strip the mp4's own audio and use the wav audio only. The output should be an mp4 with the wav audio playing over it: blank video for the first 27 seconds (based on the timestamps), then the mp4 video for 3:27, then blank video again to cover the rest of the audio up to 6:28.
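
    To make that timeline concrete, here is the arithmetic implied by the specs above (illustrative numbers derived from the post, not part of it):

// Timeline implied by the file specs and timestamps described above.
const audioDuration = 6 * 60 + 28;  // 388 s of wav audio
const videoDuration = 3 * 60 + 27;  // 207 s of mp4 video
const leadingBlank = 27;            // the audio starts 27 s before the video
const trailingBlank = audioDuration - leadingBlank - videoDuration;  // 154 s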

    


    Actual Result:
    An mp4 file with a 3.554 fps frame rate and a 10:06 duration.

    


    import { S3Client, GetObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";
import { createWriteStream, createReadStream, promises as fsPromises } from 'fs';
import { exec } from 'child_process';
import { promisify } from 'util';
import { basename } from 'path';

const execAsync = promisify(exec);

const s3 = new S3Client({ region: 'us-east-1' });

async function downloadFileFromS3(bucket, key, downloadPath) {
    const getObjectParams = { Bucket: bucket, Key: key };
    const command = new GetObjectCommand(getObjectParams);
    const { Body } = await s3.send(command);
    return new Promise((resolve, reject) => {
        const fileStream = createWriteStream(downloadPath);
        Body.pipe(fileStream);
        Body.on('error', reject);
        fileStream.on('finish', resolve);
    });
}

async function uploadFileToS3(bucket, key, filePath) {
    const fileStream = createReadStream(filePath);
    const uploadParams = { Bucket: bucket, Key: key, Body: fileStream };
    try {
        await s3.send(new PutObjectCommand(uploadParams));
        console.log(`File uploaded successfully to ${bucket}/${key}`);
    } catch (err) {
        console.error("Error uploading file: ", err);
        throw new Error('Failed to upload file to S3');
    }
}

function parseDuration(durationStr) {
    const parts = durationStr.split(':');
    return parseInt(parts[0]) * 3600 + parseInt(parts[1]) * 60 + parseFloat(parts[2]);
}
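// For reference (added note): parseDuration("00:03:27.00") === 207.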

export async function handler(event) {
    const videoBucket = "video-interaction-content";
    const videoKey = event.videoKey;
    const audioBucket = "audio-call-recordings";
    const audioKey = event.audioKey;
    const outputBucket = "synched-audio-video";
    const outputKey = `combined_${basename(videoKey, '.mp4')}.mp4`;

    const audioStartSeconds = new Date(event.audioStart).getTime() / 1000;
    const videoStartSeconds = new Date(event.videoStart).getTime() / 1000;
    const audioDurationSeconds = event.audioDuration / 1000;
    const timeDifference = audioStartSeconds - videoStartSeconds;

    try {
        const videoPath = `/tmp/${basename(videoKey)}`;
        const audioPath = `/tmp/${basename(audioKey)}`;
        await downloadFileFromS3(videoBucket, videoKey, videoPath);
        await downloadFileFromS3(audioBucket, audioKey, audioPath);

        //Initialize file list with video
        let filelist = [`file '${videoPath}'`];
        let totalVideoDuration = 0; // Initialize total video duration

        // Create first blank video if needed
        if (timeDifference < 0) {
            const blankVideoDuration = Math.abs(timeDifference);
            const blankVideoPath = `/tmp/blank_video.mp4`;
            await execAsync(`/opt/bin/ffmpeg -f lavfi -i color=c=black:s=1280x720:r=25 -c:v libx264 -t ${blankVideoDuration} ${blankVideoPath}`);
            //Add first blank video first in file list
            filelist.unshift(`file '${blankVideoPath}'`);
            totalVideoDuration += blankVideoDuration;
            console.log(`First blank video created with duration: ${blankVideoDuration} seconds`);
        }
        
        // There is no ffprobe in the layer, so the mp4 duration is read from the
        // "Duration:" line that ffmpeg prints to stderr when run with "-f null -".
        const videoInfo = await execAsync(`/opt/bin/ffmpeg -i ${videoPath} -f null -`);
        const videoDurationMatch = videoInfo.stderr.match(/Duration: ([\d:.]+)/);
        const videoDuration = videoDurationMatch ? parseDuration(videoDurationMatch[1]) : 0;
        totalVideoDuration += videoDuration;

        // Calculate additional blank video duration
        const additionalBlankVideoDuration = audioDurationSeconds - totalVideoDuration;
        if (additionalBlankVideoDuration > 0) {
            const additionalBlankVideoPath = `/tmp/additional_blank_video.mp4`;
            await execAsync(`/opt/bin/ffmpeg -f lavfi -i color=c=black:s=1280x720:r=25 -c:v libx264 -t ${additionalBlankVideoDuration} ${additionalBlankVideoPath}`);
            //Add to the end of the file list
            filelist.push(`file '${additionalBlankVideoPath}'`);
            console.log(`Additional blank video created with duration: ${additionalBlankVideoDuration} seconds`);
        }

        // Create and write the file list to disk
        const concatFilePath = '/tmp/filelist.txt';
        await fsPromises.writeFile(concatFilePath, filelist.join('\n'));
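        // For reference (added note): when both gaps are needed, filelist.txt ends
        // up looking like:
        //   file '/tmp/blank_video.mp4'
        //   file '/tmp/<original video>.mp4'
        //   file '/tmp/additional_blank_video.mp4'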

        const extendedVideoPath = `/tmp/extended_${basename(videoKey)}`;
        //await execAsync(`/opt/bin/ffmpeg -f concat -safe 0 -i /tmp/filelist.txt -c copy ${extendedVideoPath}`);
        
        // Use -vsync vfr to adjust frame timing without full re-encoding
        await execAsync(`/opt/bin/ffmpeg -f concat -safe 0 -i ${concatFilePath} -c copy -vsync vfr ${extendedVideoPath}`);

        const outputPath = `/tmp/output_${basename(videoKey, '.mp4')}.mp4`;
        //await execAsync(`/opt/bin/ffmpeg -i ${extendedVideoPath} -i ${audioPath} -map 0:v:0 -map 1:a:0 -c:v copy -c:a aac -b:a 192k -shortest ${outputPath}`);

        await execAsync(`/opt/bin/ffmpeg -i ${extendedVideoPath} -i ${audioPath} -map 0:v:0 -map 1:a:0 -c:v copy -c:a aac -b:a 192k -shortest -r 25 ${outputPath}`);
        console.log('Video and audio have been merged successfully');

        await uploadFileToS3(outputBucket, outputKey, outputPath);
        console.log('File upload complete.');

        return { statusCode: 200, body: JSON.stringify('Video and audio have been merged successfully.') };
    } catch (error) {
        console.error('Error in Lambda function:', error);
        return { statusCode: 500, body: JSON.stringify('Failed to process video and audio.') };
    }
}
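
For reference, the handler above expects an invocation event shaped roughly like this (the field names are taken from the code; the concrete values are made up for illustration):

// Hypothetical invocation payload, matching the fields the handler reads.
const exampleEvent = {
  videoKey: "meeting-recording.mp4",      // object in video-interaction-content
  audioKey: "call-audio.wav",             // object in audio-call-recordings
  audioStart: "2024-05-01T10:00:00.000Z", // wav recording starts first
  videoStart: "2024-05-01T10:00:27.000Z", // mp4 starts 27 s later
  audioDuration: 388000                   // in milliseconds (6:28)
};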


    


    Attempts:
    I've tried re-encoding the concatenated file, but the Lambda function times out. I hoped that by creating the blank video at 25 fps and with all the other specs of the original mp4, I wouldn't have to re-encode the concatenated result. Obviously something is wrong, though. In the commented-out code you can see that I tried specifying -r 25 or not, and also tried with and without -vsync. I'm new to FFmpeg, so all tips are appreciated!
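
    For what it's worth, one direction that stays within the stream-copy constraint (a sketch under assumptions, not a verified fix): the concat demuxer only concatenates cleanly with -c copy when every listed file shares the same codec parameters and timebase, so the blank clips could be generated to match the source mp4 more closely, for example by adjusting the existing blank-clip command:

// Hedged sketch: generate the blank clip with the same pixel format, H.264 profile
// and mp4 track timescale as a typical 25 fps H.264 source, so the concat demuxer's
// stream copy is not stitching mismatched parameters. The profile and timescale
// values below are assumptions and should really be read from the actual source file.
await execAsync(
  `/opt/bin/ffmpeg -f lavfi -i color=c=black:s=1280x720:r=25 ` +
  `-c:v libx264 -pix_fmt yuv420p -profile:v high -video_track_timescale 12800 ` +
  `-t ${blankVideoDuration} ${blankVideoPath}`
);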

    


  • React Native (Android): Download mp3 file

    21 February 2024, by Batuhan Fındık

    I get the YouTube video link from the UI. I download the video from this link, convert it to mp3, and save the mp3 to the phone. The song opens in WhatsApp on the phone, but it doesn't open in the mp3 player. The file is not corrupted, since it plays in WhatsApp. Why do you think the mp3 player won't open it? Could it be the file information? I tried setting some file information, but it still won't open. For example, songs that do play in the mp3 player have "from" information; my song file has none. I tried to add it, but it wasn't added.

    


    .NET 8 API endpoint:

    


[HttpPost("ConvertVideoToMp3")]
public async Task<IActionResult> ConvertVideoToMp3(Mp3 data)
{
    try
    {
        string videoId = GetYoutubeVideoId(data.VideoUrl);

        var streamInfoSet = await _youtubeClient.Videos.Streams.GetManifestAsync(videoId);
        var videoStreamInfo = streamInfoSet.GetAudioOnlyStreams().GetWithHighestBitrate();

        if (videoStreamInfo != null)
        {
            var videoStream = await _youtubeClient.Videos.Streams.GetAsync(videoStreamInfo);
            var memoryStream = new MemoryStream();

            await videoStream.CopyToAsync(memoryStream);
            memoryStream.Seek(0, SeekOrigin.Begin);

            var videoFilePath = $"{videoId}.mp4";
            await System.IO.File.WriteAllBytesAsync(videoFilePath, memoryStream.ToArray());

            var mp3FilePath = $"{videoId}.mp3";
            var ffmpegProcess = Process.Start(new ProcessStartInfo
            {
                FileName = "ffmpeg",
                Arguments = $"-i \"{videoFilePath}\" -vn -acodec libmp3lame -ab 128k -id3v2_version 3 -metadata artist=\"YourArtistName\" -metadata title=\"YourTitle\" -metadata from=\"youtube\" \"{mp3FilePath}\"",
                RedirectStandardError = true,
                UseShellExecute = false,
                CreateNoWindow = true
            });

            await ffmpegProcess.WaitForExitAsync();

            var file = TagLib.File.Create(mp3FilePath);

            file.Tag.Artists = new string[] { "YourArtistName" };
            file.Tag.Title = "YourTitle";
            file.Tag.Album = "YourAlbumName";
            file.Tag.Comment = "Source: youtube";

            file.Save();

            var mp3Bytes = await System.IO.File.ReadAllBytesAsync(mp3FilePath);

            System.IO.File.Delete(videoFilePath);
            System.IO.File.Delete(mp3FilePath);

            return File(mp3Bytes, "audio/mpeg", $"{videoId}.mp3");
        }
        else
        {
            return NotFound("Video stream not found");
        }
    }
    catch (Exception ex)
    {
        return StatusCode(500, $"An error occurred: {ex.Message}");
    }
}


    React Native:

const handleConvertAndDownload = async () => {
  try {
    const url = 'http://192.168.1.5:8080/api/Mp3/ConvertVideoToMp3';
    const fileName = 'example';
    const newFileName = generateUniqueSongName(fileName);
    const filePath = RNFS.DownloadDirectoryPath + '/' + newFileName;

    fetch(url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ videoUrl: videoUrl }),
    })
      .then((response) => {
        if (!response.ok) {
          Alert.alert('Error', 'Network');
          throw new Error('Network response was not ok');
        }
        return response.blob();
      })
      .then((blob) => {
        return new Promise((resolve, reject) => {
          const reader = new FileReader();
          reader.onloadend = () => {
            resolve(reader.result.split(',')[1]);
          };
          reader.onerror = reject;
          reader.readAsDataURL(blob);
        });
      })
      .then((base64Data) => {
        // Check whether the file already exists
        return RNFS.exists(filePath)
          .then((exists) => {
            if (exists) {
              console.log('File already exists');
              return RNFS.writeFile(filePath, base64Data, 'base64', 'append');
            } else {
              console.log('File does not exist');
              return RNFS.writeFile(filePath, base64Data, 'base64');
            }
          })
          .catch((error) => {
            console.error('Error checking file existence:', error);
            throw error;
          });
      })
      .then(() => {
        Alert.alert('Success', 'MP3 file downloaded successfully.');
        console.log('File downloaded successfully!');
      })
      .catch((error) => {
        Alert.alert('Error', error.message);
        console.error('Error downloading file:', error);
      });
  } catch (error) {
    Alert.alert('Error', error.message);
    console.error(error);
  }
};
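
    One hedged aside, not from the original question: on Android, music players typically list whatever the MediaStore index knows about, while WhatsApp opens the file directly by path, which would explain the difference in behaviour. react-native-fs has an Android-only scanFile call that could be invoked once the write succeeds; indexDownloadedFile below is a hypothetical helper around it:

// Hedged sketch (Android only): ask the media scanner to index a freshly written
// file so that MediaStore-based players can see it. WhatsApp opens the file by
// path and does not depend on this index.
const indexDownloadedFile = (path) =>
  RNFS.scanFile(path).catch((err) => console.warn('scanFile failed:', err));

// e.g. call indexDownloadedFile(filePath) right after the RNFS.writeFile step succeeds.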


  • How to Correctly Implement ffmpeg Complex Filters in Node.js for Image Processing?

    24 January 2024, by Luke

    Problem:

    I am trying to add filters and transitions between the images in my slideshow array, and I'm struggling to apply the proper filters. For example, I get errors like this:

{
  "errorType": "Error",
  "errorMessage": "ffmpeg exited with code 234: Failed to set value 'fade=type=in:start_time=0:duration=1,zoompan=z=zoom+0.002:d=120:x=if(gte(zoom,1.2),x,x+1):y=if(gte(zoom,1.2),y,y+1)' for option 'filter_complex': Invalid argument\nError parsing global options: Invalid argument\n",
  "trace": [
    "Error: ffmpeg exited with code 234: Failed to set value 'fade=type=in:start_time=0:duration=1,zoompan=z=zoom+0.002:d=120:x=if(gte(zoom,1.2),x,x+1):y=if(gte(zoom,1.2),y,y+1)' for option 'filter_complex': Invalid argument",
    "Error parsing global options: Invalid argument",
    "",
    "    at ChildProcess.<anonymous> (/opt/nodejs/node_modules/fluent-ffmpeg/lib/processor.js:182:22)",
    "    at ChildProcess.emit (node:events:517:28)",
    "    at ChildProcess._handle.onexit (node:internal/child_process:292:12)"
  ]
}
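
    A side note on that message, based on ffmpeg's filtergraph grammar rather than anything in the original post: the parser treats unquoted commas as separators between filters, so the commas inside if(gte(zoom,1.2),x,x+1) terminate the option value early. Expression values normally have to be single-quoted inside the filter string, along these lines:

// Hedged illustration: the same zoompan options with the expression values quoted
// so that their internal commas survive filtergraph parsing.
const zoompanQuoted =
  "zoompan=z='zoom+0.002':d=120:x='if(gte(zoom,1.2),x,x+1)':y='if(gte(zoom,1.2),y,y+1)'";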

    Lambda Function Code:

async function concat(bucketName, imageKeys) {
  const imageStreams = await Promise.all(
    imageKeys.map(async (key, i) => {
      const command = new GetObjectCommand({ Bucket: bucketName, Key: key });
      const response = await s3.send(command);
      // Define the temporary file path based on the index
      const tempFilePath = `/tmp/${i}.png`;

      // Write the image data to the temporary file
      await fs.writeFile(tempFilePath, response.Body);

      // Return the file path to be used later
      return tempFilePath;
    })
  );

  // Create a file list content with durations
  let fileContent = "";
  for (let i = 0; i < imageStreams.length; i++) {
    fileContent += `file '${imageStreams[i]}'\nduration 1\n`;

    // Check if it's the last image, and if so, add it again
    if (i === imageStreams.length - 1) {
      fileContent += `file '${imageStreams[i]}'\nduration 1\n`;
    }
  }

  // Define the file path for the file list
  const fileListPath = "/tmp/file_list.txt";

  // Write the file list content to the file
  try {
    await fs.writeFile(fileListPath, fileContent);
  } catch (error) {
    console.error("Error writing file list:", error);
    throw error;
  }

  // Create a complex filter to add zooms and pans
  // Simplified filter example
  let complexFilter = [
    // Example of a fade transition
    {
      filter: 'fade',
      options: { type: 'in', start_time: 0, duration: 1 },
      inputs: '0:v', // first video stream
      outputs: 'fade0'
    },
    // Example of dynamic zoompan
    {
      filter: 'zoompan',
      options: {
        z: 'zoom+0.002',
        d: 120, // duration for this image
        x: 'if(gte(zoom,1.2),x,x+1)', // dynamic x position
        y: 'if(gte(zoom,1.2),y,y+1)' // dynamic y position
      },
      inputs: 'fade0',
      outputs: 'zoom0'
    }
    // Continue adding filters for each image
  ];

  let filterString = complexFilter
    .map(
      (f) =>
        `${f.filter}=${Object.entries(f.options)
          .map(([key, value]) => `${key}=${value}`)
          .join(":")}`
    )
    .join(",");

  console.log("Filter String:", filterString);

  return new Promise((resolve, reject) => {
    ffmpeg()
      .input(fileListPath)
      .complexFilter(filterString)
      .inputOptions(["-f concat", "-safe 0"])
      .outputOptions("-c copy")
      .outputOptions("-c:v libx264")
      .outputOptions("-pix_fmt yuv420p")
      .outputOptions("-r 30")
      .on("end", () => {
        resolve();
      })
      .on("error", (err) => {
        console.error("Error during video concatenation:", err);
        reject(err);
      })
      .saveToFile("/tmp/output.mp4");
  });
}

    Filter String Console Log:

    Filter String: fade=type=in:start_time=0:duration=1,zoompan=z=zoom+0.002:d=120:x=if(gte(zoom,1.2),x,x+1):y=if(gte(zoom,1.2),y,y+1)

    Questions:

    1. What is the correct syntax for implementing complex filters like zoompan and fade in ffmpeg when used in a Node.js environment?

    2. How do I ensure the filters are applied correctly to each image in the sequence?

    3. Is there a better way to dynamically generate these filters based on the number of images or their content?

    Any insights or examples of correctly implementing this would be greatly appreciated!
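
    For what it's worth, a minimal sketch of the array form that fluent-ffmpeg's complexFilter() accepts, built from the complexFilter objects already defined above (treat it as an illustration under assumptions, not a verified fix; in particular, -c copy cannot be combined with filtering, so it is dropped here, and whether the concat-demuxer input behaves as intended with zoompan is a separate question):

// Hedged sketch: pass the filter specs as objects and let fluent-ffmpeg build the
// -filter_complex string itself; the second argument names the output pad to map.
return new Promise((resolve, reject) => {
  ffmpeg()
    .input(fileListPath)
    .inputOptions(["-f concat", "-safe 0"])
    .complexFilter(complexFilter, "zoom0")
    .outputOptions(["-c:v libx264", "-pix_fmt yuv420p", "-r 30"])
    .on("end", resolve)
    .on("error", (err) => reject(err))
    .saveToFile("/tmp/output.mp4");
});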
