
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011
Updated: October 2011
Language: English
Type: Text
Other articles (102)
-
Writing a news item
21 June 2013
Present the changes to your MediaSPIP, or news about your projects, on your MediaSPIP via the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customise the form used to create a news item.
News-creation form: for a document of the news type, the fields offered by default are: publication date (customise the publication date) (...)
-
User profiles
12 April 2011
Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or later. If necessary, contact your MédiaSpip administrator to find out.
On other sites (10210)
-
Error building ffmpeg on macOS High Sierra 10.13: "workspace/bin/ffmpeg: No such file or directory"
14 March 2023, by Martin
Hello, I am trying to create a shell script, buildffmpeg.sh, which, when run, downloads and builds ffmpeg, so that the process of making a custom ffmpeg build can be automated; the end result is an ffmpeg and an ffprobe executable.

The script below has worked perfectly on both macOS and Linux, but when I tested it on my older macOS 10.13 machine, I first got an error about my ffmpeg snapshot URL being too old:

FFMPEG_URL="https://git.ffmpeg.org/gitweb/ffmpeg.git/snapshot/74c4c539538e36d8df02de2484b045010d292f2c.tar.gz"

so I updated the variable so that it 'should' point to ffmpeg 6.0 (the most recent version, though I am not sure the link is correct):
FFMPEG_URL="https://git.ffmpeg.org/gitweb/ffmpeg.git/snapshot/adb4688bfb0652b2ffa5bc29e53761e27e1a3b3e.tar.gz"


When I run the script below in my Mac terminal with the command
$ ./buildffmpeg.sh
it prints out '11' and then fails with an error:

...
INSTALL libavutil/ffversion.h
INSTALL libavutil/libavutil.pc
~11~
/Library/Developer/CommandLineTools/usr/bin/objdump: '/Users/apple/Documents/projects/buildffmpeghighsierra/workspace/bin/ffmpeg': No such file or directory



With the error being
workspace/bin/ffmpeg': No such file or directory


Is there something wrong with how my script builds ffmpeg?


#!/bin/bash

set -e

CWD=$(pwd)
PACKAGES="$CWD/packages"
WORKSPACE="$CWD/workspace"
ADDITIONAL_CONFIGURE_OPTIONS=""


mkdir -p "$PACKAGES"
mkdir -p "$WORKSPACE"
echo '~0~'
FFMPEG_TAG="$1"
FFMPEG_URL="https://git.ffmpeg.org/gitweb/ffmpeg.git/snapshot/adb4688bfb0652b2ffa5bc29e53761e27e1a3b3e.tar.gz"
echo '~1~'
FFMPEG_ARCHIVE="$PACKAGES/ffmpeg.tar.gz"
echo '~2~'
if [ ! -f "$FFMPEG_ARCHIVE" ]; then
 echo "Downloading tag ${FFMPEG_TAG}..."
 echo "~2.1~ FFMPEG_ARCHIVE=$FFMPEG_ARCHIVE"
 echo "~2.2~ FFMPEG_URL=$FFMPEG_URL"
 curl -L -k -o "$FFMPEG_ARCHIVE" "$FFMPEG_URL"
fi
echo '~3~'
EXTRACTED_DIR="$PACKAGES/extracted"
echo '~4~'
mkdir -p "$EXTRACTED_DIR"
echo '~5~'
echo "Extracting..."
tar -xf "$FFMPEG_ARCHIVE" --strip-components=1 -C "$EXTRACTED_DIR"
echo '~6~'
cd "$EXTRACTED_DIR"
echo '~7~'
echo "Building..."
echo '~8~'
# Min electron supported version
MACOS_MIN="10.10"
echo '~9~'
./configure $ADDITIONAL_CONFIGURE_OPTIONS \
 --pkgconfigdir="$WORKSPACE/lib/pkgconfig" \
 --pkg-config-flags="--static" \
 --extra-cflags="-I$WORKSPACE/include -mmacosx-version-min=${MACOS_MIN}" \
 --extra-ldflags="-L$WORKSPACE/lib -mmacosx-version-min=${MACOS_MIN}" \
 --extra-libs="-lpthread -lm" \
 --enable-static \
 --disable-securetransport \
 --disable-debug \
 --disable-shared \
 --disable-ffplay \
 --disable-lzma \
 --disable-doc \
 --enable-version3 \
 --enable-pthreads \
 --enable-runtime-cpudetect \
 --enable-avfilter \
 --enable-filters \
 --disable-libxcb \
 --enable-gpl \
 --disable-libass \
 --enable-libmp3lame \
 --enable-libx264 
echo '~10~'
make -j 4
echo '~11~'
make install
echo '~11~'
otool -L "$WORKSPACE/bin/ffmpeg"
echo '~12~'
otool -L "$WORKSPACE/bin/ffprobe"
echo '~13~'
echo "Building done. The binaries can be found here: $WORKSPACE/bin/ffmpeg $WORKSPACE/bin/ffprobe"
echo '~14~'
mkdir ffmpeg-mac/ 
echo '~15~'
cp -r "$WORKSPACE/bin/" "$CWD/ffmpeg-mac/"
echo '~16~'
rm -rf "$PACKAGES"
echo '~17~'
rm -rf "$WORKSPACE"
echo '~18~'
exit 0
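One hedged guess at the cause, based only on reading the script above and not on reproducing the failure: "./configure" is never given "--prefix", and FFmpeg's configure installs to /usr/local by default, so "make install" would never create "$WORKSPACE/bin/ffmpeg", which matches the later otool/objdump failure. A minimal sketch of the amended flag set (values other than "--prefix" are copied from the script; the fix itself is unverified):

```shell
#!/bin/bash
# Hypothetical fix sketch: pass --prefix so that `make install` targets
# $WORKSPACE. Without it, FFmpeg's configure defaults to /usr/local, and
# "$WORKSPACE/bin/ffmpeg" never exists for the later otool/cp steps.
WORKSPACE="$(pwd)/workspace"
CONFIGURE_FLAGS=(
  --prefix="$WORKSPACE"                       # the key addition
  --pkgconfigdir="$WORKSPACE/lib/pkgconfig"   # unchanged from the script
  --pkg-config-flags="--static"
  --enable-static
  --disable-shared
)
# ./configure "${CONFIGURE_FLAGS[@]}"   # then make && make install as before
printf '%s\n' "${CONFIGURE_FLAGS[@]}"
```

The remaining flags from the original configure call would be appended unchanged; only the install destination moves.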



-
How do I merge images and an audio file into a single video?
3 January 2024, by Anil
I am creating a web application using Next.js.
I want to create a video by combining three images and an audio track, in such a way that each image is displayed for an equal duration and the three together match the length of the audio. It all happens locally, in the browser.


This is my code for converting images and audio into a video.


import { FFmpeg } from '@ffmpeg/ffmpeg';
import { fetchFile, toBlobURL } from '@ffmpeg/util';

export async function createVideo(ImageFiles, audioFile) {
  try {
    const baseURL = 'https://unpkg.com/@ffmpeg/core@0.12.4/dist/umd';
    const ffmpeg = new FFmpeg({ log: true });

    console.log('Loading ffmpeg core');
    await ffmpeg.load({
      corePath: await toBlobURL(`${baseURL}/ffmpeg-core.js`, 'text/javascript'),
      wasmPath: await toBlobURL(`${baseURL}/ffmpeg-core.wasm`, 'application/wasm'),
    });
    await ffmpeg.load();
    console.log('Finished loading ffmpeg core');

    for (let i = 0; i < ImageFiles.length; i++) {
      ffmpeg.writeFile(`image${i + 1}.jpg`, await fetchFile(ImageFiles[i].imageUrl));
    }

    ffmpeg.FS('writeFile', 'audio.mp3', await fetchFile(audioFile));

    const durationPerImage = (await getAudioDuration(ffmpeg, 'audio.mp3')) / ImageFiles.length;
    let filterComplex = '';
    for (let i = 0; i < ImageFiles.length - 1; i++) {
      filterComplex += `[${i}:v]trim=duration=${durationPerImage},setpts=PTS-STARTPTS[v${i}]; `;
    }
    filterComplex += `${ImageFiles.slice(0, -1).map((_, i) => `[v${i}]`).join('')}concat=n=${ImageFiles.length - 1}:v=1:a=0,format=yuv420p[v];`;

    await ffmpeg.run(
      '-framerate', '1', '-loop', '1', '-t', durationPerImage, '-i', 'image%d.jpg', '-i', 'audio.mp3',
      '-filter_complex', filterComplex, '-map', '[v]', '-map', '1:a',
      '-c:v', 'libx264', '-tune', 'stillimage', '-c:a', 'aac', '-b:a', '192k', 'output.mp4'
    );

    const data = ffmpeg.FS('readFile', 'output.mp4');

    const videoURL = URL.createObjectURL(new Blob([data.buffer], { type: 'video/mp4' }));
    return videoURL;
  } catch (error) {
    console.error('Error creating video:', error);
    throw new Error('Failed to create video');
  }
}

async function getAudioDuration(ffmpeg, audioFilename) {
  await ffmpeg.run('-i', audioFilename, '-show_entries', 'format=duration', '-of', 'default=noprint_wrappers=1:nokey=1', 'duration.txt');
  const data = ffmpeg.FS('readFile', 'duration.txt');
  const durationString = new TextDecoder().decode(data);
  const duration = Math.floor(parseFloat(durationString.trim()));
  return duration;
}



I am getting this error:


CreateVideo.js:65 Error creating video: RuntimeError: Aborted(LinkError: WebAssembly.instantiate(): Import #70 module="a" function="qa": function import requires a callable). Build with -sASSERTIONS for more info.



Can someone help me with this?
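Two hedged observations on the code above, neither confirmed against the actual failure. First, the '@ffmpeg/core@0.12.4' build is paired with 0.11-era calls: in the 0.12 API, load() takes coreURL/wasmURL rather than corePath/wasmPath, the ffmpeg.FS('writeFile', ...) and ffmpeg.run(...) calls were replaced by writeFile()/exec()/readFile(), and load() should be called only once. A core/wrapper mismatch like that is a plausible source of the WebAssembly LinkError. Second, the filter-graph loop stops at ImageFiles.length - 1, so the last image is never shown. A pure, illustrative helper (the function name is mine, not from any library) that builds the graph for all N images:

```javascript
// Illustrative sketch: build a filter_complex string covering all N image
// inputs, instead of stopping at N - 1 as the loop in the post does.
// Assumes each image is fed as a looping input [i:v], as in the post.
function buildFilterComplex(imageCount, durationPerImage) {
  let graph = '';
  for (let i = 0; i < imageCount; i++) {
    // Trim each looping image input to its share of the audio and reset timestamps.
    graph += `[${i}:v]trim=duration=${durationPerImage},setpts=PTS-STARTPTS[v${i}]; `;
  }
  // Concatenate all N trimmed segments, then force a widely supported pixel format.
  const pads = Array.from({ length: imageCount }, (_, i) => `[v${i}]`).join('');
  graph += `${pads}concat=n=${imageCount}:v=1:a=0,format=yuv420p[v]`;
  return graph;
}

console.log(buildFilterComplex(3, 10));
```

With three images and 10 seconds each, this produces a graph whose concat stage reads "concat=n=3" and includes the pad "[v2]" for the third image, which the original string never contains.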


-
FFmpeg: concatenating videos with the same 25 fps results in an output file with 3.554 fps
5 June 2024, by Kendra Broom
I created an AWS Lambda function in Node.js 18 that uses a static version 7 build of FFmpeg located in a Lambda layer. Unfortunately, it is just the ffmpeg build and does not include ffprobe.


I have an mp4 audio file in one S3 bucket and a wav audio file in a second S3 bucket. I'm uploading the output file to a third S3 bucket.


Specs on the files (please let me know if any more info is needed)


Audio:
wav, 13 kbps, aac (LC), 6:28 duration


Video:
mp4, 1280x720 resolution, 25 fps frame rate, h264 codec, 3:27 duration


Goal:
Create blank video to fill the duration gaps so the full audio is covered before and after the mp4 video (using timestamps and durations). Strip the mp4's audio and use the wav audio only. The output should be an mp4 in which the wav audio plays over the video: blank video for the first 27 seconds (based on the timestamps), then the mp4 video for 3:27, then blank video covering the rest of the audio until 6:28.


Actual result:
An mp4 file with a 3.554 frame rate and a 10:06 duration.


import { S3Client, GetObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";
import { createWriteStream, createReadStream, promises as fsPromises } from 'fs';
import { exec } from 'child_process';
import { promisify } from 'util';
import { basename } from 'path';

const execAsync = promisify(exec);

const s3 = new S3Client({ region: 'us-east-1' });

async function downloadFileFromS3(bucket, key, downloadPath) {
  const getObjectParams = { Bucket: bucket, Key: key };
  const command = new GetObjectCommand(getObjectParams);
  const { Body } = await s3.send(command);
  return new Promise((resolve, reject) => {
    const fileStream = createWriteStream(downloadPath);
    Body.pipe(fileStream);
    Body.on('error', reject);
    fileStream.on('finish', resolve);
  });
}

async function uploadFileToS3(bucket, key, filePath) {
  const fileStream = createReadStream(filePath);
  const uploadParams = { Bucket: bucket, Key: key, Body: fileStream };
  try {
    await s3.send(new PutObjectCommand(uploadParams));
    console.log(`File uploaded successfully to ${bucket}/${key}`);
  } catch (err) {
    console.error("Error uploading file: ", err);
    throw new Error('Failed to upload file to S3');
  }
}

function parseDuration(durationStr) {
  const parts = durationStr.split(':');
  return parseInt(parts[0]) * 3600 + parseInt(parts[1]) * 60 + parseFloat(parts[2]);
}

export async function handler(event) {
  const videoBucket = "video-interaction-content";
  const videoKey = event.videoKey;
  const audioBucket = "audio-call-recordings";
  const audioKey = event.audioKey;
  const outputBucket = "synched-audio-video";
  const outputKey = `combined_${basename(videoKey, '.mp4')}.mp4`;

  const audioStartSeconds = new Date(event.audioStart).getTime() / 1000;
  const videoStartSeconds = new Date(event.videoStart).getTime() / 1000;
  const audioDurationSeconds = event.audioDuration / 1000;
  const timeDifference = audioStartSeconds - videoStartSeconds;

  try {
    const videoPath = `/tmp/${basename(videoKey)}`;
    const audioPath = `/tmp/${basename(audioKey)}`;
    await downloadFileFromS3(videoBucket, videoKey, videoPath);
    await downloadFileFromS3(audioBucket, audioKey, audioPath);

    // Initialize file list with video
    let filelist = [`file '${videoPath}'`];
    let totalVideoDuration = 0; // Initialize total video duration

    // Create first blank video if needed
    if (timeDifference < 0) {
      const blankVideoDuration = Math.abs(timeDifference);
      const blankVideoPath = `/tmp/blank_video.mp4`;
      await execAsync(`/opt/bin/ffmpeg -f lavfi -i color=c=black:s=1280x720:r=25 -c:v libx264 -t ${blankVideoDuration} ${blankVideoPath}`);
      // Add first blank video first in file list
      filelist.unshift(`file '${blankVideoPath}'`);
      totalVideoDuration += blankVideoDuration;
      console.log(`First blank video created with duration: ${blankVideoDuration} seconds`);
    }

    const videoInfo = await execAsync(`/opt/bin/ffmpeg -i ${videoPath} -f null -`);
    const videoDurationMatch = videoInfo.stderr.match(/Duration: ([\d:.]+)/);
    const videoDuration = videoDurationMatch ? parseDuration(videoDurationMatch[1]) : 0;
    totalVideoDuration += videoDuration;

    // Calculate additional blank video duration
    const additionalBlankVideoDuration = audioDurationSeconds - totalVideoDuration;
    if (additionalBlankVideoDuration > 0) {
      const additionalBlankVideoPath = `/tmp/additional_blank_video.mp4`;
      await execAsync(`/opt/bin/ffmpeg -f lavfi -i color=c=black:s=1280x720:r=25 -c:v libx264 -t ${additionalBlankVideoDuration} ${additionalBlankVideoPath}`);
      // Add to the end of the file list
      filelist.push(`file '${additionalBlankVideoPath}'`);
      console.log(`Additional blank video created with duration: ${additionalBlankVideoDuration} seconds`);
    }

    // Create and write the file list to disk
    const concatFilePath = '/tmp/filelist.txt';
    await fsPromises.writeFile(concatFilePath, filelist.join('\n'));

    const extendedVideoPath = `/tmp/extended_${basename(videoKey)}`;
    // await execAsync(`/opt/bin/ffmpeg -f concat -safe 0 -i /tmp/filelist.txt -c copy ${extendedVideoPath}`);

    // Use -vsync vfr to adjust frame timing without full re-encoding
    await execAsync(`/opt/bin/ffmpeg -f concat -safe 0 -i ${concatFilePath} -c copy -vsync vfr ${extendedVideoPath}`);

    const outputPath = `/tmp/output_${basename(videoKey, '.mp4')}.mp4`;
    // await execAsync(`/opt/bin/ffmpeg -i ${extendedVideoPath} -i ${audioPath} -map 0:v:0 -map 1:a:0 -c:v copy -c:a aac -b:a 192k -shortest ${outputPath}`);

    await execAsync(`/opt/bin/ffmpeg -i ${extendedVideoPath} -i ${audioPath} -map 0:v:0 -map 1:a:0 -c:v copy -c:a aac -b:a 192k -shortest -r 25 ${outputPath}`);
    console.log('Video and audio have been merged successfully');

    await uploadFileToS3(outputBucket, outputKey, outputPath);
    console.log('File upload complete.');

    return { statusCode: 200, body: JSON.stringify('Video and audio have been merged successfully.') };
  } catch (error) {
    console.error('Error in Lambda function:', error);
    return { statusCode: 500, body: JSON.stringify('Failed to process video and audio.') };
  }
}



Attempts:
I've tried re-encoding the concatenated file, but then the Lambda function times out. I had hoped that by creating the blank video at 25 fps with all the other specs of the original mp4, I wouldn't have to re-encode the concatenated file. Obviously something is wrong, though. In the commented-out code you can see I tried with and without specifying 25 fps, and with and without -vsync. I'm new to FFmpeg, so all tips are appreciated!
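A hedged note on the stream-copy concat above, not a verified fix: the concat demuxer with "-c copy" assumes every listed segment has matching stream parameters, and mismatches in pixel format, H.264 profile, or the MP4 video track timescale between the freshly encoded libx264 blank clips and the source mp4 are a plausible cause of the broken 3.554 fps average. The flags below sketch how the blank clips might be pinned to the source's parameters; the concrete values (profile, GOP size, timescale) are guesses that would need to be read from the real file first, e.g. from the "ffmpeg -i" banner:

```shell
#!/bin/bash
# Sketch only: generate a blank clip whose stream parameters are pinned
# explicitly, so `-f concat ... -c copy` sees identical segments.
# All concrete values below are assumptions to be replaced with the
# source mp4's actual parameters.
BLANK_FLAGS=(
  -f lavfi -i "color=c=black:s=1280x720:r=25"
  -t 27
  -c:v libx264
  -pix_fmt yuv420p                 # must match the source's pixel format
  -profile:v high                  # must match the source's H.264 profile
  -g 50                            # keyframe interval, a guess
  -video_track_timescale 12800     # must match the source's MP4 timescale
)
printf '%s\n' "${BLANK_FLAGS[@]}"
# Usage sketch: /opt/bin/ffmpeg "${BLANK_FLAGS[@]}" /tmp/blank_video.mp4
```

If the segments genuinely match, the "-vsync vfr" and trailing "-r 25" workarounds should become unnecessary; if they still disagree, a single re-encode at the concat step (with more Lambda memory/timeout) is the fallback.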