
Other articles (93)
-
Changing your graphic theme
22 February 2011, by
The graphic theme does not affect the actual layout of the elements on the page. It only changes the appearance of the elements.
The placement can indeed be modified, but that modification is purely visual and does not affect the semantic representation of the page.
Changing the graphic theme in use
To change the graphic theme in use, the zen-garden plugin must be enabled on the site.
You then simply go to the configuration area of the (...)
-
Customizing by adding your logo, banner or background image
5 September 2013, by
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
List of compatible distributions
26 April 2011, by
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.
Distribution name   Version name           Version number
Debian              Squeeze                6.x.x
Debian              Wheezy                 7.x.x
Debian              Jessie                 8.x.x
Ubuntu              The Precise Pangolin   12.04 LTS
Ubuntu              The Trusty Tahr        14.04
If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)
On other sites (5258)
-
How to overlay two mp3 audio files with different bitrates using fluent-ffmpeg
7 January 2023, by Alex P Mnz
I am trying to overlay one mp3 file (a speech track, whose file name is passed into the function below) on top of another (a background music track, whose URL is passed into the function), so that they both play simultaneously, using NodeJS and fluent-ffmpeg.


What happens when I call the function is that I get an output file which 1) only plays the background audio track, and 2) upon opening, has its time seeker skip straight to where the end of the first audio track would have been (around 1 minute 30 seconds).


In the first instance, I'd like them to play at the same time in the output file, without this skipping-ahead effect (even if I drag the slider back to within the first 1 minute 30, it jumps straight back to 1 minute 31 seconds, as if there were no data in that first minute and a half).


In the second instance, I'd also like the background audio track to loop until the first track finishes, so any help with that as part of an answer would be very much appreciated. But the immediate problem is getting the two audio tracks to actually play simultaneously, starting from 0 seconds.


I have tried the below to get to this point:


const fs = require('fs');
const ffmpegPath = require('@ffmpeg-installer/ffmpeg').path;
const ffmpeg = require('fluent-ffmpeg');
ffmpeg.setFfmpegPath(ffmpegPath);
var ffprobe = require('ffprobe-static');
ffmpeg.setFfprobePath(ffprobe.path);
const axios = require('axios');
const crypto = require('crypto');
const path = require('path');


const overlayBackgroundAudio = async (inputFileName, backgroundFileUrl) => {

  const backgroundTrack = await axios({
    method: 'GET',
    url: backgroundFileUrl,
    responseType: 'arraybuffer'
  });

  // write a file with the retrieved background audio track
  const backgroundTrackFileName = crypto.randomBytes(12).toString('hex');
  fs.writeFileSync(`${backgroundTrackFileName}.mp3`, backgroundTrack.data); //todo - delete after

  // set the name of the output file
  const newFileName = crypto.randomBytes(12).toString('hex');
  const outputFileName = `${newFileName}.mp3`;

  // run the relevant ffmpeg commands
  const overlayTracks = async () => {

    return new Promise((resolve, reject) => {
      ffmpeg()
        .input(`${inputFileName}.mp3`)
        .input(`${backgroundTrackFileName}.mp3`)
        .complexFilter([
          {
            filter: 'volume',
            options: '1',
            inputs: '[0:0]',
            outputs: '[a]'
          },
          {
            filter: 'volume',
            options: '1',
            inputs: '[1:0]',
            outputs: '[b]'
          },
          {
            filter: 'adelay',
            options: '0',
            inputs: '[a]',
            outputs: '[a1]'
          },
          {
            filter: 'adelay',
            options: '0|0',
            inputs: '[b]',
            outputs: '[b1]'
          },
          {
            filter: 'amix',
            options: 'inputs=2:duration=first',
            inputs: '[a1][b1]',
            outputs: '[out]'
          }
        ])
        .outputOptions(['-map', '[out]', outputFileName])
        .output(outputFileName)
        .on('end', function() {
          resolve(outputFileName);
        })
        .on('stderr', console.log) // log ffmpeg output to console
        .on('error', function(err) {
          console.log(`An error occurred overlaying tracks: ${err.message}`);
          reject(err);
        })
        .run()
    })
  }

  const overlaidTracksFileName = await overlayTracks();

  console.log('overlaid file name:', overlaidTracksFileName)

  return overlaidTracksFileName;

}

module.exports = overlayBackgroundAudio;



Here is what the ffmpeg library is logging to my console (which may help in figuring out why this is not working as intended):


Input #0, mp3, from '95c8ec8ccbb100d2bfe81ffd.mp3':
 Metadata:
 encoder : Lavf58.24.101
 Duration: 00:02:41.42, start: 0.046042, bitrate: 48 kb/s
 Stream #0:0: Audio: mp3, 24000 Hz, mono, fltp, 48 kb/s
[mp3 @ 000002c77978ddc0] Estimating duration from bitrate, this may be inaccurate
Input #1, mp3, from '14e70bbd339b612e96f29017.mp3':
 Metadata:
 date : 2022-12-30 17:56
 id3v2_priv.XMP : <?xpacket begin="\xef\xbb\xbf" id="W5M0MpCehiHzreSzNTczkc9d"?>\x0a\x0a \x0a s
 Stream #1:0: Audio: mp3, 48000 Hz, stereo, fltp, 192 kb/s
Stream mapping:
 Stream #0:0 (mp3float) -> volume (graph 0)
 Stream #1:0 (mp3float) -> volume (graph 0)
 amix (graph 0) -> Stream #0:0 (libmp3lame)
 Stream #1:0 -> #1:0 (mp3 (mp3float) -> mp3 (libmp3lame))
Press [q] to stop, [?] for help
Output #0, mp3, to '0c0404b793d488bf38e882a0.mp3':
 Metadata:
 TSSE : Lavf58.24.101
 Stream #0:0: Audio: mp3 (libmp3lame), 24000 Hz, mono, fltp (default)
 Metadata:
 encoder : Lavc58.42.102 libmp3lame
Output #1, mp3, to '0c0404b793d488bf38e882a0.mp3':
 Metadata:
 TSSE : Lavf58.24.101
 Stream #1:0: Audio: mp3 (libmp3lame), 48000 Hz, stereo, fltp
 Metadata:
 encoder : Lavc58.42.102 libmp3lame





Thank you very much in advance for any help you can provide!
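
A detail worth noting in the log above: it reports both an Output #0 and an Output #1 going to the same mp3 file, and maps Stream #1:0 straight into that second output. That is consistent with the output file name being passed inside outputOptions(['-map', '[out]', outputFileName]) as well as to .output(outputFileName): ffmpeg reads the stray argument as a second output file, so the untouched background stream gets muxed in alongside the mixed [out] stream, which would match hearing only the background and the broken seeking. Below is a minimal, untested sketch of the alternative worth trying; the -stream_loop input option (to loop the background until the speech ends), the [0:a]/[1:a] labels and the simplified filter strings are assumptions added for illustration, not part of the original code:

ffmpeg()
  .input(`${inputFileName}.mp3`)
  .input(`${backgroundTrackFileName}.mp3`)
  .inputOptions(['-stream_loop', '-1'])   // applies to the last added input, i.e. the background track
  .complexFilter([
    '[0:a]volume=1[a]',   // speech track
    '[1:a]volume=1[b]',   // background track
    '[a][b]amix=inputs=2:duration=first:dropout_transition=0[out]'   // stop when the speech ends
  ])
  .outputOptions(['-map', '[out]'])   // only the mapping here, no file name
  .output(outputFileName)             // the file name goes here only
  .on('end', () => resolve(outputFileName))
  .on('error', reject)
  .run();

With duration=first on amix and the background looped at the input stage, the mix should stop when the speech track ends, which would also cover the looping requirement; the adelay filters are omitted here because both delays were 0.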


-
How to make the script re-ask for the input again after failure?
6 April 2022, by Rami Magdi
This is a script for ffmpeg that I added to the Windows context menu, so I just copy and paste what I want done.
When this .bat fails (a wrong argument, for instance), it closes and I have to restart it.
How can I make the script ask for the input again after a failure?


@echo off
echo -------------------------------------------------------------
echo FILTERS
echo -------------------------------------------------------------
echo,
echo VERTICAL FLIP= -lavfi vflip
echo HORIZONTAL FLIP= -lavfi hflip
echo NEGATE COLORS= -lavfi negate
echo NEGATE LUMINANCE= -lavfi lutyuv=y=negval
echo GRAYSCALE= -lavfi hue=s=0
echo ISOLATE COLOR= -lavfi colorhold=color="orange":similarity=0.29:blend=0
echo PS CURVES PRESET= -lavfi curves=psfile='MyCurvesPresets/purple.acv'
echo VIGNETTE EFFECT= -lavfi vignette=PI/4
echo LOOKUP TABLE= -lavfi lut3d=c\\:/nnn/ggg.cube
echo SPEED UP OR SLOW DOWN= -lavfi setpts=PTS/2
echo SPEED UP VIDEOS, AUDIOS= -lavfi "[0:v]setpts=PTS/2[v];[0:a]atempo=2.0[a]" -map "[v]" -map "[a]"
echo REVERSE= -lavfi reverse
echo POSTERIZE= -lavfi elbg=l=8:n=1
echo MOTION BLUR= -lavfi tmix=frames=20:weights="10 1 1"
echo HARD SUB= -lavfi subtitles=s.ass
echo SOFT SUB= -scodec mov_text -metadata:s:s:0 language=eng
echo ONE IMAGE= -lavfi -frames 1
echo AN IMAGE EVERY 60 SEC= -lavfi fps=1/60
echo ONLY IFRAMES= -skip_frame nokey
echo GIF= -lavfi "fps=10,scale=320:-2:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse" -loop 0
echo STACKS= -lavfi "[0:v][1:v][2:v][3:v]xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0[v]" -map "[v]"
echo EMBED THUMBNAIL = -i 1.jpg -map 0:0 -map 0:1 -map 1 -c:0 copy -c:v:1 png -disposition:v:1 attached_pic
ECHO,
echo -------------------------------------------------------------
echo CODEC OPTIONS
echo -------------------------------------------------------------
echo,
echo -lavfi "[0:v]scale=-2:720,setpts=PTS/1.5[v];[0:a]atempo=1.5[a]" -map "[v]" -map "[a]" -crf 40 -preset ultrafast
echo -map 0 -codec copy
echo -acodec copy -vcodec libx264 -vsync cfr -crf 20 -pix_fmt yuv420p -tune film -preset veryfast -movflags +faststart
echo -acodec aac -ac 2 -ab 128k -ar 44100 / -acodec libmp3lame -ab 320k -ar 44100 -id3v2_version 3 
echo -qscale:v 2
echo,



The main part:


set /P extra="ENTER CODEC OPTIONS="

echo,
echo Processing "%~nx1"
echo Output will be "%~n1_output.mp4"
ffmpeg -v error -stats -y -i "%~1" %extra% "%~n1"_output.mp4
pause
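
One common pattern for the re-ask behaviour is to put the prompt and the ffmpeg call behind a label and jump back to it whenever ffmpeg exits with a non-zero code. A rough sketch of how the main part could look; the :ask label name is only illustrative, and it assumes ffmpeg returns a non-zero exit code on the failures you care about (it does for wrong arguments):

:ask
rem clear any previous value so an empty answer does not silently reuse it
set "extra="
set /P extra="ENTER CODEC OPTIONS="

echo,
echo Processing "%~nx1"
ffmpeg -v error -stats -y -i "%~1" %extra% "%~n1"_output.mp4
if errorlevel 1 (
    echo ffmpeg failed, asking for the options again...
    goto ask
)
pause

goto ask jumps back to the prompt, so the window stays open and the same input file (%~1) is reused for the next attempt.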



-
How can I smoothly stream frames from a video that is currently being recorded (Python)
16 June 2023, by Peter
I am trying to stream a video (into numpy arrays) while recording it in Python. Currently I am recording with ffmpeg and reading it with opencv, but it is not going smoothly.


GOAL: Receive and record a video stream in real time, into a video file with reasonable compression, while being able to have fast random access to any frame received up to that point (minus perhaps an acceptable buffer of e.g. 30 frames / 1 second).


The frame read for a given index lookup must always be identical; i.e. if we open the video later and read from it again, we must get the same frames as when we read them "live".


Here is my current code (which runs on Mac); it attempts to save a video stream from the camera while reading from the video file as it is being written to.


import datetime, os, subprocess, time, cv2

video_path = os.path.expanduser(f"~/Downloads/stream_{datetime.datetime.now()}.mkv")
# Start a process that records the video with ffmpeg
process = subprocess.Popen(['ffmpeg', '-y', '-f', 'avfoundation', '-framerate', '30', '-i', '0:none', '-preset', 'fast', '-crf', '23', '-b:v', '8000k', video_path])
time.sleep(4)  # Let it start
assert os.path.exists(video_path), "Video file not created"

# Let's simulate a process that gets random frames from the video
cap = cv2.VideoCapture(video_path)
try:
    while True:
        ret, frame = cap.read()

        # print(f"Captured frame of shape {frame.shape}" if frame is not None else "No frame available")
        if frame is not None:
            print(f"Got frame of shape {frame.shape}")
            cv2.imshow("Q to Quit", frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        else:
            print(f"No frame available on {video_path}.. waiting for more frames...")
            time.sleep(0.1)

except KeyboardInterrupt:
    pass

process.terminate()
cap.release()




This code never loads a frame.


Things I have discovered:


- If I renew the cap with cap = cv2.VideoCapture(video_path) in the else block, it does start showing frames from the start of the recording. After using up all the frames, it stops. If I keep track of the frames seen so far and do




else:
    print(f"No frame available on {video_path}")
    time.sleep(0.1)
    cap = cv2.VideoCapture(video_path)
    print(f"Restarting video capture from frame {frames_seen_so_far}")
    cap.set(cv2.CAP_PROP_POS_FRAMES, frames_seen_so_far)  # DOES NOT WORK




It does not work (the cap.set(...) makes no difference).

- If I renew the cap with