
Other articles (96)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
APPENDIX: The plugins used specifically for the farm
5 March 2010
The central/master site of the farm needs several additional plugins, beyond those of the channel sites, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a pooled instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
-
Installation in standalone mode
4 February 2011
Installing the MediaSPIP distribution involves several steps: retrieving the necessary files, for which two methods are possible (installing the ZIP archive containing the whole distribution, or retrieving the sources of each module separately via SVN); pre-configuration; the final installation.
Installing the MediaSPIP ZIP archive
This installation mode is the simplest way to install the whole distribution (...)
On other sites (5394)
-
fluent-ffmpeg mergeToFile always 128kb audio bit-rate no matter what
12 September 2020, by Martin
I am trying to use fluent-ffmpeg with my electron app to concatenate multiple audio files together with an image into a video. So if I have three files:


song1.mp3 1:00
song2.mp3 0:30
song3.mp3 2:00
front.jpg


I could create output.mp4, which would be 3:30 long and would play each file one after the other in order, with front.jpg set as the background image.

I am trying to create the concatenated audio file first, so that I can then just render a video with two inputs: the image and the 3:30-long concatenated audio file. But I'm having difficulty getting my electron app to wait for the ffmpeg job to run and complete.


I know how to do all of these ffmpeg jobs on the command-line, but I've been following this guide for how to package ffmpeg into an electron app that can run on mac/win10/linux environments. I'm developing it on win10 right now.


I have a button:

<button>FULLALBUM</button>


that, when clicked, runs the fullAlbum() function, which calls combineMp3FilesOrig() to run the actual ffmpeg job:

async function fullAlbum(uploadName) {
    //document.getElementById("buttonId").disabled = true;

    //get table
    var table = $(`#upload_${uploadNumber}_table`).DataTable()
    //get all selected rows
    var selectedRows = table.rows( '.selected' ).data()
    //get outputFile location
    var path = require('path');
    var outputDir = path.dirname(selectedRows[0].audioFilepath)
    //create outputfile
    var timestamp = new Date().getUTCMilliseconds();
    let outputFilepath = `${outputDir}/output-${timestamp}.mp3`

    console.log('fullAlbum() button pressed: ', timestamp)

    await combineMp3FilesOrig(selectedRows, outputFilepath, '320k', timestamp);
    //document.getElementById("buttonId").disabled = false;

    console.log(`fullAlbum() /output-${timestamp}.mp3 should be created now`)
}

function combineMp3FilesOrig(selectedRows, outputFilepath, bitrate, timestamp) {
    console.log(`combineMp3FilesOrig(): ${outputFilepath}`)

    //begin get ffmpeg info
    const ffmpeg = require('fluent-ffmpeg');
    //Get the paths to the packaged versions of the binaries we want to use
    const ffmpegPath = require('ffmpeg-static').replace('app.asar','app.asar.unpacked');
    const ffprobePath = require('ffprobe-static').path.replace('app.asar','app.asar.unpacked');
    //tell the ffmpeg package where it can find the needed binaries.
    ffmpeg.setFfmpegPath(ffmpegPath);
    ffmpeg.setFfprobePath(ffprobePath);
    //end set ffmpeg info

    //create ffmpeg command
    console.log(`combineMp3FilesOrig(): create command`)
    const command = ffmpeg();
    //set command inputs
    command.input('C:\\Users\\marti\\Documents\\martinradio\\uploads\\CharlyBoyUTurn\\5. Akula (Club Mix).flac')
    command.input('C:\\Users\\marti\\Documents\\martinradio\\uploads\\CharlyBoyUTurn\\4. Civilian Barracks.flac')

    return new Promise((resolve, reject) => {
        console.log(`combineMp3FilesOrig(): command status logging`)
        command.on('progress', function(progress) {
            console.info(`Processing : ${progress.percent} % done`);
        })
        .on('codecData', function(data) {
            console.log('codecData=', data);
        })
        .on('end', function() {
            console.log('file has been converted succesfully; resolve() promise');
            resolve();
        })
        .on('error', function(err) {
            console.log('an error happened: ' + err.message, ', reject()');
            reject(err);
        })
        console.log(`combineMp3FilesOrig(): add audio bitrate to command`)
        command.audioBitrate(bitrate)
        console.log(`combineMp3FilesOrig(): tell command to merge inputs to single file`)
        command.mergeToFile(outputFilepath);
        console.log(`combineMp3FilesOrig(): end of promise`)
    });
    console.log(`combineMp3FilesOrig(): end of function`)
}



When I click my button once, my console.logs show that the promise is entered and the command is created, but the function just ends without ever waiting for a resolve(); waiting a couple of minutes doesn't change anything.

If I press the button again, a new command gets created and reaches the end of the promise, but this time it actually starts, and it also triggers the previous command to start. Both jobs then run, and their files are rendered at the correct length (12:08) and the correct quality (320k).


Is there something with my promise I need to fix, involving async functions and promises in an electron app? I tried editing my ffmpeg command to include

command.run()

at the end of my promise to make sure it gets triggered, but that leads to a console error saying Uncaught (in promise) Error: No output specified, because apparently in fluent-ffmpeg command.mergeToFile(outputFilepath); isn't enough and I need to include .output(outputFilepath) as well. If I change command.run() to command.output(outputFilepath).run(), then when I click my button the ffmpeg job gets triggered and renders perfectly fine, EXCEPT THAT THE FILE IS ALWAYS 128kbps.

So, with the code block included above, I'm trying to figure out why my ffmpeg command doesn't run the first time it is created.
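
For reference, a minimal sketch of the merge wrapped in a promise, based on the merge recipe in the fluent-ffmpeg documentation, might look like the following. It is an illustration rather than a confirmed fix for the first-click behaviour described above: the handlers are attached before the job starts, mergeToFile() is given a temporary directory for its intermediate files (the second argument in the documented recipe), and no extra .output() or .run() call is added because mergeToFile() starts processing itself. The mergeAudio() helper name and the use of os.tmpdir() are assumptions, not part of the original code.

// minimal sketch, assuming fluent-ffmpeg is installed and its ffmpeg/ffprobe paths
// are already configured as in the code above; mergeAudio() is a hypothetical helper
const os = require('os');
const ffmpeg = require('fluent-ffmpeg');

function mergeAudio(inputPaths, outputFilepath, bitrate) {
    return new Promise((resolve, reject) => {
        const command = ffmpeg();
        inputPaths.forEach((p) => command.input(p));
        command
            .audioBitrate(bitrate)                     // e.g. '320k'
            .on('end', () => resolve(outputFilepath))  // settle the promise when ffmpeg finishes
            .on('error', (err) => reject(err))
            .mergeToFile(outputFilepath, os.tmpdir()); // temp dir for the intermediate merge files
    });
}

// usage: await mergeAudio(['song1.mp3', 'song2.mp3', 'song3.mp3'], outputFilepath, '320k');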


-
ffmpeg options for encoding a video mpeg2video but with .mov extension
20 September 2020, by Selene
I just finished a project and need to produce an output in a specific format; it should be exactly the same as the format of the video I received.
The best way for me to identify the source format was to use ffprobe. Here is its output:


Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mov':
 Metadata:
 creation_time : 2020-02-27T04:15:23.000000Z
 Duration: 00:00:22.13, start: 0.000000, bitrate: 111320 kb/s
 Stream #0:0(eng): Video: mpeg2video (4:2:2) (xd5c / 0x63356478), yuv422p(tv, bt709, top coded first (swapped)), 1920x1080 [SAR 1:1 DAR 16:9], 109779 kb/s, 54.94 fps, 54.94 tbr, 5494 tbn, 50 tbc (default)
 Metadata:
 creation_time : 2020-02-27T04:15:23.000000Z
 handler_name : Gestor de contenido de v?deo Apple
 encoder : XDCAM HD422 1080i50 (50 Mb/s)
 Stream #0:1(eng): Audio: pcm_s16be (twos / 0x736F7774), 48000 Hz, 2 channels, s16, 1536 kb/s (default)
 Metadata:
 creation_time : 2020-02-27T04:15:23.000000Z
 handler_name : Gestor de contenido de sonido Apple



I did a lot of video work on the above file, and as part of my pipeline, I converted the file to ProRes4444. Now I need to get this video into the same format as above.


A couple of questions on the format: if I understand it correctly, mpeg2video is MPEG-2, which would not normally appear in a .mov file, yet the source is in a mov container. Why?


Does the encoder from the input matter, specifically the XDCAM?


An alternative way to solve my problem would be to use Media Encoder, but even that application doesn't seem to offer mov + mpeg2 as an option, and if the container is mov it almost forces me to use Apple ProRes to keep high resolution. Also, none of the options let me set the fps to the source rate, which is 54.94; the closest option I have is 59.94.


Please help,
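
For orientation, mpeg2video inside a .mov container is how XDCAM HD422 material is commonly wrapped by camera and editing tools, so the combination itself is not unusual. A rough sketch of the kind of ffmpeg invocation this implies is below, written as a Python subprocess call. The bitrate, interlacing flags and video tag are assumptions inferred from the ffprobe output above, not a verified XDCAM HD422 recipe, and the input filename is a placeholder.

# Rough sketch only: re-encode a ProRes intermediate as MPEG-2 4:2:2 in a .mov container.
# The numeric values and flags are assumptions based on the ffprobe output above
# (XDCAM HD422, ~50 Mb/s, yuv422p, top field first), not a verified recipe.
import subprocess

cmd = [
    "ffmpeg",
    "-i", "prores_intermediate.mov",                      # placeholder input name
    "-c:v", "mpeg2video",
    "-pix_fmt", "yuv422p",                                # keep 4:2:2 chroma like the source
    "-b:v", "50M", "-maxrate", "50M", "-bufsize", "17M",  # nominal 50 Mb/s, as the encoder tag suggests
    "-flags", "+ildct+ilme", "-top", "1",                 # interlaced encode, top field first
    "-r", "54.94",                                        # the (unusual) rate ffprobe reports for the source
    "-tag:v", "xd5c",                                     # same fourcc as the source video stream
    "-c:a", "pcm_s16be", "-ar", "48000", "-ac", "2",      # uncompressed audio like the source
    "output_mpeg2.mov",
]
subprocess.run(cmd, check=True)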


-
Python and ffmpeg audio sync and screen recording issues
9 August 2020, by odddollar
I'm using ffmpeg and Python to record my desktop screen. When the program is run, it starts recording; then, when I press a key combo, it cuts out the last x seconds, saves them, and starts recording again, similar to the "record that" functionality of the Windows game bar.


I have it working so that it records video just fine, but when I change the ffmpeg command to also record audio from my desktop, I get ValueError: could not convert string to float: 'N/A' when I try to calculate the length of the recorded video. It appears the recording isn't being stopped until after I try to calculate the video length, even though this exact same code works fine when not recording audio.

Additionally, when recording audio I have an issue where the audio is a couple of hundred milliseconds ahead of the video. It's not a lot, but it's enough to be noticeable.


Overall, what I'm asking is: is there a way I can modify the ffmpeg command to prevent the audio desync, and what might be causing the problem I get when attempting to find the length of the video once audio is recorded?


import keyboard, signal
from os import remove
from os.path import isfile
from subprocess import Popen, getoutput
from datetime import datetime
import configparser

class Main:
    def __init__(self, save_location, framerate, duration):
        self.save_location = save_location
        self.framerate = int(framerate)
        self.duration = int(duration)
        self.working = self.save_location + '\\' + 'working.avi'
        self.start_recording()

    def start_recording(self):
        if isfile(self.working):
            remove(self.working)

        # start recording to working file at set framerate
        self.process = Popen(f'ffmpeg -thread_queue_size 578 -f gdigrab -video_size 1920x1080 -i desktop -f dshow -i audio="Stereo Mix (Realtek High Definition Audio)" -b:v 7M -minrate 4M -framerate {self.framerate} {self.working}')
        #self.process = Popen(f'ffmpeg -f gdigrab -framerate {self.framerate} -video_size 1920x1080 -i desktop -b:v 7M -minrate 2M {self.working}')

    def trim_video(self):
        # stop recording working file
        self.process.send_signal(signal.CTRL_C_EVENT)

        # call 'cause I have to
        getoutput(f"ffprobe -i {self.working}")

        # get length of working video
        length = getoutput(f'ffprobe -i {self.working} -show_entries format=duration -v quiet -of csv="p=0"')

        # get time before desired recording time
        start = float(length) - self.duration

        # get save location and title
        title = self.save_location + '\\' + self.get_time() + '.avi'

        # cut to last amount of desired time
        Popen(f"ffmpeg -ss {start} -i {self.working} -c copy -t {self.duration} {title}")
        getoutput(f"ffprobe -i {self.working}")

        self.start_recording()

    def get_time(self):
        now = datetime.now()
        return now.strftime("%Y_%m_%d#%H-%M-%S")


if __name__ == "__main__":
    config = configparser.ConfigParser()
    config.read("settings.ini")
    config = config["DEFAULT"]

    run = Main(config["savelocation"].replace("\\", "\\\\"), config["framerate"], config["recordlast"])
    keyboard.add_hotkey("ctrl+shift+alt+g", lambda: run.trim_video())

    while True:
        try:
            keyboard.wait()
        except KeyboardInterrupt:
            pass



The contents of the settings.ini file are listed below


[DEFAULT]
savelocation = C:\
framerate = 30
recordlast = 10



In the code block above, the first line with self.process = Popen is the one that records audio and has the issues; the second line (the commented-out one below it) is the one that works fine.
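
As a sketch only, two changes seem worth trying here, given the behaviour described above: shrinking the dshow audio buffer with its audio_buffer_size option, which is a common cause of audio running a few hundred milliseconds ahead of gdigrab video, and stopping ffmpeg by writing 'q' to its stdin and waiting for the process to exit before calling ffprobe, since probing a file that is still being written is exactly the situation in which ffprobe reports 'N/A' for the duration. The helper names below are placeholders and neither change is a confirmed fix.

# Sketch only: both changes are assumptions to verify, not a confirmed fix.
from subprocess import Popen, PIPE, getoutput

def start_recording(working, framerate):
    # -audio_buffer_size (in milliseconds) shrinks dshow's default audio buffering,
    # a frequent cause of audio leading the gdigrab video
    return Popen(
        f'ffmpeg -thread_queue_size 578 -f gdigrab -framerate {framerate} -video_size 1920x1080 -i desktop '
        f'-f dshow -audio_buffer_size 50 -i audio="Stereo Mix (Realtek High Definition Audio)" '
        f'-b:v 7M -minrate 4M {working}',
        stdin=PIPE)

def stop_recording(process):
    # ask ffmpeg to quit gracefully and wait for it to finish writing the file;
    # probing before it has exited is when ffprobe returns 'N/A' for the duration
    process.stdin.write(b'q')
    process.stdin.flush()
    process.wait()

def probe_duration(working):
    out = getoutput(f'ffprobe -i {working} -show_entries format=duration -v quiet -of csv="p=0"')
    return None if out.strip() in ('', 'N/A') else float(out)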