
Other articles (47)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors can edit their own information on the authors page -
Publishing on MédiaSpip
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out -
HTML5 audio and video support
13 April 2011. MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (5152)
-
Prevent "guessing mono" when using FFMPEG ?
25 juillet 2020, par DaemitrousThis extention works perfectly fine, I'm just concerned about the message it prints everytime it's called.


The message printed when the script is called:


Guessed Channel Layout for Input Stream #0.0 : mono



This is an extension to the Discord bot I'm working on using discord.py.

Script


from discord.ext import commands
from pyttsx3 import init  # TTS audio converter

import discord


ffmpeg_path = r"C:\Users\evand\Documents\Misc\ffmpeg\bin\ffmpeg.exe"
audio_path = r"C:\Users\evand\Documents\Misc\Discord\suhj_bot\Test_mp3\AudioFileForDiscord.mp3"

engine = init()  # Initializes the TTS speech engine with the "sapi5" configuration


def TTS_save(text, audio=engine):
    # Converts and saves the user's text as a TTS audio file (.mp3)
    audio.save_to_file(text, audio_path)
    audio.runAndWait()


class Voice(commands.Cog):
    def __init__(self, bot):
        self.bot = bot

    @commands.command(
        name="v",
        description="The voice command:\n\ng.v <content>"
    )
    async def voice_command(self, ctx, *, args):
        voice_client = ctx.voice_client

        if voice_client is None:
            # Bot joins the voice channel of the user
            voice_client = await ctx.author.voice.channel.connect()

        TTS_save(args)  # Creates the TTS audio file
        # Plays the audio file through the Discord bot (.mp3, ffmpeg.exe)
        voice_client.play(discord.FFmpegOpusAudio(audio_path, executable=ffmpeg_path))


def setup(bot):
    bot.add_cog(Voice(bot))


Is there any way to prevent the program from guessing mono? I like to keep my code clean and as close to error-free as possible.
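One possibility (a sketch, not verified against this exact setup): the "Guessed Channel Layout" line is an info-level notice printed by ffmpeg itself, so lowering the log level for that one invocation should hide it. discord.py's FFmpegOpusAudio accepts a `before_options` string whose flags are inserted before `-i`:

```python
import shlex

# Assumption: passing "-loglevel error" via discord.py's `before_options`
# kwarg makes ffmpeg log only errors, suppressing info-level notices such
# as "Guessed Channel Layout for Input Stream #0.0 : mono".
before_options = "-loglevel error"

# In voice_command the play call would become (not executed here, since it
# needs a connected voice client):
#   voice_client.play(discord.FFmpegOpusAudio(
#       audio_path, executable=ffmpeg_path, before_options=before_options))

# Roughly the argv ffmpeg would receive ("audio.mp3" is a placeholder path):
argv = ["ffmpeg", *shlex.split(before_options), "-i", "audio.mp3"]
print(argv)
```

Alternatively, addressing the guess itself rather than hiding the message may be possible with ffmpeg's `-guess_layout_max` input option, but check the behavior against your ffmpeg version first.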


-
Transcode GCS files using ffmpeg
12 August 2020, by hudsonhooper
My ultimate goal is to take video files uploaded to one GCS bucket, transcode them for HLS using App Engine, and then store them in another bucket for streaming. I started with code someone else made which closely fits my use case. Right now I just have it set up to save the files as mp4 to see if it works; I receive no error messages and it logs 'Success' to the console, but when I check GCS the output file it has created is only around a kB (basically just the metadata). I believe the issue has to do with the .pipe(remoteWriteStream, { end: true }); line and something with ffmpeg, because if I instead run the line originStream.pipe(remoteWriteStream, { end: true }); to test the .pipe functionality, I get what I'd expect (a copy of the file at the correct size). I've seen a lot of other people, such as here, use essentially the same method with Cloud Functions and have success. Any idea where I went wrong?


import { Storage } from '@google-cloud/storage';

import ffmpegInstaller from '@ffmpeg-installer/ffmpeg';
import ffmpeg from 'fluent-ffmpeg';

ffmpeg.setFfmpegPath(ffmpegInstaller.path);

export default async function transcode(
  // My try
  name: string,
  bucket: string,
  suffix: string,
  size: string,
): Promise<void> {
  return new Promise((resolve, reject) => {
    const storage = new Storage();

    const originBucket = storage.bucket(bucket);
    const destinationBucket = storage.bucket('storieshls-2e4b1');

    const originFile = originBucket.file(name);
    const originStream = originFile.createReadStream();

    const remoteWriteStream = destinationBucket
      .file(name.replace('.MOV', '.mp4'))
      .createWriteStream({
        metadata: {
          contentType: 'video/mp4', // This could be whatever else you are transcoding to
        },
      });

    // originStream.pipe(remoteWriteStream, { end: true });

    ffmpeg()
      .input(originStream)
      .outputOptions('-c:v copy') // Change these options to whatever suits your needs
      .outputOptions('-c:a aac')
      .outputOptions('-b:a 160k')
      .outputOptions('-f mp4')
      .outputOptions('-preset fast')
      .outputOptions('-movflags frag_keyframe+empty_moov')
      .on('start', (cmdLine) => {
        console.log(`[${suffix}] Started FFmpeg`, cmdLine);
      })
      .on('end', () => {
        console.log(`[${suffix}] Success!`);
        resolve();
      })
      .on('error', (err: Error, stdout, stderr) => {
        console.log(`[${suffix}] Error:`, err.message);
        console.error('stdout:', stdout);
        console.error('stderr:', stderr);
        reject();
      })
      .pipe(remoteWriteStream, { end: true });
  });
}
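For comparison, here is the same pipe-through-ffmpeg pattern reduced to its essentials (sketched in Python rather than Node, purely for illustration; origin_stream and dest_stream are hypothetical file-like objects). The key detail carried over from the code above is -movflags frag_keyframe+empty_moov: regular MP4 needs a seekable output so ffmpeg can rewrite the moov atom at the end, and fragmented MP4 is what makes writing MP4 to a non-seekable pipe possible at all.

```python
import subprocess

# Build the ffmpeg command: read from stdin, write fragmented MP4 to stdout.
args = [
    "ffmpeg",
    "-i", "pipe:0",                           # read the origin stream from stdin
    "-c:v", "copy",                           # pass video through unchanged
    "-c:a", "aac", "-b:a", "160k",            # re-encode audio
    "-movflags", "frag_keyframe+empty_moov",  # fragmented MP4: pipe-safe output
    "-f", "mp4",
    "pipe:1",                                 # write to stdout
]

# Hypothetical wiring (origin_stream / dest_stream would be real streams):
#   proc = subprocess.Popen(args, stdin=origin_stream, stdout=dest_stream)
#   proc.wait()
print(" ".join(args))
```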


-
Tracking granular duration from an ffmpeg stream piped into Nodejs
20 May 2020, by slifty
The Situation



I have a video file that I want to process in Node.js, but I want to run it through ffmpeg first in order to normalize the encoding. For a given data event, I would like to know how "far" into the video the piped stream has progressed, at as close to frame-level granularity as possible.



0.1 second granularity would be fine.



The Code



I am using Node.js to invoke ffmpeg, which takes the video file path and outputs the data to stdout:



const { spawn } = require('child_process')

const ffmpegSettings = [
  '-i', './path/to/video.mp4', // Ingest a file
  '-f', 'mpegts', // Specify the wrapper
  '-' // Output to stdout
]
const ffmpegProcess = spawn('ffmpeg', ffmpegSettings)
const readableStream = ffmpegProcess.stdout
readableStream.on('data', async (data) => {
  readableStream.pause() // Pause the stream to make sure we process in order.

  // Process the data.
  // This is where I want to know the "timestamp" for the data.

  readableStream.resume() // Restart the stream.
})




The Question



How can I efficiently and accurately keep track of how far into the video a given 'data' event represents, keeping in mind that a single 'data' event in this context might not contain enough bytes to represent even a full frame?




Temporary Edit



(I will delete this section once a conclusive answer is identified)



Since posting this question I've done some exploration using ffprobe.



start = () => {
  this.ffmpegProcess = spawn('ffmpeg', this.getFfmpegSettings())
  this.ffprobeProcess = spawn('ffprobe', this.getFfprobeSettings())
  this.inputStream = this.getInputStream()

  this.inputStream.on('data', (rawData) => {
    this.inputStream.pause()
    if (rawData) {
      this.ffmpegProcess.stdin.write(rawData)
    }
    this.inputStream.resume()
  })

  this.ffmpegProcess.stdout.on('data', (mpegtsData) => {
    this.ffmpegProcess.stdout.pause()
    this.ffprobeProcess.stdin.write(mpegtsData)
    console.log(this.currentPts)
    this.ffmpegProcess.stdout.resume()
  })

  this.ffprobeProcess.stdout.on('data', (probeData) => {
    this.ffprobeProcess.stdout.pause()
    const lastLine = probeData.toString().trim().split('\n').slice(-1).pop()
    const lastPacket = lastLine.split(',')
    const pts = lastPacket[4]
    console.log(`FFPROBE: ${pts}`)
    this.currentPts = pts
    this.ffprobeProcess.stdout.resume()
  })

  logger.info(`Starting ingestion from ${this.constructor.name}...`)
}

/**
 * Returns an ffmpeg settings array for this ingestion engine.
 *
 * @return {String[]} A list of ffmpeg command line parameters.
 */
getFfmpegSettings = () => [
  '-loglevel', 'info',
  '-i', '-',
  '-f', 'mpegts',
  // '-vf', 'fps=fps=30,signalstats,metadata=print:key=lavfi.signalstats.YDIF:file=\'pipe\\:3\'',
  '-',
]

/**
 * Returns an ffprobe settings array for this ingestion engine.
 *
 * @return {String[]} A list of ffprobe command line parameters.
 */
getFfprobeSettings = () => [
  '-f', 'mpegts',
  '-i', '-',
  '-print_format', 'csv',
  '-show_packets',
]




This pipes the ffmpeg output into ffprobe and uses that to "estimate" where in the stream the processing has gotten. It starts off wildly inaccurate, but after about 5 seconds of processed video, ffprobe and ffmpeg are producing data at a similar pace.



This is a hack, but a step towards the granularity I want. It may be that I need an mpegts parser / chunker that can run on the ffmpeg output directly in NodeJS.
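The packet-parsing step of the hack can be sketched on its own (in Python for illustration; the same logic as the ffprobe 'data' handler above). With -print_format csv -show_packets, each ffprobe line looks roughly like packet,video,0,143730,1.597000,... and, going by the asker's lastPacket[4] indexing, field 4 is pts_time; the exact field order depends on the ffprobe version, so that index is an assumption to verify locally.

```python
def latest_pts(chunk: bytes):
    """Return the pts_time of the last packet line in an ffprobe CSV chunk.

    Assumes field 4 of each "packet,..." line is pts_time, matching the
    indexing used in the Node hack above.
    """
    lines = chunk.decode("utf-8", errors="replace").strip().split("\n")
    last = lines[-1].split(",")
    if len(last) > 4 and last[0] == "packet":
        return float(last[4])
    return None  # incomplete or unrecognized line


# Hypothetical sample of two CSV packet lines from ffprobe:
sample = b"packet,video,0,128430,1.427000,...\npacket,video,0,143730,1.597000,...\n"
print(latest_pts(sample))  # -> 1.597
```

A real implementation would also need to buffer partial lines, since a 'data' event can end mid-line.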



The output of the above is something along the lines of:



undefined (repeated around 100x as I assume ffprobe needs more data to start)
FFPROBE: 1.422456 // Note that this represents around 60 ffprobe messages sent at once.
FFPROBE: 1.933867
1.933867 // These lines represent mpegts data sent from ffmpeg, and the latest pts reported by ffprobe
FFPROBE: 2.388989
2.388989
FFPROBE: 2.728578
FFPROBE: 2.989811
FFPROBE: 3.146544
3.146544
FFPROBE: 3.433889
FFPROBE: 3.668989
FFPROBE: 3.802400
FFPROBE: 3.956333
FFPROBE: 4.069333
4.069333
FFPROBE: 4.426544
FFPROBE: 4.609400
FFPROBE: 4.870622
FFPROBE: 5.184089
FFPROBE: 5.337267
5.337267
FFPROBE: 5.915522
FFPROBE: 6.104700
FFPROBE: 6.333478
6.333478
FFPROBE: 6.571833
FFPROBE: 6.705300
6.705300
FFPROBE: 6.738667
6.738667
FFPROBE: 6.777567
FFPROBE: 6.772033
6.772033
FFPROBE: 6.805400
6.805400
FFPROBE: 6.829811
FFPROBE: 6.838767
6.838767
FFPROBE: 6.872133
6.872133
FFPROBE: 6.882056
FFPROBE: 6.905500
6.905500