Other articles (47)

  • Permissions overridden by plugins

    27 April 2010

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (5152)

  • Prevent "guessing mono" when using FFmpeg?

    25 July 2020, by Daemitrous

    This extension works perfectly fine; I'm just concerned about the message it prints every time it's called.

    The message printed when the script is called:

    Guessed Channel Layout for Input Stream #0.0 : mono

    This is an extension to the Discord bot I'm working on, using discord.py.

    Script:

    from discord.ext import commands
    from pyttsx3 import init  # TTS audio converter

    import discord

    ffmpeg_path = r"C:\Users\evand\Documents\Misc\ffmpeg\bin\ffmpeg.exe"
    audio_path = r"C:\Users\evand\Documents\Misc\Discord\suhj_bot\Test_mp3\AudioFileForDiscord.mp3"

    engine = init()  # Initializes the speech TTS engine with the "sapi5" configuration


    def TTS_save(text, audio=engine):
        # Converts and saves the user's text as a TTS audio file (.mp3)
        audio.save_to_file(text, audio_path)
        audio.runAndWait()


    class Voice(commands.Cog):

        def __init__(self, bot):
            self.bot = bot

        @commands.command(
            name="v",
            description="The voice command:\n\ng.v <content>"
        )
        async def voice_command(self, ctx, *, args):
            VoiceClient = ctx.voice_client

            if VoiceClient is None:
                VoiceClient = await ctx.author.voice.channel.connect()  # Bot joins the user's voice channel

            TTS_save(args)  # Creates the TTS audio file
            VoiceClient.play(discord.FFmpegOpusAudio(audio_path, executable=ffmpeg_path))  # Plays the audio file through the bot (.mp3, ffmpeg.exe)

            return


    def setup(bot):
        bot.add_cog(Voice(bot))

    Is there any way to prevent the program from guessing mono? I'd like to keep my code clean and as close to error-free as possible.
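
    That line is informational output from ffmpeg itself, printed when the input file carries no channel-layout metadata. A minimal sketch of one way to silence it, assuming discord.py's FFmpegOpusAudio forwards a before_options string ahead of the "-i" input argument; audio_path, ffmpeg_path and VoiceClient are the names from the script above, and the change would live inside voice_command():

    # Sketch (not tested against this bot): swap the play() call in voice_command().
    source = discord.FFmpegOpusAudio(
        audio_path,                          # the TTS .mp3 written by TTS_save()
        executable=ffmpeg_path,
        before_options="-loglevel error",    # hide info/warning lines such as the layout guess
        # before_options="-guess_layout_max 0",  # alternative: stop ffmpeg guessing a layout at all
    )
    VoiceClient.play(source)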

  • Transcode GCS files using ffmpeg

    12 August 2020, by hudsonhooper

    My ultimate goal is to take video files uploaded to one GCS bucket, transcode them for HLS using App Engine, then store them in another bucket for streaming. I started with code someone else made which closely fits my use case. Right now I just have it set up to save the files as mp4 to see if it works; I receive no error messages and it logs 'Success' to the console, but when I check GCS the output file it has created is only around a KB (basically it's just the metadata). I believe the issue has to do with the .pipe(remoteWriteStream, { end: true }); line and something with ffmpeg, because if I instead run the line originStream.pipe(remoteWriteStream, {end: true}); to test the .pipe functionality I get what I'd expect (a copy of the file at the correct size). I've seen a lot of other people, such as here, use essentially the same method, just with Cloud Functions, and have success. Any idea where I went wrong?

    import { Storage } from '@google-cloud/storage';

    import ffmpegInstaller from '@ffmpeg-installer/ffmpeg';
    import ffmpeg from 'fluent-ffmpeg';

    ffmpeg.setFfmpegPath(ffmpegInstaller.path);

    export default async function transcode(
      // My try
      name: String,
      bucket: String,
      suffix: String,
      size: String,
    ): Promise<void> {
      return new Promise((resolve, reject) => {
        const storage = new Storage();

        const originBucket = storage.bucket(bucket);
        const destinationBucket = storage.bucket('storieshls-2e4b1');

        const originFile = originBucket.file(name);
        const originStream = originFile.createReadStream();

        const remoteWriteStream = destinationBucket.file(name.replace(".MOV", ".mp4")).createWriteStream({
          metadata: {
            contentType: 'video/mp4', // This could be whatever else you are transcoding to
          },
        });

        // originStream.pipe(remoteWriteStream, {end: true});

        ffmpeg()
          .input(originStream)
          .outputOptions('-c:v copy') // Change these options to whatever suits your needs
          .outputOptions('-c:a aac')
          .outputOptions('-b:a 160k')
          .outputOptions('-f mp4')
          .outputOptions('-preset fast')
          .outputOptions('-movflags frag_keyframe+empty_moov')
          .on('start', cmdLine => {
            console.log(`[${suffix}] Started FFMpeg`, cmdLine);
          })
          .on('end', () => {
            console.log(`[${suffix}] Success!.`);
            resolve();
          })
          .on('error', (err: Error, stdout, stderr) => {
            console.log(`[${suffix}] Error:`, err.message);
            console.error('stdout:', stdout);
            console.error('stderr:', stderr);

            reject();
          })
          .pipe(remoteWriteStream, { end: true });
      });
    }
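
    One hedged suggestion rather than a confirmed diagnosis: a .MOV piped in as a stream is not seekable, and MOV/MP4 demuxing generally needs to seek to the moov atom, so ffmpeg can end up emitting little more than container metadata while fluent-ffmpeg still reports success. A common workaround is to stage the file on local disk so ffmpeg reads it by path. The sketch below reuses the buckets and options from the question; the function name (transcodeViaTempFiles) and the temp-file handling are illustrative.

    import { Storage } from '@google-cloud/storage';
    import ffmpegInstaller from '@ffmpeg-installer/ffmpeg';
    import ffmpeg from 'fluent-ffmpeg';
    import os from 'os';
    import path from 'path';
    import fs from 'fs';

    ffmpeg.setFfmpegPath(ffmpegInstaller.path);

    // Sketch: download the source to a temp file so ffmpeg gets a seekable input,
    // transcode to a second temp file, then upload the result.
    export async function transcodeViaTempFiles(name: string, bucket: string): Promise<void> {
      const storage = new Storage();
      const srcPath = path.join(os.tmpdir(), path.basename(name));
      const dstPath = srcPath.replace(/\.MOV$/i, '.mp4');

      // 1. Stage the original locally (App Engine standard typically only allows writes under /tmp).
      await storage.bucket(bucket).file(name).download({ destination: srcPath });

      // 2. Transcode from path to path so ffmpeg can seek freely.
      await new Promise<void>((resolve, reject) => {
        ffmpeg(srcPath)
          .outputOptions(['-c:v copy', '-c:a aac', '-b:a 160k'])
          .on('end', () => resolve())
          .on('error', (err: Error) => reject(err))
          .save(dstPath);
      });

      // 3. Upload the finished file and clean up.
      await storage.bucket('storieshls-2e4b1').upload(dstPath, {
        destination: name.replace('.MOV', '.mp4'),
        metadata: { contentType: 'video/mp4' },
      });
      fs.unlinkSync(srcPath);
      fs.unlinkSync(dstPath);
    }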

  • Tracking granular duration from an ffmpeg stream piped into Nodejs

    20 May 2020, by slifty

    The Situation

    I have a video file that I want to process in Node.js, but I want to run it through ffmpeg first in order to normalize the encoding. As I receive the data for a given 'data' event, I would like to be able to know how "far" into the video the piped stream has progressed, at as close to frame-level granularity as possible.

    0.1 second granularity would be fine.

    The Code

    I am using Node.js to invoke ffmpeg with the video file path and then output the data to stdout:

    const { spawn } = require('child_process')

    const ffmpegSettings = [
        '-i', './path/to/video.mp4',       // Ingest a file
        '-f', 'mpegts',                    // Specify the wrapper
        '-'                                // Output to stdout
    ]
    const ffmpegProcess = spawn('ffmpeg', ffmpegSettings)
    const readableStream = ffmpegProcess.stdout
    readableStream.on('data', async (data) => {
        readableStream.pause() // Pause the stream to make sure we process in order.

        // Process the data.
        // This is where I want to know the "timestamp" for the data.

        readableStream.resume() // Restart the stream.
    })

    The Question

    How can I efficiently and accurately keep track of how far into the video a given 'data' event represents? Keep in mind that a single 'data' event in this context might not even contain enough data to represent a full frame.

    Temporary Edit

    (I will delete this section once a conclusive answer is identified)

    Since posting this question I've done some exploration using ffprobe.

    start = () => {
        this.ffmpegProcess = spawn('ffmpeg', this.getFfmpegSettings())
        this.ffprobeProcess = spawn('ffprobe', this.getFfprobeSettings())
        this.inputStream = this.getInputStream()

        this.inputStream.on('data', (rawData) => {
            this.inputStream.pause()
            if(rawData) {
                this.ffmpegProcess.stdin.write(rawData)
            }
            this.inputStream.resume()
        })

        this.ffmpegProcess.stdout.on('data', (mpegtsData) => {
            this.ffmpegProcess.stdout.pause()
            this.ffprobeProcess.stdin.write(mpegtsData)
            console.log(this.currentPts)
            this.ffmpegProcess.stdout.resume()
        })

        this.ffprobeProcess.stdout.on('data', (probeData) => {
            this.ffprobeProcess.stdout.pause()
            const lastLine = probeData.toString().trim().split('\n').slice(-1).pop()
            const lastPacket = lastLine.split(',')
            const pts = lastPacket[4]
            console.log(`FFPROBE: ${pts}`)
            this.currentPts = pts
            this.ffprobeProcess.stdout.resume()
        })

        logger.info(`Starting ingestion from ${this.constructor.name}...`)
    }

    /**
     * Returns an ffmpeg settings array for this ingestion engine.
     *
     * @return {String[]} A list of ffmpeg command line parameters.
     */
    getFfmpegSettings = () => [
        '-loglevel', 'info',
        '-i', '-',
        '-f', 'mpegts',
        // '-vf', 'fps=fps=30,signalstats,metadata=print:key=lavfi.signalstats.YDIF:file=\'pipe\\:3\'',
        '-',
    ]

    /**
     * Returns an ffprobe settings array for this ingestion engine.
     *
     * @return {String[]} A list of ffprobe command line parameters.
     */
    getFfprobeSettings = () => [
        '-f', 'mpegts',
        '-i', '-',
        '-print_format', 'csv',
        '-show_packets',
    ]

    This pipes the ffmpeg output into ffprobe and uses that to "estimate" where in the stream the processing has gotten. It starts off wildly inaccurate, but after about 5 seconds of processed video, ffprobe and ffmpeg are producing data at a similar pace.

    This is a hack, but a step towards the granularity I want. It may be that I need an mpegts parser / chunker that can run on the ffmpeg output directly in Node.js; a sketch of that idea follows the output below.

    The output of the above is something along the lines of:

    undefined (repeated around 100x as I assume ffprobe needs more data to start)
    FFPROBE: 1.422456 // Note that this represents around 60 ffprobe messages sent at once.
    FFPROBE: 1.933867
    1.933867 // These lines represent mpegts data sent from ffmpeg, and the latest pts reported by ffprobe
    FFPROBE: 2.388989
    2.388989
    FFPROBE: 2.728578
    FFPROBE: 2.989811
    FFPROBE: 3.146544
    3.146544
    FFPROBE: 3.433889
    FFPROBE: 3.668989
    FFPROBE: 3.802400
    FFPROBE: 3.956333
    FFPROBE: 4.069333
    4.069333
    FFPROBE: 4.426544
    FFPROBE: 4.609400
    FFPROBE: 4.870622
    FFPROBE: 5.184089
    FFPROBE: 5.337267
    5.337267
    FFPROBE: 5.915522
    FFPROBE: 6.104700
    FFPROBE: 6.333478
    6.333478
    FFPROBE: 6.571833
    FFPROBE: 6.705300
    6.705300
    FFPROBE: 6.738667
    6.738667
    FFPROBE: 6.777567
    FFPROBE: 6.772033
    6.772033
    FFPROBE: 6.805400
    6.805400
    FFPROBE: 6.829811
    FFPROBE: 6.838767
    6.838767
    FFPROBE: 6.872133
    6.872133
    FFPROBE: 6.882056
    FFPROBE: 6.905500
    6.905500

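    A minimal sketch of that mpegts-parser idea in Node.js: reassemble the stdout chunks into 188-byte MPEG-TS packets, and whenever a packet starts a PES payload that carries a PTS, decode it (90 kHz ticks) to get the stream position at roughly frame granularity. The sketch assumes ffmpeg's mpegts output arrives packet-aligned from the first byte and leaves out the pause/resume handling; ptsFromTsPacket and the logging are illustrative, not a drop-in for the ingestion engine above.

    const { spawn } = require('child_process')

    const TS_PACKET_SIZE = 188
    const PTS_CLOCK_HZ = 90000 // MPEG-TS timestamps count 90 kHz ticks

    // Returns the PTS (in seconds) carried by a TS packet that starts a PES payload,
    // or null if this packet does not carry one.
    function ptsFromTsPacket(pkt) {
        if (pkt[0] !== 0x47) return null                         // lost packet alignment
        const payloadUnitStart = (pkt[1] & 0x40) !== 0
        const adaptationFieldControl = (pkt[3] & 0x30) >> 4
        if (!payloadUnitStart || !(adaptationFieldControl & 0x1)) return null
        let offset = 4
        if (adaptationFieldControl & 0x2) offset += 1 + pkt[4]   // skip the adaptation field
        // PES start code prefix 0x000001, then the PTS flag in the PES header.
        if (pkt[offset] !== 0 || pkt[offset + 1] !== 0 || pkt[offset + 2] !== 1) return null
        if ((pkt[offset + 7] & 0x80) === 0) return null          // this PES header has no PTS
        const b = pkt.slice(offset + 9, offset + 14)
        const pts = ((b[0] >> 1) & 0x07) * 0x40000000            // bits 32..30 (multiply: value exceeds 2^31)
                  + (b[1] << 22)                                 // bits 29..22
                  + ((b[2] >> 1) & 0x7f) * 0x8000                // bits 21..15
                  + (b[3] << 7)                                  // bits 14..7
                  + ((b[4] >> 1) & 0x7f)                         // bits 6..0
        return pts / PTS_CLOCK_HZ
    }

    const ffmpegProcess = spawn('ffmpeg', ['-i', './path/to/video.mp4', '-f', 'mpegts', '-'])
    let leftover = Buffer.alloc(0)
    let latestPtsSeconds = 0

    ffmpegProcess.stdout.on('data', (data) => {
        // A TS packet can straddle two 'data' events, so carry the remainder over.
        const buffer = Buffer.concat([leftover, data])
        let i = 0
        while (i + TS_PACKET_SIZE <= buffer.length) {
            const pts = ptsFromTsPacket(buffer.slice(i, i + TS_PACKET_SIZE))
            if (pts !== null) latestPtsSeconds = pts
            i += TS_PACKET_SIZE
        }
        leftover = buffer.slice(i)
        console.log(`chunk of ${data.length} bytes, stream position ~${latestPtsSeconds.toFixed(3)}s`)
    })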