
Other articles (54)

  • Authorizations overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • List of compatible distributions

    26 April 2011, by

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution name    Version name            Version number
    Debian               Squeeze                 6.x.x
    Debian               Wheezy                  7.x.x
    Debian               Jessie                  8.x.x
    Ubuntu               The Precise Pangolin    12.04 LTS
    Ubuntu               The Trusty Tahr         14.04

    If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • Automated installation script of MediaSPIP

    25 April 2011, by

    To overcome the difficulties caused mainly by the installation of server-side software dependencies, an "all-in-one" installation script written in bash was created to facilitate this step on a server running a compatible Linux distribution.
    To use it, you must have SSH access to your server and a root account, which the script uses to install the dependencies. Contact your provider if you do not have these.
    The documentation on using this installation script is available here.
    The code of this (...)

On other sites (11405)

  • HTML 5 currentTime accuracy

    1 September 2015, by Daniel

    I’m working on a project where we are using the value from the video element’s currentTime property to perform processing on the server backend using ffmpeg. I’ve come across an issue where the video element seems to report a time code that is slightly different from the time code ffmpeg needs to access the correct point in the video.

    So, for instance, in Firefox, if the currentTime property reports that the current video time is 26.83, I might find that the frame I really want ended at 26.72, so if I use that time to extract a frame with ffmpeg on the server I get the next frame instead of the one I wanted.

    The amount of offset seems to vary slightly in different parts of the video and in different videos, but it is usually close to one tenth of a second in Firefox. In Chrome, currentTime actually seems to be ahead of the correct time by about five hundredths of a second. It’s more difficult to figure out the offset in IE, because the place where the frame shifts seems to change as I enter different time codes looking for the exact point where the frame changes.

    I’m pretty sure the time as used by ffmpeg is the correct time. It seems to agree more closely with other video editing software such as Adobe Premiere.

    Any ideas on what could be causing this behavior?

    JS to get currentTime:

    AVideo.prototype.getCurrentTime = function()
    {
      return this.videoElement[0].currentTime;
    };

    Resulting ffmpeg command:

    ffmpeg -y -i '/tmp/myVideo.mov' -vframes 1 -ss 2.4871 -f image2   -y '/tmp/myFrame.jpg' 2>&1
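
    One workaround worth experimenting with (not part of the original question) is to snap the reported time down to a frame boundary before passing it to ffmpeg; it assumes the video’s frame rate is known and constant, and getFrameAlignedTime() is a hypothetical helper written in the same style as getCurrentTime() above:

    AVideo.prototype.getFrameAlignedTime = function(frameRate)
    {
      // Snap the browser-reported time down to the start of the frame it
      // falls in, so a small reporting offset cannot push the ffmpeg seek
      // into the next frame. frameRate (e.g. 25 or 29.97) is assumed to
      // be known; this is an untested sketch, not the asker's code.
      var t = this.videoElement[0].currentTime;
      return Math.floor(t * frameRate) / frameRate;
    };

    Whether this actually cancels out the per-browser offsets described above would still need to be checked in Firefox, Chrome and IE separately.
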
  • Emscripten and Web Audio API

    29 April 2015, by Multimedia Mike — HTML5

    Ha! They said it couldn’t be done! Well, to be fair, I said it couldn’t be done. Or maybe that I just didn’t have any plans to do it. But I did it: I used Emscripten to cross-compile a CPU-intensive C/C++ codebase (Game Music Emu) to JavaScript. Then I leveraged the Web Audio API to output audio and visualize the audio using an HTML5 canvas.

    Want to see it in action? Here’s a demonstration. Perhaps I will be able to expand the reach of my Game Music site when I can drop the odd Native Client plugin. This JS-based player works great on Chrome, Firefox, and Safari across desktop operating systems.

    But this endeavor was not without its challenges.

    Programmatically Generating Audio
    First, I needed to figure out the proper method for procedurally generating audio and making it available to output. Generally, there are 2 approaches for audio output:

    1. Sit in a loop and generate audio, writing it out via a blocking audio call
    2. Implement a callback that the audio system can invoke in order to generate more audio when needed

    Option #1 is not a good idea for an event-driven language like JavaScript. So I hunted through the rather flexible Web Audio API for a method that allowed something like approach #2. Callbacks are everywhere, after all.

    I eventually found what I was looking for with the ScriptProcessorNode. It seems to be intended to apply post-processing effects to audio streams. A program registers a callback which is passed configurable chunks of audio for processing. I subverted this by simply overwriting the input buffers with the audio generated by the Emscripten-compiled library.
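
    Not the post’s actual code (the author describes overwriting the node’s input buffers), but a minimal sketch of the general ScriptProcessorNode pattern, writing synthesized samples into the output buffers; generateSamples() is a hypothetical wrapper around the Emscripten-compiled library:

    var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

    // 4096-sample buffers, 2 input channels, 2 output channels.
    var node = audioCtx.createScriptProcessor(4096, 2, 2);

    node.onaudioprocess = function(e) {
      var left  = e.outputBuffer.getChannelData(0);
      var right = e.outputBuffer.getChannelData(1);
      // Fill the output buffers with freshly synthesized samples.
      generateSamples(left, right, left.length);
    };

    node.connect(audioCtx.destination);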

    The ScriptProcessorNode interface is fairly well documented and works across multiple browsers. However, it is already marked as deprecated:

    Note: As of the August 29, 2014 Web Audio API spec publication, this feature has been marked as deprecated, and is soon to be replaced by Audio Workers.

    Despite being marked as deprecated for 8 months as of this writing, there exists no appreciable amount of documentation for the successor API, these so-called Audio Workers.

    Vive la web standards!

    Visualize This
    The next problem was visualization. The Web Audio API provides the AnalyserNode API for accessing both time and frequency domain data from a running audio stream (and fetching the data as either unsigned bytes or floating-point numbers, depending on what the application needs). This is a pretty neat idea. I just wish I could make the API work. The simple demos I could find worked well enough. But when I wired up a prototype to fetch and visualize the time-domain wave, all I got were center-point samples (an array of values that were all 128).

    Even if the API did work, I’m not sure if it would have been that useful. Per my reading of the AnalyserNode API, it only returns data as a single channel. Why would I want that? My application supports audio with 2 channels. I want 2 channels of data for visualization.
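
    For reference, a minimal sketch of the AnalyserNode usage being described, reusing audioCtx and node from the earlier sketch and routing the synthesis node through the analyser instead of straight to the destination:

    var analyser = audioCtx.createAnalyser();

    // Tap the graph: synthesis node -> analyser -> speakers.
    node.connect(analyser);
    analyser.connect(audioCtx.destination);

    // Fetch the time-domain waveform as unsigned bytes; 128 is the
    // center ("silence") value, which is what the prototype kept
    // getting back for every sample.
    var timeData = new Uint8Array(analyser.fftSize);
    analyser.getByteTimeDomainData(timeData);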

    How To Synchronize
    So I rolled my own visualization solution by maintaining a circular buffer of audio samples as they were generated. Then, requestAnimationFrame() provided the rendering callbacks. The next problem was audio-visual sync. But that certainly is not unique to this situation: maintaining proper A/V sync is a perennial puzzle in real-time multimedia programming. I was able to glean enough timing information from the environment to achieve reasonable A/V sync (verify for yourself).
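
    Roughly, that roll-your-own approach amounts to something like the following sketch (hypothetical names throughout; drawWaveform() stands in for the actual canvas drawing code):

    // Ring buffer that the audio callback fills as it generates samples.
    var ring = new Float32Array(8192);
    var writePos = 0;

    function storeSamples(samples) {
      for (var i = 0; i < samples.length; i++) {
        ring[writePos] = samples[i];
        writePos = (writePos + 1) % ring.length;
      }
    }

    // Rendering is decoupled from audio generation: each animation frame
    // just draws whatever the ring buffer currently holds.
    function draw() {
      drawWaveform(ring, writePos); // hypothetical canvas routine
      requestAnimationFrame(draw);
    }
    requestAnimationFrame(draw);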

    Pause/Resume
    The next problem I encountered with the Web Audio API was pause/resume facilities, or the lack thereof. For all its bells and whistles, the API’s omission of such facilities seems most unusual, as if the design philosophy was, “Once the user starts playing audio, they will never, ever have cause to pause the audio.”

    Then again, I must understand that mine is not a use case that the design committee considered, and I’m subverting the API in ways the designers didn’t intend. Typical use cases for this API seem to include such workloads as:

    • Downloading, decoding, and playing back a compressed audio stream via the network, applying effects, and visualizing the result
    • Accessing microphone input, applying effects, visualizing, encoding and sending the data across the network
    • Firing sound effects in a gaming application
    • MIDI playback via JavaScript (this honestly amazes me)

    What they did not seem to have in mind was what I am trying to do: synthesize audio in real time.

    I implemented pause/resume in a sub-par manner: pausing has the effect of generating 0 values when the ScriptProcessorNode callback is invoked, while also canceling any animation callbacks. Thus, audio output is technically still occurring; it’s just that the audio is pure silence. It’s not a great solution because CPU is still being used.
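
    In ScriptProcessorNode terms, that works out to something like the following, continuing the earlier sketch (again, not the post’s actual code; generateSamples() is the same hypothetical synthesis wrapper):

    var paused = false;

    node.onaudioprocess = function(e) {
      var left  = e.outputBuffer.getChannelData(0);
      var right = e.outputBuffer.getChannelData(1);
      if (paused) {
        // The node keeps firing; we just hand back silence, which is
        // why CPU is still being spent while "paused".
        for (var i = 0; i < left.length; i++) {
          left[i] = 0;
          right[i] = 0;
        }
        return;
      }
      generateSamples(left, right, left.length);
    };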

    Future Work
    I have a lot more player libraries to port to this new system. But I think I have a good framework set up.

  • Combining audio and video streams in ffmpeg in nodejs

    10 July 2015, by LouisK

    This is a similar question to Merge WAV audio and WebM video but I’m attempting to deal with two streams instead of static files. It’s kind of a multi-part question.

    It may be as much an ffmpeg question as a Node.js question (or more). I’ve never used ffmpeg before and haven’t done a ton of streaming/piping.

    I’m using Muaz Khan’s MediaStreamCapture (an expansion of RecordRTC) in conjunction with Socket.io-stream to stream media from the browser to the server. From WebKit this delivers independent streams for audio and video, which I’d like to combine in a single transcoding pass.

    Looking at FFmpeg’s docs it looks like it’s 100% capable of using and merging these streams simultaneously.

    Looking at these NPM modules:

    https://www.npmjs.com/package/fluent-ffmpeg and https://www.npmjs.com/package/stream-transcoder

    Fluent-ffmpeg’s docs suggest it can take a stream and a bunch of static files as inputs, while stream-transcoder only takes a single stream.

    I see this as a use case that just wasn’t built in (or needed) by the module developers, but I wanted to see if anyone had used either module (or another one) to accomplish this before I get on with forking and trying to add the functionality.

    Looking at the source of stream-transcoder, it’s clearly set up to use only one input, but it may not be that hard to add a second. From the ffmpeg perspective, is adding a second input as simple as adding an extra source stream and an extra ’-i’ in the command? (I think yes, but I can foresee a lot of time burned trying to figure this out through Node.)

    This section of stream-transcoder is where the work is really being done:

    /* Spawns child and sets up piping */
    Transcoder.prototype._exec = function(a) {

       var self = this;

       if ('string' == typeof this.source) a = [ '-i', this.source ].concat(a);
       else a = [ '-i', '-' ].concat(a);

       var child = spawn(FFMPEG_BIN_PATH, a, {
           cwd: os.tmpdir()
       });
       this._parseMetadata(child);

       child.stdin.on('error', function(err) {
           try {
               if ('object' == typeof self.source) self.source.unpipe(this.stdin);
           } catch (e) {
               // Do nothing
           }
       });

       child.on('exit', function(code) {
           if (!code) self.emit('finish');
           else self.emit('error', new Error('FFmpeg error: ' + self.lastErrorLine));
       });

       if ('object' == typeof this.source) this.source.pipe(child.stdin);

       return child;

    };

    I’m not quite experienced enough with piping and child processes to see off the bat where I’d add the second source. Could I simply do something along the lines of this.source2.pipe(child.stdin)? How would I go about getting the 2nd stream into the FFmpeg child process?
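
    One approach that is commonly used for this outside of stream-transcoder (a sketch only, untested with these particular streams; stream names, codecs and container are placeholders) is to give the child process an extra pipe on file descriptor 3 and point ffmpeg’s second -i at it:

    var spawn = require('child_process').spawn;

    var args = [
      '-i', 'pipe:0',   // first input: e.g. the WebM video stream on stdin
      '-i', 'pipe:3',   // second input: e.g. the WAV audio stream on fd 3
      '-c', 'copy',     // codec/container choices here are placeholders
      '/tmp/out.mkv'
    ];

    var child = spawn('ffmpeg', args, {
      // stdin, stdout, stderr, plus one extra pipe exposed as fd 3
      stdio: ['pipe', 'pipe', 'pipe', 'pipe']
    });

    videoStream.pipe(child.stdin);     // hypothetical video stream
    audioStream.pipe(child.stdio[3]);  // hypothetical audio stream

    Whether ffmpeg will probe two non-seekable piped inputs cleanly is exactly the kind of thing that needs testing, but this shows where a second stream can physically enter the child process.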