Other articles (89)

  • Organising by category

    17 May 2013, by

    In MédiaSPIP, a section has two names: category and rubrique.
    The various documents stored in MédiaSPIP can be filed under different categories. A category can be created by clicking "publier une catégorie" in the publish menu at the top right (after logging in). A category can itself be filed under another category, which means a tree of categories can be built.
    The next time a document is published, the newly created category will be offered (...)

  • Retrieving information from the master site when installing an instance

    26 November 2010, by

    Purpose
    On the main site, a shared (mutualised) instance is defined by several things: its data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to finalise the creation of the instance;
    It can therefore make good sense to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)

  • Customising categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as a rubrique.
    For a document of type category, the fields offered by default are: Texte
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Descriptif rapide
    It is also in this configuration area that one can specify the (...)

On other sites (10292)

  • FFMpeg process created from Java on CentOS doesn't exit

    21 June 2017, by Donz

    I need to convert a lot of wave files simultaneously, about 300 files in parallel, and new files arrive constantly. I invoke ffmpeg as an external process from my Java 1.8 app, which runs on CentOS. I know that I have to drain the process's error and output streams so that a process created from Java is able to exit.

    My code after several experiments:

       private void ffmpegconverter(String fileIn, String fileOut) {
       String[] command = new String[]{"ffmpeg", "-v", "-8", "-i", fileIn, "-acodec", "pcm_s16le", fileOut};

       Process process = null;
       BufferedReader reader = null;
       try {
           ProcessBuilder pb = new ProcessBuilder(command);
           pb.redirectErrorStream(true);
           process = pb.start();

           // Drain the process's merged stdout/stderr; otherwise the process
           // can hang because nothing reads its buffer
           reader = new BufferedReader(new InputStreamReader(process.getInputStream()));
           String s;
           while ((s = reader.readLine()) != null) {
               log.info(Thread.currentThread().getName() + " with fileIn " + fileIn + " and fileOut " + fileOut + " writes " + s);
               // Each line is only logged; the point is to keep draining the buffer
           }
           log.info(Thread.currentThread().getName() + " ffmpeg process will be waited for");
           if (process.waitFor( 10, TimeUnit.SECONDS )) {
               log.info(Thread.currentThread().getName() + " ffmpeg process exited normally");
           } else {
               log.info(Thread.currentThread().getName() + " ffmpeg process timed out and will be killed");
           }

       } catch (IOException | InterruptedException e) {
           log.error(Thread.currentThread().getName() + " Error during ffmpeg process execution", e);
       } finally {
           if (process != null) {
               if (reader != null) {
                   try {
                       reader.close();
                   } catch (IOException e) {
                       log.error("Error during closing the process streams reader", e);
                   }
               }
               try {
                   process.getOutputStream().close();
               } catch (IOException e) {
                   log.error("Error during closing the process output stream", e);
               }
               process.destroyForcibly();
               log.info(Thread.currentThread().getName() + " ffmpeg process " + process + " must be dead now");
           }
       }
    }

    If I run this code in a separate test, it behaves normally. But in my app I end up with hundreds of RUNNING daemon "process reaper" threads that are waiting for an ffmpeg process to finish. In my real app ffmpeg is started from a timer thread. I also have other activity in separate threads, but I don't think that is the problem. Maximum CPU consumption is about 10%.

    Here is what I usually see in a thread dump:

    "process reaper" #454 daemon prio=10 os_prio=0 tid=0x00007f641c007000 nid=0x5247 runnable [0x00007f63ec063000]
      java.lang.Thread.State: RUNNABLE
       at java.lang.UNIXProcess.waitForProcessExit(Native Method)
       at java.lang.UNIXProcess.lambda$initStreams$3(UNIXProcess.java:289)
       at java.lang.UNIXProcess$$Lambda$32/2113551491.run(Unknown Source)
       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
       at java.lang.Thread.run(Thread.java:745)

    What am I doing wrong?

    UPD:
    My app accepts a lot of connections carrying voice traffic, so I have about 300-500 other "good" threads at any moment. Could that be the reason? Daemon threads have low priority, but I don't believe they really can't get their work done within an hour. Usually it takes a few tens of milliseconds.

    UPD2:
    Here is my synthetic test, which runs fine. I tried it both with the option of spawning a new thread per file and without it, simply calling the run method directly.

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class FFmpegConvert {

       public static void main(String[] args) throws Exception {

           FFmpegConvert f = new FFmpegConvert();
           f.processDir(args[0], args[1], args.length > 2);
       }

       private void processDir(String dirPath, String dirOutPath, boolean isNewThread) {
           File dir = new File(dirPath);
           File dirOut = new File(dirOutPath);
           if(!dirOut.exists()){
               dirOut.mkdir();
           }
           for (int i = 0; i < 1000; i++) {
               for (File f : dir.listFiles()) {
                   try {
                       System.out.println(f.getName());
                       FFmpegRunner fFmpegRunner = new FFmpegRunner(f.getAbsolutePath(), dirOut.getAbsolutePath() + "/" + System.currentTimeMillis() + f.getName());
                       if (isNewThread) {
                           new Thread(fFmpegRunner).start();
                       } else {
                           fFmpegRunner.run();
                       }
                   } catch (Exception e) {
                       e.printStackTrace();
                   }
               }
           }
       }

       class FFmpegRunner implements Runnable {
           private String fileIn;
           private String fileOut;

           FFmpegRunner(String fileIn, String fileOut) {
               this.fileIn = fileIn;
               this.fileOut = fileOut;
           }

           @Override
           public void run() {
               try {
                   ffmpegconverter(fileIn, fileOut);
               } catch (Exception e) {
                   e.printStackTrace();
               }
           }

           private void ffmpegconverter(String fileIn, String fileOut) throws Exception {
               String[] command = new String[]{"ffmpeg", "-i", fileIn, "-acodec", "pcm_s16le", fileOut};

               Process process = null;
               try {
                   ProcessBuilder pb = new ProcessBuilder(command);
                   pb.redirectErrorStream(true);
                   process = pb.start();

                   // Drain the process's merged stdout/stderr; otherwise the process
                   // can hang because nothing reads its buffer
                   BufferedReader reader =
                           new BufferedReader(new InputStreamReader(process.getInputStream()));
                   String line;
                   while ((line = reader.readLine()) != null) {
                       System.out.println(line);
                       // Each line is only printed; the point is to keep draining the buffer
                   }

                   process.waitFor();
               } catch (IOException | InterruptedException e) {
                   throw e;
               } finally {
                   if (process != null)
                       process.destroy();
               }
           }

       }

    }

    UPD3:
    Sorry, I forgot to mention that I can see all these processes doing their work: they create the converted files, but they still don't exit.

  • Salty Game Music

    31 May 2011, by Multimedia Mike — General

    Have you heard of Google’s Native Client (NaCl) project? Probably not. Basically, it allows native code modules to run inside a browser (where ‘browser’ is defined pretty narrowly as ‘Google Chrome’ in this case). Programs are sandboxed so they aren’t a security menace (or so the whitepapers claim) but are allowed to access a variety of APIs including video and audio. The latter API is significant because sound tends to be forgotten in all the hullabaloo surrounding non-Flash web technologies. At any rate, enjoy NaCl while you can, because I suspect it won’t be around much longer.

    After my recent work upgrading some old music synthesis programs to use more modern audio APIs, I got the idea to try porting the same code to run under NaCl in Chrome (first Nosefart, then Game Music Emu/GME). In this exercise, I met with very limited success. This blog post documents some of the pitfalls in my excursion.

    Infrastructure
    People who know me know that I’m rather partial — to put it gently — to straight-up C vs. C++. The NaCl SDK is heavily skewed towards C++. However, it does provide a Python tool called init_project.py which can create the skeleton of a project and can do so in C with the '-c' option:

    ./init_project.py -c -n saltynosefart
    

    This generates something that can be built using a simple ‘make’. When I added Nosefart’s C files, I learned that the project Makefile has places for project-necessary CFLAGS but does not honor them. The problem is that the generated Makefile includes a broader system Makefile that overrides the CFLAGS in the project Makefile. Going into the system Makefile and changing "CFLAGS =" -> "CFLAGS +=" solves this problem.

    Still, maybe I’m the first person to attempt building something in Native Client, and so the first person to notice this?

    Basic Playback
    At least the process to create an audio-enabled NaCl app is well-documented. Too bad it doesn’t seem to compile as advertised. According to my notes on the matter, I filled in PPP_InitializeModule() with the appropriate boilerplate as outlined in the docs but got a linker error concerning get_browser_interface().

    Plan B: C++
    Obviously, the straight C stuff is very much a second-class citizen in this NaCl setup. Fortunately, there is already that fully functional tone generator example program in the limited samples suite. Plan B is to copy that project and edit it until it accepts Nosefart/GME audio instead of a sine wave.

    The build system assumes all C++ files should have .cc extensions. I have to make some fixes so that it will accept .cpp files (either that, or rename all .cpp to .cc, but that’s not very clean).

    Making Noise
    You’ll be happy to know that I did successfully swap out the tone generator for either Nosefart or GME. Nosefart has a slightly fickle API that requires revving the emulator frame by frame and generating a certain number of audio samples. GME’s API is much easier to work with in this situation — just tell it how many samples it needs to generate and give it a pointer to a buffer. I played NES and SNES music through this ad hoc browser plugin, and I’m confident all the other supported formats would have worked if I went through the bother of converting the music data files into C headers to be included in the NaCl executable binaries (dynamically loading data via the network promised to be a far more challenging prospect reserved for phase 3 of the project).

    Portable?
    I wouldn’t say so. I developed it on Linux and things ran fine there. I tried to run the same binaries on the Windows version of Chrome to no avail. It looks like it wasn’t even loading the .nexe files (NaCl executables).

    Thinking About The (Lack Of A) Future
    As I was working on this project, I noticed that the online NaCl documentation materialized explicit banners warning that my NaCl binaries compiled for Chrome 11 won’t work for Chrome 12 and that I need to code to the newly-released 0.3 SDK version. Not a fuzzy feeling. I also don’t feel good that I’m working from examples using bleeding edge APIs that feature deprecation as part of their naming convention, e.g., pp::deprecated::ScriptableObject().

    Ever-changing API + minimal API documentation + API that only works in one browser brand + requiring end user to explicitly enable feature = … well, that’s why I didn’t bother to release any showcase pertaining to this little experiment. Would have been neat, but I strongly suspect that this is yet another one of those APIs that Google decides to deprecate soon.


  • Emscripten and Web Audio API

    29 April 2015, by Multimedia Mike — HTML5

    Ha! They said it couldn’t be done! Well, to be fair, I said it couldn’t be done. Or maybe that I just didn’t have any plans to do it. But I did it– I used Emscripten to cross-compile a CPU-intensive C/C++ codebase (Game Music Emu) to JavaScript. Then I leveraged the Web Audio API to output audio and visualize the audio using an HTML5 canvas.

    Want to see it in action? Here’s a demonstration. Perhaps I will be able to expand the reach of my Game Music site when I can drop the odd Native Client plugin. This JS-based player works great on Chrome, Firefox, and Safari across desktop operating systems.

    But this endeavor was not without its challenges.

    Programmatically Generating Audio
    First, I needed to figure out the proper method for procedurally generating audio and making it available for output. Generally, there are 2 approaches to audio output:

    1. Sit in a loop and generate audio, writing it out via a blocking audio call
    2. Implement a callback that the audio system can invoke in order to generate more audio when needed

    Option #1 is not a good idea for an event-driven language like JavaScript. So I hunted through the rather flexible Web Audio API for a method that allowed something like approach #2. Callbacks are everywhere, after all.

    I eventually found what I was looking for with the ScriptProcessorNode. It seems to be intended to apply post-processing effects to audio streams. A program registers a callback which is passed configurable chunks of audio for processing. I subverted this by simply overwriting the input buffers with the audio generated by the Emscripten-compiled library.
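
    As an illustration of that trick, here is a minimal TypeScript sketch; synthesize is an assumed stand-in for the Emscripten-compiled generator call, and none of this is the post’s actual code:

       // Sketch: a ScriptProcessorNode whose callback overwrites the node's
       // buffers with freshly synthesized samples instead of processing input.
       declare function synthesize(left: Float32Array, right: Float32Array): void; // hypothetical generator

       const audioContext = new AudioContext();
       // 4096-sample chunks, 1 (ignored) input channel, 2 output channels
       const processor = audioContext.createScriptProcessor(4096, 1, 2);

       processor.onaudioprocess = (event: AudioProcessingEvent) => {
         // Fill both channels with newly generated audio.
         synthesize(event.outputBuffer.getChannelData(0),
                    event.outputBuffer.getChannelData(1));
       };

       processor.connect(audioContext.destination); // the browser now pulls audio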

    The ScriptProcessorNode interface is fairly well documented and works across multiple browsers. However, it is already marked as deprecated:

    Note: As of the August 29, 2014 Web Audio API spec publication, this feature has been marked as deprecated, and is soon to be replaced by Audio Workers.

    Despite being marked as deprecated for 8 months as of this writing, there exists no appreciable amount of documentation for the successor API, these so-called Audio Workers.

    Vive la web standards!

    Visualize This
    The next problem was visualization. The Web Audio API provides the AnalyserNode API for accessing both time and frequency domain data from a running audio stream (and fetching the data as either unsigned bytes or floating-point numbers, depending on what the application needs). This is a pretty neat idea. I just wish I could make the API work. The simple demos I could find worked well enough. But when I wired up a prototype to fetch and visualize the time-domain wave, all I got were center-point samples (an array of values that were all 128).

    Even if the API did work, I’m not sure if it would have been that useful. Per my reading of the AnalyserNode API, it only returns data as a single channel. Why would I want that? My application supports audio with 2 channels. I want 2 channels of data for visualization.
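
    For reference, the AnalyserNode wiring described above looks roughly like the following TypeScript sketch; sourceNode is an assumed stand-in for whatever feeds the audio graph, not code from the post:

       // Sketch: tap the audio graph with an AnalyserNode and read the
       // time-domain wave each frame.
       declare const sourceNode: AudioNode; // hypothetical: whatever feeds the graph

       const analyserCtx = new AudioContext();
       const analyser = analyserCtx.createAnalyser();
       analyser.fftSize = 2048;

       sourceNode.connect(analyser);              // source -> analyser -> output
       analyser.connect(analyserCtx.destination);

       // Fetch the current wave as unsigned bytes; silence reads back as the
       // center value 128 for every sample, the symptom described above.
       const timeDomain = new Uint8Array(analyser.fftSize);
       analyser.getByteTimeDomainData(timeDomain);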

    How To Synchronize
    So I rolled my own visualization solution by maintaining a circular buffer of audio when samples were being generated. Then, requestAnimationFrame() provided the rendering callbacks. The next problem was audio-visual sync. But that certainly is not unique to this situation– maintaining proper A/V sync is a perennial puzzle in real-time multimedia programming. I was able to glean enough timing information from the environment to achieve reasonable A/V sync (verify for yourself).
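
    In sketch form, that arrangement looks something like the following (all names here are assumptions, not the post’s code): the audio callback appends each generated chunk to a ring buffer, and a requestAnimationFrame loop draws the most recent window of samples.

       // Sketch: ring buffer written by the audio callback, read by the
       // requestAnimationFrame render loop.
       const RING_SIZE = 32768;
       const ring = new Float32Array(RING_SIZE);
       let writePos = 0;

       // Call this from onaudioprocess with each freshly generated chunk.
       function recordSamples(chunk: Float32Array): void {
         for (let i = 0; i < chunk.length; i++) {
           ring[writePos] = chunk[i];
           writePos = (writePos + 1) % RING_SIZE;
         }
       }

       const canvas = document.querySelector('canvas')!;
       const g = canvas.getContext('2d')!;

       function draw(): void {
         g.clearRect(0, 0, canvas.width, canvas.height);
         g.beginPath();
         for (let x = 0; x < canvas.width; x++) {
           // Walk back from the write position so the newest samples land at
           // the right edge; samples are in [-1, 1].
           const idx = (writePos - canvas.width + x + RING_SIZE) % RING_SIZE;
           const y = (1 - ring[idx]) * canvas.height / 2;
           if (x === 0) g.moveTo(x, y); else g.lineTo(x, y);
         }
         g.stroke();
         requestAnimationFrame(draw);
       }
       requestAnimationFrame(draw);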

    Pause/Resume
    The next problem I encountered with the Web Audio API was pause/resume facilities, or the lack thereof. For all its bells and whistles, the API’s omission of such facilities seems most unusual, as if the design philosophy was, “Once the user starts playing audio, they will never, ever have cause to pause the audio.”

    Then again, I must understand that mine is not a use case that the design committee considered, and I’m subverting the API in ways the designers didn’t intend. Typical use cases for this API seem to include such workloads as:

    • Downloading, decoding, and playing back a compressed audio stream via the network, applying effects, and visualizing the result
    • Accessing microphone input, applying effects, visualizing, encoding and sending the data across the network
    • Firing sound effects in a gaming application
    • MIDI playback via JavaScript (this honestly amazes me)

    What they did not seem to have in mind was what I am trying to do– synthesize audio in real time.

    I implemented pause/resume in a sub-par manner: pausing has the effect of generating 0 values when the ScriptProcessorNode callback is invoked, while also canceling any animation callbacks. Thus, audio output is technically still occurring; it’s just that the audio is pure silence. It’s not a great solution because CPU is still being used.
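
    A sketch of that scheme, reusing the assumed names from the earlier sketches (processor, synthesize, draw): a flag checked inside the audio callback zero-fills the buffers, and pausing also cancels the animation loop.

       // Sketch: pausing keeps the node running but emits silence and stops
       // the render loop; resuming undoes both.
       declare const processor: ScriptProcessorNode;  // the playback node from the earlier sketch
       declare function synthesize(left: Float32Array, right: Float32Array): void;
       declare function draw(): void;                 // the rAF render loop

       let paused = false;
       let rafHandle = 0;

       processor.onaudioprocess = (event: AudioProcessingEvent) => {
         const left = event.outputBuffer.getChannelData(0);
         const right = event.outputBuffer.getChannelData(1);
         if (paused) {
           left.fill(0);   // audio output technically continues...
           right.fill(0);  // ...but it is pure silence
           return;
         }
         synthesize(left, right);
       };

       function pause(): void {
         paused = true;
         cancelAnimationFrame(rafHandle); // also cancel the animation callbacks
       }

       function resume(): void {
         paused = false;
         rafHandle = requestAnimationFrame(draw);
       }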

    Future Work
    I have a lot more player libraries to port to this new system. But I think I have a good framework set up.