Advanced search

Media (0)

Keyword: - Tags -/serveur

No media matching your criteria is available on the site.

Other articles (79)

  • Enhancing it visually

    10 April 2011

    MediaSPIP is based on a system of themes and templates ("squelettes"). Templates define how information is laid out on the page, defining a specific use of the platform, while themes provide the overall graphic design.
    Anyone can propose a new graphic theme or template and make it available to the community.

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance of the shared-hosting farm on a regular basis. Combined with a system Cron on the central site of the farm, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or later. If needed, contact the administrator of your MediaSPIP to find out.

On other sites (7920)

  • How to make video from images using Java + x264; cross-platform solution required

    19 October 2014, by Shashank Tulsyan

    I have made software which records my entire day into a video.
    Example video: https://www.youtube.com/watch?v=ITZYMMcubdw (Note: >16hrs compressed into 2mins, video speed too high, might cause epilepsy :P )

    The approach I use right now is Avisynth + x264 + Java.
    This is very efficient. The video for the entire day is created in 3-4 minutes and reduced to a size of 40-50 MB. This is perfect; the only issue is that this solution is not cross-platform.
    Does anyone have a better idea?

    I tried using Java-based x264 libraries, but

    1. They are slow as hell
    2. The video output size is too big
    3. The video quality is not satisfactory.

    Some websites suggest a command such as:

    x264.exe --crf 18 --fps 24 --input-res 1920x1080 --input-csp rgb -o "T:\crf18.mkv" "T:\___BBB\big_buck_bunny_%05d.png"

    There are 2 problems with this approach.

    1. As far as I know, x264 does not accept an image sequence as input, while ffmpeg does
    2. The input images are not named in sequence such as image01.png, image02.png, etc. They are named as timestamp_as_longinteger.png. So in order to allow x264 to accept these images as input, I have to rename all of them (I create a symbolic link for every image in a new folder; a rough sketch of that linking step is shown below). This approach is again unsatisfactory, because I need more flexibility in selecting/unselecting the files which get converted to a video. Right now my approach is a hack.
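
    A minimal sketch of that linking step, assuming a hypothetical source folder of timestamp-named PNGs and a scratch folder for the sequential names (the folder layout and the img%05d.png pattern are illustrative, not part of the actual tool):

     import java.io.IOException;
     import java.nio.file.Files;
     import java.nio.file.Path;
     import java.util.Comparator;
     import java.util.stream.Stream;

     public class SequentialLinker {
        // Link timestamp-named PNGs under sequential names (img00001.png, img00002.png, ...)
        // so that a pattern such as img%05d.png can pick them up in order.
        // Note: on Windows, creating symbolic links may require elevated privileges.
        public static void linkSequentially(Path source, Path target) throws IOException {
            Files.createDirectories(target);
            try (Stream<Path> files = Files.list(source)) {
                Path[] pngs = files
                        .filter(p -> p.getFileName().toString().endsWith(".png"))
                        .sorted(Comparator.comparing((Path p) -> p.getFileName().toString()))
                        .toArray(Path[]::new);
                for (int i = 0; i < pngs.length; i++) {
                    Path link = target.resolve(String.format("img%05d.png", i + 1));
                    Files.createSymbolicLink(link, pngs[i].toAbsolutePath());
                }
            }
        }
     }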

    The best solution is x264, but I am not sure how I can send it an image sequence from Java, especially when the images are not named in a sequential fashion.


    BTW, the purpose of making the video is going back in time and finding out how time was spent/wasted.
    The software is aware of what the user is doing, so using this I can find out (visually) how a class evolved with time, and how much time I spent on a particular class/package/module/project/customer. The granularity right now is up to the class level; I wish to take it to the function level. The software is called jitendriya.

    Here is a sample graph


    Here is one solution:
    How does one encode a series of images into H264 using the x264 C API?

    But this is for C. If I have to do the same in Java, and in a cross-platform fashion, I will have to resort to JNA/JNI. JNA might have a significant performance hit; JNI would be more work.
    FFmpeg also looks like a nice alternative, but I am still not satisfied by any of these solutions, looking at the pros and cons.
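
    For what it's worth, a rough sketch of the FFmpeg route, assuming an ffmpeg binary is available on the PATH: the image2pipe demuxer reads concatenated PNGs from stdin, so the files never need to be renamed and Java can stream them in whatever order it selects (the class and method names here are illustrative only):

     import java.io.IOException;
     import java.io.OutputStream;
     import java.nio.file.Files;
     import java.nio.file.Path;
     import java.util.List;

     public class FfmpegPipeEncoder {
        // Feed arbitrarily named PNGs to ffmpeg over stdin, so no sequential
        // renaming or symbolic links are needed.
        public static void encode(List<Path> frames, Path output)
                throws IOException, InterruptedException {
            Process ffmpeg = new ProcessBuilder(
                    "ffmpeg", "-y",
                    "-f", "image2pipe",        // demux concatenated images from stdin
                    "-framerate", "24",
                    "-c:v", "png",             // codec of the piped input images
                    "-i", "-",                 // read image data from stdin
                    "-c:v", "libx264", "-crf", "18",
                    output.toString())
                    .redirectErrorStream(true)
                    .redirectOutput(ProcessBuilder.Redirect.INHERIT)
                    .start();
            try (OutputStream stdin = ffmpeg.getOutputStream()) {
                for (Path frame : frames) {
                    Files.copy(frame, stdin); // raw PNG bytes, in the chosen order
                }
            }
            ffmpeg.waitFor();
        }
     }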


    Solution adopted:

    package weeklyvideomaker;

    import java.awt.AWTException;
    import java.awt.Rectangle;
    import java.awt.Robot;
    import java.awt.Toolkit;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.util.Calendar;
    import java.util.LinkedList;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import neembuu.release1.util.StreamGobbler;
    import org.shashaank.activitymonitor.ScreenCaptureHandler;
    import org.shashaank.jitendriya.JitendriyaParams;

    /**
    *
    * @author Shashank
    */
    public class DirectVideoScreenHandler implements ScreenCaptureHandler {
       private final JitendriyaParams  jp;

       private String extension="264";
       private boolean lossless=false;
       private String fps="24/1";

       private Process p = null;
       private Rectangle r1;
       private Robot r;

       private int currentDay;

       private static final String[]weeks={"sun","mon","tue","wed","thu","fri","sat"};

       public DirectVideoScreenHandler(JitendriyaParams jp) {
           this.jp = jp;
       }

       public String getExtension() {
           return extension;
       }

       public void setExtension(String extension) {
           this.extension = extension;
       }

       public boolean isLossless() {
           return lossless;
       }

       public void setLossless(boolean lossless) {
           this.lossless = lossless;
       }

       public String getFps() {
           return fps;
       }

       public void setFps(String fps) {
           this.fps = fps;
       }

       private static int getday(){
           return Calendar.getInstance().get(Calendar.DAY_OF_WEEK) - 1;
       }

       public void make()throws IOException,AWTException{
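           // Launch one x264 process for the day; frames are later piped to its stdin as raw RGB.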
           currentDay = getday();
           File week = jp.getWeekFolder();

           String destinationFile = week+"\\videos\\"+weeks[currentDay]+"_"+System.currentTimeMillis()+"_direct."+extension;

           r = new Robot();
           r1 = getScreenSize();

           ProcessBuilder pb = makeProcess(destinationFile, 0, r1.width, r1.height);

           p = pb.start();
           StreamGobbler out = new StreamGobbler(p.getInputStream(), "out");
           StreamGobbler err = new StreamGobbler(p.getErrorStream(), "err");
           out.start();err.start();
       }

       private static Rectangle getScreenSize(){
           return new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
       }

       private void screenShot(OutputStream os)throws IOException{        
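           // Capture one frame, convert the packed int pixels to 24-bit RGB bytes
           // (matching --input-csp rgb), and write them to the encoder's stdin.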
           BufferedImage bi = r.createScreenCapture(r1);
           int[]intRawData = ((java.awt.image.DataBufferInt)
                   bi.getRaster().getDataBuffer()).getData();
           byte[]rawData = new byte[intRawData.length*3];
           for (int i = 0; i < intRawData.length; i++) {
               int rgb = intRawData[i];
               rawData[ i*3 + 0 ] = (byte) (rgb >> 16);
               rawData[ i*3 + 1 ] = (byte) (rgb >> 8);
               rawData[ i*3 + 2 ] = (byte) (rgb);
           }
           os.write(rawData);
       }

       private ProcessBuilder makeProcess(String destinationFile, int numberOfFrames,
               int width, int height){
           LinkedList<String> commands = new LinkedList<>();
           commands.add("\""+encoderPath()+"\"");
           if(true){
               commands.add("-");
               if(lossless){
                   commands.add("--qp");
                   commands.add("0");
               }
               commands.add("--keyint");
               commands.add("240");
               commands.add("--sar");
               commands.add("1:1");
               commands.add("--output");
               commands.add("\""+destinationFile+"\"");
               if(numberOfFrames>0){
                   commands.add("--frames");
                   commands.add(String.valueOf(numberOfFrames));
               }else{
                   commands.add("--stitchable");
               }
               commands.add("--fps");
               commands.add(fps);
               commands.add("--input-res");
               commands.add(width+"x"+height);
               commands.add("--input-csp");
               commands.add("rgb");//i420
           }
           return new ProcessBuilder(commands);
       }

       private String encoderPath(){
           return jp.getToolsPath()+File.separatorChar+"x264_64.exe";
       }

       @Override public void run() {
           try {
               if(p==null){
                   make();
               }
               if(currentDay!=getday()){// day changed
                   destroy();
                   return;
               }
               if(!r1.equals(getScreenSize())){// screensize changed
                   destroy();
                   return;
               }
               screenShot(p.getOutputStream());
           } catch (Exception ex) {
               Logger.getLogger(DirectVideoScreenHandler.class.getName()).log(Level.SEVERE, null, ex);
           }
       }

       private void destroy()throws Exception{
           p.getOutputStream().flush();
           p.getOutputStream().close();
           p.destroy();
           p = null;
       }

    }

    package weeklyvideomaker;

    import org.shashaank.jitendriya.JitendriyaParams;

    /**
    *
    * @author Shashank
    */
    public class DirectVideoScreenHandlerTest {
       public static void main(String[] args)throws Exception {
           JitendriyaParams  jp = new JitendriyaParams.Builder()
                   .setToolsPath("F:\\GeneralProjects\\JReminder\\development_environment\\tools")
                   .setOsDependentDataFolderPath("J:\\jt_data")
                   .build();
           DirectVideoScreenHandler w = new DirectVideoScreenHandler(jp);
           w.setExtension("264");
           w.setFps("24/1");
           w.setLossless(false);
           w.make();

           for (int i = 0; ; i++) {
               w.run();
               Thread.sleep(1000);
           }
       }
    }
  • Progress with rtc.io

    12 August 2014, by silvia

    At the end of July, I gave a presentation about WebRTC and rtc.io at the WDCNZ Web Dev Conference in beautiful Wellington, NZ.

    webrtc_talk

    Putting that talk together reminded me about how far we have come in the last year both with the progress of WebRTC, its standards and browser implementations, as well as with our own small team at NICTA and our rtc.io WebRTC toolbox.

    WDCNZ presentation page5

    One of the most exciting opportunities is still under-exploited: the data channel. When I talked about the above slide and pointed out Bananabread, PeerCDN, Copay, PubNub and also later WebTorrent, that’s where I really started to get Web developers excited about WebRTC. They can totally see the shift in paradigm to peer-to-peer applications away from the server-based architecture of the current Web.

    Many were also excited to learn more about rtc.io, our own npm modules based approach to a JavaScript API for WebRTC.

    rtcio_modules

    We believe that the World of JavaScript has reached a critical stage where we can no longer code by copy-and-paste of JavaScript snippets from all over the Web universe. We need a more structured module reuse approach to JavaScript. Node with JavaScript on the back end really only motivated this development. However, we’ve needed it for a long time on the front end, too. One big library (jQuery anyone?) that does everything that anyone could ever need on the front-end isn’t going to work any longer with the amount of functionality that we now expect Web applications to support. Just look at the insane growth of npm compared to other module collections:

    Packages per day across popular platforms (Shamelessly copied from: http://blog.nodejitsu.com/npm-innovation-through-modularity/)

    For those that – like myself – found it difficult to understand how to tap into the sheer power of npm modules as a front-end developer, simply use browserify. npm modules are prepared following the CommonJS module definition spec. Browserify works natively with that and “compiles” all the dependencies of an npm module into a single bundle.js file that you can use on the front end through a script tag as you would in plain HTML. You can learn more about browserify and module definitions and how to use browserify.

    For those of you not quite ready to dive in with browserify, we have prepared the rtc module, which exposes the most commonly used packages of rtc.io through an “RTC” object from a browserified JavaScript file. You can also directly download the JavaScript file from GitHub.

    Using the rtc.io rtc JS library

    So, I hope you enjoy rtc.io and I hope you enjoy my slides and large collection of interesting links inside the deck, and of course: enjoy WebRTC! Thanks to Damon, Jeff, Cathy, Pete and Nathan – you’re an awesome team!

    On a side note, I was really excited to meet the author of browserify, James Halliday (@substack), at WDCNZ, whose talk on “building your own tools” seemed to take me back to the times when everything was done on the command line. I think James is using Node and the Web in a way that would appeal to a Linux kernel developer. Fascinating!

  • Ffmpeg set duration using node-fluent-ffmpeg

    23 May 2013, by Vprnl

    I'm really new to the world of ffmpeg, so please excuse me if this is a stupid question.

    I'm using the module Node-fluent-ffmpeg to stream a movie and convert it from avi to webm.
    So far so good (it plays the video), but I'm having trouble passing the duration to the player. My ultimate goal is to be able to skip ahead in the movie, but first the player needs to know how long the video is.

    My code is as follows:

     var stat = fs.statSync(movie);

     var start = 0;
     var end = 0;
     var range = req.header('Range');
     if (range != null) {
         start = parseInt(range.slice(range.indexOf('bytes=') + 6,
             range.indexOf('-')));
         end = parseInt(range.slice(range.indexOf('-') + 1,
             range.length));
     }
     if (isNaN(end) || end == 0) end = stat.size - 1;
     if (start > end) return;

     res.writeHead(206, { // NOTE: a partial http response
         'Connection': 'close',
         'Content-Type': 'video/webm',
         'Content-Length': end - start,
         'Content-Range': 'bytes ' + start + '-' + end + '/' + stat.size,
         'Transfer-Encoding': 'chunked'
     });

     var proc = new ffmpeg({ source: movie, nolog: true, priority: 1, timeout: 15000 })
         .toFormat('webm')
         .withVideoBitrate('1024k')
         .addOptions(['-probesize 900000', '-analyzeduration 0', '-bufsize 14000'])
         .writeToStream(res, function(retcode, error){
             if (!error){
                 console.log('file has been converted succesfully', retcode);
             } else {
                 console.log('file conversion error', error);
             }
         });

    I tried to set the header with a start and an end based on this article: http://delog.wordpress.com/2011/04/25/stream-webm-file-to-chrome-using-node-js/

    I also looked in the FFmpeg documentation and found -t (duration) and -ss.
    But I don't quite know how to convert the byte range to seconds.

    I feel like I'm pretty close to a solution, but my inexperience with the subject matter prevents me from getting it to work. If I'm unclear in any way, please let me know. (I have a tendency to explain things in a fuzzy way.)

    Thanks in advance!