Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (86)

  • Improvements to the base version

    13 September 2013

    A nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below to compare.
    To do this, simply activate the Chosen plugin (Site general configuration > Plugin management), then configure the plugin (Templates > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Custom menus

    14 November 2010

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators fine-tune these menus.
    Menus created when the site is initialised
    By default, three menus are created automatically when the site is initialised: the main menu; identifier: barrenav; this menu is generally inserted at the top of the page after the header block, and its identifier makes it compatible with Zpip-based templates; (...)

  • Specific configuration for PHP5

    4 February 2011

    PHP5 is required; you can install it by following this dedicated tutorial.
    It is recommended to disable safe_mode at first; however, if it is correctly configured and the required binaries are accessible, MediaSPIP should work properly with safe_mode enabled.
    Specific modules
    Certain specific PHP modules must be installed, via your distribution’s package manager or manually: php5-mysql for connectivity with the (...) (a quick loaded-extension check is sketched just below)
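
    As a side note (not part of the article excerpt), a minimal PHP sketch of checking whether such a module is actually loaded:

    <?php
    // Hypothetical quick check; 'mysql' corresponds to the php5-mysql package mentioned above.
    // Add further extension names to the list as required by your MediaSPIP install.
    foreach (array('mysql') as $ext) {
        echo $ext . ': ' . (extension_loaded($ext) ? 'loaded' : 'missing') . PHP_EOL;
    }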

On other sites (6489)

  • Adding ffmpeg OMX codec to Genymotion Android 4.4.2 emulator

    22 April 2016, by photon

    Basic Question:

    Is there a way to add a new audio codec to the Genymotion Android emulator, short of downloading the entire Android source, learning how to build it, and creating my own version of Android?


    Context:

    I have written a Java Android app that acts as an audio renderer, as well as being a DLNA/OpenHome server and client. Think "BubbleUPnP" without video. My primary development platform is Win8.1. The program started as an ActiveState "pure-perl" DLNA MediaServer on Windows, which I then ported to Ubuntu and got working under Android a few years ago. It was pretty funky: all UI was presented through an HTTP server/jquery/jquery-ui, served from an Ubuntu shell running under Android (a trick in itself), serving up HTML pages to Chrome running on the same (Android) device. Besides being "funky", it had a major drawback: it required a valid IP address to work, as I could not figure out how to give Ubuntu a local loopback device for a 127.0.0.1 localhost. I use the app as a "car stereo" on my boat (which is my home), which is often not hooked up to the internet.

    I had a hard time getting started in Android app development because the speed of the Android emulators in Eclipse was horrid, and the ADB drivers did not work from Win8 for the longest time.

    Then one day, about a year ago, I ran into Genymotion (kudos to the authors), and all of a sudden I had a workable Android development environment. So I added a Java implementation of the DLNA server, which then grew into a renderer as well, using Android’s MediaPlayer class; I added the ability to act as a DLNA control point, and more recently also added OpenHome servers and renderers to it.

    In a separate effort, I created a build environment for a program called fpCalc, based on ffmpeg, on a variety of platforms, including Win, Linux, and Android x86, arm, and arm7 devices (bitbucket.org/phorton1/), and ran an extensive series of tests to determine the validity and longevity of fpCalc fingerprints, discovering that the fingerprint changed depending on the version of ffmpeg it was built against. That is a separate topic to be sure, but in the process I learned at least a bit about how to build ffmpeg, as well as Android shared libraries, JNI interfaces, etc.

    So now the Android-Java version of the program has advanced past the old Perl version, and I am debating whether I want to keep trying to build the Perl version (and/or add a wxPerl UI to it).

    One issue that has arisen for me is that the Genymotion emulator does not support WMA decoding, as Android dropped support for WMA due to licensing issues a while back. Yet my music library has a significant number of tunes in WMA files, and I don’t want to "convert" them. My carefully thought-out philosophy is that my program does not modify the contents, tags, or anything else in the original media files that I have accumulated or will receive in the future, instead treating them as "artifacts" worth preserving "as is". No conversion is going to make a file "better" than it was, and I wish to preserve ALL of the original sources for ALL of my music going forward.

    So, I’m thinking, gee, I can build FFMPEG on 7 different platforms, and I see all these references to "OMX FFMPEG Codec Support for Android" on the net, so I’m thinking, "All I need to do is create the OMX Component and somehow get it into Genymotion".

    I have studied up on OMX and OpenMAX IL, seen Michael Chen’s posts, and seen the Stack Overflow questions

    How to make ffmpeg codec componet as OMX component

    and

    Android: How to integrate a decoder to multimedia framework

    and Cedric Fung’s page https://vec.io/posts/use-android-hardware-decoder-with-omxcodec-in-ndk, and Michael Chen’s repository at https://github.com/omxcodec , as well as virtually every other page on the net that mentions any combination of libstagefright, OMX, Genymotion, and FFMPEG.

    (this page would not let me put more than 2 links as I don’t have a "10" reputation, or I would have listed some of the sources I have seen).

    My Linux development environment is an Ubuntu 12.04 vbox running on my Win machine. I have downloaded and run the Android-x86 ISO as a vbox, and IT contains the ffmpeg codecs, but unfortunately it supports neither a wifi interface nor the vbox "guest additions", so it has a really funky mouse. I tried for about 3 days to address those two issues, but in the end I do not feel it is usable for my purposes. I really like the way Genymotion "feels", particularly the mouse support, so I’d like to keep Genymotion as my "Windows Android" virtual device under which I may run my program, and deprecate and stop using my old Perl source,

    except Genymotion does not support WMA files ...


    Several side notes:

    (a) There is no good way to write a single-sourced application in Java that runs natively in Windows AND as an Android app.

    (b) I don’t want to reboot my Windows machine to a "real" Android device just to play my music files. The machine has to stay in Windows as I use it for other things as well.

    (c) I am writing this as my machine is in the 36th hour of downloading the entire AOSP source code base to a partition in my Ubuntu vbox, while I am sitting in a hotel room on a not-so-good internet connection in Panama City, Panama, before I return to my boat in remote Bocas del Toro, Panama, where the internet connection is even worse.

    (d) I did get WMA decoding to work in my app by calling my FFMPEG executable from Java (converting to either WAV/PCM or AAC), but because of limitations in Android’s MediaPlayer it does not work well, particularly for remotely hosted WMA files ... MediaPlayer insists on having the whole file present before it starts to play, which can take several seconds or longer, and I am hoping that by getting a "real" WMA codec underneath MediaPlayer, that problem will just disappear ...


    So, I’m trying to figure this whole mess out. There are a lot of tantalizing clues and suggestions, but what I have found, or at least what I am starting to believe, is that if I want to add a simple WMA audio decoding codec to Android (Genymotion), not only do I have to download basically the ENTIRE AOSP source tree and learn a new set of tools (repo, etc.), but I also have to (be able to) rebuild, from scratch, the entire Android system, especially libstagefright.so, in such a way as to be COMPLETELY compatible with the existing one in Genymotion, while at the same time adding ffmpeg codecs à la Michael Chen’s page.

    And I’m just asking: could it really be that difficult?


    Anyway, this makes me crazy. Is there no way to just build a new component, or at worst a new OMX core, and add it to Genymotion, WITHOUT building all of Android, and preferably based only on the OMX header files? Or do I REALLY have to replace the existing libstagefright.so, which means, basically, rebuilding all of Android ...

    P.S. I thought it would be nice to get this figured out, build it, and then post the installable new FFMPEG codecs someplace for other people to use, so that they don’t also grow warts on their ears and have steam shooting out of their eyeballs while they get old trying to figure it out ...

  • WebRTC predictions for 2016

    17 February 2016, by silvia

    I wrote these predictions in the first week of January and meant to publish them as encouragement to think about where WebRTC still needs some work. I’d like to be able to compare the state of WebRTC in the browser a year from now. Therefore, without further ado, here are my thoughts.

    WebRTC Browser support

    I’m quite optimistic when it comes to browser support for WebRTC. We have seen Edge bring in initial support last year and Apple looking to hire engineers to implement WebRTC. My prediction is that we will see the following developments in 2016:

    • Edge will become interoperable with Chrome and Firefox, i.e. it will publish VP8/VP9 and H.264/H.265 support
    • Firefox of course continues to support both VP8/VP9 and H.264/H.265
    • Chrome will follow the spec and implement H.264/H.265 support (to add to their already existing VP8/VP9 support)
    • Safari will enter the WebRTC space but only with H.264/H.265 support

    Codec Observations

    With Edge and Safari entering the WebRTC space, there will be a larger focus on H.264/H.265. It will help with creating interoperability between the browsers.

    However, since there are so many flavours of H.264/H.265, I expect that when different browsers are used at different endpoints, we will get poor quality video calls because of having to negotiate a common denominator. Certainly, baseline will work interoperably, but better encoding quality and lower bandwidth will only be achieved if all endpoints use the same browser.

    Thus, we will get to the funny situation where we buy ourselves interoperability at the cost of video quality and bandwidth. I’d call that a “degree of interoperability” and not the best possible outcome.

    I’m going to go out on a limb and say that, at this stage, Google is going to strongly consider improving the case for VP8/VP9 by improving its bandwidth adaptability: I think they will buy themselves some SVC capability and make VP9 the best quality codec for live video conferencing. Thus, when Safari eventually follows the standard and also implements VP8/VP9 support, the interoperability win of H.264/H.265 will turn out to be only temporary, overshadowed by the vastly better video quality of VP9.

    The Enterprise Boundary

    Like all video conferencing technology, WebRTC is having a hard time dealing with the corporate boundary: firewalls and proxies get in the way of setting up video connections from within an enterprise to people outside.

    The telco world has come up with the concept of SBCs (session border controllers). SBCs come packed with functionality to deal with security, signalling protocol translation, Quality of Service policing, regulatory requirements, statistics, billing, and even media services like transcoding.

    SBCs are total overkill for a world where a large number of Web applications simply want to add a WebRTC feature – probably mostly to provide a video or audio customer support service, but it could be a live training session with call-in, or an interest group conference call.

    We cannot install a custom SBC solution for every WebRTC service provider in every enterprise. That’s like saying we need a custom Web proxy for every Web server. It doesn’t scale.

    Cloud services thrive on their ability to sell directly to an individual in an organisation on their credit card without that individual having to ask their IT department to put special rules in place. WebRTC will not make progress in the corporate environment unless this is fixed.

    We need a solution that allows all WebRTC services to get through an enterprise firewall and enterprise proxy. I think the WebRTC standards have done pretty well with firewalls and connecting to a TURN server on port 443 will do the trick most of the time. But enterprise proxies are the next frontier.

    What it takes is some kind of media packet forwarding service that sits on the firewall or in a proxy and allows WebRTC media packets through – maybe with some configuration that is necessary in the browsers or the Web app to add this service as another type of TURN server.

    I don’t have a full understanding of the problems involved, but I think such a solution is vital before WebRTC can go mainstream. I expect that this year we will see some clever people coming up with a solution for this and a new type of product will be born and rolled out to enterprises around the world.

    Summary

    So these are my predictions. In summary, they address the key areas where I think WebRTC still has to make progress : interoperability between browsers, video quality at low bitrates, and the enterprise boundary. I’m really curious to see where we stand with these a year from now.

    It’s worth mentioning Philipp Hancke’s tweet reply to my post:

    — we saw some clever people come up with a solution already. Now it needs to be implemented

  • PHP-FFMPEG Error FFMpeg\Exception\RuntimeException

    27 April 2016, by JJ The Second

    I have installed FFMPEG on an Ubuntu 14.04 server and it works fine. I have also installed all dependencies/libraries successfully and don’t have any problem transcoding using Linux command lines. My user/group can now run sudo commands, so there isn’t any issue there. So I started experimenting with a PHP library called PHP-FFMPEG (please click here to view this library on GitHub) and have installed it using Composer in my vendor folder. Please note, I’m using CodeIgniter 3 as the PHP framework, and Composer has installed all dependencies in the vendor folder.

    In my Welcome controller that comes with CodeIgniter, this is how I’m loading FFMPEG:

    public function __construct()
    {
       parent::__construct();
       // include_once('/var/www/vhosts/domain.com/streaming.domain.com/application/libraries/FFMpeg/FFMpeg.php');
       include('/var/www/vhosts/domain.com/streaming.domain.com/vendor/autoload.php');

    }

    and this is my controller method:

    public function preset_one()
    {
        $ffmpeg = FFMpeg\FFMpeg::create([
            'ffmpeg.binaries'  => '/bin/ffmpeg',  // the path to the FFMpeg binary
            'ffprobe.binaries' => '/bin/ffprobe', // the path to the FFProbe binary
            'timeout'          => 0,
        ]);

        $video  = $ffmpeg->open('./media/video.mp4');
        $format = new FFMpeg\Format\Video\X264();

        $format->on('progress', function ($video, $format, $percentage) {
            echo "$percentage % transcoded";
        });

        $format
            ->setKiloBitrate(1000)
            ->setAudioChannels(2)
            ->setAudioKiloBitrate(256);

        $video->save($format, './media/video.avi');
    }

    My media folder has 0777 permissions for files and subdirectories, but when I run this controller this is what I get:

    An uncaught Exception was encountered

    Type: FFMpeg\Exception\RuntimeException

    Message: Encoding failed

    Filename: /var/www/vhosts/domain.com/streaming.domain.com/vendor/php-ffmpeg/php-ffmpeg/src/FFMpeg/Media/Video.php

    Line Number: 168

    Backtrace:

    File: /var/www/vhosts/domain.com/streaming.domain.com/application/controllers/Welcome.php
    Line: 124
    Function: save

    File: /var/www/vhosts/domain.com/streaming.domain.com/index.php
    Line: 315
    Function: require_once

    I’ve done everything to make it work: tried different file formats, small files and even 2 GB files. I’ve also set the timeout to 0 but am still getting this error. I also looked at line 168 in /var/www/vhosts/domain.com/streaming.domain.com/vendor/php-ffmpeg/php-ffmpeg/src/FFMpeg/Media/Video.php, and it seems FFMPEG creates temp folders during the transcoding process, and there are some threads there that I don’t understand.
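
    For what it’s worth, here is a minimal sketch of how the underlying ffmpeg command and its error output could be surfaced, assuming the installed php-ffmpeg version accepts a PSR-3 logger as the second argument to FFMpeg::create and that monolog/monolog is available (the log path is only illustrative):

    <?php
    // Sketch: pass a PSR-3 logger to php-ffmpeg so every ffmpeg/ffprobe invocation
    // (including the stderr of a failed encode) is written to a log file.
    include('/var/www/vhosts/domain.com/streaming.domain.com/vendor/autoload.php');

    $logger = new Monolog\Logger('ffmpeg');
    $logger->pushHandler(new Monolog\Handler\StreamHandler('/tmp/ffmpeg.log', Monolog\Logger::DEBUG));

    $ffmpeg = FFMpeg\FFMpeg::create([
        'ffmpeg.binaries'  => '/bin/ffmpeg',
        'ffprobe.binaries' => '/bin/ffprobe',
        'timeout'          => 0,
    ], $logger);

    $video = $ffmpeg->open('./media/video.mp4');
    $video->save(new FFMpeg\Format\Video\X264(), './media/video.avi');

    Reading the failed command out of that log (or re-running it by hand on the server) usually shows whether the problem is the binary paths, write permissions on the target directory, or an encoder that this particular ffmpeg build does not support.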

    Has anyone experienced the same issue? Would you please help me understand what’s happening and why this is not functioning as it should? Thank you all.