
Other articles (50)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    - implementation costs to be shared between several different projects / individuals
    - rapid deployment of multiple unique sites
    - creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Other interesting software

    13 April 2011, by

    We don’t claim to be the only ones doing what we do, and certainly don’t claim to be the best at it either; we simply try to do it well, and to keep getting better.
    The following list covers software that is more or less similar to MediaSPIP, or that MediaSPIP more or less tries to resemble.
    We don’t know them and haven’t tried them, but you can take a peek.
    Videopress
    Website: http://videopress.com/
    License: GNU/GPL v2
    Source code: (...)

On other sites (7874)

  • Adding ffmpeg OMX codec to Genymotion Android 4.4.2 emulator

    22 April 2016, by photon

    Basic Question:

    Is there a way to add a new audio codec to the Genymotion Android emulator, short of downloading the entire Android source, learning how to build it, and creating my own version of Android?


    Context:

    I have written a Java Android app that acts as an audio renderer, as well as being a DLNA/OpenHome server and client. Think "BubbleUpnp" without video. My primary development platform is Win8.1. The program started as an ActiveState "pure-perl" DLNA MediaServer on Windows, which I then ported to Ubuntu, and which I got working under Android a few years ago. It was pretty funky: all UI was presented through an HTTP server/jquery/jquery-ui, served from an Ubuntu shell running under Android (a trick in itself), serving up HTML pages to Chrome running on the same (Android) device. Besides being "funky", it had a major drawback: it required a valid IP address to work, as I could not figure out how to get Ubuntu to provide a loopback device for a 127.0.0.1 localhost. I use the app as a "car stereo" on my boat (which is my home), which is often not hooked up to the internet.

    I had a hard time getting started in Android app development because the speed of the Android emulators in Eclipse was horrid, and the ADB drivers did not work from Win8 for the longest time.

    Then one day, about a year ago, I ran into Genymotion (kudos to the authors), and all of a sudden I had a workable Android development environment. So I added a Java implementation of the DLNA server, which then grew into a renderer as well, using Android’s MediaPlayer class; I added the ability to act as a DLNA control point, and more recently added OpenHome servers and renderers to it.

    In a separate effort, I created a build environment for a program called fpCalc, based on ffmpeg, on a variety of platforms, including Win, Linux, and Android x86, arm, and arm7 devices (bitbucket.org/phorton1/). I ran an extensive series of tests to determine the validity and longevity of fpCalc fingerprints, discovering that the fingerprint changed based on the version of ffmpeg it was built against (a separate topic, to be sure), but in the process I learned at least a bit about how to build ffmpeg, as well as Android shared libraries, JNI interfaces, etc.

    So now the Android-Java version of the program has advanced past the old perl version, and I am debating whether I want to continue to try to build the perl version (and/or add a wxPerl UI to it).

    One issue that has arisen, for me, is that the Genymotion emulator does not support WMA decoding, as Android dropped support for WMA a while back due to licensing issues. Yet my music library has a significant number of tunes in WMA files, and I don’t want to "convert" them: my carefully thought-out philosophy is that my program does not modify the contents, tags, or anything else in the original media files that I have accumulated, or will receive in the future, treating them instead as "artifacts" worth preserving "as is". No conversion is going to make a file "better" than it was, and I wish to preserve ALL of the original sources for ALL of my music going forward.

    So, I’m thinking, gee, I can build FFMPEG on 7 different platforms, and I see all these references to "OMX FFMPEG Codec Support for Android" on the net, so I’m thinking, "All I need to do is create the OMX Component and somehow get it into Genymotion".

    I have studied up on OMX and OpenMAX IL, seen Michael Chen’s posts, and seen the Stack Overflow questions

    How to make ffmpeg codec componet as OMX component

    and

    Android : How to integrate a decoder to multimedia framework

    and Cedric Fung’s page https://vec.io/posts/use-android-hardware-decoder-with-omxcodec-in-ndk, and Michael Chen’s repository at https://github.com/omxcodec, as well as virtually every other page on the net that mentions any combination of libstagefright, OMX, Genymotion, and FFMPEG.

    (This page would not let me put more than 2 links, as I don’t have a "10" reputation, or I would have listed some of the sources I have seen.)

    My Linux development environment is an Ubuntu 12.04 vbox running on my Windows machine. I have downloaded and run the Android-x86 ISO as a vbox, and IT contains the ffmpeg codecs, but unfortunately it supports neither a wifi interface nor the vbox "guest additions", so it has a really funky mouse. I tried for about 3 days to address those two issues, but in the end I do not feel it is usable for my purposes. I really like the way Genymotion "feels", particularly the mouse support, so I’d like to keep Genymotion as my "windows android" virtual device under which I may run my program, and deprecate and stop using my old perl source,

    except Genymotion does not support WMA files...


    Several side notes :

    (a) There is no good way to write a single sourced application in Java that runs natively in Windows, AND as an Android app.

    (b) I don’t want to reboot my Windows machine to a "real" Android device just to play my music files. The machine has to stay in Windows as I use it for other things as well.

    (c) I am writing this as my machine is in the 36th hour of downloading the entire AOSP source code base to a partition in my Ubuntu vbox, while I sit in a hotel room on a not-so-good internet connection in Panama City, Panama, before I return to my boat in remote Bocas del Toro, Panama, where the internet connection is even worse.

    (d) I did get WMA decoding to work in my app by calling my ffmpeg executable from Java (converting to either WAV/PCM or AAC), but, because of limitations in Android’s MediaPlayer, it does not work well, particularly for remotely hosted WMA files: MediaPlayer insists on having the whole file present before it starts to play, which can take several seconds or longer. I am hoping that by getting a "real" WMA codec underneath MediaPlayer, that problem will just disappear.
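    For reference, the workaround described in (d) - shelling out to an ffmpeg binary to transcode WMA before handing it to MediaPlayer - might look roughly like this. The paths, bitrate, and output name are hypothetical placeholders, not values from the question:

```shell
# Hypothetical sketch of the workaround described above: transcode a WMA file
# to AAC with a standalone ffmpeg binary so Android's MediaPlayer can play it.
# All paths and the bitrate are placeholders, not values from the question.
IN="/sdcard/Music/song.wma"
OUT="/sdcard/cache/song.m4a"
CMD="ffmpeg -y -i $IN -vn -c:a aac -b:a 192k $OUT"

# Only invoke ffmpeg if a binary is actually on the PATH and the input exists;
# -vn drops any video stream, -c:a aac re-encodes the audio as AAC.
if command -v ffmpeg >/dev/null 2>&1 && [ -f "$IN" ]; then
  $CMD
fi
echo "$CMD"
```

    Because MediaPlayer waits for the entire converted file, this extra step is exactly what a codec living inside the framework (the point of the question) would avoid.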


    So, I’m trying to figure this whole mess out. There are a lot of tantalizing clues and suggestions, but what I have found, or at least what I am starting to believe, is that if I want to add a simple WMA audio decoding codec to Android (Genymotion), not only do I have to download, basically, the ENTIRE AOSP source tree and learn a new set of tools (repo, etc.), but I have to (be able to) rebuild, from scratch, the entire Android system, especially libstagefright.so, in such a way as to be COMPLETELY compatible with the existing one in Genymotion, while at the same time adding ffmpeg codecs ala Michael Chen’s page.

    And I’m just asking: is it, could it really be that difficult?


    Anyways, this makes me crazy. Is there no way to just build a new component, or at worst a new OMX core, and add it to Genymotion, WITHOUT building all of Android, and preferably based only on the OMX .h files? Or do I REALLY have to replace the existing libstagefright.so, which means, basically, rebuilding all of Android...

    p.s. I thought it would be nice to get this figured out, build it, and then post the installable new FFMPEG codecs someplace for other people to use, so that they don’t also grow warts on their ears and have steam shooting out of their eyeballs, while they get old trying to figure it out ....

  • avcodec: add a native SMPTE VC-2 HQ encoder

    10 February 2016, by Rostislav Pehlivanov
    avcodec: add a native SMPTE VC-2 HQ encoder
    

    This commit adds a new encoder capable of creating BBC/SMPTE Dirac/VC-2 HQ
    profile files.

    Dirac is a wavelet-based codec created by the BBC a little more than 10
    years ago. Since then, wavelets have mostly gone out of style, as they
    did not provide adequate encoding gains at lower bitrates. Dirac was a
    fully featured video codec equipped with perceptual masking, support for
    most popular pixel formats, interlacing, overlapped-block motion
    compensation, and other features. It found new life after being stripped
    of various features and standardized by the SMPTE as the VC-2 codec, with
    an extra profile added: the HQ profile that this encoder supports.

    The HQ profile was based on the Low-Delay profile previously existing
    in Dirac. The profile forbids DC prediction and arithmetic coding in
    order to focus on high performance and low delay at higher bitrates.
    The standard bitrates for this profile vary, but generally 1:4
    compression is expected (525 Mbps vs the 2200 Mbps for uncompressed
    1080p50). The codec only supports I-frames, hence the high bitrates.

    The structure of this encoder is simple: do a DWT transform on the
    entire image, split it into multiple slices (specified by the user) and
    encode them in parallel. All of the slices are the same size, making
    rate control and threading trivial. Although written in plain C, this
    encoder is capable of 30 frames per second on a 4-core, 8-thread Ivy
    Bridge. A lookup table is used to encode most of the coefficients.
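    The "trivial" rate control that equal-sized slices permit can be sketched in a few lines of shell arithmetic. Only the 525 Mbps and 1080p50 figures come from the commit message; the 32x16 slice grid is a hypothetical user choice:

```shell
# Sketch of the per-slice budget arithmetic implied above: with equal-sized
# slices, the frame budget divides evenly among them. The 32x16 slice grid is
# a hypothetical user choice; 525 Mbps and 50 fps are the figures quoted above.
BITRATE=$((525 * 1000 * 1000))      # 525 Mbit/s target from the commit message
FPS=50                              # 1080p50
SLICES=$((32 * 16))                 # 512 slices (hypothetical grid)

FRAME_BYTES=$((BITRATE / FPS / 8))  # bits/s -> bytes per I-frame
SLICE_BYTES=$((FRAME_BYTES / SLICES))
echo "${FRAME_BYTES} bytes/frame, ${SLICE_BYTES} bytes/slice"
# prints: 1312500 bytes/frame, 2563 bytes/slice
```

    Since every slice gets the same fixed budget, each worker thread can encode its slices independently, which is what makes the parallel encoding described above straightforward.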

    No code was used from the GSoC encoder from 2007 except for the 2
    transform functions in diracenc_transforms.c. All other code was written
    from scratch.

    This encoder outperforms the other existing implementations in quality,
    usability and features. The others do not support 4-level transforms or
    64x64 blocks (slices), which greatly increase compression.

    As previously said, the codec is meant for broadcasting, hence support
    for non-broadcast image widths, heights, bit depths, aspect ratios,
    etc. is limited by the "level". Although this codec supports a few
    chroma subsamplings (420, 422, 444), signalling those is generally
    outside the specifications of the level used (3), and the reference
    decoder will outright refuse to read any image with such a flag
    signalled (it only supports 1920x1080 yuv422p10). However, most
    implementations will happily read files with alternate dimensions,
    framerates and formats signalled.

    Therefore, in order to encode files other than 1080p50 yuv422p10le, you
    need to provide an "-strict -2" argument to the command line. The FFmpeg
    decoder will happily read any files made with non-standard parameters,
    dimensions and subsamplings, and so will other implementations. IMO this
    should be "-strict -1", but I’ll leave that up for discussion.
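    A minimal invocation matching the description above might look like the following; the input/output filenames are placeholders, and only the "-strict -2" requirement comes from the commit message:

```shell
# Hypothetical sketch: encode a non-1080p50 source with the native VC-2
# encoder. Per the commit message, anything other than 1080p50 yuv422p10le
# requires "-strict -2". Filenames are placeholders.
CMD="ffmpeg -i input.mkv -c:v vc2 -strict -2 output.mov"

# Run only if an ffmpeg binary is actually available and the input exists.
if command -v ffmpeg >/dev/null 2>&1 && [ -f input.mkv ]; then
  $CMD
fi
echo "$CMD"
```

    For a standard 1080p50 yuv422p10le source, the "-strict -2" flag would not be needed.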

    There is still plenty of stuff to implement; for instance, 5 more
    wavelet transforms are in the specs and supported by the decoder.

    The encoder can be lossless, given a high enough bitrate.

    Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>

    • [DH] libavcodec/Makefile
    • [DH] libavcodec/allcodecs.c
    • [DH] libavcodec/vc2enc.c
    • [DH] libavcodec/vc2enc_dwt.c
    • [DH] libavcodec/vc2enc_dwt.h
    • [DH] libavcodec/version.h
  • cuvid: Add MIT licenced nvcuvid headers from Video SDK 7.0

    21 September 2016, by Philip Langdale
    cuvid: Add MIT licenced nvcuvid headers from Video SDK 7.0
    

    For unknown reasons, the only accurately descriptive version of
    cuviddec.h is in the Video SDK: the one in CUDA 7.5 lacks the vp8
    PICPARAMS, and the vp9 struct definition is inaccurate. The CUDA 8 RC
    includes an ancient version of this file from many, many years ago.

    However, the one in the Video SDK is modified to work through a
    dynamic link mechanism which we don’t really want to use, so the
    next change will modify the files to just declare functions in
    the normal way.

    I’ve split the changes so it’s clear to see what changed between
    the original files and ones that work for us.

    • [DH] compat/cuda/cuviddec.h
    • [DH] compat/cuda/nvcuvid.h