Advanced search

Media (1)

Keyword: - Tags -/e-book

Other articles (42)

  • Configurable image and logo sizes

    9 February 2011, by

    In many places on the site, logos and images are resized to fit the slots defined by the themes. All of these sizes, which can vary from one theme to another, can be defined directly in the theme, sparing the user from having to reconfigure them manually after changing the site’s appearance.
    These image sizes are also available in the MediaSPIP Core specific configuration. The maximum size of the site logo in pixels, one can (...)

  • Mediabox: opening images in the maximum space available to the user

    8 February 2011, by

    The display of images is constrained by the width allotted by the site’s design (which depends on the theme in use). They are therefore shown at a reduced size. To take advantage of all the space available on the user’s screen, it is possible to add a feature that displays the image in a multimedia box appearing above the rest of the content.
    To do this, the "Mediabox" plugin must be installed.
    Configuring the multimedia box
    As soon as (...)

  • Adding notes and captions to images

    7 February 2011, by

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying, and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

On other sites (3830)

  • Fast green screen video processing on android device

17 March 2015, by Si-N

I have written an iOS app that takes two video sources: one with a moving character on a green screen, and any other video. The program then uses the GPUImage framework to apply a chroma key shader via OpenGL ES 2 and merges each pair of frames (so the bottom frame now shows through wherever the green pixels were), outputting the result to a new video file. This happens very quickly, faster than real time.

I have now been tasked with porting the app to Android. I thought it would be fairly straightforward, but after doing some research I think I was wrong. There is an Android port of GPUImage, but it does not handle video at the moment. Digging further, I have come up with a very basic idea.

I was wondering if you think this approach is feasible:

Convert one video file to match the resolution and type of the other video, using ffmpeg or the JavaCV wrappers.

Read each video frame by frame using ffmpeg, since MediaMetadataRetriever is very slow, and convert the frames into some RGB format. Use a shader to apply the chroma key effect so the two frames are merged.

Use ffmpeg to output the result to a new file.

This sounds slow, but if it sounds feasible I will try it out. I am not at all sure how to make the two videos’ resolutions, bitrates, etc. match: one video will be fixed at 1280 * 720, while the other will come from the camera on the device and so will vary. A single ffmpeg invocation might handle both the scaling and the keying, as sketched below. Also, I think using ffmpeg means using the NDK, which is a whole world of pain I wanted to avoid.
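    Here is that rough sketch (written as a plain Python subprocess call just to illustrate the filter graph; the file names, the 0x00FF00 key colour, and the similarity/blend tolerances are assumptions I would still need to tune):

    `import subprocess

    # Sketch: scale the variable camera video to 1280x720 to match the fixed
    # green-screen clip, key out the green, and overlay the keyed foreground
    # on top of the scaled background.
    subprocess.run([
        "ffmpeg",
        "-i", "camera.mp4",       # hypothetical variable-resolution camera video
        "-i", "greenscreen.mp4",  # hypothetical fixed 1280x720 green-screen clip
        "-filter_complex",
        "[0:v]scale=1280:720[bg];"
        "[1:v]chromakey=0x00FF00:0.15:0.05[fg];"
        "[bg][fg]overlay=shortest=1[out]",
        "-map", "[out]",
        "output.mp4",
    ], check=True)`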

    I have a headache thinking about it. Any advice would be greatly appreciated.

  • WebRTC predictions for 2016

17 February 2016, by silvia

    I wrote these predictions in the first week of January and meant to publish them as encouragement to think about where WebRTC still needs some work. I’d like to be able to compare the state of WebRTC in the browser a year from now. Therefore, without further ado, here are my thoughts.

    WebRTC Browser support

I’m quite optimistic when it comes to browser support for WebRTC. We have seen Edge bring in initial support last year, and Apple is looking to hire engineers to implement WebRTC. My prediction is that we will see the following developments in 2016:

    • Edge will become interoperable with Chrome and Firefox, i.e. it will publish VP8/VP9 and H.264/H.265 support
    • Firefox of course continues to support both VP8/VP9 and H.264/H.265
    • Chrome will follow the spec and implement H.264/H.265 support (to add to their already existing VP8/VP9 support)
    • Safari will enter the WebRTC space but only with H.264/H.265 support

    Codec Observations

    With Edge and Safari entering the WebRTC space, there will be a larger focus on H.264/H.265. It will help with creating interoperability between the browsers.

    However, since there are so many flavours of H.264/H.265, I expect that when different browsers are used at different endpoints, we will get poor quality video calls because of having to negotiate a common denominator. Certainly, baseline will work interoperably, but better encoding quality and lower bandwidth will only be achieved if all endpoints use the same browser.

    Thus, we will get to the funny situation where we buy ourselves interoperability at the cost of video quality and bandwidth. I’d call that a “degree of interoperability” and not the best possible outcome.

I’m going to go out on a limb and say that, at this stage, Google is going to strongly consider improving the case for VP8/VP9 by improving its bandwidth adaptability: I think they will buy themselves some SVC capability and make VP9 the best quality codec for live video conferencing. Thus, when Safari eventually follows the standard and also implements VP8/VP9 support, the interoperability win of H.264/H.265 will prove only temporary, overshadowed by the vastly better video quality of VP9.

    The Enterprise Boundary

Like all video conferencing technology, WebRTC is having a hard time dealing with the corporate boundary: firewalls and proxies get in the way of setting up video connections from within an enterprise to people outside.

The telco world has come up with the concept of SBCs (session border controllers). SBCs come packed with functionality to deal with security, signalling protocol translation, Quality of Service policing, regulatory requirements, statistics, billing, and even media services like transcoding.

SBCs are total overkill for a world where a large number of Web applications simply want to add a WebRTC feature – probably mostly to provide a video or audio customer support service, but it could be a live training session with call-in, or an interest group conference call.

    We cannot install a custom SBC solution for every WebRTC service provider in every enterprise. That’s like saying we need a custom Web proxy for every Web server. It doesn’t scale.

    Cloud services thrive on their ability to sell directly to an individual in an organisation on their credit card without that individual having to ask their IT department to put special rules in place. WebRTC will not make progress in the corporate environment unless this is fixed.

We need a solution that allows all WebRTC services to get through an enterprise firewall and enterprise proxy. I think the WebRTC standards have done pretty well with firewalls: connecting to a TURN server on port 443 will do the trick most of the time. But enterprise proxies are the next frontier.
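    To make the port-443 trick concrete before moving on, here is a minimal sketch of that configuration, written with the aiortc Python library purely for illustration (in the browser it is the iceServers field passed to RTCPeerConnection); the TURN URL and credentials are placeholders:

    `from aiortc import RTCConfiguration, RTCIceServer, RTCPeerConnection

    # TURN over TCP on port 443 looks like ordinary HTTPS traffic to most
    # firewalls, which is why it usually gets through the enterprise boundary.
    config = RTCConfiguration(iceServers=[
        RTCIceServer(
            urls="turn:turn.example.com:443?transport=tcp",  # placeholder server
            username="demo-user",                            # placeholder credentials
            credential="demo-pass",
        ),
    ])
    pc = RTCPeerConnection(configuration=config)`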

What it takes is some kind of media packet forwarding service that sits on the firewall or in a proxy and lets WebRTC media packets through – maybe with some configuration in the browsers or the Web app to add this service as another type of TURN server.

    I don’t have a full understanding of the problems involved, but I think such a solution is vital before WebRTC can go mainstream. I expect that this year we will see some clever people coming up with a solution for this and a new type of product will be born and rolled out to enterprises around the world.

    Summary

    So these are my predictions. In summary, they address the key areas where I think WebRTC still has to make progress : interoperability between browsers, video quality at low bitrates, and the enterprise boundary. I’m really curious to see where we stand with these a year from now.

It’s worth mentioning Philipp Hancke’s tweet reply to my post:

    — we saw some clever people come up with a solution already. Now it needs to be implemented 🙂


  • Zipping Conda Environment Breaks Audioread's Backend (Python/Pyspark)

25 October 2017, by Tim

I have previously built pyspark environments using conda to package all dependencies and ship them to all the nodes at runtime. Here’s how I create the environment:

    `conda/bin/conda create -p conda_env --copy -y python=2  \
    numpy scipy ffmpeg gcc libsndfile gstreamer pygobject audioread librosa`

    `zip -r conda_env.zip conda_env`

Then, after sourcing conda_env and running the pyspark shell, I can successfully execute:

    `import librosa
    y, sr = librosa.load("test.m4a")`

Note that without the environment sourced, this script results in an error, as ffmpeg/gstreamer are NOT installed locally on my machine.

Submitting a script to the cluster results in a librosa.load error that traces back to audioread, indicating the backend (either gstreamer or ffmpeg) can no longer be found in the zipped environment. The stack trace is below:

Submit:

    `PYSPARK_PYTHON=./NODE/conda_env/bin/python spark-submit --verbose \
           --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=./NODE/conda_env/bin/python \
           --conf spark.yarn.appMasterEnv.PYTHON_EGG_CACHE=/tmp \
           --conf spark.executorEnv.PYTHON_EGG_CACHE=/tmp \
           --conf spark.yarn.executor.memoryOverhead=1024 \
           --conf spark.hadoop.validateOutputSpecs=false \
           --conf spark.driver.cores=5 \
           --conf spark.driver.maxResultSize=0 \
           --master yarn --deploy-mode cluster --queue production \
           --num-executors 20 --executor-cores 5 --executor-memory 40G \
           --driver-memory 20G --archives conda_env.zip#NODE \
           --jars /data/environments/sqljdbc41.jar \
           script.py`

Trace:

    `Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
     File "/mnt/yarn/usercache/user/appcache/application_1506634200253_39889/container_1506634200253_39889_01_000003/pyspark.zip/pyspark/worker.py", line 172, in main
       process()
     File "/mnt/yarn/usercache/user/appcache/application_1506634200253_39889/container_1506634200253_39889_01_000003/pyspark.zip/pyspark/worker.py", line 167, in process
       serializer.dump_stream(func(split_index, iterator), outfile)
     File "/mnt/yarn/usercache/user/appcache/application_1506634200253_39889/container_1506634200253_39889_01_000003/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream
       vs = list(itertools.islice(iterator, batch))
     File "script.py", line 245, in <lambda>
     File "script.py", line 119, in download_audio
     File "/mnt/yarn/usercache/user/appcache/application_1506634200253_39889/container_1506634200253_39889_01_000003/NODE/conda_env/lib/python2.7/site-packages/librosa/core/audio.py", line 107, in load
       with audioread.audio_open(os.path.realpath(path)) as input_file:
     File "/mnt/yarn/usercache/user/appcache/application_1506634200253_39889/container_1506634200253_39889_01_000003/NODE/conda_env/lib/python2.7/site-packages/audioread/__init__.py", line 114, in audio_open
       raise NoBackendError()
    NoBackendError`

My question is: how can I package this archive so that librosa (really audioread) is able to find the backend and load .m4a files?
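    One avenue I have been sketching (untested, and the paths are assumptions): audioread’s ffmpeg backend locates the ffmpeg binary through PATH, so prepending the shipped environment’s bin directory to PATH inside the job, before calling librosa.load, might let the backend be found on the executors:

    `import os

    # Assumption: YARN unpacks the archive on each executor under ./NODE/conda_env
    # (the #NODE alias from --archives). Prepending its bin/ directory to PATH
    # should let audioread spawn the ffmpeg binary shipped inside the environment.
    conda_bin = os.path.abspath(os.path.join("NODE", "conda_env", "bin"))
    os.environ["PATH"] = conda_bin + os.pathsep + os.environ.get("PATH", "")

    import librosa

    y, sr = librosa.load("test.m4a")  # audioread looks up ffmpeg via PATH here`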