
Other articles (67)

  • Improvements to the base version

    13 September 2013

    A nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the following two images for a comparison.
    All that is needed is to enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by activating the use of Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Custom menus

    14 November 2010, by

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This gives channel administrators the ability to configure these menus in detail.
    Menus created when the site is initialized
    By default, three menus are created automatically when the site is initialized: The main menu; Identifier: barrenav; This menu is usually inserted at the top of the page after the header block, and its identifier makes it compatible with templates based on Zpip; (...)

  • The plugin: managing mutualisation

    2 March 2010, by

    The mutualisation management plugin makes it possible to manage the various MediaSPIP channels from a master site. Its purpose is to provide a pure SPIP solution to replace that older solution.
    Basic installation
    Install the SPIP files on the server.
    Then add the "mutualisation" plugin at the root of the site, as described here.
    Customize the central mes_options.php file as desired. As an example, here is the one used by the mediaspip.net platform:
    <?php (...)

On other sites (9106)

  • java.lang.UnsatisfiedLinkError: Couldn't load ffmpeg library findLibrary returned null

    22 February 2017, by Muthukumar Subramaniam

    I am new to live streaming from Android to YouTube. My project structure and Gradle build file are shown below; the FFmpeg library could not be loaded at runtime.

    Note: I am working on Windows 10 with Android Studio 2.1.2.

    apply plugin: 'com.android.application'

    android {
       compileSdkVersion 23
       buildToolsVersion "23.0.1"

       packagingOptions {
           exclude 'META-INF/DEPENDENCIES'
           exclude 'META-INF/NOTICE'
           exclude 'META-INF/NOTICE.txt'
           exclude 'META-INF/LICENSE'
           exclude 'META-INF/LICENSE.txt'
       }

       defaultConfig {
           applicationId "com.ephron.mobilizerapp"
           minSdkVersion 14
           targetSdkVersion 23
           versionCode 1
           versionName "1.4"
           multiDexEnabled true
       }
       dexOptions {
           javaMaxHeapSize "4g"
       }


       buildTypes {
           release {
               minifyEnabled false
               proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
           }
       }
       sourceSets {
           main {
               assets.srcDirs = ['src/main/assets', 'src/main/assets/']

           }
       }
       sourceSets { main {
           jni.srcDirs = ['src/main/jni', 'src/main/jni/libs']
           jni.srcDirs = ['libs']
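           // Note: clearing jni.srcDirs on the next line disables Gradle's automatic ndk-build step,
           // so libffmpeg.so must be built manually and end up in a directory that Gradle packages as
           // native libraries (typically a jniLibs folder); otherwise System.loadLibrary("ffmpeg")
           // cannot find it at runtime.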
           jni.srcDirs = []
       } }
    }

    dependencies {
       testCompile 'junit:junit:4.12'
       compile files('libs/httpclient-4.5.2.jar')
       compile files('libs/httpcore-4.4.4.jar')
       compile files('libs/httpmime-4.2.1.jar')
       compile files('libs/YouTubeAndroidPlayerApi.jar')
       compile 'com.android.support:appcompat-v7:23.1.0'
       compile 'com.google.android.gms:play-services:10.0.1'
       compile 'com.google.code.gson:gson:2.2.2'
       compile 'com.google.firebase:firebase-messaging:9.2.0'
       compile 'testfairy:testfairy-android-sdk:1.+@aar'
       compile 'com.android.support:multidex:1.0.0'
       compile 'com.mcxiaoke.volley:library:1.0.19'
       compile 'com.squareup.picasso:picasso:2.5.2'
       compile 'cn.aigestudio.wheelpicker:WheelPicker:1.1.2'
       compile 'com.google.android.gms:play-services-maps:10.0.1'
       compile 'com.google.apis:google-api-services-youtube:v3-rev182-1.22.0'
       compile 'com.google.api-client:google-api-client-android:1.22.0'
       compile 'com.google.http-client:google-http-client-gson:1.19.0'
       compile 'com.google.android.gms:play-services-ads:10.0.1'
       compile 'com.google.android.gms:play-services-auth:10.0.1'
       compile 'com.google.android.gms:play-services-gcm:10.0.1'
       compile files('libs/ffmpeg-android.jar')
    }
    apply plugin: 'com.google.gms.google-services'

    My project structure is shown at the link below:

    https://www.screencast.com/t/E0TFsMUi1

    Application.mk file

    APP_OPTIM := release
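    # The following line builds the JNI module for every ABI the NDK supports; the prebuilt
    # static libraries referenced from Android.mk then need to be available for each of those
    # ABIs for the per-ABI links to succeed.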
    APP_ABI := all
    APP_STL := gnustl_static
    APP_CPPFLAGS := -frtti -fexceptions

    Android.mk file

      #
    # Copyright (c) 2014 Google Inc.
    #
    # Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
    # in compliance with the License. You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software distributed under the License
    # is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
    # or implied. See the License for the specific language governing permissions and limitations under
    # the License.


    # set NDK_PROJECT_PATH := D:/MobilizerApp/app/src/main/jni

    WORKING_DIR := $(call my-dir)

    include $(CLEAR_VARS)
    LOCAL_PATH := $(WORKING_DIR)/../third_party/lame/libmp3lame
    LOCAL_MODULE    := lame
    LOCAL_C_INCLUDES := $(WORKING_DIR)/../third_party/lame/libmp3lame \
                       $(WORKING_DIR)/../third_party/lame/include
    LOCAL_CFLAGS := -DSTDC_HEADERS -std=c99
    LOCAL_ARM_MODE := arm
    APP_OPTIM := release

    LOCAL_SRC_FILES := VbrTag.c \
                      bitstream.c \
                      encoder.c \
                      fft.c \
                      gain_analysis.c \
                      id3tag.c \
                      lame.c \
                      mpglib_interface.c \
                      newmdct.c \
                      presets.c \
                      psymodel.c \
                      quantize.c \
                      quantize_pvt.c \
                      reservoir.c \
                      set_get.c \
                      tables.c \
                      takehiro.c \
                      util.c \
                      vbrquantize.c \
                      version.c


    #include $(BUILD_STATIC_LIBRARY)

    include $(BUILD_STATIC_LIBRARY)

    #include $(CLEAR_VARS)
    #LOCAL_MODULE  := mp3lame_a
    #LOCAL_STATIC_LIBRARIES := lame

    #include $(BUILD_EXECUTABLE)

    include $(CLEAR_VARS)
    LOCAL_PATH := $(WORKING_DIR)
    LOCAL_MODULE    := ffmpeg
    LOCAL_CFLAGS := -DHAVE_AV_CONFIG_H -std=c99 -D__STDC_CONSTANT_MACROS -DSTDC_HEADERS -Wno-deprecated-declarations
    LOCAL_SRC_FILES := ffmpeg-jni.c
    LOCAL_C_INCLUDES := $(WORKING_DIR)/libavcodec $(WORKING_DIR)/libavcodec/arm $(WORKING_DIR)/libavformat $(WORKING_DIR)/libavutil $(WORKING_DIR)/libavutil/arm

    LOCAL_STATIC_LIBRARIES := lame

    LOCAL_LDLIBS := -llog -lm -lz $(WORKING_DIR)/../third_party/lib/libavformat.a $(WORKING_DIR)/../third_party/lib/libavcodec.a $(WORKING_DIR)/../third_party/lib/libavfilter.a $(WORKING_DIR)/../third_party/lib/libavresample.a $(WORKING_DIR)/../third_party/lib/libswscale.a $(WORKING_DIR)/../third_party/lib/libavutil.a $(WORKING_DIR)/../third_party/lib/libx264.a $(WORKING_DIR)/../third_party/lib/libpostproc.a $(WORKING_DIR)/../third_party/lib/libswresample.a $(WORKING_DIR)/../third_party/lib/libfdk-aac.a

    APP_OPTIM := release

    include $(BUILD_SHARED_LIBRARY)

    Ffmpeg.java

    package com.ephronsystem.mobilizerapp;


    public class Ffmpeg {


       static {
           System.loadLibrary("ffmpeg");
       }

       public static native boolean init(int width, int height, int audio_sample_rate, String rtmpUrl);

       public static native void shutdown();

       // Returns the size of the encoded frame.
       public static native int encodeVideoFrame(byte[] yuv_image);

       public static native int encodeAudioFrame(short[] audio_data, int length);
    }
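
    The following is only a hedged diagnostic sketch, not part of the original project: it wraps System.loadLibrary("ffmpeg") so that a failure logs the device ABIs and the native library search path, which usually shows whether a libffmpeg.so was packaged for the device's ABI at all ("findLibrary returned null" generally means it was not). The class name and log tag are illustrative.

    package com.ephronsystem.mobilizerapp;

    import android.os.Build;
    import android.util.Log;

    import java.util.Arrays;

    // Illustrative helper: load the native library defensively and log enough
    // context to diagnose an UnsatisfiedLinkError instead of crashing at class-load time.
    public class FfmpegLoader {

        private static final String TAG = "FfmpegLoader"; // hypothetical log tag
        private static boolean loaded;

        public static synchronized boolean ensureLoaded() {
            if (loaded) {
                return true;
            }
            try {
                // Expects lib/<abi>/libffmpeg.so inside the packaged APK.
                System.loadLibrary("ffmpeg");
                loaded = true;
            } catch (UnsatisfiedLinkError e) {
                String abis = Build.VERSION.SDK_INT >= 21
                        ? Arrays.toString(Build.SUPPORTED_ABIS)
                        : Build.CPU_ABI;
                // java.library.path is where the runtime looked for native libraries.
                Log.e(TAG, "Could not load libffmpeg.so | device ABIs: " + abis
                        + " | java.library.path: " + System.getProperty("java.library.path"), e);
            }
            return loaded;
        }
    }

    If the ABIs logged there do not match the lib/ folders inside the built APK (the APK can be inspected as a zip), the fix is usually on the packaging side, for example making sure the ndk-build output for each ABI ends up in a directory Gradle packages as native libraries, rather than in the Java code.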
  • aaccoder: Implement Perceptual Noise Substitution for AAC

    15 April 2015, by Rostislav Pehlivanov
    aaccoder: Implement Perceptual Noise Substitution for AAC

    This commit implements the Perceptual Noise Substitution AAC extension. This is a proof-of-concept
    implementation and, as such, is not enabled by default. This is the fourth revision of this patch,
    made after some problems were pointed out. Any changes made since the previous revisions have been indicated.

    In order to extend the encoder to use an additional codebook, the array holding each codebook has been
    modified with two additional entries - 13 for the NOISE_BT codebook and 12 which has a placeholder function.
    The cost system was modified to skip the 12th entry by using an array that maps its inputs and outputs. It
    also does not accept using the 13th codebook for any band which is not marked as containing noise, thereby
    restricting its ability to arbitrarily choose it for bands. The use of arrays allows the system to be easily
    extended to allow for intensity stereo encoding, which uses additional codebooks.

    The 12th entry in the codebook function array points to a function which stops the execution of the program
    by calling an assert with an always ’false’ argument. It was pointed out in an email discussion with
    Claudio Freire that having a ’NULL’ entry can result in unexpected behaviour and could be used as
    a security hole. There is no danger of this function being called during encoding due to the codebook maps introduced.

    Another change from version 1 of the patch is the addition of an argument to the encoder, ’-aac_pns’, to
    enable and disable PNS. This currently defaults to disabling PNS, as it is experimental.
    The switch will be removed in the future, when the algorithm to select noise bands has been improved.
    The current algorithm simply compares the energy to the threshold (multiplied by a constant) to determine
    noise (see the sketch after the list of changed files below); however, the FFPsyBand structure contains other useful figures for determining which bands carry noise more accurately.

    Some of the sample files provided triggered an assertion when the parameter to tune the threshold was set to
    a value of ’2.2’. Claudio Freire reported the problem’s source could be in the range of the scalefactor
    indices for noise and advised to measure the minimal index and clip anything above the maximum allowed
    value. This has been implemented, and all the files which used to trigger the assertion now encode without error.

    The third revision of the patch also removes unneeded variables and comparisons. All of them were
    redundant and of little use for when the PNS implementation would be extended.

    The fourth revision moved the clipping of the noise scalefactors outside the second loop of the two-loop
    algorithm in order to prevent their redundant calculations. Also, freq_mult has been changed to a float
    variable due to the fact that rounding errors can prove to be a problem at low frequencies.
    Consideration was given to whether the entire expression could be evaluated inside the expression,
    but in the end it was decided that it would be for the best if just the type of the variable were
    to change. Claudio Freire reported both problems. There is no change of functionality
    (except for low sampling frequencies) so the spectral demonstrations at the end of this commit’s message were not updated.

    Finally, the way energy values are converted to scalefactor indices has changed since the first commit,
    as per the suggestion of Claudio Freire. This may still have some drawbacks, but unlike the first commit
    it works without having redundant offsets and outputs what the decoder expects to have, in terms of the
    ranges of the scalefactor indices.

    Some spectral comparisons: https://trac.ffmpeg.org/attachment/wiki/Encode/AAC/Original.png (original),
    https://trac.ffmpeg.org/attachment/wiki/Encode/AAC/PNS_NO.png (encoded without PNS),
    https://trac.ffmpeg.org/attachment/wiki/Encode/AAC/PNS1.2.png (encoded with PNS, const = 1.2),
    https://trac.ffmpeg.org/attachment/wiki/Encode/AAC/Difference1.png (spectral difference).
    The constant is the value which multiplies the threshold when it gets compared to the energy; larger
    values mean more noise will be substituted by PNS values. Example when const = 2.2:
    https://trac.ffmpeg.org/attachment/wiki/Encode/AAC/PNS_2.2.png

    Reviewed-by: Claudio Freire <klaussfreire@gmail.com>
    Signed-off-by: Michael Niedermayer <michaelni@gmx.at>

    • [DH] libavcodec/aaccoder.c
    • [DH] libavcodec/aacenc.c
    • [DH] libavcodec/aacenc.h
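
    A minimal sketch of the band-selection rule described above, assuming only what the commit message states (a band's energy is compared against its psychoacoustic threshold scaled by a tunable constant); the class and method names are illustrative and do not correspond to libavcodec code:

    // Illustrative only: the noise-band selection rule described in the commit message,
    // with energy and threshold values as produced by the psy model (FFPsyBand in libavcodec).
    public final class PnsBandSelection {

        // A band is marked as a PNS candidate when its energy does not exceed the
        // psychoacoustic threshold scaled by a tunable constant; larger constants
        // mark more bands as noise, matching the const = 1.2 vs const = 2.2 comparison above.
        static boolean isNoiseCandidate(float bandEnergy, float bandThreshold, float constant) {
            return bandEnergy < bandThreshold * constant;
        }

        public static void main(String[] args) {
            // Example: a band with energy just above its threshold is kept at const = 1.2,
            // but would be replaced by noise at const = 2.2.
            System.out.println(isNoiseCandidate(1.5f, 1.0f, 1.2f)); // false
            System.out.println(isNoiseCandidate(1.5f, 1.0f, 2.2f)); // true
        }
    }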
  • Superimposing two videos onto a static image?

    15 December 2014, by Archagon

    I have two videos that I’d like to combine into a single video, in which both videos would sit on top of a static background image. (Think something like this.) My requirements are that the software I use is free, that it runs on OSX, and that I don’t have to re-encode my videos an excessive number of times. I’d also like to be able to perform this operation from the command line or via script, since I’ll be doing it a lot. (But this isn’t strictly necessary.)

    I tried fiddling with ffmpeg for a couple of hours, but it just doesn’t seem very well suited for post-processing. I could potentially hack something together via the overlay feature, but so far I haven’t figured out how to do it, aside from painstakingly converting the image to a video (which takes 2x as long as the length of my videos!) and then superimposing the two videos onto it in another rendering step.

    Any tips? Thank you!


    Update:

    Thanks to LordNeckbeard’s help, I was able to achieve my desired result with a single ffmpeg call! Unfortunately, encoding is quite slow, taking 6 seconds to encode 1 second of video. I believe this is caused by the background image. Any tips on speeding up encoding? Here’s the ffmpeg log:

    MacBook-Pro:Video archagon$ ffmpeg -loop 1 -i underlay.png -i test-slide-video-short.flv -i test-speaker-video-short.flv -filter_complex "[1:0]scale=400:-1[a];[2:0]scale=320:-1[b];[0:0][a]overlay=0:0[c];[c][b]overlay=0:0" -shortest -t 5 -an output.mp4
    ffmpeg version 1.0 Copyright (c) 2000-2012 the FFmpeg developers
     built on Nov 14 2012 16:18:58 with Apple clang version 4.0 (tags/Apple/clang-421.0.60) (based on LLVM 3.1svn)
     configuration: --prefix=/opt/local --enable-swscale --enable-avfilter --enable-libmp3lame --enable-libvorbis --enable-libopus --enable-libtheora --enable-libschroedinger --enable-libopenjpeg --enable-libmodplug --enable-libvpx --enable-libspeex --mandir=/opt/local/share/man --enable-shared --enable-pthreads --cc=/usr/bin/clang --arch=x86_64 --enable-yasm --enable-gpl --enable-postproc --enable-libx264 --enable-libxvid
     libavutil      51. 73.101 / 51. 73.101
     libavcodec     54. 59.100 / 54. 59.100
     libavformat    54. 29.104 / 54. 29.104
     libavdevice    54.  2.101 / 54.  2.101
     libavfilter     3. 17.100 /  3. 17.100
     libswscale      2.  1.101 /  2.  1.101
     libswresample   0. 15.100 /  0. 15.100
     libpostproc    52.  0.100 / 52.  0.100
    Input #0, image2, from 'underlay.png':
     Duration: 00:00:00.04, start: 0.000000, bitrate: N/A
       Stream #0:0: Video: png, rgb24, 1024x768, 25 fps, 25 tbr, 25 tbn, 25 tbc
    Input #1, flv, from 'test-slide-video-short.flv':
     Metadata:
       author          :
       copyright       :
       description     :
       keywords        :
       rating          :
       title           :
       presetname      : Custom
       videodevice     : VGA2USB Pro V3U30343
       videokeyframe_frequency: 5
       canSeekToEnd    : false
       createdby       : FMS 3.5
       creationdate    : Mon Aug 16 16:35:34 2010
       encoder         : Lavf54.29.104
     Duration: 00:50:32.75, start: 0.000000, bitrate: 90 kb/s
       Stream #1:0: Video: vp6f, yuv420p, 640x480, 153 kb/s, 8 tbr, 1k tbn, 1k tbc
    Input #2, flv, from 'test-speaker-video-short.flv':
     Metadata:
       author          :
       copyright       :
       description     :
       keywords        :
       rating          :
       title           :
       presetname      : Custom
       videodevice     : Microsoft DV Camera and VCR
       videokeyframe_frequency: 5
       audiodevice     : Microsoft DV Camera and VCR
       audiochannels   : 1
       audioinputvolume: 75
       canSeekToEnd    : false
       createdby       : FMS 3.5
       creationdate    : Mon Aug 16 16:35:34 2010
       encoder         : Lavf54.29.104
     Duration: 00:50:38.05, start: 0.000000, bitrate: 238 kb/s
       Stream #2:0: Video: vp6f, yuv420p, 320x240, 204 kb/s, 25 tbr, 1k tbn, 1k tbc
       Stream #2:1: Audio: mp3, 22050 Hz, mono, s16, 32 kb/s
    File 'output.mp4' already exists. Overwrite ? [y/N] y
    using cpu capabilities: none!
    [libx264 @ 0x7fa84c02f200] profile High, level 3.1
    [libx264 @ 0x7fa84c02f200] 264 - core 119 - H.264/MPEG-4 AVC codec - Copyleft 2003-2011 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to 'output.mp4':
     Metadata:
       encoder         : Lavf54.29.104
       Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 1024x768, q=-1--1, 25 tbn, 25 tbc
    Stream mapping:
     Stream #0:0 (png) -> overlay:main
     Stream #1:0 (vp6f) -> scale
     Stream #2:0 (vp6f) -> scale
     overlay -> Stream #0:0 (libx264)
    Press [q] to stop, [?] for help

    Update 2:

    It works! One important tweak was to move the underlay.png input to the end of the input list. This increased performance substantially. Here’s my final ffmpeg call. (The maps at the end aren’t required for this particular arrangement, but I sometimes have a few extra audio inputs that I want to map to my output.)

    ffmpeg
       -i VideoOne.flv
       -i VideoTwo.flv
       -loop 1 -i Underlay.png
       -filter_complex "[2:0] [0:0] overlay=20:main_h/2-overlay_h/2 [overlay];[overlay] [1:0] overlay=main_w-overlay_w-20:main_h/2-overlay_h/2 [output]"
       -map [output]:v
       -map 0:a
       OutputVideo.m4v