
Media (2)
-
GetID3 - File information block
9 April 2013, by
Updated: May 2013
Language: French
Type: Image
-
GetID3 - Additional buttons
9 April 2013, by
Updated: April 2013
Language: French
Type: Image
Other articles (60)
-
Retrieving information from the master site when installing an instance
26 November 2010, by
Purpose
On the main site, a mutualization instance is defined by several things: the data in the spip_mutus table; its logo; its main author (id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the mutualization instance.
It can therefore be quite sensible to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...) -
Use, discuss, criticize
13 April 2011, by
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users. -
Farm-mode installation
4 February 2011, by
Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with SPIP’s mechanics, unlike the standalone version, which requires no real specific knowledge since SPIP’s usual private area is no longer used.
First of all, you must have installed the same files as the installation (...)
On other sites (10138)
-
ffmpeg on iOS 5.1 Undefined Symbols Error
25 April 2012, by AndyDunn
So I've spent several hours now working through the scant amount of information available online about building ffmpeg for iOS. The build process seems to work well, and I end up with fat files for armv6 and armv7, which I drag into my project.
However, whenever I try to use the "avcodec_init()" function I get the following error:
Undefined symbols for architecture armv7:
  "_avcodec_init", referenced from:
      -[FirstViewController viewDidLoad] in FirstViewController.o
ld: symbol(s) not found for architecture armv7
clang: error: linker command failed with exit code 1 (use -v to see invocation)
The library files are included in the 'Link Binary with Libraries' of the project settings, so they are definitely compiled into the app. I just can't for the life of me work out why I'm getting an error on this.
I've tried several different projects, and downloaded some existing project files from the web, and I get the same error each time.
This is the build script I used:
export PLATFORM="iPhoneOS"
export MIN_VERSION="4.0"
export MAX_VERSION="5.1"
export DEVROOT=/Volumes/Lion/Applications/Xcode.app/Contents/Developer/Platforms/$PLATFORM.platform/Developer
export SDKROOT=$DEVROOT/SDKs/$PLATFORM$MAX_VERSION.sdk

# Cross-compilation toolchain from the Xcode developer directory
export CC=$DEVROOT/usr/bin/llvm-gcc
export LD=$DEVROOT/usr/bin/ld
export CPP=$DEVROOT/usr/bin/cpp
export CXX=$DEVROOT/usr/bin/llvm-g++
export AR=$DEVROOT/usr/bin/ar
export LIBTOOL=$DEVROOT/usr/bin/libtool
export NM=$DEVROOT/usr/bin/nm
export CXXCPP=$DEVROOT/usr/bin/cpp
export RANLIB=$DEVROOT/usr/bin/ranlib

COMMONFLAGS="-pipe -gdwarf-2 -no-cpp-precomp -isysroot $SDKROOT -marm -fPIC"
export LDFLAGS="$COMMONFLAGS -fPIC"
export CFLAGS="$COMMONFLAGS -fvisibility=hidden"
export CXXFLAGS="$COMMONFLAGS -fvisibility=hidden -fvisibility-inlines-hidden"

FFMPEG_LIBS="libavcodec libavdevice libavformat libavutil libswscale"

echo "Building armv6..."
make clean
# Note: double quotes (not single quotes) so $MIN_VERSION expands
./configure \
    --cpu=arm1176jzf-s \
    --extra-cflags="-arch armv6 -miphoneos-version-min=$MIN_VERSION -mthumb" \
    --extra-ldflags="-arch armv6 -miphoneos-version-min=$MIN_VERSION" \
    --enable-cross-compile \
    --arch=arm \
    --target-os=darwin \
    --cc=$CC \
    --sysroot=$SDKROOT \
    --prefix=installed \
    --disable-network \
    --disable-decoders \
    --disable-muxers \
    --disable-demuxers \
    --disable-devices \
    --disable-parsers \
    --disable-encoders \
    --disable-protocols \
    --disable-filters \
    --disable-bsfs \
    --enable-decoder=h264 \
    --enable-decoder=svq3 \
    --enable-gpl \
    --enable-pic \
    --disable-doc
perl -pi -e 's/HAVE_INLINE_ASM 1/HAVE_INLINE_ASM 0/' config.h
make -j3
mkdir -p build.armv6
for i in $FFMPEG_LIBS ; do cp ./$i/$i.a ./build.armv6/ ; done

echo "Building armv7..."
make clean
./configure \
    --cpu=cortex-a8 \
    --extra-cflags="-arch armv7 -miphoneos-version-min=$MIN_VERSION -mthumb" \
    --extra-ldflags="-arch armv7 -miphoneos-version-min=$MIN_VERSION" \
    --enable-cross-compile \
    --arch=arm \
    --target-os=darwin \
    --cc=$CC \
    --sysroot=$SDKROOT \
    --prefix=installed \
    --disable-network \
    --disable-decoders \
    --disable-muxers \
    --disable-demuxers \
    --disable-devices \
    --disable-parsers \
    --disable-encoders \
    --disable-protocols \
    --disable-filters \
    --disable-bsfs \
    --enable-decoder=h264 \
    --enable-decoder=svq3 \
    --enable-gpl \
    --enable-pic \
    --disable-doc
perl -pi -e 's/HAVE_INLINE_ASM 1/HAVE_INLINE_ASM 0/' config.h
make -j3
mkdir -p build.armv7
for i in $FFMPEG_LIBS ; do cp ./$i/$i.a ./build.armv7/ ; done

# Merge both architectures into universal (fat) static libraries
mkdir -p build.universal
for i in $FFMPEG_LIBS ; do lipo -create ./build.armv7/$i.a ./build.armv6/$i.a -output ./build.universal/$i.a ; done
for i in $FFMPEG_LIBS ; do cp ./build.universal/$i.a ./$i/$i.a ; done
make install
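One way to narrow down an "Undefined symbols" error like this is to check whether the symbol is present in the produced library at all, rather than assuming a linker-settings problem. A quick sketch, assuming the script above completed and left the build.universal directory in place (note that later ffmpeg releases removed avcodec_init, so an empty result here can simply mean the function does not exist in the version being built):

```shell
# List the armv7 symbols in the static library and look for the one
# the linker complained about. No output means the symbol is absent
# from this ffmpeg build, and no amount of link settings will help.
nm -arch armv7 ./build.universal/libavcodec.a | grep _avcodec_init
```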
-
Add mp3 file sound in the middle of the mp4/avi/mpeg [on hold]
17 November 2014, by user1018697
I am trying to do something and running into some difficulties:
1- I have a short movie file of 1 minute (movie.mp4)
2- I have an mp3 file of 10 seconds (voice.mp3)
I wish to add this mp3 file in the middle of my film (at 30 seconds).
My question is:
Is it possible to simply create a new video (movie2.mp4) with my mp3 added in the middle of the film (at 30 s)?
Or does a C# or C++ library exist to do this? Thanks
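If calling the ffmpeg command-line tool is acceptable (rather than a C#/C++ library), a minimal sketch of the mix could look like this, using the adelay and amix filters; the filenames come from the question, and the mp3 is assumed to be stereo (for a mono file, use a single adelay value):

```shell
# Delay voice.mp3 by 30 s (per channel, in milliseconds), mix it with
# the film's own audio track, and copy the video stream untouched.
ffmpeg -i movie.mp4 -i voice.mp3 \
  -filter_complex "[1:a]adelay=30000|30000[delayed];[0:a][delayed]amix=inputs=2:duration=first[aout]" \
  -map 0:v -map "[aout]" -c:v copy -c:a aac movie2.mp4
```

duration=first keeps the output as long as the film's audio, so the 10-second clip is simply mixed in starting at the 30-second mark.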
-
How To Write An Oscilloscope
I’m trying to figure out how to write a software oscilloscope audio visualization. It’s made more frustrating by the knowledge that I am certain that I have accomplished this task before.
In this context, the oscilloscope is used to draw the time-domain samples of an audio wave form. I have written such a plugin as part of the xine project. However, for that project, I didn’t have to write the full playback pipeline— my plugin was just handed some PCM data and drew some graphical data in response. Now I’m trying to write the entire engine in a standalone program and I’m wondering how to get it just right.
This is an SDL-based oscilloscope visualizer and audio player for the Game Music Emu library. My approach is to have an audio buffer that holds a second of audio (44100 stereo 16-bit samples). The player updates the visualization at 30 frames per second. The o-scope is 512 pixels wide. So, at every 1/30th second interval, the player dips into the audio buffer at position ((frame_number % 30) * 44100 / 30) and takes the first 512 stereo frames for plotting on the graph.
It seems to be working okay, I guess. The only problem is that the A/V sync seems to be slightly misaligned. I am just wondering if this is the correct approach. Perhaps the player should be performing some slightly more complicated calculation over those (44100/30) audio frames during each update in order to obtain a more accurate graph? I described my process to an electrical engineer friend of mine and he insisted that I needed to apply something called hysteresis to the output or I would never get accurate A/V sync in this scenario.
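The buffer indexing described above can be sketched with the constants from the text (the function name is mine, purely for illustration):

```shell
# Offset (in stereo frames) into a 1-second, 44100 Hz ring buffer for
# visualization frame N; the scope then plots the first 512 stereo
# frames starting at that offset.
scope_offset() {
  echo $(( ($1 % 30) * 44100 / 30 ))
}

scope_offset 0    # -> 0
scope_offset 1    # -> 1470   (one 1/30 s hop = 44100/30 stereo frames)
scope_offset 29   # -> 42630  (42630 + 512 <= 44100, still inside the buffer)
```

Each hop advances by 1470 stereo frames but only 512 of them are drawn, so roughly two-thirds of each hop's samples never appear on screen, which may be related to the perceived misalignment.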
Further, I know that some schools of thought on these matters require that the dots in those graphs be connected, that the scattered points simply won’t do. I guess it’s a stylistic choice.
Still, I think I have a reasonable, workable approach here. I might just be starting the visualization 1/30th of a second too late.