
Media (33)
-
Stereo master soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Audio
-
#7 Ambience
16 October 2011, by
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#3 The Safest Place
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011, by
Updated: February 2013
Language: English
Type: Audio
Other articles (51)
-
The SPIPmotion queue
28 November 2010, by
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document is to be attached automatically; objet, the type of object to which (...) -
Contribute to documentation
13 April 2011
Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation by users as well as developers, including:
- critique of existing features and functions
- articles contributed by developers, administrators, content producers and editors
- screenshots to illustrate the above
- translations of existing documentation into other languages
To contribute, register with the project users’ mailing (...) -
Selection of projects using MediaSPIP
2 May 2011, by
The examples below are representative of specific uses of MediaSPIP for specific projects.
MediaSPIP farm @ Infini
The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and, at the national level, is among the half-dozen associations of its kind. Its members (...)
On other sites (6720)
-
FFMPEG iOS 7 Library
15 September 2017, by Destiny Dawn
I’ve tried reading many tutorials.
I’ve spent hours on Google and Stack Overflow trying to find an answer.
So far I’ve read: “Trying to compile the FFMPEG libraries for iPhoneOS platform with armv6 and armv7 architecture”, “FFMPEG integration on iphone/ipad project”, and https://github.com/lajos/iFrameExtractor, a few of the many. I’m trying to build this library for iOS 7/Xcode 5 compatibility, but it’s not working.
A common error I’d get is:
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
yasm/nasm not found or too old. Use --disable-yasm for a crippled build.
If you think configure made a mistake, make sure you are using the latest
version from Git. If the latest version fails, report the problem to the
ffmpeg-user@ffmpeg.org mailing list or IRC #ffmpeg on irc.freenode.net.
Include the log file "config.log" produced by configure as this will help
solving the problem.
I’d also get many more once that is finished, such as:
rm: illegal option -- .
usage: rm [-f | -i] [-dPRrvW] file ...
unlink file
make: *** [clean] Error 64
I’ve mostly tried using this command to start, but it always crashes on "make clean":
./configure \
--cc=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc \
--as='/usr/local/bin/gas-preprocessor.pl /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc' \
--sysroot=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk \
--target-os=darwin \
--arch=arm \
--cpu=cortex-a8 \
--extra-cflags='-arch armv7' \
--extra-ldflags='-arch armv7 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk' \
--prefix=compiled/armv7 \
--enable-cross-compile \
--enable-nonfree \
--enable-gpl \
--disable-armv5te \
--disable-swscale-alpha \
--disable-doc \
--disable-ffmpeg \
--disable-ffplay \
--disable-ffprobe \
--disable-ffserver \
--disable-asm \
--disable-debug
-
Connect external cameras to iOS and decompress to a usable form
27 September 2017, by Ping Chen
I want to create a two-camera setup which can send one of the camera views out as an RTMP stream, depending on the motion intensity detected. The chosen camera view can change if the motion intensity on the views changes.
I imagine that I could use an iPhone/iPad as the encoding/streaming hub as well as one of the cameras, and connect a WiFi camera to the iPad/iPhone to feed the second camera view.
My goals for the iOS side are:
- Connect with a WiFi camera on the local network
- Decode the data and run motion intensity detection on the WiFi camera feed AND the iPhone/iPad’s own camera feed with Brad Larson’s GPUImage framework https://github.com/BradLarson/GPUImage
- Stream out the chosen camera view, depending on the motion detected
Larson’s GPUImage framework works with an AVCaptureSession subclass. I’m only familiar with AVFoundation objects, but am a complete noob when it comes to VideoToolbox and some of the lower-level iOS video stuff. Through googling, I kind of know that a VTDecompressionSession is what I’d get from the WiFi camera. I have no clue how I can manipulate that into a usable form for my purposes.
I’ve dug through Stack Overflow answers such as: https://stackoverflow.com/a/29525001/7097455
Very informative, but maybe I don’t even know how to ask the correct questions.
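For what it’s worth, a minimal sketch of the VideoToolbox side, assuming the WiFi camera delivers H.264 that has already been packaged into CMSampleBuffers and that a CMVideoFormatDescriptionRef has been built from its SPS/PPS (e.g. with CMVideoFormatDescriptionCreateFromH264ParameterSets). The function names here are illustrative, not from any of the linked posts; the callback hands back CVPixelBuffers that could then be pushed into a GPUImage pipeline:
#include <VideoToolbox/VideoToolbox.h>

// Called by VideoToolbox for every decoded frame; imageBuffer is a CVPixelBuffer
// that could be fed to motion detection / GPUImage from here.
static void DecodedFrameCallback(void *refCon, void *frameRefCon,
                                 OSStatus status, VTDecodeInfoFlags flags,
                                 CVImageBufferRef imageBuffer,
                                 CMTime pts, CMTime duration)
{
    if (status != noErr || imageBuffer == NULL)
        return;
    // ... run motion-intensity detection on imageBuffer ...
}

// Create a decompression session for the WiFi camera's elementary stream.
static VTDecompressionSessionRef CreateDecoder(CMVideoFormatDescriptionRef formatDesc)
{
    VTDecompressionOutputCallbackRecord cb = { DecodedFrameCallback, NULL };
    VTDecompressionSessionRef session = NULL;
    OSStatus err = VTDecompressionSessionCreate(kCFAllocatorDefault,
                                                formatDesc,
                                                NULL,  // decoder specification
                                                NULL,  // output pixel buffer attributes
                                                &cb,
                                                &session);
    return (err == noErr) ? session : NULL;
}

// Feed one compressed frame that has already been wrapped in a CMSampleBuffer.
static void DecodeFrame(VTDecompressionSessionRef session, CMSampleBufferRef sample)
{
    VTDecodeInfoFlags infoFlags = 0;
    VTDecompressionSessionDecodeFrame(session, sample,
                                      kVTDecodeFrame_EnableAsynchronousDecompression,
                                      NULL, &infoFlags);
}
GPUImage itself normally takes frames from AVCaptureSession, so the pixel buffers coming out of this callback would have to be pushed in through something like its raw-data input rather than the stock camera class.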
-
Windows (WASAPI): Converting IEEE float PCM to PCM 16bit
10 October 2017, by Michael IV
I hook into some of the WASAPI methods to read out the audio buffer that contains the system sound.
The source is a WAV with audio format = WAVE_FORMAT_IEEE_FLOAT and subformat = KSDATAFORMAT_SUBTYPE_IEEE_FLOAT.
I am trying to convert it to PCM 16bit on the fly using FFMPEG libswresample. The resulting soundtrack has a lot of foreground noise, with the original track’s data playing in the background.
Here is how I do it (I skip the WASAPI init part):
First, the source WAVEFORMATEX struct has the following properties:
Format tag: 65534
Sample rate: 44100
Num channels: 2
CB size: 22
Average bytes per sec: 352800
Block align: 8
Bits per sample: 32
where 65534 is WAVE_FORMAT_EXTENSIBLE, carrying the IEEE float subformat above.
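(Not part of the original post.) As a small sketch, this is how that tag/subformat pair is typically checked on Windows before choosing libswresample’s input sample format; the helper name is mine:
#include <windows.h>
#include <mmreg.h>
#include <initguid.h>   // instantiate the KSDATAFORMAT_* GUIDs in this translation unit
#include <ks.h>
#include <ksmedia.h>

// Returns true when the stream is 32-bit IEEE float, either tagged directly
// or wrapped in a WAVE_FORMAT_EXTENSIBLE (tag 0xFFFE = 65534) header.
static bool IsIeeeFloatFormat(const WAVEFORMATEX *w)
{
    if (w->wFormatTag == WAVE_FORMAT_IEEE_FLOAT)
        return true;
    if (w->wFormatTag == WAVE_FORMAT_EXTENSIBLE && w->cbSize >= 22)
    {
        const WAVEFORMATEXTENSIBLE *ext =
            reinterpret_cast<const WAVEFORMATEXTENSIBLE *>(w);
        return IsEqualGUID(ext->SubFormat, KSDATAFORMAT_SUBTYPE_IEEE_FLOAT) ? true : false;
    }
    return false;
}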
WAVEFORMATEX *w;//filled with the props above
int samples = 0;
int bufferSize = 0;
int buffer_frames = 0;
int bWritten = 0;
char* bufferIn = NULL;
char* bufferOut = NULL;
BYTE* wavBuffer = NULL;
void InitSWRContext()
{
swrctx = swr_alloc_set_opts(NULL,
AV_CH_LAYOUT_STEREO, //stereo layout
AV_SAMPLE_FMT_S16, // sample out format
44100, // out sample rate
2, //number of channels
AV_SAMPLE_FMT_FLT, //sample in format
w->nSamplesPerSec ,//in sample rate
0,
NULL);
swr_init(swrctx);
//also tried like this:
/*
swrctx = swr_alloc();
av_opt_set_int(swrctx, "in_channel_layout", CA2SWR_chlayout(w->nChannels), 0);
av_opt_set_int(swrctx, "out_channel_layout", AV_CH_LAYOUT_STEREO, 0);
av_opt_set_int(swrctx, "in_sample_rate", 44100, 0);
av_opt_set_int(swrctx, "out_sample_rate", 44100, 0);
av_opt_set_sample_fmt(swrctx, "in_sample_fmt", AV_SAMPLE_FMT_FLT, 0);
av_opt_set_sample_fmt(swrctx, "out_sample_fmt", AV_SAMPLE_FMT_S16, 0);
*/
samples = (int)av_rescale_rnd(CA_MAX_SAMPLES, 44100, w->nSamplesPerSec,
AV_ROUND_UP);
bufferSize = av_samples_get_buffer_size(NULL,2, samples * 2,AV_SAMPLE_FMT_S16, 1/*no-alignment*/);
bufferOut = (char *)malloc(bufferSize);
}
These two methods are invoked by the system and I hook them to access the audio buffer:
DEXPORT HRESULT __stdcall
_GetBuffer(
IAudioRenderClient *thiz,
UINT32 NumFramesRequested,
BYTE **ppData)
{
HRESULT hr;
hr = orig_GetBuffer(thiz, NumFramesRequested, ppData);
bufferIn = (char*)*ppData;
buffer_frames = NumFramesRequested;
return S_OK;
}
Then on buffer release I perform the conversion and readout.
DEXPORT HRESULT __stdcall
_ReleaseBuffer(IAudioRenderClient *thiz,
UINT32 NumFramesWritten, DWORD dwFlags)
{
const unsigned char *srcplanes[2];
unsigned char *dstplanes[2];
int samples;
srcplanes[0] = (unsigned char*)bufferIn;
srcplanes[1] = NULL;
dstplanes[0] = (unsigned char*)bufferOut;
dstplanes[1] = NULL;
samples = (int)av_rescale_rnd(NumFramesWritten, 44100, 44100, AV_ROUND_UP);
int samplesConverted = swr_convert(swrctx, dstplanes, samples , srcplanes, NumFramesWritten);
int framesize = (/*PCM16*/16 / 8) * pwfx->nChannels * NumFramesWritten;
if (!wavBuffer)
{
wavBuffer = (BYTE*)malloc(4096 * 1000); //write out a few seconds
}
//copy converted buffer into wavBuffer
memcpy(wavBuffer + bWritten, bufferOut, framesize);
bWritten += framesize;
if (bWritten > 4096 * 1000)
{
pwfx->wFormatTag = WAVE_FORMAT_PCM;
pwfx->wBitsPerSample = 16;
pwfx->nBlockAlign = (pwfx->nChannels * pwfx->wBitsPerSample) / 8;
pwfx->cbSize = 0;
SaveWaveData(wavBuffer, bWritten, pwfx, L"test.wav");
exit(0);
}
}
SaveWaveData is borrowed from here.
Resulting sound file (WAV)
Original file: a2002011001-e02.wav
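Since SaveWaveData is only linked, not shown, here is a rough illustration of my own (not the borrowed code) of what writing a canonical 16-bit PCM WAV amounts to: a RIFF header, a 16-byte fmt chunk, then a data chunk followed by the interleaved samples:
#include <cstdio>
#include <cstdint>

// Write a minimal RIFF/WAVE file: PCM "fmt " chunk plus raw interleaved data.
static bool WritePcm16Wav(const char *path, const uint8_t *data, uint32_t dataBytes,
                          uint16_t channels, uint32_t sampleRate)
{
    const uint16_t bitsPerSample = 16;
    const uint16_t blockAlign    = channels * bitsPerSample / 8;
    const uint32_t byteRate      = sampleRate * blockAlign;
    const uint32_t riffSize      = 36 + dataBytes;  // file size minus "RIFF" and this field

    FILE *f = fopen(path, "wb");
    if (!f) return false;

    fwrite("RIFF", 1, 4, f); fwrite(&riffSize, 4, 1, f); fwrite("WAVE", 1, 4, f);

    const uint32_t fmtSize   = 16;  // size of the PCM fmt chunk body
    const uint16_t formatTag = 1;   // WAVE_FORMAT_PCM
    fwrite("fmt ", 1, 4, f);      fwrite(&fmtSize, 4, 1, f);
    fwrite(&formatTag, 2, 1, f);  fwrite(&channels, 2, 1, f);
    fwrite(&sampleRate, 4, 1, f); fwrite(&byteRate, 4, 1, f);
    fwrite(&blockAlign, 2, 1, f); fwrite(&bitsPerSample, 2, 1, f);

    fwrite("data", 1, 4, f); fwrite(&dataBytes, 4, 1, f);
    fwrite(data, 1, dataBytes, f);

    fclose(f);
    return true;
}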
I tried to play and analyze the output in VLC, Audacity, and FMOD Studio.
The strange thing is that VLC shows (in its codec info) that it is PCM16, while FMOD Studio interprets the data as PCM32. I also tried storing the original buffer without conversion, which also produces a sound with noise, though not as significant as when converting to PCM16.
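As a sanity check that is independent of libswresample, interleaved 32-bit float samples (which is what WASAPI’s shared-mode mix format usually delivers) can be converted to 16-bit PCM with a plain clamped loop; if this also yields noise, the captured buffer itself, rather than the swr_convert call, would be the suspect. A minimal sketch:
#include <cstdint>
#include <cstddef>

// Convert interleaved 32-bit float samples in [-1.0, 1.0] to interleaved
// signed 16-bit PCM. 'count' is the number of frames times the channel count.
static void FloatToPcm16(const float *in, int16_t *out, size_t count)
{
    for (size_t i = 0; i < count; ++i)
    {
        float s = in[i];
        if (s >  1.0f) s =  1.0f;   // clamp to avoid integer overflow
        if (s < -1.0f) s = -1.0f;
        out[i] = static_cast<int16_t>(s * 32767.0f);
    }
}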