
Advanced search
Media (2)
-
Valkaama DVD Label
4 October 2011, by
Updated: February 2013
Language: English
Type: Image
-
Podcasting Legal guide
16 May 2011, by
Updated: May 2011
Language: English
Type: Text
Other articles (105)
-
Make it visually appealing
10 April 2011
MediaSPIP is based on a system of themes and templates ("squelettes"). The templates define where information is placed on the page, defining a specific use of the platform, while the themes provide the overall visual styling.
Anyone can propose a new graphical theme or template and make it available to the community.
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or later. If needed, contact the administrator of your MediaSPIP to find out.
-
Adding user-specific information and other author-related behaviour changes
12 April 2011, by
The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you modify certain user-related behaviours (refer to its documentation for more information).
It is also possible to add fields to authors by installing the "champs extras 2" and "Interface pour champs extras" plugins.
On other sites (8051)
-
How do I make this render callback only provide a specific channel?
4 December 2013, by awfulcode
I'm using the wonderful kxmovie as the base for an app I'm writing. Instead of using only the remote I/O unit, I added a mixer to it. The idea is to have each audio channel from a video file connected to its own bus on the mixer.
I have two questions for you.
-
Is there a way to call the render callback only once and yet feed each bus only one channel?
-
If I need to call separate callbacks for different busses, how can I change the original code so that it only renders a specific channel? Maybe by passing the inOutputBusNumber value to the render callback? (A hedged sketch of that idea follows the code below.)
Here's the code for the original render callback.
As always, thank you so much.
P.S.: Does anyone have any idea why it's using _outData+iChannel in the FFT operation?
- (BOOL) renderFrames: (UInt32) numFrames
               ioData: (AudioBufferList *) ioData
{
    // Silence all output buffers first
    for (int iBuffer = 0; iBuffer < ioData->mNumberBuffers; ++iBuffer) {
        memset(ioData->mBuffers[iBuffer].mData, 0, ioData->mBuffers[iBuffer].mDataByteSize);
    }

    if (_playing && _outputBlock) {

        // Collect data to render from the callbacks; _outData ends up holding
        // numFrames * _numOutputChannels interleaved samples
        _outputBlock(_outData, numFrames, _numOutputChannels);

        // Put the rendered data into the output buffer
        if (_numBytesPerSample == 4) // then we've already got floats
        {
            float zero = 0.0;

            for (int iBuffer = 0; iBuffer < ioData->mNumberBuffers; ++iBuffer) {

                int thisNumChannels = ioData->mBuffers[iBuffer].mNumberChannels;

                for (int iChannel = 0; iChannel < thisNumChannels; ++iChannel) {
                    // vDSP_vsadd with a zero scalar acts as a strided copy
                    // from the interleaved _outData into the output buffer
                    vDSP_vsadd(_outData + iChannel, _numOutputChannels, &zero, (float *)ioData->mBuffers[iBuffer].mData, thisNumChannels, numFrames);
                }
            }
        }
        else if (_numBytesPerSample == 2) // then we need to convert SInt16 -> Float (and also scale)
        {
            // Scale the float samples up to the SInt16 range before truncating
            float scale = (float)INT16_MAX;
            vDSP_vsmul(_outData, 1, &scale, _outData, 1, numFrames * _numOutputChannels);

            for (int iBuffer = 0; iBuffer < ioData->mNumberBuffers; ++iBuffer) {

                int thisNumChannels = ioData->mBuffers[iBuffer].mNumberChannels;

                for (int iChannel = 0; iChannel < thisNumChannels; ++iChannel) {
                    vDSP_vfix16(_outData + iChannel, _numOutputChannels, (SInt16 *)ioData->mBuffers[iBuffer].mData + iChannel, thisNumChannels, numFrames);
                }
            }
        }
    }

    return noErr;
}
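Regarding the second question, below is a minimal sketch of that idea, assuming the same kxmovie ivars as above (_playing, _outputBlock, interleaved _outData, _numOutputChannels) and showing only the float path. The forChannel: parameter and the method name are hypothetical additions, not part of kxmovie; the channel index would come from whichever mixer bus the render callback was registered on (for example the inBusNumber argument of an AURenderCallback), and each bus buffer is assumed to be mono.
// Hypothetical variant: copy only one channel of the interleaved _outData
// into ioData, e.g. for a mono mixer bus.
- (BOOL) renderFrames: (UInt32) numFrames
           forChannel: (UInt32) channelIndex
               ioData: (AudioBufferList *) ioData
{
    for (int iBuffer = 0; iBuffer < ioData->mNumberBuffers; ++iBuffer) {
        memset(ioData->mBuffers[iBuffer].mData, 0, ioData->mBuffers[iBuffer].mDataByteSize);
    }

    if (_playing && _outputBlock && channelIndex < _numOutputChannels) {

        // Decode/collect all channels once, exactly as in the original method
        _outputBlock(_outData, numFrames, _numOutputChannels);

        if (_numBytesPerSample == 4) {
            float zero = 0.0;
            for (int iBuffer = 0; iBuffer < ioData->mNumberBuffers; ++iBuffer) {
                // Strided copy: every _numOutputChannels-th float, starting at
                // channelIndex, goes into a contiguous (mono) output buffer.
                vDSP_vsadd(_outData + channelIndex, _numOutputChannels, &zero,
                           (float *)ioData->mBuffers[iBuffer].mData, 1, numFrames);
            }
        }
    }
    return noErr;
}
Note that if the mixer fires one such callback per bus, calling _outputBlock in every one of them would advance the decoder several times per render cycle; in practice _outData would have to be filled once per cycle (for example by the first bus that renders) and only read by the other busses.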
-
Horizontally show two videos side by side and export to one video
15 June 2015, by Michael Liberman
What I am trying to achieve:
Programmatically (using some kind of script) combine two different videos into one video where the videos are shown side by side and sound from both videos is played simultaneously.
I found at least two ways I could display two videos side by side, but with each one I have a different problem that makes it difficult to complete the process:
-
AviSynth and Avs2Avi:
AviSynth really makes the process easy. With a simple AviSynth script like this you should be able to play two videos side by side:
video1 = AVISource("d:\file1.avi")
video2 = AVISource("d:\file2.avi")
StackHorizontal(video1, video2)
The creation of a new output file can be done with avs2avi.
The problem here is that I am not able to play even one AVI file with AviSynth, because I get an "AviSource couldn't locate a decompressor for fourcc xvid" message. I googled for solutions to that problem but nothing helped. GSpot says that I have all the codecs needed, and it seems I cannot do anything about it. Because I cannot get real videos running, I don't know whether the final video plays the sound from both videos. AviSynth installed correctly and I am able to run the following script: StackHorizontal(version, version).
-
ffmpeg:
It works like a charm with two videos and produces the final file, BUT I do not get the sound from both videos, only from the first one. I found out that the suggested solution for adding the sound from the second video is to use libfaac on the command line, like this:
ffmpeg.exe -i video1.mp4 -vf "[in] scale=iw/2:ih/2, pad=2*iw:ih [left]; movie=video2.mp4, scale=iw/3:ih/3 [right]; [left][right] overlay=main_w/2:0 [out]" -c:a libfaac -b:v 768k Output.mp4
but I always get an error that the encoder libfaac cannot be found. I downloaded libfaac.dll, but still no result (a possible workaround is sketched after this question).
Is there a solution to either of these problems? Is there another way to programmatically make one video from two videos played side by side? Thanks in advance.
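A workaround often suggested for the libfaac error, assuming an ffmpeg build that includes the native AAC encoder, is to request that encoder instead. A sketch based on the command above, with everything else unchanged:
ffmpeg.exe -i video1.mp4 -vf "[in] scale=iw/2:ih/2, pad=2*iw:ih [left]; movie=video2.mp4, scale=iw/3:ih/3 [right]; [left][right] overlay=main_w/2:0 [out]" -c:a aac -strict experimental -b:v 768k Output.mp4
Older builds need -strict experimental to enable the native AAC encoder; newer ones accept plain -c:a aac. This only addresses the encoder error: to actually hear both soundtracks, the two audio streams would still have to be mixed, for example with an amix filter in a -filter_complex graph, since the movie= source used inside -vf only contributes video.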
-
Anomalie #2618: the "ajouter un document" ("add a document") button is misleading
28 March 2012, by cedric
Discussion of 27/03/2011 on spip-trad entitled "télécharger vs. upload / download"