
Media (1)
-
The Pirate Bay from Belgium
1 April 2013, by
Updated: April 2013
Language: French
Type: Image
Other articles (48)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or later. If in doubt, contact the administrator of your MediaSPIP to find out. -
Accepted formats
28 January 2010
The following commands provide information on the formats and codecs supported by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Video formats accepted as input
This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
Possible output video formats
To begin with, we (...) -
Adding notes and captions to images
7 February 2011
To add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to adjust the rights for creating, modifying, and deleting notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
On other sites (7384)
-
FFmpeg - two combined videos (splitscreen) removes sound
10 July 2014, by Joey
I am stuck on a problem where I am trying to put two videos next to each other, splitscreen style. I found the command to do this here on Stack Overflow, but it removes my sound. I can't figure out how to keep the sound from the two videos.
The command I use:
exec("ffmpeg -i ".$video1." -i ".$video2." -filter_complex '[0:v]pad=iw*2:ih[int];[int][1:v]overlay=W/2:0[vid]' -map [vid] -c:v libx264 -crf 23 /tmp/output_file.flv", $output, $return);
The output is exactly how I want it, but I want the two sound streams too.
EDIT:
For anyone having the same problem, I ended up doing this in three steps:
# Combine the two audio streams into one temporary file
exec("ffmpeg -i ".$video1." -i ".$video2." -filter_complex amix=inputs=2:duration=first:dropout_transition=2 ".$outputSound, $output, $return);
# Place the two videos side by side as a splitscreen
exec("ffmpeg -i ".$video1." -i ".$video2." -filter_complex '[0:v]pad=iw*2:ih[int];[int][1:v]overlay=W/2:0[vid]' -map [vid] -c:v libx264 -crf 23 ".$outputVideo, $output, $return);
# Combine the merged audio file with the splitscreen video
exec("ffmpeg -i ".$outputVideo." -i ".$outputSound." -map 0 -map 1 -codec copy -shortest ".$outputCombined, $output, $return);
Solved!
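The three steps above can also be collapsed into a single pass: amix can live in the same -filter_complex as the pad/overlay chain, and both labelled outputs are mapped in one run. A sketch, assuming both inputs carry an audio stream (the file names here are placeholders, not from the original post):

```shell
# Single-pass variant: build the splitscreen video and mix the two
# audio streams in one -filter_complex, then map both labelled outputs.
ffmpeg -i input1.mp4 -i input2.mp4 \
  -filter_complex "[0:v]pad=iw*2:ih[int];[int][1:v]overlay=W/2:0[vid];[0:a][1:a]amix=inputs=2:duration=first:dropout_transition=2[aud]" \
  -map "[vid]" -map "[aud]" -c:v libx264 -crf 23 -c:a aac output.mp4
```

This avoids the two intermediate files, at the cost of re-encoding audio and video in the same invocation.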
-
Samplerate conversion function fails to produce audible sound, only small pieces of audio
2 July 2014, by user3749290
playmp3(), using libmpg123:
if (isPaused == 0 && mpg123_read(mh, buffer, buffer_size, &done) == MPG123_OK)
{
    char *resBuffer = &buffer[0]; // 22100 = 0.5 s
    buffer = resample(resBuffer, 22050, 22050); // I think the result is 1/2 of the audio speed
    if (ao_play(dev, (char *)buffer, done) == 0) {
        return 1;
    }
}
resample(), using avcodec from FFmpeg:
#define LENGTH_MS 500          // how many milliseconds of speech to store; 0.5 s : x = 1 : 44100, so x = 22050 samples
#define RATE 44100             // the sampling rate (input)
#define FORMAT PA_SAMPLE_S16NE // sample size: 8 or 16 bits
#define CHANNELS 2             // 1 = mono, 2 = stereo

struct AVResampleContext *audio_cntx = 0;

// (LENGTH_MS * RATE * 16 * CHANNELS) / 8000
void resample(char in_buffer[], int out_rate, int nsamples, char out_buffer[])
{
    //char out_buffer[sizeof(in_buffer) * 4];
    audio_cntx = av_resample_init(out_rate, // out rate
                                  RATE,     // in rate
                                  16,       // filter length
                                  10,       // phase count
                                  0,        // linear FIR filter
                                  1.0);     // cutoff frequency
    assert(audio_cntx && "Failed to create resampling context!");

    int samples_consumed;
    //*out_buffer = malloc(sizeof(in_buffer));
    int samples_output = av_resample(audio_cntx,             // resample context
                                     (short *)out_buffer,    // output buffer
                                     (short *)in_buffer,     // input buffer
                                     &samples_consumed,      // &consumed
                                     nsamples,               // nb_samples
                                     sizeof(out_buffer) / 2, // output length
                                     0);                     // is_last
    assert(samples_output > 0 && "Error calling av_resample()!");
    av_resample_close(audio_cntx);
}
When I run this code in the application, the sound I hear is jerky. Why?
I think the size of the array is right; I calculated it considering that half a second should hold 22050 samples. -
Speex decoding produces wrong sound (FFmpeg on iOS)
2 July 2014, by user3796700
I'm trying to use FFmpeg on iOS to play live streams.
One with NellyMoser works, as below (success):
avformat_open_input(&formatContext, "rtmp://my/nellymoser/stream/url", NULL, &options);
Now I tried to play the same stream, but encoded in the Speex format.
So I followed some tutorials and compiled "ogg.a, speex.a, speexdsp.a" for iOS, then re-compiled FFmpeg, linking against those three .a files. However, the output is wrong:
avformat_open_input(&formatContext, "rtmp://my/speex/stream/url", NULL, &options);
The output sound is 2x faster than normal. It seems like only half of the data is decoded.
Has anyone tried something similar before?
I've been stuck on this for several days and really need help here.
Thanks! For reference, here is how I compile FFmpeg:
./configure \
--enable-libspeex \
--disable-doc \
--disable-ffmpeg \
--disable-ffserver \
--enable-cross-compile \
--arch=armv7 \
--target-os=darwin \
--cc=clang \
--as='gas-preprocessor/gas-preprocessor.pl clang' \
--sysroot=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.1.sdk \
--cpu=cortex-a8 \
--extra-cflags='-arch armv7 -I ../speex/armv7/include -I ../libogg/armv7/include' \
--extra-ldflags='-arch armv7 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.1.sdk' \
--extra-ldflags='-L ../speex/armv7/lib -lspeexdsp -lspeex -L ../libogg/armv7/lib -logg' \
--enable-pic \
--prefix=/Users/chienlo/Desktop/speexLibrary/ffmpeg-2.2.4/armv7
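Given a build like the one above, one quick sanity check before debugging the stream itself is to confirm the rebuilt binary actually picked up libspeex; a sketch, assuming the freshly built ffmpeg binary is the one being invoked:

```shell
# Verify the build was configured with libspeex and exposes the Speex decoder
./ffmpeg -version | grep -- --enable-libspeex
./ffmpeg -decoders | grep -i speex
```

If the decoder is missing, FFmpeg may fall back to a different code path, which is worth ruling out before investigating sample-rate mismatches.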