
Media (1)
-
Richard Stallman and free software
19 October 2011
Updated: May 2013
Language: French
Type: Text
Other articles (97)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers the Flowplayer Flash fallback is used.
MediaSPIP allows media playback on major mobile platforms with the above (...)
-
Emballe médias: what is it for?
4 February 2011
This plugin aims to manage sites for publishing documents of all types online.
It creates "media" items, namely: a "media" is a SPIP article created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a "media" article;
On other sites (9758)
-
MediaCodec AV Sync when decoding
12 June 2020, by ClassA
All of the questions regarding syncing audio and video when decoding with MediaCodec suggest that we should use an "AV Sync" mechanism to sync the video and audio using their timestamps.
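In practice that mechanism usually means comparing each decoded video buffer's presentationTimeUs with a master clock (most often the audio position) and only rendering the frame when it is due. A minimal sketch of the idea, not taken from the post (the audioClockUs() helper and the videoDecoder/outIndex/info names are assumptions):

//Sketch of a timestamp-based check: release a decoded video frame only when the
//audio clock has caught up with its presentation time (all names here are assumed).
private void syncAndReleaseVideo(MediaCodec videoDecoder, int outIndex,
                                 MediaCodec.BufferInfo info) throws InterruptedException {
    long earlyUs = info.presentationTimeUs - audioClockUs();
    if (earlyUs > 10_000) {
        Thread.sleep(earlyUs / 1_000);                      //frame is early: wait for the clock
    }
    if (earlyUs < -30_000) {
        videoDecoder.releaseOutputBuffer(outIndex, false);  //hopelessly late: drop the frame
    } else {
        videoDecoder.releaseOutputBuffer(outIndex, true);   //on time: render to the Surface
    }
}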


Here is what I do to achieve this:



I have two threads, one for decoding video and one for audio. To sync the video and audio I'm using Extractor.getSampleTime() to determine whether I should release the audio or video buffers; please see below:


//This is called after configuring MediaCodec (both audio and video)
private void startPlaybackThreads() {
    //Audio playback thread
    mAudioWorkerThread = new Thread("AudioThread") {
        @Override
        public void run() {
            if (!Thread.interrupted()) {
                try {
                    //Check info below
                    if (shouldPushAudio()) {
                        workLoopAudio();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    };
    mAudioWorkerThread.start();

    //Video playback thread
    mVideoWorkerThread = new Thread("VideoThread") {
        @Override
        public void run() {
            if (!Thread.interrupted()) {
                try {
                    //Check info below
                    if (shouldPushVideo()) {
                        workLoopVideo();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    };
    mVideoWorkerThread.start();
}

//Check if more buffers should be sent to the audio decoder
private boolean shouldPushAudio() {
    int audioTime = (int) mAudioExtractor.getSampleTime();
    int videoTime = (int) mExtractor.getSampleTime();
    return audioTime <= videoTime;
}

//Check if more buffers should be sent to the video decoder
private boolean shouldPushVideo() {
    int audioTime = (int) mAudioExtractor.getSampleTime();
    int videoTime = (int) mExtractor.getSampleTime();
    return audioTime > videoTime;
}




Inside workLoopAudio() and workLoopVideo() is all my MediaCodec logic (I decided not to post it because it's not relevant).


So what I do is get the sample time of the video and audio tracks, then check which one is bigger (further ahead). If the video is "ahead" I pass more buffers to my audio decoder, and vice versa.



This seems to be working fine: the video and audio play in sync.





My question:


I would like to know if my approach is correct (is this how we should be doing it, or is there another/better way)? I could not find any working examples of this (written in Java/Kotlin), hence the question.




EDIT 1:



I've found that the audio trails behind the video (very slightly) when I decode/play a video that was encoded using FFmpeg. If I use a video that was not encoded using FFmpeg, the video and audio sync perfectly.


The FFmpeg command is nothing out of the ordinary:


-i inputPath -crf 18 -c:v libx264 -preset ultrafast OutputPath
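Note that since no -c:a option is given, ffmpeg re-encodes the audio track with its default AAC encoder when writing an .mp4, and AAC encoding adds priming samples at the start of the stream; depending on how the player handles the container's edit list, that can shift audio slightly relative to video. A quick way to test whether the re-encode itself introduces the offset (a suggestion, not from the post) is to keep the original audio stream unchanged:

-i inputPath -crf 18 -c:v libx264 -preset ultrafast -c:a copy OutputPath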




I will be providing additional information below:



I initialize/create the AudioTrack like this:


//Audio
mAudioExtractor = new MediaExtractor();
mAudioExtractor.setDataSource(mSource);
int audioTrackIndex = selectAudioTrack(mAudioExtractor);
if (audioTrackIndex < 0) {
    throw new IOException("Can't find Audio info!");
}
mAudioExtractor.selectTrack(audioTrackIndex);
mAudioFormat = mAudioExtractor.getTrackFormat(audioTrackIndex);
mAudioMime = mAudioFormat.getString(MediaFormat.KEY_MIME);

mAudioChannels = mAudioFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
mAudioSampleRate = mAudioFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE);

final int min_buf_size = AudioTrack.getMinBufferSize(mAudioSampleRate, (mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO), AudioFormat.ENCODING_PCM_16BIT);
final int max_input_size = mAudioFormat.getInteger(MediaFormat.KEY_MAX_INPUT_SIZE);
mAudioInputBufSize = min_buf_size > 0 ? min_buf_size * 4 : max_input_size;
if (mAudioInputBufSize > max_input_size) mAudioInputBufSize = max_input_size;
final int frameSizeInBytes = mAudioChannels * 2;
mAudioInputBufSize = (mAudioInputBufSize / frameSizeInBytes) * frameSizeInBytes;

mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
        mAudioSampleRate,
        (mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO),
        AudioFormat.ENCODING_PCM_16BIT,
        AudioTrack.getMinBufferSize(mAudioSampleRate, mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT),
        AudioTrack.MODE_STREAM);

try {
    mAudioTrack.play();
} catch (final Exception e) {
    Log.e(TAG, "failed to start audio track playing", e);
    mAudioTrack.release();
    mAudioTrack = null;
}




And I write to the AudioTrack like this:


//Called from within workLoopAudio, when releasing audio buffers
if (bufferAudioIndex >= 0) {
    if (mAudioBufferInfo.size > 0) {
        internalWriteAudio(mAudioOutputBuffers[bufferAudioIndex], mAudioBufferInfo.size);
    }
    mAudioDecoder.releaseOutputBuffer(bufferAudioIndex, false);
}

private boolean internalWriteAudio(final ByteBuffer buffer, final int size) {
    if (mAudioOutTempBuf.length < size) {
        mAudioOutTempBuf = new byte[size];
    }
    buffer.position(0);
    buffer.get(mAudioOutTempBuf, 0, size);
    buffer.clear();
    if (mAudioTrack != null)
        mAudioTrack.write(mAudioOutTempBuf, 0, size);
    return true;
}
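The timestamps used for syncing have to be measured against some playback clock, and one common source for that clock is the AudioTrack itself. A minimal sketch, assuming the mAudioTrack and mAudioSampleRate fields shown above (the audioClockUs name is mine, not from the post):

//Sketch only: derive a playback clock in microseconds from the AudioTrack above.
//getPlaybackHeadPosition() returns the number of PCM frames played so far as a
//32-bit value that wraps around, hence the unsigned mask.
private long audioClockUs() {
    long playedFrames = mAudioTrack.getPlaybackHeadPosition() & 0xFFFFFFFFL;
    return (playedFrames * 1_000_000L) / mAudioSampleRate;
}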




"NEW" Question :



The audio trails about 200 ms behind the video if I use a video that was encoded using FFmpeg. Is there a reason why this could be happening?
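One thing that can be checked directly (a suggestion, not part of the original post) is whether the FFmpeg-encoded file itself declares different start times for its audio and video streams, since re-encoded AAC audio often starts slightly later than the video; ffprobe prints the per-stream start_time:

ffprobe -v error -show_entries stream=index,codec_type,start_time -of compact OutputPath

If the two streams report noticeably different start_time values, the offset is baked into the file itself rather than introduced by the playback code.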

-
Applying same filter_complex many times before output [duplicate]
19 August 2019, by Fabián
This question already has an answer here:
It's not a duplicate. This is about using filter_complex, not -vf.
In my video there's an object that has shades of yellow (more orange-like) against a solid yellow background.
I need to output all frames into a PNG sequence, using a color key filter to remove the yellow from the background:
ffmpeg -ss 4 -i original.mp4 -t 2 -filter_complex "[0:v]colorkey=0xfff31b:0.125:0[ckout]" -map "[ckout]" colorkey-%d.png
This removes the specific color but leaves some tints behind, and some items are yellow-themed, so raising the blend value is a no-go for this scenario.
I need to get rid of 4 specific yellow colors from the frames: 0xfff31b, 0xfae56b, 0xfaec46 and 0xeee2a0, and I plan to run the same filter for each specific color before getting the final result. So first I tried this:
ffmpeg -ss 4 -i original.mp4 -t 2 -filter_complex "[0:v]colorkey=0xfff31b:0.4:0[ckout1];[0:v]colorkey=0xfae56b:0.4:0[ckout2];[0:v]colorkey=0xfaec46:0.4:0[ckout3];[0:v]colorkey=0xeee2a0:0.4:0[ckout4]" -map "[ckout4]" colorkeyrefined-%d.png
Then this:
ffmpeg -ss 4 -i original.mp4 -t 2 -filter_complex "[0:v]colorkey=0xfff31b:0.4:0[ckout]" -filter_complex "[0:v]colorkey=0xfae56b:0.4:0[ckout]" -filter_complex "[0:v]colorkey=0xfaec46:0.4:0[ckout]" -filter_complex "[0:v]colorkey=0xeee2a0:0.4:0[ckout]" -map "[ckout]" colorkeyrefined-%d.png
But both display the same error:
Filter colorkey has an unconnected output.
Is there a way to apply the colorkey filter 4 times (with the mentioned values) in one go?
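For what it's worth, the usual way to avoid the "unconnected output" error is to chain the filters inside a single -filter_complex, so that each colorkey reads the previous one's output instead of the original [0:v]. A sketch of that chain with the values above (untested against this footage):

ffmpeg -ss 4 -i original.mp4 -t 2 -filter_complex "[0:v]colorkey=0xfff31b:0.4:0[c1];[c1]colorkey=0xfae56b:0.4:0[c2];[c2]colorkey=0xfaec46:0.4:0[c3];[c3]colorkey=0xeee2a0:0.4:0[ckout]" -map "[ckout]" colorkeyrefined-%d.png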
-
Build ffmpeg on a build machine
18 July 2019, by RDI
Build ffmpeg on a build PC using libx264 and shared libraries (not static).
I am building on a Red Hat 6.6 server and the final target machine is CentOS 6.6.
I am trying, as said, to build ffmpeg with encoding enabled (via libx264) and shared libraries; of course I do not want to install the libraries on the build PC, they should only be extracted and then delivered together with the final RPM.
After "./configure" I get all the RPMs (related to ffmpeg), but installing ffmpeg-libs on the build PC fails because libx264.so.157 is not found, even though as a test I installed it (configure/make/make install) and it is present in /usr/local/lib. Where am I wrong?
Thanks
This is my SPEC file at the moment:
ldconfig /usr/local/lib
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
# configure
./configure \
--enable-gpl --disable-static --enable-shared --extra-cflags="-I/usr/local/include" --extra-ldflags="-L/usr/local/lib" --extra-libs=-ldl --disable-autodetect --disable-doc --disable-postproc --disable-ffplay --disable-everything --enable-encoder=aac --enable-encoder=png --enable-encoder=mjpeg --enable-encoder=libx264 --enable-decoder=aac --enable-decoder=h264 --enable-decoder=mpeg4 --enable-decoder=rawvideo --enable-decoder=png --enable-muxer=mp4 --enable-muxer=stream_segment --enable-muxer=image2 --enable-demuxer=aac --enable-demuxer=h264 --enable-demuxer=mov --enable-demuxer=rtp --enable-parser=aac --enable-parser=h264 --enable-parser=mpeg4video --enable-bsf=aac_adtstoasc --enable-protocol=file --enable-protocol=http --enable-protocol=tcp --enable-protocol=rtp --enable-protocol=udp --enable-indev=xcbgrab --disable-alsa --enable-libxcb --enable-libxcb-xfixes --enable-libxcb-shape --enable-zlib --prefix=%{_prefix} --bindir=%{_bindir} --datadir=%{_datadir}/%{name} --shlibdir=%{_libdir} --enable-alsa --enable-avfilter --enable-avresample --enable-libx264 --enable-filter=scale \