
Media (1)
-
Publishing an image simply
13 April 2011
Updated: February 2012
Language: French
Type: video
Other articles (53)
-
Authorizations overridden by plugins
27 April 2010, by MediaSPIP core
autoriser_auteur_modifier(), so that visitors can edit their own information on the authors page
-
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or later. If needed, contact your MediaSPIP administrator to find out.
-
Accepted formats
28 January 2010. The following commands show which formats and codecs are supported by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 Part 10; m4v: raw MPEG-4 video; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
Possible output video formats
As a first step, (...)
On other sites (8095)
-
MediaCodec AV Sync when decoding
12 June 2020, by ClassA

All of the questions about syncing audio and video when decoding with MediaCodec suggest an "AV sync" mechanism that aligns the two streams using their timestamps.

Here is what I do to achieve this:

I have two threads, one for decoding video and one for decoding audio. To keep them in sync I use MediaExtractor.getSampleTime() to decide whether to release the audio or the video buffers next; please see below:

//This is called after configuring MediaCodec (both audio and video)
private void startPlaybackThreads() {
    //Audio playback thread
    mAudioWorkerThread = new Thread("AudioThread") {
        @Override
        public void run() {
            if (!Thread.interrupted()) {
                try {
                    //Check info below
                    if (shouldPushAudio()) {
                        workLoopAudio();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    };
    mAudioWorkerThread.start();

    //Video playback thread
    mVideoWorkerThread = new Thread("VideoThread") {
        @Override
        public void run() {
            if (!Thread.interrupted()) {
                try {
                    //Check info below
                    if (shouldPushVideo()) {
                        workLoopVideo();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    };
    mVideoWorkerThread.start();
}

//Check if more buffers should be sent to the audio decoder
private boolean shouldPushAudio() {
    //getSampleTime() returns microseconds as a long; keep it as a long,
    //since an int cast overflows after roughly 35 minutes of media time
    long audioTime = mAudioExtractor.getSampleTime();
    long videoTime = mExtractor.getSampleTime();
    return audioTime <= videoTime;
}

//Check if more buffers should be sent to the video decoder
private boolean shouldPushVideo() {
    long audioTime = mAudioExtractor.getSampleTime();
    long videoTime = mExtractor.getSampleTime();
    return audioTime > videoTime;
}




Inside workLoopAudio() and workLoopVideo() is all my MediaCodec logic (not posted here because it is not relevant).

So what I do is: I get the sample time of the video and the audio track, then check which one is further ahead. If the video is ahead, I pass more buffers to my audio decoder, and vice versa.



This seems to work fine: the video and audio play in sync.
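For reference, the push decision above can be expressed as a small pure function, which also makes it easy to unit-test. This is only a sketch of the comparison logic (timestamps in microseconds, as MediaExtractor.getSampleTime() returns them); the MediaCodec plumbing is unchanged:

```java
public class AvSyncDecision {
    enum Feed { AUDIO, VIDEO }

    /**
     * Decide which decoder gets the next buffer: feed the stream whose next
     * sample timestamp is furthest behind, mirroring shouldPushAudio() and
     * shouldPushVideo() above. Note that MediaExtractor.getSampleTime()
     * returns -1 at end of stream, which this comparison does not yet
     * special-case.
     */
    static Feed nextToFeed(long audioSampleTimeUs, long videoSampleTimeUs) {
        return audioSampleTimeUs <= videoSampleTimeUs ? Feed.AUDIO : Feed.VIDEO;
    }

    public static void main(String[] args) {
        System.out.println(nextToFeed(0, 33_333));      // AUDIO: audio is behind
        System.out.println(nextToFeed(50_000, 33_333)); // VIDEO: video is behind
    }
}
```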





My question:

I would like to know whether my approach is correct. Is this how it should be done, or is there another, better way? I could not find any working examples of this (written in Java/Kotlin), hence the question.




EDIT 1:

I've found that the audio trails very slightly behind the video when I decode/play a video that was encoded using FFmpeg. If I use a video that was not encoded using FFmpeg, the video and audio sync perfectly.

The FFmpeg command is nothing out of the ordinary:

ffmpeg -i inputPath -crf 18 -c:v libx264 -preset ultrafast OutputPath




I will provide additional information below.

I initialize/create the AudioTrack like this:


//Audio
mAudioExtractor = new MediaExtractor();
mAudioExtractor.setDataSource(mSource);
int audioTrackIndex = selectAudioTrack(mAudioExtractor);
if (audioTrackIndex < 0){
 throw new IOException("Can't find Audio info!");
}
mAudioExtractor.selectTrack(audioTrackIndex);
mAudioFormat = mAudioExtractor.getTrackFormat(audioTrackIndex);
mAudioMime = mAudioFormat.getString(MediaFormat.KEY_MIME);

mAudioChannels = mAudioFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
mAudioSampleRate = mAudioFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE);

//Minimum AudioTrack buffer for this sample rate / channel configuration
final int min_buf_size = AudioTrack.getMinBufferSize(mAudioSampleRate,
        (mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO),
        AudioFormat.ENCODING_PCM_16BIT);
final int max_input_size = mAudioFormat.getInteger(MediaFormat.KEY_MAX_INPUT_SIZE);
mAudioInputBufSize = min_buf_size > 0 ? min_buf_size * 4 : max_input_size;
if (mAudioInputBufSize > max_input_size) mAudioInputBufSize = max_input_size;
//Round down to a whole number of PCM frames (2 bytes per sample * channel count)
final int frameSizeInBytes = mAudioChannels * 2;
mAudioInputBufSize = (mAudioInputBufSize / frameSizeInBytes) * frameSizeInBytes;

mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
        mAudioSampleRate,
        (mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO),
        AudioFormat.ENCODING_PCM_16BIT,
        min_buf_size,
        AudioTrack.MODE_STREAM);

try {
 mAudioTrack.play();
} catch (final Exception e) {
 Log.e(TAG, "failed to start audio track playing", e);
 mAudioTrack.release();
 mAudioTrack = null;
}
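The buffer-size arithmetic above can be checked in isolation. The sketch below reproduces it as a pure function; the Android calls are replaced by plain parameters, and the numbers in main are illustrative assumptions, not values from a real device:

```java
public class AudioBufSize {
    /**
     * Mirrors the initialization logic above: take the platform's minimum
     * buffer size (or fall back to the track's max input size if the minimum
     * is invalid), quadruple it, clamp it to maxInputSize, then round down
     * to a whole number of PCM frames (2 bytes per sample * channel count).
     */
    static int computeInputBufSize(int minBufSize, int maxInputSize, int channels) {
        int size = minBufSize > 0 ? minBufSize * 4 : maxInputSize;
        if (size > maxInputSize) size = maxInputSize;
        int frameSizeInBytes = channels * 2; // 16-bit PCM
        return (size / frameSizeInBytes) * frameSizeInBytes;
    }

    public static void main(String[] args) {
        System.out.println(computeInputBufSize(3528, 16384, 2)); // 14112: 4 * 3528, already frame-aligned
        System.out.println(computeInputBufSize(5000, 16384, 2)); // 16384: 20000 clamped to maxInputSize
        System.out.println(computeInputBufSize(-1, 8191, 1));    // 8190: falls back to maxInputSize, frame-aligned
    }
}
```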




And I write to the AudioTrack like this:


//Called from within workLoopAudio, when releasing audio buffers
if (bufferAudioIndex >= 0) {
    if (mAudioBufferInfo.size > 0) {
        internalWriteAudio(mAudioOutputBuffers[bufferAudioIndex], mAudioBufferInfo.size);
    }
    mAudioDecoder.releaseOutputBuffer(bufferAudioIndex, false);
}

private boolean internalWriteAudio(final ByteBuffer buffer, final int size) {
    //Grow the scratch buffer if this output buffer is larger than any seen so far
    if (mAudioOutTempBuf.length < size) {
        mAudioOutTempBuf = new byte[size];
    }
    buffer.position(0);
    buffer.get(mAudioOutTempBuf, 0, size);
    buffer.clear();
    if (mAudioTrack != null)
        mAudioTrack.write(mAudioOutTempBuf, 0, size);
    return true;
}




"NEW" question:

The audio trails about 200 ms behind the video if I use a video that was encoded using FFmpeg. Is there a reason why this could be happening?
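One thing that may be worth checking (an assumption to verify, not a confirmed diagnosis): AudioTrack.write() in MODE_STREAM buffers audio ahead of the hardware, so the audible position lags the written position by however much audio sits unplayed in the track. If the video clock is driven by extractor sample times rather than by what is actually audible, that buffered audio shows up as a constant offset. The pure-Java sketch below shows the arithmetic for turning a playback head position (as AudioTrack.getPlaybackHeadPosition() reports, in PCM frames) into an audio-clock timestamp the video could be synced against instead:

```java
public class AudioClock {
    /**
     * Convert a playback head position (in PCM frames, as returned by
     * AudioTrack.getPlaybackHeadPosition()) into microseconds of media time.
     */
    static long framesToUs(long playbackHeadFrames, int sampleRateHz) {
        return playbackHeadFrames * 1_000_000L / sampleRateHz;
    }

    /**
     * How far the written audio is ahead of what has actually been played,
     * i.e. the latency the video clock would need to compensate for.
     */
    static long bufferedLatencyUs(long framesWritten, long playbackHeadFrames, int sampleRateHz) {
        return framesToUs(framesWritten - playbackHeadFrames, sampleRateHz);
    }

    public static void main(String[] args) {
        // Illustrative numbers: 44100 Hz, 8820 frames written but none played yet
        System.out.println(framesToUs(44100, 44100));          // one second of audio
        System.out.println(bufferedLatencyUs(8820, 0, 44100)); // 200 ms of buffered audio
    }
}
```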

-
Can a VideoView be detached and reattached without stopping it?
31 October 2013, by Thierry-Dimitri Roy

I'm building an app where the user clicks a button to show a video full screen. Initially the video is attached to a view inside a ViewPager. To show it fullscreen I detach it from its parent and reattach it to the root view. This works fine, except when the video is switched to fullscreen while playing: when I detach a playing VideoView it simply stops, and I need to restart it. This is not acceptable, since the video starts buffering again before resuming. Here is the part of the code where the detach is done:
final ViewGroup parent = (ViewGroup) findViewById(R.id.parent);
final ViewGroup root = (ViewGroup) findViewById(R.id.root);
Button b = (Button) findViewById(R.id.button);
b.setOnClickListener(new OnClickListener() {
    @Override
    public void onClick(View v) {
        parent.removeView(mVideoView);
        LayoutParams lp = new FrameLayout.LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT);
        root.addView(mVideoView, lp);
    }
});

Depending on the device, I get a different log error, probably because the actual video player is provided by the manufacturer rather than the Android SDK. Here are the error logs for a Nexus 7:
10-30 20:26:18.618 : D/NvOsDebugPrintf(124) : NvMMDecTVMRDestroyParser Begin
10-30 20:26:18.618 : D/NvOsDebugPrintf(124) : --------- Closing TVMR Frame Delivery Thread -------------
10-30 20:26:18.678 : D/NvOsDebugPrintf(124) : ------- NvAvpClose -------
10-30 20:26:18.678 : D/NvOsDebugPrintf(124) : NvMMDecTVMRDestroyParser Done
10-30 20:26:18.678 : D/NvOsDebugPrintf(124) : NvMMLiteTVMRDecPrivateClose Done

I haven't been able to detach the video without stopping it. I tried using a SurfaceView or a TextureView, without success.
I also tried to find a third-party video player. I found a commercial one (http://www.vitamio.org/) that I can't really use for business reasons, and an open-source one that hasn't been updated in the last year (https://code.google.com/p/dolphin-player/).
I'm currently targeting Android 4.2 or later, on tablets only.
-
Font size messes up when I try to hardsub
7 September 2020, by かかし9000

I used the following command for a hardsub and everything worked, but the subtitle size increased:

ffmpeg -vsync 0 -i input.mkv -vf "ass=subs.ass" -c:a copy -c:v h264_nvenc -b:v 700k final.mp4



I used an srt file before switching to ass, but that command made the text take up almost half the screen, and although the force_style filter appeared to execute properly, there was no change in the text size at all.

Although the ass sub style gives me a workable subtitle size, I'd very much like the size to be appropriate.
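For what it's worth, one variant that may be worth trying (the file names and the size value below are placeholders, and this is a sketch rather than a confirmed fix): ASS font sizes are interpreted relative to the script's PlayResX/PlayResY resolution, so either editing those values in subs.ass, or burning the srt directly through the subtitles filter with a force_style override, may give a more predictable size:

```shell
# Sketch: burn the srt directly and override the rendered font size.
# "FontSize" is in the libass script coordinate system, so the visible
# size depends on the subtitle script resolution, not the video resolution.
ffmpeg -i input.mkv \
       -vf "subtitles=subs.srt:force_style='FontSize=24'" \
       -c:a copy -c:v h264_nvenc -b:v 700k final.mp4
```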