
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (45)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page -
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: it is fully customizable graphically to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...) -
From upload to the final video [standalone version]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are performed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
On other sites (7923)
-
MediaCodec AV Sync when decoding
12 June 2020, by ClassA
All of the questions about syncing audio and video when decoding with MediaCodec suggest using an "AV sync" mechanism that keeps the two streams aligned by their timestamps.


Here is what I do to achieve this:



I have two threads, one for decoding video and one for audio. To keep them in sync I use Extractor.getSampleTime() to decide whether to release the audio or the video buffers; see below:


// Called after configuring MediaCodec (both audio and video)
private void startPlaybackThreads() {
    // Audio playback thread
    mAudioWorkerThread = new Thread("AudioThread") {
        @Override
        public void run() {
            if (!Thread.interrupted()) {
                try {
                    // Check info below
                    if (shouldPushAudio()) {
                        workLoopAudio();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    };
    mAudioWorkerThread.start();

    // Video playback thread
    mVideoWorkerThread = new Thread("VideoThread") {
        @Override
        public void run() {
            if (!Thread.interrupted()) {
                try {
                    // Check info below
                    if (shouldPushVideo()) {
                        workLoopVideo();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    };
    mVideoWorkerThread.start();
}

// Check whether more buffers should be sent to the audio decoder
private boolean shouldPushAudio() {
    // Sample times are microseconds; keep them as long to avoid int overflow
    long audioTime = mAudioExtractor.getSampleTime();
    long videoTime = mExtractor.getSampleTime();
    return audioTime <= videoTime;
}

// Check whether more buffers should be sent to the video decoder
private boolean shouldPushVideo() {
    long audioTime = mAudioExtractor.getSampleTime();
    long videoTime = mExtractor.getSampleTime();
    return audioTime > videoTime;
}




Inside workLoopAudio() and workLoopVideo() is all my MediaCodec logic (not posted here because it is not relevant).


So I read the sample time of the video and audio tracks and check which one is larger (further ahead). If the video is "ahead", I pass more buffers to my audio decoder, and vice versa.
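For comparison, a common alternative (a sketch under assumptions, not the question's code; all names below are hypothetical) is to treat the audio clock as master and release each decoded video frame only when its presentation timestamp has caught up with the audio position:

```java
// Hypothetical helper, not from the question above: audio-master A/V sync.
// The audio clock would typically come from AudioTrack.getTimestamp() or the
// presentationTimeUs of the last audio buffer written; here it is a parameter.
final class AvSyncDecision {

    // True when a decoded video frame should be rendered now; false when the
    // video thread should keep holding it because the frame is still ahead
    // of the audio clock by more than the tolerance.
    static boolean shouldRenderVideoFrame(long videoPtsUs, long audioClockUs, long toleranceUs) {
        return videoPtsUs <= audioClockUs + toleranceUs;
    }
}
```

With this style of sync the audio thread runs freely and only the video thread waits, instead of the two extractors gating each other on getSampleTime().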



This seems to work fine: the video and audio play in sync.





My question:


I would like to know whether my approach is correct (is this how it should be done, or is there another/better way)? I could not find any working examples of this written in Java/Kotlin, hence the question.




EDIT 1:



I've found that the audio trails very slightly behind the video when I decode/play a video that was encoded using FFmpeg. If I use a video that was not encoded with FFmpeg, the video and audio sync perfectly.


The FFmpeg command is nothing out of the ordinary:


-i inputPath -crf 18 -c:v libx264 -preset ultrafast OutputPath




Additional information is provided below:



I initialize/create AudioTrack like this:


//Audio
mAudioExtractor = new MediaExtractor();
mAudioExtractor.setDataSource(mSource);
int audioTrackIndex = selectAudioTrack(mAudioExtractor);
if (audioTrackIndex < 0) {
    throw new IOException("Can't find Audio info!");
}
mAudioExtractor.selectTrack(audioTrackIndex);
mAudioFormat = mAudioExtractor.getTrackFormat(audioTrackIndex);
mAudioMime = mAudioFormat.getString(MediaFormat.KEY_MIME);

mAudioChannels = mAudioFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
mAudioSampleRate = mAudioFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE);

final int channelConfig = (mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO);
final int min_buf_size = AudioTrack.getMinBufferSize(mAudioSampleRate, channelConfig, AudioFormat.ENCODING_PCM_16BIT);
final int max_input_size = mAudioFormat.getInteger(MediaFormat.KEY_MAX_INPUT_SIZE);
mAudioInputBufSize = min_buf_size > 0 ? min_buf_size * 4 : max_input_size;
if (mAudioInputBufSize > max_input_size) mAudioInputBufSize = max_input_size;
final int frameSizeInBytes = mAudioChannels * 2;  // 16-bit PCM: 2 bytes per channel
mAudioInputBufSize = (mAudioInputBufSize / frameSizeInBytes) * frameSizeInBytes;  // round down to whole frames

mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
        mAudioSampleRate,
        channelConfig,
        AudioFormat.ENCODING_PCM_16BIT,
        min_buf_size,
        AudioTrack.MODE_STREAM);

try {
    mAudioTrack.play();
} catch (final Exception e) {
    Log.e(TAG, "failed to start audio track playing", e);
    mAudioTrack.release();
    mAudioTrack = null;
}




And I write to the AudioTrack like this:


//Called from within workLoopAudio, when releasing audio buffers
if (bufferAudioIndex >= 0) {
    if (mAudioBufferInfo.size > 0) {
        internalWriteAudio(mAudioOutputBuffers[bufferAudioIndex], mAudioBufferInfo.size);
    }
    mAudioDecoder.releaseOutputBuffer(bufferAudioIndex, false);
}

private boolean internalWriteAudio(final ByteBuffer buffer, final int size) {
    if (mAudioOutTempBuf.length < size) {
        mAudioOutTempBuf = new byte[size];
    }
    buffer.position(0);
    buffer.get(mAudioOutTempBuf, 0, size);
    buffer.clear();
    if (mAudioTrack != null) {
        mAudioTrack.write(mAudioOutTempBuf, 0, size);
    }
    return true;
}




"NEW" question:



The audio trails about 200 ms behind the video if I use a video that was encoded using FFmpeg. Is there a reason why this could be happening?
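One thing worth checking (an illustrative back-of-the-envelope calculation, not a diagnosis; the 8192-byte minimum buffer below is a made-up example value): the initialization code sizes the audio input buffer at up to four times the minimum buffer, and any PCM that sits unplayed in a buffer is heard that much later:

```java
// Illustrative arithmetic only: how many milliseconds of PCM a buffer holds.
final class PcmBufferMath {

    // bufferBytes of PCM at the given sample rate, channel count and bytes
    // per sample corresponds to this many milliseconds of audio.
    static long bufferMillis(int bufferBytes, int sampleRate, int channels, int bytesPerSample) {
        int frameSize = channels * bytesPerSample;  // bytes per PCM frame
        long frames = bufferBytes / frameSize;      // frames the buffer holds
        return frames * 1000L / sampleRate;         // duration in milliseconds
    }
}
```

For 16-bit stereo at 44.1 kHz, four times a hypothetical 8192-byte minimum buffer is bufferMillis(32768, 44100, 2, 2) = 185 ms of audio, the same order of magnitude as the observed ~200 ms drift.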

-
Can a VideoView be detached and reattached without stopping it?
31 October 2013, by Thierry-Dimitri Roy
I'm building an app where the user clicks a button to show a video full screen. Initially the video is attached to a view inside a ViewPager. To show it fullscreen I detach it from its parent and reattach it to the root view. This works fine, except when the video is switched to fullscreen while playing: when I detach a playing VideoView it simply stops, and I have to restart it. This is not acceptable, since the video starts buffering again before resuming. Here is the part of the code where the detach is done:
final ViewGroup parent = (ViewGroup) findViewById(R.id.parent);
final ViewGroup root = (ViewGroup) findViewById(R.id.root);
Button b = (Button) findViewById(R.id.button);
b.setOnClickListener(new OnClickListener() {
    @Override
    public void onClick(View v) {
        parent.removeView(mVideoView);
        LayoutParams lp = new FrameLayout.LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT);
        root.addView(mVideoView, lp);
    }
});
Depending on the device, I get a different error log, probably because the actual video player is provided by the manufacturer rather than by the Android SDK. Here are the error logs for a Nexus 7:
10-30 20:26:18.618 : D/NvOsDebugPrintf(124) : NvMMDecTVMRDestroyParser Begin
10-30 20:26:18.618 : D/NvOsDebugPrintf(124) : --------- Closing TVMR Frame Delivery Thread -------------
10-30 20:26:18.678 : D/NvOsDebugPrintf(124) : ------- NvAvpClose -------
10-30 20:26:18.678 : D/NvOsDebugPrintf(124) : NvMMDecTVMRDestroyParser Done
10-30 20:26:18.678 : D/NvOsDebugPrintf(124) : NvMMLiteTVMRDecPrivateClose Done
I haven't been able to detach the video without stopping it. I tried using SurfaceView and TextureView without success.
I also tried to find a third-party video player. I found a commercial one (http://www.vitamio.org/) that I can't use for business reasons, and an open-source one that hasn't been updated in the last year (https://code.google.com/p/dolphin-player/).
I'm currently targeting Android 4.2 or better, on tablets only.
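For what it's worth, a workaround often suggested for this situation (a sketch under assumptions, untested here, with hypothetical names; not from the question): drive playback with a plain MediaPlayer rendered into a TextureView, keep the SurfaceTexture alive across the reparent by returning false from onSurfaceTextureDestroyed(), and hand the same SurfaceTexture to the TextureView in the new parent, so the player's output surface never dies:

```java
// Sketch only (Android framework classes; not runnable outside a device).
// mPlayer is a MediaPlayer already rendering via mPlayer.setSurface(...).
textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        // Returning false keeps the SurfaceTexture (and decoding into it)
        // alive while the TextureView is detached from the window.
        return false;
    }
    @Override public void onSurfaceTextureAvailable(SurfaceTexture s, int w, int h) {}
    @Override public void onSurfaceTextureSizeChanged(SurfaceTexture s, int w, int h) {}
    @Override public void onSurfaceTextureUpdated(SurfaceTexture s) {}
});

// Later, after removing the old TextureView and adding a new one full screen:
newTextureView.setSurfaceTexture(savedSurfaceTexture);  // same texture, playback continues
```

This sidesteps VideoView entirely; VideoView owns its SurfaceView, whose surface is destroyed on detach, which is consistent with the playback stopping.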
-
Font size messes up when I try to hardsub
7 September 2020, by かかし9000
I used the following command to hardsub, and everything worked, but the subtitle size increased:


ffmpeg -vsync 0 -i input.mkv -vf "ass=subs.ass" -c:a copy -c:v h264_nvenc -b:v 700k final.mp4



I used an srt file before switching to ass, but that command made the text take up almost half the screen; the force_style filter reported that it executed properly, yet there was no change in the text size at all.

Though the ass sub style gets me a proper subtitle size, I'd very much like it if the size were appropriate.
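Two points that may be relevant here (stated as assumptions, not a confirmed diagnosis): when burning in an .ass file, libass takes the font size from the Style lines together with the PlayResX/PlayResY headers inside subs.ass itself, so a PlayRes that doesn't match the video resolution scales the text up or down; and for an .srt input, the default style can be overridden through the subtitles filter's force_style option, for example:

```
ffmpeg -i input.mkv -vf "subtitles=subs.srt:force_style='FontSize=20'" -c:a copy -c:v h264_nvenc -b:v 700k output.mp4
```

Note that force_style only affects libass's default style for formats like srt; it does not replace explicit Style definitions already present in an .ass script.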