Advanced search

Media (0)

Keyword: - Tags -/organisation

No media matching your criteria is available on the site.

Other articles (46)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors can edit their own information on the authors page

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer flash player is used.
    The HTML5 player was created specifically for MediaSPIP: it is fully customizable graphically to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (9833)

  • Font size messes up when I try to hardsub

    7 September 2020, by かかし9000

    I have used the following command for a hardsub and everything worked, but the subtitle size increased:

    


    ffmpeg -vsync 0 -i input.mkv -vf "ass=subs.ass" -c:a copy -c:v h264_nvenc -b:v 700k final.mp4


    


    I used an SRT file before switching to ASS, but that command made the text take up almost half the screen; force_style reported executing properly, yet there was no change in the text size at all.

    


    Though the ASS subtitle style gets me a proper subtitle size, I'd very much like it if the size were appropriate.
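    For an SRT input, one way to pin the rendered size is the subtitles filter's force_style argument; a minimal sketch reusing the file names from the question (the FontSize value is illustrative):

    ffmpeg -vsync 0 -i input.mkv -vf "subtitles=subs.srt:force_style='FontSize=24'" -c:a copy -c:v h264_nvenc -b:v 700k final.mp4

    With an ASS file, the rendered size instead follows the FontSize in the script's Style lines, scaled against its PlayResX/PlayResY headers, so a mismatch between those headers and the video resolution is a common cause of oversized or undersized text.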

    


  • MediaCodec AV Sync when decoding

    12 June 2020, by ClassA

    All of the questions regarding syncing audio and video when decoding with MediaCodec suggest that we should use an "AV Sync" mechanism to sync the video and audio using their timestamps.

    



    Here is what I do to achieve this:

    



    I have 2 threads, one for decoding video and one for audio. To sync the video and audio I'm using Extractor.getSampleTime() to determine whether I should release the audio or video buffers; please see below:

    



    //This is called after configuring MediaCodec (both audio and video)
private void startPlaybackThreads(){
    //Audio playback thread
    mAudioWorkerThread = new Thread("AudioThread") {
        @Override
        public void run() {
            if (!Thread.interrupted()) {
                try {
                    //Check info below
                    if (shouldPushAudio()) {
                        workLoopAudio();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    };
    mAudioWorkerThread.start();

    //Video playback thread
    mVideoWorkerThread = new Thread("VideoThread") {
        @Override
        public void run() {
            if (!Thread.interrupted()) {
                try {
                    //Check info below
                    if (shouldPushVideo()) {
                        workLoopVideo();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    };
    mVideoWorkerThread.start();
}

//Check if more buffers should be sent to the audio decoder
private boolean shouldPushAudio(){
    //getSampleTime() returns microseconds as a long; an (int) cast here would
    //overflow after roughly 35 minutes of media, so keep the values as longs
    long audioTime = mAudioExtractor.getSampleTime();
    long videoTime = mExtractor.getSampleTime();
    return audioTime <= videoTime;
}
//Check if more buffers should be sent to the video decoder
private boolean shouldPushVideo(){
    long audioTime = mAudioExtractor.getSampleTime();
    long videoTime = mExtractor.getSampleTime();
    return audioTime > videoTime;
}


    



    Inside workLoopAudio() and workLoopVideo() is all my MediaCodec logic (I decided not to post it because it's not relevant).

    



    So what I do is: I get the sample times of the video and audio tracks, then check which one is bigger (further ahead). If the video is "ahead", I pass more buffers to my audio decoder, and vice versa.

    



    This seems to be working fine - The video and audio are playing in sync.

    




    



    My question:

    



    I would like to know if my approach is correct (is this how we should be doing it, or is there another/better way)? I could not find any working examples of this (written in Java/Kotlin), hence the question.
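    For reference, a common alternative is to pace rendering from the buffers' presentation timestamps rather than from the extractors' sample times: anchor a wall clock to the first frame and hold each video output buffer until its timestamp comes due, while audio plays freely as the master. A minimal sketch, not from the question's code (mVideoDecoder mirrors the fields above; the clock anchoring is an assumption):

//Sketch: presentation-time-based video pacing
private long mStartTimeUs = -1;

private void releaseVideoBufferWhenDue(int outputIndex, MediaCodec.BufferInfo info) {
    long nowUs = System.nanoTime() / 1000;
    if (mStartTimeUs < 0) {
        mStartTimeUs = nowUs - info.presentationTimeUs; //anchor the clock to the first frame
    }
    long earlyByUs = info.presentationTimeUs - (nowUs - mStartTimeUs);
    if (earlyByUs > 0) {
        try {
            Thread.sleep(earlyByUs / 1000); //frame is early: wait until it is due
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
    mVideoDecoder.releaseOutputBuffer(outputIndex, true); //true = render to the Surface
}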

    




    



    EDIT 1:

    



    I've found that the audio trails behind the video (very slightly) when I decode/play a video that was encoded using FFmpeg. If I use a video that was not encoded using FFmpeg, the video and audio sync perfectly.

    



    The FFmpeg command is nothing out of the ordinary:

    



    -i inputPath -crf 18 -c:v libx264 -preset ultrafast OutputPath


    



    I will be providing additional information below:

    



    I initialize/create AudioTrack like this:

    



    //Audio
mAudioExtractor = new MediaExtractor();
mAudioExtractor.setDataSource(mSource);
int audioTrackIndex = selectAudioTrack(mAudioExtractor);
if (audioTrackIndex < 0){
    throw new IOException("Can't find Audio info!");
}
mAudioExtractor.selectTrack(audioTrackIndex);
mAudioFormat = mAudioExtractor.getTrackFormat(audioTrackIndex);
mAudioMime = mAudioFormat.getString(MediaFormat.KEY_MIME);

mAudioChannels = mAudioFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
mAudioSampleRate = mAudioFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE);

final int min_buf_size = AudioTrack.getMinBufferSize(mAudioSampleRate, (mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO), AudioFormat.ENCODING_PCM_16BIT);
final int max_input_size = mAudioFormat.getInteger(MediaFormat.KEY_MAX_INPUT_SIZE);
//Use four times the minimum buffer size, capped at the codec's maximum input size
mAudioInputBufSize = min_buf_size > 0 ? min_buf_size * 4 : max_input_size;
if (mAudioInputBufSize > max_input_size) mAudioInputBufSize = max_input_size;
//Round down to a whole number of PCM frames (2 bytes per 16-bit sample, per channel)
final int frameSizeInBytes = mAudioChannels * 2;
mAudioInputBufSize = (mAudioInputBufSize / frameSizeInBytes) * frameSizeInBytes;

mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
    mAudioSampleRate,
    (mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO),
    AudioFormat.ENCODING_PCM_16BIT,
    AudioTrack.getMinBufferSize(mAudioSampleRate, mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT),
    AudioTrack.MODE_STREAM);

try {
    mAudioTrack.play();
} catch (final Exception e) {
    Log.e(TAG, "failed to start audio track playing", e);
    mAudioTrack.release();
    mAudioTrack = null;
}


    



    And I write to the AudioTrack like this:

    



    //Called from within workLoopAudio, when releasing audio buffers
if (bufferAudioIndex >= 0) {
    if (mAudioBufferInfo.size > 0) {
        internalWriteAudio(mAudioOutputBuffers[bufferAudioIndex], mAudioBufferInfo.size);
    }
    mAudioDecoder.releaseOutputBuffer(bufferAudioIndex, false);
}

private boolean internalWriteAudio(final ByteBuffer buffer, final int size) {
    //Grow the temporary byte array if the decoded chunk doesn't fit
    if (mAudioOutTempBuf.length < size) {
        mAudioOutTempBuf = new byte[size];
    }
    buffer.position(0);
    buffer.get(mAudioOutTempBuf, 0, size);
    buffer.clear();
    //In MODE_STREAM this write blocks until the data has been queued
    if (mAudioTrack != null)
        mAudioTrack.write(mAudioOutTempBuf, 0, size);
    return true;
}


    



    "NEW" Question :

    



    The audio trails about 200 ms behind the video if I use a video that was encoded using FFmpeg. Is there a reason why this could be happening?
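    One way to check whether that 200 ms comes from AudioTrack's own output latency is to compare the frames written with the frames the hardware has actually played. A minimal sketch (API 19+; framesWritten is a hypothetical running total of PCM frames passed to write(), not a field from the code above):

//Sketch: estimate AudioTrack output latency from its playback timestamp
AudioTimestamp ts = new AudioTimestamp();
if (mAudioTrack.getTimestamp(ts)) {
    //framesWritten is assumed to be maintained in internalWriteAudio()
    long pendingFrames = framesWritten - ts.framePosition;
    long latencyUs = pendingFrames * 1000000L / mAudioSampleRate;
    Log.d(TAG, "estimated AudioTrack latency: " + latencyUs + " us");
}

    If the measured latency is close to the observed offset, delaying video by that amount is usually a better fix than re-encoding the file.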

    


  • Applying same filter_complex many times before output [duplicate]

    19 August 2019, by Fabián

    It’s not a duplicate. This is about using filter_complex, not -vf.

    In my video there's an object that has shades of yellow (more orange-like) against a solid yellow background.

    I need to output all frames into a PNG sequence, using a colorkey filter to key out the yellow background:

    ffmpeg -ss 4 -i original.mp4 -t 2 -filter_complex "[0:v]colorkey=0xfff31b:0.125:0[ckout]" -map "[ckout]" colorkey-%d.png

    This removes the specific color but leaves some points behind, and some items are yellow-themed, so raising the blend value is a no-go for this scenario.

    I need to get rid of 4 specific yellow colors in the frames: 0xfff31b, 0xfae56b, 0xfaec46 and 0xeee2a0, and I plan to run the same filter once per color before getting the final result.

    So first I tried this:

    ffmpeg -ss 4 -i original.mp4 -t 2 -filter_complex "[0:v]colorkey=0xfff31b:0.4:0[ckout1];[0:v]colorkey=0xfae56b:0.4:0[ckout2];[0:v]colorkey=0xfaec46:0.4:0[ckout3];[0:v]colorkey=0xeee2a0:0.4:0[ckout4]" -map "[ckout4]" colorkeyrefined-%d.png

    Then this:

    ffmpeg -ss 4 -i original.mp4 -t 2 -filter_complex "[0:v]colorkey=0xfff31b:0.4:0[ckout]" -filter_complex "[0:v]colorkey=0xfae56b:0.4:0[ckout]" -filter_complex "[0:v]colorkey=0xfaec46:0.4:0[ckout]" -filter_complex "[0:v]colorkey=0xeee2a0:0.4:0[ckout]" -map "[ckout]" colorkeyrefined-%d.png

    But both display the same error:

    Filter colorkey has an unconnected output.

    Is there a way to apply the colorkey feature 4 times (with the mentioned values) in one go?
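    For reference, the error goes away if the keys are chained rather than branched: each colorkey's labelled output must feed the next filter, and only the final label is mapped. A sketch reusing the values from the question:

    ffmpeg -ss 4 -i original.mp4 -t 2 -filter_complex "[0:v]colorkey=0xfff31b:0.4:0[c1];[c1]colorkey=0xfae56b:0.4:0[c2];[c2]colorkey=0xfaec46:0.4:0[c3];[c3]colorkey=0xeee2a0:0.4:0[ckout]" -map "[ckout]" colorkeyrefined-%d.png

    Every intermediate label ([c1], [c2], [c3]) is consumed exactly once, so no filter is left with an unconnected output.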