
Media (91)

Other articles (38)

  • Customize by adding your logo, banner, or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should be attached automatically; objet, the type of object to which (...)

  • Writing a news item

    21 June 2013, by

    Present changes to your MédiaSPIP, or news about your projects, using the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the form used to create a news item.
    News item creation form. For a news-item document, the fields offered by default are: Publication date (customize the publication date) (...)

On other sites (5874)

  • MediaCodec AV Sync when decoding

    12 June 2020, by ClassA

    All of the questions regarding syncing audio and video when decoding with MediaCodec suggest that we should use an "AV Sync" mechanism to sync the video and audio using their timestamps.

    



    Here is what I do to achieve this:

    



    I have 2 threads, one for decoding video and one for decoding audio. To sync the video and audio, I'm using Extractor.getSampleTime() to determine whether I should release the audio or the video buffers next; please see below:

    



    //This is called after configuring MediaCodec(both audio and video)
private void startPlaybackThreads(){
    //Audio playback thread
    mAudioWorkerThread = new Thread("AudioThread") {
        @Override
        public void run() {
            if (!Thread.interrupted()) {
                try {
                    //Check info below
                    if (shouldPushAudio()) {
                        workLoopAudio();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    };
    mAudioWorkerThread.start();

    //Video playback thread
    mVideoWorkerThread = new Thread("VideoThread") {
        @Override
        public void run() {
            if (!Thread.interrupted()) {
                try {
                    //Check info below
                    if (shouldPushVideo()) {
                        workLoopVideo();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    };
    mVideoWorkerThread.start();
}

//Check if more buffers should be sent to the audio decoder
private boolean shouldPushAudio(){
    //getSampleTime() returns microseconds as a long; keep it as a long to avoid overflow
    long audioTime = mAudioExtractor.getSampleTime();
    long videoTime = mExtractor.getSampleTime();
    return audioTime <= videoTime;
}
//Check if more buffers should be sent to the video decoder
private boolean shouldPushVideo(){
    long audioTime = mAudioExtractor.getSampleTime();
    long videoTime = mExtractor.getSampleTime();
    return audioTime > videoTime;
}


    



    Inside workLoopAudio() and workLoopVideo() is all my MediaCodec logic (I decided not to post it because it's not relevant).

    



    So what I do is: I get the sample times of the video and audio tracks, then check which one is larger (further ahead). If the video is ahead, I pass more buffers to my audio decoder, and vice versa.

    



    This seems to be working fine; the video and audio are playing in sync.

    




    



    My question:

    



    I would like to know whether my approach is correct (is this how we should be doing it, or is there another/better way)? I could not find any working examples of this (written in Java/Kotlin), hence the question.
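
    For reference, a common alternative (not something taken from this post) is to pace rendering with the decoder output timestamps, BufferInfo.presentationTimeUs, measured against a shared clock, rather than with the extractor's sample times. A minimal sketch of that idea; the fields mVideoDecoder and mClockStartUs are hypothetical and do not appear in the code above:

//Hypothetical sketch of timestamp-driven rendering, not the code used in this post.
//mClockStartUs would be set once when playback starts, e.g.:
//  mClockStartUs = System.nanoTime() / 1000 - firstPresentationTimeUs;
private void renderVideoBuffer(int index, MediaCodec.BufferInfo info) {
    long nowUs = System.nanoTime() / 1000;
    long dueUs = mClockStartUs + info.presentationTimeUs; //when this frame should be shown
    long earlyUs = dueUs - nowUs;
    if (earlyUs > 1000) {
        try {
            //wait until the frame is due
            Thread.sleep(earlyUs / 1000, (int) ((earlyUs % 1000) * 1000));
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
    //true = render the buffer to the Surface the decoder was configured with
    mVideoDecoder.releaseOutputBuffer(index, true);
}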

    




    



    EDIT 1:

    



    I've found that the audio trails behind the video (very slightly) when I decode/play a video that was encoded using FFmpeg. If I use a video that was not encoded using FFmpeg, the video and audio sync perfectly.

    



    The FFmpeg command is nothing out of the ordinary:

    



    -i inputPath -crf 18 -c:v libx264 -preset ultrafast OutputPath
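
    One thing worth checking (an assumption on my part, not something stated in the original post) is whether the re-encoded file ends up with different start_time values for its audio and video streams, since a per-stream offset would show up as exactly this kind of constant lag. A quick way to inspect that with ffprobe:

ffprobe -v error -select_streams v -show_entries stream=codec_name,start_time -of default=noprint_wrappers=1 OutputPath
ffprobe -v error -select_streams a -show_entries stream=codec_name,start_time -of default=noprint_wrappers=1 OutputPath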


    



    I will be providing additional information below:

    



    I initialize/create AudioTrack like this:

    



    //Audio
mAudioExtractor = new MediaExtractor();
mAudioExtractor.setDataSource(mSource);
int audioTrackIndex = selectAudioTrack(mAudioExtractor);
if (audioTrackIndex < 0){
    throw new IOException("Can't find Audio info!");
}
mAudioExtractor.selectTrack(audioTrackIndex);
mAudioFormat = mAudioExtractor.getTrackFormat(audioTrackIndex);
mAudioMime = mAudioFormat.getString(MediaFormat.KEY_MIME);

mAudioChannels = mAudioFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
mAudioSampleRate = mAudioFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE);

final int min_buf_size = AudioTrack.getMinBufferSize(mAudioSampleRate, (mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO), AudioFormat.ENCODING_PCM_16BIT);
final int max_input_size = mAudioFormat.getInteger(MediaFormat.KEY_MAX_INPUT_SIZE);
mAudioInputBufSize =  min_buf_size > 0 ? min_buf_size * 4 : max_input_size;
if (mAudioInputBufSize > max_input_size) mAudioInputBufSize = max_input_size;
final int frameSizeInBytes = mAudioChannels * 2;
mAudioInputBufSize = (mAudioInputBufSize / frameSizeInBytes) * frameSizeInBytes;

mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
    mAudioSampleRate,
    (mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO),
    AudioFormat.ENCODING_PCM_16BIT,
    AudioTrack.getMinBufferSize(mAudioSampleRate, mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT),
    AudioTrack.MODE_STREAM);

try {
    mAudioTrack.play();
} catch (final Exception e) {
    Log.e(TAG, "failed to start audio track playing", e);
    mAudioTrack.release();
    mAudioTrack = null;
}


    



    And I write to the AudioTrack like this:

    



    //Called from within workLoopAudio, when releasing audio buffers
if (bufferAudioIndex >= 0) {
    if (mAudioBufferInfo.size > 0) {
        internalWriteAudio(mAudioOutputBuffers[bufferAudioIndex], mAudioBufferInfo.size);
    }
    mAudioDecoder.releaseOutputBuffer(bufferAudioIndex, false);
}

private boolean internalWriteAudio(final ByteBuffer buffer, final int size) {
    if (mAudioOutTempBuf.length < size) {
        mAudioOutTempBuf = new byte[size];
    }
    buffer.position(0);
    buffer.get(mAudioOutTempBuf, 0, size);
    buffer.clear();
    if (mAudioTrack != null)
        mAudioTrack.write(mAudioOutTempBuf, 0, size);
    return true;
}
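
    If the offset turns out to come from AudioTrack buffering rather than from the file itself, one way to quantify it is to compare how much audio the device has actually rendered with what has been queued. A minimal sketch, assuming the mAudioTrack and mAudioSampleRate fields from the snippets above; the helper name is hypothetical:

//Hypothetical helper, not part of the original post.
//Returns how much audio (in microseconds) the AudioTrack has actually played so far.
private long getPlayedAudioUs() {
    if (mAudioTrack == null) return 0;
    //getPlaybackHeadPosition() returns a frame count that wraps as a signed int; treat it as unsigned
    long playedFrames = mAudioTrack.getPlaybackHeadPosition() & 0xFFFFFFFFL;
    return (playedFrames * 1_000_000L) / mAudioSampleRate;
}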


    



    "NEW" Question :

    



    The audio trails about 200 ms behind the video if I use a video that was encoded using FFmpeg. Is there a reason why this could be happening?

    


  • Is there a way to use ffprobe (fluent-ffmpeg) with a read stream as input in node.js?

    25 March 2023, by Andy Huang

    I am using fluent-ffmpeg in my code. My main goal is to get the audio/video duration, and I need to use a stream as my input.

    


    According to the documentation:
    https://github.com/fluent-ffmpeg/node-fluent-ffmpeg#reading-video-metadata

    


    ffmpeg('/path/to/file1.avi')
  .input('/path/to/file2.avi')
  .ffprobe(function(err, data) {
    console.log('file2 metadata:');
    console.dir(data);
  });

ffmpeg('/path/to/file1.avi')
  .input('/path/to/file2.avi')
  .ffprobe(0, function(err, data) {
    console.log('file1 metadata:');
    console.dir(data);
  });


    


    I have tried the following:

    


    const ffmpeg = require('fluent-ffmpeg')
const fs = require('fs')

const filepath = './scratch_file/assets_audios_10000.wav'
const stream = fs.createReadStream(filepath)
ffmpeg(stream)
.input(filepath) // have to put a file path here, possible path dependent
.ffprobe(function (err, metadata) {
    if (err){throw err}
    console.log(metadata.format.duration);
}) //success printing the duration 


    


    The above successfully returned the duration.

    


    ffmpeg(stream)
.input(stream) //
.ffprobe(function (err, metadata) {
    if (err){throw err}
    console.log(metadata.format.duration);
}) // failed


    


    Above failed.

    


    ffmpeg(stream)
.ffprobe(function (err, metadata) {
    if (err){throw err}
    console.log(metadata.format.duration);
}) //returned "N/A"


    


    Returned N/A

    


    Can anyone help? I would need something like:

    


    ffmpeg.ffprobe(stream, (metadata) => {console.log(metadata.format.duration)} )

    


    Thank you.

    


  • JavaCV FFmpegFrameGrabber & Java2DFrameConverter creating weird looking image

    4 June 2020, by mega12345mega

    I'm new to JavaCV, so the issue is probably very obvious. I'm trying to do the easier-said-than-done task of getting the images and audio from a video so I can start making a video editor. After lots of confusion and errors, I am finally getting a result, but it is as odd as the errors were. The image appears to be squished in the x direction, with the extra space to the right being transparent (so the image size matches the video's size). Additionally, it has a lot of extra transparent pixels and is multicolored in an odd way.

    



    What the image should look like: https://gofile.io/d/1lQnNd

    



    What the image looks like: https://gofile.io/d/kc09G7

    



    Here is my code:

    



    try {
    FFmpegFrameGrabber frameGrabber = new FFmpegFrameGrabber("C:/Users/mega12345mega/Desktop/Test Files/video.mp4");
    frameGrabber.setFormat("mp4");
    frameGrabber.start();

    int width = frameGrabber.getImageWidth();
    int height = frameGrabber.getImageHeight();
    System.out.println("width: " + width + ", height: " + height);

    Frame frame = frameGrabber.grabImage();
    if (frame == null)
        throw new Exception("Frame is NULL!");
    if (frame.image == null)
        throw new Exception("Frame Image is NULL!");

    BufferedImage bi = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    if (bi == null)
        throw new Exception("bi is NULL!");

    Java2DFrameConverter.copy(frame, bi);
    ImageIO.write(bi, "png", new File("C:/Users/mega12345mega/Desktop/Test Files/img.png"));

    frameGrabber.stop();
    frameGrabber.close();
} catch (Exception e) {
    throw new Exception("Error Getting Image", e);
}
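
    One thing that might be worth trying (my assumption, not something confirmed in this post) is letting Java2DFrameConverter allocate the BufferedImage itself via convert(), which derives the image type and stride from the frame's pixel format, instead of copying into a hand-made TYPE_INT_ARGB image. A minimal sketch reusing the frameGrabber from above:

//Hypothetical variant of the snippet above: let the converter choose the image type.
Java2DFrameConverter converter = new Java2DFrameConverter();
Frame frame = frameGrabber.grabImage();
if (frame != null && frame.image != null) {
    BufferedImage bi = converter.convert(frame); //image type and stride derived from the frame
    ImageIO.write(bi, "png", new File("C:/Users/mega12345mega/Desktop/Test Files/img.png"));
}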


    



    If you are interested, here is the console:

    



    width: 1280, height: 720
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:/Users/mega12345mega/Desktop/Test Files/video.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.29.100
  Duration: 00:00:04.95, start: 0.000000, bitrate: 16375 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 16401 kb/s, 60 fps, 60 tbr, 90k tbn, 120 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 2 kb/s (default)
    Metadata:
      handler_name    : SoundHandler


    



    Additionally, if I just do frameGrabber.grab() instead of frameGrabber.grabImage() (which I believe leaves out the audio), the frame's image property is null (that is what the frame.image == null statement is there for). I am not sure if that belongs in a new question, but help on that is also appreciated.
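
    Regarding that last point, a likely explanation (my assumption, not verified against this file) is that grab() returns both audio and video frames, and for audio frames the image field is null while samples is set. A small sketch of telling the two apart, reusing the frameGrabber variable from the code above:

//Hypothetical loop separating the video and audio frames returned by grab()
Frame f;
while ((f = frameGrabber.grab()) != null) {
    if (f.image != null) {
        //video frame: convert or render it
    } else if (f.samples != null) {
        //audio frame: hand the samples to playback or processing
    }
}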

    



    Thanks in advance!