
Other articles (86)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • The user profile

    12 April 2011

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialized, visible only if the visitor is logged in to the site.
    The user can edit their profile from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)

On other sites (11017)

  • Decode audio using ffmpeg (packet-by-packet) in Java

    27 May 2022, by quad478

    In my application, I receive an audio stream from an IP camera via RTP using Netty.
The stream from the IP camera arrives in the "G.711 mulaw" format, and I would like to transcode it to the "AAC" format.
I can't use files for this task since it's a live stream, and each packet needs to be transcoded and delivered to the client (browser) immediately.
For this task, I wanted to use an ffmpeg child process: when connecting to the camera, create an ffmpeg process and write each received packet to its stdin, then read the transcoded packet back from its stdout.
Here is the command I run ffmpeg with:

    


    "ffmpeg.exe -f mulaw -re -i - -f adts -"


    


    I'm not sure if "-re" should be used here, but without this option, ffmpeg outputs the decode result only after stdin is closed and the process exits.
The problem is that I don't get anything on stdout after sending the packet to ffmpeg.
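    As a side note, "-re" throttles reading of the input to its native rate and is mainly meant for simulating a live source from a file, so it is usually unnecessary when the input is already live. Below is a sketch of an alternative invocation that drops it and adds the usual low-latency options; it assumes the camera really delivers 8 kHz mono G.711 mu-law and is not verified against this particular stream:

    // Sketch only: same child-process approach, with options intended to reduce buffering.
    ProcessBuilder pb = new ProcessBuilder(
            "ffmpeg.exe",
            "-fflags", "nobuffer",                       // reduce input-side buffering
            "-probesize", "32", "-analyzeduration", "0", // start as soon as possible
            "-f", "mulaw", "-ar", "8000", "-ac", "1",    // raw G.711 mu-law, 8 kHz mono (assumed)
            "-i", "-",                                   // read from stdin
            "-c:a", "aac",
            "-flush_packets", "1",                       // ask the muxer to flush after each packet
            "-f", "adts", "-"                            // ADTS/AAC to stdout
    ).redirectError(ProcessBuilder.Redirect.INHERIT);

    Even then, some delay is inherent: the AAC encoder works on 1024-sample frames (128 ms of audio at 8 kHz), so nothing can appear on stdout until at least that much audio has been written.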

    


    Decoder code:

    


    package ru.ngslab.insentry.web.video.protocols.rtp;

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.handler.codec.MessageToMessageDecoder;

    import java.io.Closeable;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class RtpFfmpegDecoder extends MessageToMessageDecoder<RtpPacket> implements Closeable {

        private final Process ffmegProcess;
        private final OutputStream ffmpegOutput;
        private final InputStream ffmegInput;
        private final ExecutorService ffmpegInputReaderService = Executors.newSingleThreadExecutor();

        public RtpFfmpegDecoder() {

            //Start Ffmpeg process
            ProcessBuilder ffmpegBuilder = new ProcessBuilder("ffmpeg.exe", "-f", "mulaw",
                    "-re", "-i", "-", "-f", "adts", "-").redirectError(ProcessBuilder.Redirect.INHERIT);
            try {
                ffmegProcess = ffmpegBuilder.start();
                ffmpegOutput = ffmegProcess.getOutputStream();
                ffmegInput = ffmegProcess.getInputStream();
            } catch (IOException e) {
                throw new IllegalStateException(e);
            }
        }

        @Override
        protected void decode(ChannelHandlerContext channelHandlerContext, RtpPacket rtpPacket, List list) throws Exception {

            //start read ffmpeg output in another thread
            Future<byte[]> future = ffmpegInputReaderService.submit(this::startReadFFmpegOutput);

            //write rtp-packet bytes to ffmpeg-input
            ByteBuf data = rtpPacket.getData();
            byte[] rtpData = new byte[data.readableBytes()];
            data.getBytes(data.readerIndex(), rtpData);
            ffmpegOutput.write(rtpData);
            ffmpegOutput.flush();

            //waiting here for the decoding result from ffmpeg
            //blocks here
            byte[] result = future.get();
            //then process result...
        }

        private byte[] startReadFFmpegOutput() {
            try {
                //I don't know how many bytes to expect here, for test purposes I use 1024
                var bytes = new byte[1024];
                ffmegInput.read(bytes);
                return bytes;
            } catch (IOException e) {
                throw new IllegalStateException(e);
            }
        }

        @Override
        public void close() throws IOException {
            //Close streams code...
        }
    }


    This doesn't work because ffmpeg doesn't send anything after receiving the packet. There are no errors in the log and no output data; it just waits for the result here:


    byte[] result = future.get();


    Normally, ffmpeg only produces output after stdin is closed and the process exits. Maybe ffmpeg needs to be run with some special parameters so that it outputs each received packet immediately?
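    Regardless of the flags, issuing a single fixed-size read per written packet can block indefinitely if ffmpeg has not produced that data yet. A common alternative is a dedicated thread that keeps draining stdout and forwards whatever arrives; a minimal sketch (the class and the onEncodedData callback are placeholders, not part of the code above):

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Arrays;
    import java.util.function.Consumer;

    final class FfmpegStdoutPump {
        // Drains ffmpeg's stdout on its own thread and forwards each chunk as soon as it arrives,
        // instead of waiting for a fixed number of bytes after every written packet.
        static Thread start(InputStream ffmpegStdout, Consumer<byte[]> onEncodedData) {
            Thread pump = new Thread(() -> {
                byte[] buf = new byte[4096];
                try {
                    int n;
                    while ((n = ffmpegStdout.read(buf)) != -1) {      // returns once some bytes are available
                        onEncodedData.accept(Arrays.copyOf(buf, n));  // hand off whatever ffmpeg produced
                    }
                } catch (IOException e) {
                    // pipe closed or ffmpeg exited; log / clean up as appropriate
                }
            }, "ffmpeg-stdout-pump");
            pump.setDaemon(true);
            pump.start();
            return pump;
        }
    }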


    I would be very grateful for any help.


  • Build ffmpeg on a build machine

    18 July 2019, by RDI

    Build ffmpeg on a build PC using libx264 and shared libraries (not static).
    I am building on a Red Hat 6.6 server and the final target machine is CentOS 6.6.
    I am trying, as said, to build ffmpeg with encoding enabled (with libx264) and shared libraries; of course I do not want to install the libraries on the build PC, they should only be extracted and then delivered together with the final RPM.
    After the "./configure" I get all the RPMs (related to ffmpeg), but when trying to install ffmpeg-libs on the build PC it fails because libx264.so.157 is not found, even though as a test I installed it (configure/make/make install) and it is present in /usr/local/lib.

    Where am I going wrong?

    Thanks

    This is my SPEC file at the moment:

    ldconfig /usr/local/lib
    export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH


    # configure
    ./configure \
    --enable-gpl --disable-static --enable-shared \
    --extra-cflags="-I/usr/local/include" --extra-ldflags="-L/usr/local/lib" --extra-libs=-ldl \
    --disable-autodetect --disable-doc --disable-postproc --disable-ffplay --disable-everything \
    --enable-encoder=aac --enable-encoder=png --enable-encoder=mjpeg --enable-encoder=libx264 \
    --enable-decoder=aac --enable-decoder=h264 --enable-decoder=mpeg4 --enable-decoder=rawvideo --enable-decoder=png \
    --enable-muxer=mp4 --enable-muxer=stream_segment --enable-muxer=image2 \
    --enable-demuxer=aac --enable-demuxer=h264 --enable-demuxer=mov --enable-demuxer=rtp \
    --enable-parser=aac --enable-parser=h264 --enable-parser=mpeg4video \
    --enable-bsf=aac_adtstoasc \
    --enable-protocol=file --enable-protocol=http --enable-protocol=tcp --enable-protocol=rtp --enable-protocol=udp \
    --enable-indev=xcbgrab --disable-alsa --enable-libxcb --enable-libxcb-xfixes --enable-libxcb-shape --enable-zlib \
    --prefix=%{_prefix} --bindir=%{_bindir} --datadir=%{_datadir}/%{name} --shlibdir=%{_libdir} \
    --enable-alsa --enable-avfilter --enable-avresample --enable-libx264 --enable-filter=scale \
  • MediaCodec AV Sync when decoding

    12 June 2020, by ClassA

    All of the questions regarding syncing audio and video when decoding with MediaCodec suggest that we should use an "AV Sync" mechanism to sync the video and audio using their timestamps.


    Here is what I do to achieve this:


    I have 2 threads, one for decoding video and one for audio. To sync the video and audio I'm using Extractor.getSampleTime() to determine whether I should release the audio or video buffers; please see below:


    //This is called after configuring MediaCodec (both audio and video)
    private void startPlaybackThreads() {
        //Audio playback thread
        mAudioWorkerThread = new Thread("AudioThread") {
            @Override
            public void run() {
                if (!Thread.interrupted()) {
                    try {
                        //Check info below
                        if (shouldPushAudio()) {
                            workLoopAudio();
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        };
        mAudioWorkerThread.start();

        //Video playback thread
        mVideoWorkerThread = new Thread("VideoThread") {
            @Override
            public void run() {
                if (!Thread.interrupted()) {
                    try {
                        //Check info below
                        if (shouldPushVideo()) {
                            workLoopVideo();
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        };
        mVideoWorkerThread.start();
    }

    //Check if more buffers should be sent to the audio decoder
    private boolean shouldPushAudio() {
        int audioTime = (int) mAudioExtractor.getSampleTime();
        int videoTime = (int) mExtractor.getSampleTime();
        return audioTime <= videoTime;
    }

    //Check if more buffers should be sent to the video decoder
    private boolean shouldPushVideo() {
        int audioTime = (int) mAudioExtractor.getSampleTime();
        int videoTime = (int) mExtractor.getSampleTime();
        return audioTime > videoTime;
    }
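    Note that MediaExtractor.getSampleTime() returns microseconds as a long, so the casts to int above overflow after roughly 35 minutes of media; a variant of the same check that stays in long (a sketch, not the original code):

    // Same decision rule without the narrowing cast; sample times are microseconds in a long.
    private boolean shouldPushAudio() {
        long audioTimeUs = mAudioExtractor.getSampleTime();
        long videoTimeUs = mExtractor.getSampleTime();
        return audioTimeUs <= videoTimeUs;
    }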


    Inside workLoopAudio() and workLoopVideo() is all my MediaCodec logic (I decided not to post it because it's not relevant).


    So what I do is: I get the sample time of the video and audio tracks, then check which one is bigger (further ahead). If the video is "ahead" then I pass more buffers to my audio decoder, and vice versa.


    This seems to be working fine: the video and audio are playing in sync.
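    For comparison, the "AV Sync" mechanism referred to at the top is usually described as pacing rendering against a shared clock built from the output buffers' presentationTimeUs, rather than comparing extractor sample times on the input side. A minimal sketch of that idea (class and field names are placeholders, not taken from the code above):

    import android.media.MediaCodec;

    // Sketch: hold back each video output buffer until its presentation time is due,
    // measured against a wall-clock anchor taken from the first rendered frame.
    final class VideoPacer {
        private long startRealtimeUs = -1; // wall-clock time corresponding to presentationTimeUs == 0

        void renderWhenDue(MediaCodec decoder, int outIndex, MediaCodec.BufferInfo info)
                throws InterruptedException {
            long nowUs = System.nanoTime() / 1000;
            if (startRealtimeUs < 0) {
                startRealtimeUs = nowUs - info.presentationTimeUs; // anchor the clock on the first frame
            }
            long dueInUs = (startRealtimeUs + info.presentationTimeUs) - nowUs;
            if (dueInUs > 0) {
                Thread.sleep(dueInUs / 1000, (int) (dueInUs % 1000) * 1000); // wait until the frame is due
            }
            decoder.releaseOutputBuffer(outIndex, true); // true = render to the attached Surface
        }
    }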


    My question:


    I would like to know if my approach is correct (is this how we should be doing it, or is there another/better way)? I could not find any working examples of this (written in Java/Kotlin), hence the question.


    EDIT 1:


    I've found that the audio trails behind the video (very slightly) when I decode/play a video that was encoded using FFmpeg. If I use a video that was not encoded using FFmpeg then the video and audio sync perfectly.


    The FFmpeg command is nothing out of the ordinary:


    -i inputPath -crf 18 -c:v libx264 -preset ultrafast OutputPath


    I will provide additional information below:


    I initialize/create the AudioTrack like this:


    //Audio
    mAudioExtractor = new MediaExtractor();
    mAudioExtractor.setDataSource(mSource);
    int audioTrackIndex = selectAudioTrack(mAudioExtractor);
    if (audioTrackIndex < 0) {
        throw new IOException("Can't find Audio info!");
    }
    mAudioExtractor.selectTrack(audioTrackIndex);
    mAudioFormat = mAudioExtractor.getTrackFormat(audioTrackIndex);
    mAudioMime = mAudioFormat.getString(MediaFormat.KEY_MIME);

    mAudioChannels = mAudioFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
    mAudioSampleRate = mAudioFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE);

    final int min_buf_size = AudioTrack.getMinBufferSize(mAudioSampleRate,
            (mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO),
            AudioFormat.ENCODING_PCM_16BIT);
    final int max_input_size = mAudioFormat.getInteger(MediaFormat.KEY_MAX_INPUT_SIZE);
    mAudioInputBufSize = min_buf_size > 0 ? min_buf_size * 4 : max_input_size;
    if (mAudioInputBufSize > max_input_size) mAudioInputBufSize = max_input_size;
    final int frameSizeInBytes = mAudioChannels * 2;
    mAudioInputBufSize = (mAudioInputBufSize / frameSizeInBytes) * frameSizeInBytes;

    mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
            mAudioSampleRate,
            (mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO),
            AudioFormat.ENCODING_PCM_16BIT,
            AudioTrack.getMinBufferSize(mAudioSampleRate, mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT),
            AudioTrack.MODE_STREAM);

    try {
        mAudioTrack.play();
    } catch (final Exception e) {
        Log.e(TAG, "failed to start audio track playing", e);
        mAudioTrack.release();
        mAudioTrack = null;
    }
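    One detail worth noting in the snippet above: mAudioInputBufSize is computed carefully, but the AudioTrack constructor is then given AudioTrack.getMinBufferSize(...) directly, so the track runs with the minimum buffer. A tiny helper (hypothetical, not from the post) to express a buffer size as milliseconds of audio, which helps when reasoning about a fixed audio offset:

    // Rough duration of an AudioTrack buffer of bufSizeBytes for 16-bit PCM at the given format;
    // with the minimum buffer size this is often a few tens of milliseconds.
    static long bufferDurationMs(int bufSizeBytes, int sampleRate, int channelCount) {
        int bytesPerFrame = channelCount * 2; // 2 bytes per sample for ENCODING_PCM_16BIT
        return (bufSizeBytes / bytesPerFrame) * 1000L / sampleRate;
    }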


    And I write to the AudioTrack like this:


    //Called from within workLoopAudio, when releasing audio buffers
    if (bufferAudioIndex >= 0) {
        if (mAudioBufferInfo.size > 0) {
            internalWriteAudio(mAudioOutputBuffers[bufferAudioIndex], mAudioBufferInfo.size);
        }
        mAudioDecoder.releaseOutputBuffer(bufferAudioIndex, false);
    }

    private boolean internalWriteAudio(final ByteBuffer buffer, final int size) {
        if (mAudioOutTempBuf.length < size) {
            mAudioOutTempBuf = new byte[size];
        }
        buffer.position(0);
        buffer.get(mAudioOutTempBuf, 0, size);
        buffer.clear();
        if (mAudioTrack != null)
            mAudioTrack.write(mAudioOutTempBuf, 0, size);
        return true;
    }


    "NEW" Question :


    The audio trails about 200 ms behind the video if I use a video that was encoded using FFmpeg; is there a reason why this could be happening?
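    A fixed offset of this size is in the range of typical AudioTrack output latency (the data passed to write() is not heard immediately). One way to see how far playback actually lags behind what has been written is AudioTrack.getTimestamp(); a diagnostic sketch, not part of the question's code, where totalFramesWritten is a count the caller keeps (bytes written divided by the frame size):

    import android.media.AudioTimestamp;
    import android.media.AudioTrack;

    // Diagnostic sketch: estimate how far AudioTrack playback lags behind the frames written so far.
    final class AudioLatencyProbe {
        static long estimateLagUs(AudioTrack track, long totalFramesWritten, int sampleRate) {
            AudioTimestamp ts = new AudioTimestamp();
            if (track.getTimestamp(ts)) {                                   // false if no timestamp is available yet
                long framesPending = totalFramesWritten - ts.framePosition; // written but not yet played
                return (framesPending * 1_000_000L) / sampleRate;
            }
            return -1;
        }
    }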
