Other articles (97)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Writing a news item

    21 June 2013, by

    Present changes to your MediaSPIP, or news from your projects, on your MediaSPIP using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the form used to create a news item.
    News item creation form: for a document of type "news item", the fields offered by default are: publication date (customize the publication date) (...)

On other sites (13000)

  • MediaCodec AV Sync when decoding

    12 June 2020, by ClassA

    All of the questions about syncing audio and video when decoding with MediaCodec suggest an "AV sync" mechanism that aligns the two streams using their timestamps.

    Here is what I do to achieve this:

    I have two threads, one decoding video and one decoding audio. To keep them in sync, I use MediaExtractor.getSampleTime() to decide whether to release the next audio or video buffers; see below:

    //This is called after configuring MediaCodec (both audio and video)
private void startPlaybackThreads(){
    //Audio playback thread
    mAudioWorkerThread = new Thread("AudioThread") {
        @Override
        public void run() {
            if (!Thread.interrupted()) {
                try {
                    //Check info below
                    if (shouldPushAudio()) {
                        workLoopAudio();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    };
    mAudioWorkerThread.start();

    //Video playback thread
    mVideoWorkerThread = new Thread("VideoThread") {
        @Override
        public void run() {
            if (!Thread.interrupted()) {
                try {
                    //Check info below
                    if (shouldPushVideo()) {
                        workLoopVideo();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    };
    mVideoWorkerThread.start();
}

//Check if more buffers should be sent to the audio decoder
private boolean shouldPushAudio(){
    //getSampleTime() returns microseconds as a long; an int cast would
    //overflow after roughly 35 minutes of media.
    long audioTime = mAudioExtractor.getSampleTime();
    long videoTime = mExtractor.getSampleTime();
    return audioTime <= videoTime;
}
//Check if more buffers should be sent to the video decoder
private boolean shouldPushVideo(){
    long audioTime = mAudioExtractor.getSampleTime();
    long videoTime = mExtractor.getSampleTime();
    return audioTime > videoTime;
}

    Inside workLoopAudio() and workLoopVideo() is all my MediaCodec logic (I decided not to post it because it's not relevant).

    So what I do is get the sample time of both the video and the audio track, then check which one is further ahead. If the video is ahead, I pass more buffers to the audio decoder, and vice versa.

    This seems to work fine: the video and audio play in sync.

    My question:

    I would like to know if my approach is correct (is this how it should be done, or is there another/better way)? I could not find any working examples of this (written in Java/Kotlin), hence the question.
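
    For reference, a common alternative to gating the two decoders on each other's extractor positions is to pace every output buffer against a shared playback clock built from BufferInfo.presentationTimeUs. A minimal sketch of that idea, not taken from the question (the field mStartTimeNs and the method name are illustrative):

//Shared playback clock, anchored when the first output buffer appears.
private long mStartTimeNs = -1;

//Called from a work loop before releasing an output buffer whose
//timing is described by 'info' (a MediaCodec.BufferInfo).
private void awaitPresentationTime(MediaCodec.BufferInfo info) {
    if (mStartTimeNs < 0) {
        //Anchor the clock so the first buffer plays immediately, even if
        //the stream's timestamps do not start at zero.
        mStartTimeNs = System.nanoTime() - info.presentationTimeUs * 1000L;
    }
    //Wall-clock deadline at which this buffer should be rendered.
    long deadlineNs = mStartTimeNs + info.presentationTimeUs * 1000L;
    long waitMs = (deadlineNs - System.nanoTime()) / 1_000_000L;
    if (waitMs > 0) {
        try {
            Thread.sleep(waitMs);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
    //A buffer that is already late (waitMs <= 0) is rendered immediately
    //or dropped by the caller.
}

    In this scheme the audio stream is normally treated as the master clock, and video buffers are released or dropped relative to it, which keeps both decoders running independently.
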
    EDIT 1:

    I've found that the audio trails very slightly behind the video when I decode/play a video that was encoded using FFmpeg. If I use a video that was not encoded using FFmpeg, the video and audio sync perfectly.

    The FFmpeg command is nothing out of the ordinary:

    -i inputPath -crf 18 -c:v libx264 -preset ultrafast OutputPath

    I will provide additional information below:

    I initialize/create the AudioTrack like this:

    //Audio
mAudioExtractor = new MediaExtractor();
mAudioExtractor.setDataSource(mSource);
int audioTrackIndex = selectAudioTrack(mAudioExtractor);
if (audioTrackIndex < 0){
    throw new IOException("Can't find Audio info!");
}
mAudioExtractor.selectTrack(audioTrackIndex);
mAudioFormat = mAudioExtractor.getTrackFormat(audioTrackIndex);
mAudioMime = mAudioFormat.getString(MediaFormat.KEY_MIME);

mAudioChannels = mAudioFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
mAudioSampleRate = mAudioFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE);

final int min_buf_size = AudioTrack.getMinBufferSize(mAudioSampleRate, (mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO), AudioFormat.ENCODING_PCM_16BIT);
final int max_input_size = mAudioFormat.getInteger(MediaFormat.KEY_MAX_INPUT_SIZE);
mAudioInputBufSize =  min_buf_size > 0 ? min_buf_size * 4 : max_input_size;
if (mAudioInputBufSize > max_input_size) mAudioInputBufSize = max_input_size;
final int frameSizeInBytes = mAudioChannels * 2;
mAudioInputBufSize = (mAudioInputBufSize / frameSizeInBytes) * frameSizeInBytes;
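//Note: the AudioTrack below is constructed with the bare minimum buffer
//size from getMinBufferSize(), not with the mAudioInputBufSize computed
//above; a very small sink buffer makes playback more prone to underruns.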

mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
    mAudioSampleRate,
    (mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO),
    AudioFormat.ENCODING_PCM_16BIT,
    AudioTrack.getMinBufferSize(mAudioSampleRate, mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT),
    AudioTrack.MODE_STREAM);

try {
    mAudioTrack.play();
} catch (final Exception e) {
    Log.e(TAG, "failed to start audio track playing", e);
    mAudioTrack.release();
    mAudioTrack = null;
}

    And I write to the AudioTrack like this:

    //Called from within workLoopAudio, when releasing audio buffers
if (bufferAudioIndex >= 0) {
    if (mAudioBufferInfo.size > 0) {
        internalWriteAudio(mAudioOutputBuffers[bufferAudioIndex], mAudioBufferInfo.size);
    }
    mAudioDecoder.releaseOutputBuffer(bufferAudioIndex, false);
}

private boolean internalWriteAudio(final ByteBuffer buffer, final int size) {
    if (mAudioOutTempBuf.length < size) {
        mAudioOutTempBuf = new byte[size];
    }
    buffer.position(0);
    buffer.get(mAudioOutTempBuf, 0, size);
    buffer.clear();
    if (mAudioTrack != null)
        mAudioTrack.write(mAudioOutTempBuf, 0, size);
    return true;
}

    "NEW" Question :

    



    The audio trails about 200 ms behind the video if I use a video that was encoded using FFmpeg. Is there a reason why this could be happening?
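
    One thing that may be worth checking (an assumption, not something established in the question) is whether the FFmpeg-encoded file simply starts its audio track at a different timestamp than its video track: AAC encoder priming and container edit lists can shift one stream by a constant amount. A minimal check, using the two extractors the question already sets up:

//Logged once, right after both extractors have had their tracks selected
//and before any readSampleData()/advance() calls.
long firstVideoUs = mExtractor.getSampleTime();
long firstAudioUs = mAudioExtractor.getSampleTime();
Log.d(TAG, "initial A/V offset: " + (firstAudioUs - firstVideoUs) + " us");

    If that offset is close to the observed 200 ms, the fix is to delay the earlier stream by the difference rather than to change the sync logic.
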
  • avcodec_send_packet causing memory leak

    23 juin 2020, par AleksaJanjatovic

    I'm trying to fetch a frame from an RTSP stream, but it seems I'm forgetting to free something, causing my RAM to fill rapidly. Here are the code snippets:

    Here is the initialization before frame fetching:

    bool CRtspStream::Init(void* _userData) {
    InitParam* param = (CBaseStream::InitParam*)_userData;
    m_InputPath = param->m_InputPath;
    m_OutputPath = param->m_OutputPath;
    m_Logger.SetLogLevel(param->m_LoggerLevel);

    av_register_all();
    avformat_network_init();

    int ffmpegFatalLogLevel = 8;
    av_log_set_level(ffmpegFatalLogLevel);

    AVCodec* codec = avcodec_find_decoder(param->m_CodecID);
    if(!codec)
        throw DetectionUtility::StreamException("Unable to open codec.");

    m_FormatContext = avformat_alloc_context();
    m_CodecContext = avcodec_alloc_context3(codec);

    if(avformat_open_input(&m_FormatContext, param->m_InputPath.c_str(), nullptr, nullptr) < 0)
        throw DetectionUtility::StreamException("Unable to open stream: " + param->m_InputPath);

    if(avformat_find_stream_info(m_FormatContext, nullptr) < 0)
        throw DetectionUtility::StreamException("Unable to get stream info into format context.");

    for(unsigned int i = 0;i < m_FormatContext->nb_streams; i++){
        if(m_FormatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
            m_VideoStreamIndex = i;
    }

    // Copy the stream's codec parameters into the codec context before opening
    // it; without this, avcodec_open2() configures a blank decoder.
    if (avcodec_parameters_to_context(m_CodecContext, m_FormatContext->streams[m_VideoStreamIndex]->codecpar) < 0)
        throw DetectionUtility::StreamException("Unable to copy codec parameters.");

    if (avcodec_open2(m_CodecContext, codec, nullptr) < 0)
        throw DetectionUtility::StreamException("Unable to open codec.");

    return m_InitSuccesful = true;
}

    And here is the actual frame fetching. I made sure the retVal frame is freed afterwards. The main evidence that this is the source of the leak is that Valgrind also points to the avcodec_send_packet() line: valgrind_results

    AVFrame* CRtspStream::RequestNewFrame() {
    AVPacket packet;
    av_init_packet(&packet);   // make the stack packet safe to av_packet_unref()
    packet.data = nullptr;     // even if av_read_frame() never succeeds
    packet.size = 0;
    AVFrame* retVal = av_frame_alloc();
    bool frameAvailable = false;

//    avformat_flush(m_FormatContext);
//    avcodec_flush_buffers(m_CodecContext);
    av_read_play(m_FormatContext);
    int readFrameRes;
    while(!frameAvailable) {
        if((readFrameRes = av_read_frame(m_FormatContext, &packet)) >= 0) {
            if(packet.stream_index == m_VideoStreamIndex) {
                int res = 1;
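                // A negative return here includes AVERROR(EAGAIN), which means the
                // decoder's output queue must be drained with avcodec_receive_frame()
                // before it accepts more input; this error path drops such packets.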
                if((res = avcodec_send_packet(m_CodecContext, &packet)) < 0) {
                    av_packet_unref(&packet);
                    continue;
                }
                av_packet_unref(&packet);
                res = 1;
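                // avcodec_receive_frame() returns 0 on success, so this loop exits
                // after the first decoded frame; any further frames stay buffered
                // inside the codec context until a later call drains them.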
                while(res > 0) {
                    res = avcodec_receive_frame(m_CodecContext, retVal);
                    if(res == AVERROR(EAGAIN)) {
                        break;
                    } else if (res < 0) {
                        throw DetectionUtility::StreamException("Unable to receive a frame from the codec.");
                    } else {
                        frameAvailable = true;
                    }
                }
            }
        }
        av_packet_unref(&packet);
    }

    if(readFrameRes < 0) {
        av_frame_free(&retVal);
        av_packet_unref(&packet);
        throw DetectionUtility::StreamException("Unable to fetch a new frame.");
    } else {
        try {
            if(CheckOutputSupported())
                DetectionUtility::AVFrameHelper::SaveFrame(retVal, "Rtsp" + m_OutputPath, m_CurrentFrameNumber);
            ++m_CurrentFrameNumber;
        } catch (const Utility::BaseException& e) {
            m_Logger.Error(e.GetMessage());
        }
    }
    return retVal;
}

  • How to disallow uploads of HD content? Files with a resolution above 1920 x 1080 (or 1080 x 1920) are not allowed for hardware reasons

    23 July 2020, by azaono

    I'm struggling to put a limit on uploads of HD files/content. The intent is to keep it possible to rotate content, and the limits are required for hardware reasons.

        val ffmpeg = FFmpeg("ffmpeg")
        val ffprobe = FFprobe("ffprobe")
        val probeResult = ffprobe.probe("$targetLocation")
        val stream: FFmpegStream = probeResult.getStreams()[0]
        val aspectRatio = stream.width.toDouble() / stream.height
        
        // Reject anything larger than full HD in either orientation, so that
        // rotated (portrait) 1080 x 1920 files still pass.
        val landscapeOk = stream.width <= 1920 && stream.height <= 1080
        val portraitOk = stream.width <= 1080 && stream.height <= 1920
        if (!landscapeOk && !portraitOk) {
            Files.delete(targetLocation)
            throw IncorrectResolutionFileException()
        }

        if (type == "image") {
            part.transferTo(thumbnailLocation)
        }

        val builder: FFmpegBuilder = FFmpegBuilder()
            .setInput("$targetLocation")
            .addOutput("$thumbnailLocation")
            .setFrames(1)
            .setVideoFilter("select='gte(n\\,10)',scale=200:-1")
            .done()
        val executor = FFmpegExecutor(ffmpeg)
        executor.createJob(builder).run()

        return aspectRatio
    } catch (ex: Exception) {
        throw FileStorageException("Could not store file $cleanPath. Please try again!", ex)
    }
}