Advanced search

Media (0)

Word: - Tags -/xmlrpc

No media matching your criteria is available on the site.

Other articles (64)

  • Contribute to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so it can reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to sign up to the translators' mailing list to ask for more information.
    Currently MediaSPIP is only available in French and (...)

  • Publish on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (13976)

  • Real time livestreaming - RPI

    24 April 2022, by Victor

    I work at a telehealth company and we are using connected medical devices to provide the doctor with real-time information from this equipment; the equipment is operated by a trained health professional.

    


    Those devices produce video and audio. Right now we are using them with peerjs (a peer-to-peer connection), but we are trying to move away from that and have an RPI whose only job is to stream the data (audio and video).

    


    Because the equipment is supposed to be used under instructions from a doctor, we need the doctor to receive the data in real time.

    


    But we also need the trained health professional to see what they are doing (so we need a local feed from the equipment).

    


    How we capture audio and video

    


    We are using ffmpeg with a Go client that is in charge of managing the ffmpeg processes and streaming them to an SRS server.
This works, but we are seeing a 2-3 second delay when streaming the data (RTMP out of ffmpeg, FLV on the front end).

    


    ffmpeg settings:

    


    ("ffmpeg", "-f", "v4l2", `-i`, "*/video0", "-f", "flv", "-vcodec", "libx264", "-x264opts", "keyint=15", "-preset", "ultrafast", "-tune", "zerolatency", "-fflags", "nobuffer", "-b:a", "160k", "-threads", "0", "-g", "0", "rtmp://srs-url")


    


    My questions

    


      

    • Is there a way for this setup to achieve low latency (<1 sec) for both the nurse and the doctor?

    • Is the way I want to achieve this good? Is there a better way?

    Flow schema


    Data exchange and use case flow


  • ffmpeg: New text after a few seconds without having both texts at the same time for one frame

    21 April 2022, by principal-ideal-domain

    This answer was very useful for me to learn how to include text in a video using ffmpeg. Now, at one position in the video, I always want to show a text, but it changes over time. So for one text I used, for example, enable='between(t,0,5)', and for the next, enable='between(t,5,10)'. Now, at the frame at 5 seconds, the video shows both texts. That is not what I want. I guess the reason is that enable='between(t,0,5)' includes the boundaries of the interval. So I somehow need ffmpeg to stop showing the first text one frame earlier. How do I do that?
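
    For illustration, the kind of command being described might look like this; the file names and caption strings are placeholders, not taken from the post:

    ffmpeg -i input.mp4 -vf "drawtext=text='first caption':enable='between(t,0,5)',drawtext=text='second caption':enable='between(t,5,10)'" output.mp4

    With this filtergraph, the frame at exactly t=5 satisfies both enable expressions, so both captions appear on that one frame.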


  • Why can't my code, based on ffmpeg, keep the video time and audio time in sync?

    6 July 2021, by ZSpirytus

    Background


    Recently, I used ffmpeg to write my first Android video player, but the video channel's time runs about 2 to 3 times faster than the audio channel's time.


    Code


    In short, I use a PacketDispatcher to read AVPackets from the HTTP-FLV source:


    void PacketDispatcher::RealDispatch() {
        while (GetStatus() != DISPATCHER_STOP) {
            while (GetStatus() == DISPATCHER_PAUSE) {
                LOGD(TAG, "wait signal");
                pthread_mutex_lock(&mutex);
                pthread_cond_wait(&cond, &mutex);
                pthread_mutex_unlock(&mutex);
            }

            AVPacket *av_packet = av_packet_alloc();
            int ret = av_read_frame(av_format_context, av_packet);
            if (ret) {
                LOGE(TAG, "av_read_frame ret=%d", ret);
                break;
            }

            // PacketDispatcher reads the AVPacket from the HTTP-FLV source
            // and dispatches it to a decoder by its stream index.
            decoder_map[av_packet->stream_index]->Push(av_packet);
        }
    }


    Then the Decoder, written with the producer-consumer pattern, maintains a queue that stores all the AVPackets received from the PacketDispatcher. The code looks like this:


    // write to the queue
    void BaseDecoder::Push(AVPacket *av_packet) {
        pthread_mutex_lock(&av_packet_queue_mutex);
        av_packet_queue.push(av_packet);
        pthread_cond_signal(&av_packet_queue_cond);
        pthread_mutex_unlock(&av_packet_queue_mutex);
    }

    // real decode logic
    void BaseDecoder::RealDecode() {
        SetDecoderState(START);
        LOGI(LogSpec(), "start decode");

        while (true) {
            // 1. check decoder status and queue size to decide whether to wait

            // 2. send packet to codec
            AVPacket* av_packet = av_packet_queue.front();
            int ret = avcodec_send_packet(av_codec_ctx, av_packet);

            // 3. read frame from codec
            AVFrame *av_frame = av_frame_alloc();
            ret = avcodec_receive_frame(av_codec_ctx, av_frame);

            if (m_render) {
                // 4. custom decode logic overridden by child class
                void *decode_result = DecodeFrame(av_frame);
                if (decode_result) {
                    // 5. dispatch to render
                    m_render->Render(decode_result);
                } else {
                    LOGD("BaseDecoder", "decode_result=nullptr");
                }
            }
        }
    }


    Finally, I do the rendering logic in Render. Render is also written with the producer-consumer pattern; it maintains a queue that stores the AVFrames received from the Decoder. The code looks like this:


    // write AVFrame
    void BaseRender::Render(void *frame_data) {
        Lock();
        frame_queue.push(frame_data);
        Signal();
        UnLock();
    }

    // render to surface or OpenSL
    void BaseRender::RealRender() {
        // frame data that contains the frame pts and other metadata
        frame_data->pts = av_frame->pts = av_frame->best_effort_timestamp * av_q2d(GetTimeBase());
        // video only
        frame_data->video_extra_delay = av_frame->repeat_pict * 1.0 / fps * 2.0;
        if (m_render_synchronizer && m_render_synchronizer->Sync(frame_data)) {
            continue;
        }
    }


    Then the synchronizer decides how long to sleep, or whether to drop the video frame, according to the frame pts, which is:


    frame_data->pts = av_frame->best_effort_timestamp * av_q2d(GetTimeBase());


    Also, the video extra delay is:


    frame_data->video_extra_delay = av_frame->repeat_pict * 1.0 / fps * 2.0;


    The RenderSynchronizer code looks like this:


    bool RenderSynchronizer::Sync(void *frame_data) {
        auto base_frame_data = static_cast<BaseFrameData *>(frame_data);
        if (base_frame_data->media_type == AVMEDIA_TYPE_AUDIO) {
            return ReceiveAudioFrame(static_cast<PCMData *>(frame_data));
        } else if (base_frame_data->media_type == AVMEDIA_TYPE_VIDEO) {
            return ReceiveVideoFrame(static_cast<RGBAData *>(frame_data));
        }
        return false;
    }

    bool RenderSynchronizer::ReceiveAudioFrame(PCMData *pcm_data) {
        audio_pts = pcm_data->pts;
        return false;
    }

    bool RenderSynchronizer::ReceiveVideoFrame(RGBAData *rgba_data) {
        video_pts = rgba_data->pts;

        if (audio_pts <= 0 || video_pts <= 0) {
            return false;
        }

        double diff = video_pts - audio_pts;
        if (diff > 0) {
            if (diff > 1) {
                av_usleep((unsigned int) (rgba_data->extra_delay * 1000000.0));
            } else {
                av_usleep((unsigned int) ((diff + rgba_data->extra_delay) * 1000000.0));
            }
            return false;
        } else if (diff < 0) {
            LOGD(TAG, "drop video frame");
            return true;
        } else {
            return false;
        }
    }

    Why can't my code keep the video time and audio time in sync? Thank you for reading and for your answers.