
Other articles (64)
-
Participate in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do so, we use the SPIP translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
Currently MediaSPIP is only available in French and (...) -
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out. -
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (13976)
-
Real-time livestreaming - RPI
24 April 2022, by Victor
I work at a telehealth company, and we use connected medical devices to provide the doctor with real-time information from this equipment; the equipment is operated by a trained health professional.


Those devices work with video and audio. Right now we are using them with peerjs (so a peer-to-peer connection), but we are trying to move away from that and use an RPI whose only job is to stream the data (audio and video).


Because the equipment is meant to be used under a doctor's instructions, the doctor needs to receive the data in real time.


But we also need the trained health professional to see what they are doing (so we need a local feed from the equipment).


How do we capture audio and video


We are using ffmpeg with a Go client that manages the ffmpeg processes and streams their output to an SRS server.
This works, but we see a 2-3 second delay when streaming the data (RTMP out of ffmpeg and FLV on the front end).


ffmpeg settings:


("ffmpeg", "-f", "v4l2", "-i", "/dev/video0", "-f", "flv", "-vcodec", "libx264", "-x264opts", "keyint=15", "-preset", "ultrafast", "-tune", "zerolatency", "-fflags", "nobuffer", "-b:a", "160k", "-threads", "0", "-g", "0", "rtmp://srs-url")
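For reference, here is the same pipeline written as a plain shell command, with input probing kept minimal, which is one common lever for trimming startup delay. This is a sketch: /dev/video0 and rtmp://srs-url stand in for the real device and server, and the audio options are omitted since no audio input appears above.

# Probe the input as little as possible so frames flow immediately.
ffmpeg -fflags nobuffer -probesize 32 -analyzeduration 0 \
 -f v4l2 -i /dev/video0 \
 -vcodec libx264 -preset ultrafast -tune zerolatency -x264opts keyint=15 \
 -f flv rtmp://srs-url

Note that the player side buffers too: an RTMP/FLV chain typically holds at least one GOP, so the keyint value and the front-end buffer usually account for much of a 2-3 second delay.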



My questions


- Is there a way for this setup to achieve low latency (< 1 sec), for both the nurse and the doctor?
- Is the way I want to achieve this sound? Is there a better way?

Flow schema: data exchange and use case flow


-
ffmpeg: showing a new text after a few seconds without having both texts in the same frame
21 April 2022, by principal-ideal-domain
This answer was very useful for learning how to include text in a video using ffmpeg. Now I want to always show a text at one position of the video, but the text changes over time. So for the first text I used, for example, enable='between(t,0,5)', and for the next, enable='between(t,5,10)'. But at the frame at 5 seconds the video shows both texts, which is not what I want. I guess the reason is that between(t,0,5) includes both boundaries of the interval, so I somehow need ffmpeg to stop showing the first text one frame earlier. How do I do that?
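One way to get that behavior (a sketch, assuming two drawtext filters in a single filtergraph; the text values are placeholders) is to make the intervals half-open with gte/lt, so that exactly one condition holds at t=5:

drawtext=text='first':enable='gte(t,0)*lt(t,5)',
drawtext=text='second':enable='gte(t,5)*lt(t,10)'

Multiplication acts as a logical AND in ffmpeg expressions, and lt(t,5) excludes the boundary, so the first text disappears on exactly the frame where the second one appears.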

-
Why can't my ffmpeg-based code sync the video's time and the audio's time?
6 July 2021, by ZSpirytus
Background


Recently I used ffmpeg to write my first Android video player, but the video channel's time runs faster than the audio channel's time, by roughly 2-3 times.


Code


In short, I use a PacketDispatcher to read AVPackets from the HTTP FLV source:


void PacketDispatcher::RealDispatch() {
    while (GetStatus() != DISPATCHER_STOP) {
        while (GetStatus() == DISPATCHER_PAUSE) {
            LOGD(TAG, "wait signal");
            pthread_mutex_lock(&mutex);
            pthread_cond_wait(&cond, &mutex);
            pthread_mutex_unlock(&mutex);
        }

        AVPacket *av_packet = av_packet_alloc();
        int ret = av_read_frame(av_format_context, av_packet);
        if (ret) {
            LOGE(TAG, "av_read_frame ret=%d", ret);
            break;
        }

        // PacketDispatcher reads AVPackets from the HTTP FLV source
        // and dispatches each one to a decoder by its stream index.
        decoder_map[av_packet->stream_index]->Push(av_packet);
    }
}



The Decoder is written following the producer-consumer pattern: it maintains a queue that stores all the AVPackets received from the PacketDispatcher. The code looks like this:


// write to the queue
void BaseDecoder::Push(AVPacket *av_packet) {
    pthread_mutex_lock(&av_packet_queue_mutex);
    av_packet_queue.push(av_packet);
    pthread_cond_signal(&av_packet_queue_cond);
    pthread_mutex_unlock(&av_packet_queue_mutex);
}

// real decode logic
void BaseDecoder::RealDecode() {
    SetDecoderState(START);
    LOGI(LogSpec(), "start decode");

    while (true) {
        // 1. check decoder status and queue size to decide whether to wait

        // 2. send packet to codec
        AVPacket *av_packet = av_packet_queue.front();
        int ret = avcodec_send_packet(av_codec_ctx, av_packet);

        // 3. read frame from codec
        AVFrame *av_frame = av_frame_alloc();
        ret = avcodec_receive_frame(av_codec_ctx, av_frame);

        if (m_render) {
            // 4. custom decode logic overridden by child class
            void *decode_result = DecodeFrame(av_frame);
            if (decode_result) {
                // 5. dispatch to render
                m_render->Render(decode_result);
            } else {
                LOGD("BaseDecoder", "decode_result=nullptr");
            }
        }
    }
}
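The elided step 1 above is the consumer side of the queue. Here is a minimal sketch of what it usually looks like, reusing the post's member names (the Pop helper itself is assumed and does not appear in the post):

// Hypothetical consumer-side counterpart of Push for step 1:
// block until a packet is available, then take ownership of it.
AVPacket *BaseDecoder::Pop() {
    pthread_mutex_lock(&av_packet_queue_mutex);
    while (av_packet_queue.empty()) {
        pthread_cond_wait(&av_packet_queue_cond, &av_packet_queue_mutex);
    }
    AVPacket *av_packet = av_packet_queue.front();
    av_packet_queue.pop();
    pthread_mutex_unlock(&av_packet_queue_mutex);
    return av_packet;
}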



Finally, the rendering logic lives in Render. Render is also written following the producer-consumer pattern: it maintains a queue that stores the AVFrames received from the Decoder. The code looks like this:


// write AVFrame
void BaseRender::Render(void *frame_data) {
    Lock();
    frame_queue.push(frame_data);
    Signal();
    UnLock();
}

// render to surface or OpenSL
// (the render loop and queue-pop logic are elided in the original post)
void BaseRender::RealRender() {
    // frame data that contains the frame pts and other metadata
    frame_data->pts = av_frame->pts = av_frame->best_effort_timestamp * av_q2d(GetTimeBase());
    // video only
    frame_data->video_extra_delay = av_frame->repeat_pict * 1.0 / fps * 2.0;
    if (m_render_synchronizer && m_render_synchronizer->Sync(frame_data)) {
        continue;
    }
}



The synchronizer then decides how long to sleep, or whether to drop the video frame, according to the frame pts, which is computed as:


frame_data->pts = av_frame->best_effort_timestamp * av_q2d(GetTimeBase());



The extra video delay is:


frame_data->video_extra_delay = av_frame->repeat_pict * 1.0 / fps * 2.0;
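As a quick sanity check on these two formulas, here is a worked example with assumed numbers (a 25 fps stream whose time_base is 1/90000):

best_effort_timestamp = 450000  ->  pts = 450000 * (1.0 / 90000) = 5.0 seconds
repeat_pict = 1                 ->  video_extra_delay = 1 * 1.0 / 25 * 2.0 = 0.08 seconds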



The RenderSynchronizer code looks like this:


bool RenderSynchronizer::Sync(void *frame_data) {
    auto base_frame_data = static_cast<BaseFrameData *>(frame_data);
    if (base_frame_data->media_type == AVMEDIA_TYPE_AUDIO) {
        return ReceiveAudioFrame(static_cast<PCMData *>(frame_data));
    } else if (base_frame_data->media_type == AVMEDIA_TYPE_VIDEO) {
        return ReceiveVideoFrame(static_cast<RGBAData *>(frame_data));
    }
    return false;
}

bool RenderSynchronizer::ReceiveAudioFrame(PCMData *pcm_data) {
    audio_pts = pcm_data->pts;
    return false;
}

bool RenderSynchronizer::ReceiveVideoFrame(RGBAData *rgba_data) {
    video_pts = rgba_data->pts;

    if (audio_pts <= 0 || video_pts <= 0) {
        return false;
    }

    double diff = video_pts - audio_pts;
    if (diff > 0) {
        if (diff > 1) {
            av_usleep((unsigned int) (rgba_data->extra_delay * 1000000.0));
        } else {
            av_usleep((unsigned int) ((diff + rgba_data->extra_delay) * 1000000.0));
        }
        return false;
    } else if (diff < 0) {
        LOGD(TAG, "drop video frame");
        return true;
    } else {
        return false;
    }
}
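For comparison, here is a minimal sketch of the classic audio-master timing used by players such as ffplay (all names are assumed; this is not the post's code). The per-frame delay comes from consecutive video pts values and is then stretched or shortened by the audio/video difference, rather than sleeping a fixed extra delay:

#include <algorithm>

// Hypothetical audio-master delay computation; pts values are in seconds.
double ComputeVideoDelay(double video_pts, double last_video_pts,
                         double audio_clock) {
    double delay = video_pts - last_video_pts;     // nominal frame duration
    if (delay <= 0 || delay >= 1.0) delay = 0.04;  // fall back to ~25 fps
    double diff = video_pts - audio_clock;         // > 0: video runs ahead
    double threshold = std::max(0.01, std::min(0.1, delay));
    if (diff <= -threshold)
        delay = 0;                                 // video is late: show now
    else if (diff >= threshold)
        delay = 2 * delay;                         // video is early: wait longer
    return delay;                                  // caller sleeps this long
}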


Why can't my code sync the video time and the audio time? Thanks for reading and for your answers.