
Other articles (58)
-
Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users. -
Participate in translating it
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language, allowing it to spread to new linguistic communities.
To do so, we use the SPIP translation interface, where all of MediaSPIP’s language modules are available. You just need to subscribe to the translators’ discussion list to ask for more information.
At present, MediaSPIP is only available in French and (...) -
MediaSPIP Player: potential problems
22 February 2011
The player does not work on Internet Explorer
On Internet Explorer (at least versions 8 and 7), the plugin uses the Flowplayer Flash player to play video and sound. If the player does not seem to work, the cause may be the configuration of Apache’s mod_deflate module.
If the configuration of that Apache module contains a line resembling the following, try removing or commenting it out to see whether the player then works correctly: (...)
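The directive in question is typically one that applies DEFLATE output filtering to all responses. Since the exact line from the article is elided above, here is a hypothetical example of the kind of directive to comment out while testing:

# Hypothetical example only; the article's exact directive is elided.
# A global DEFLATE output filter can interfere with Flash video delivery.
SetOutputFilter DEFLATE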
On other sites (12551)
-
SVT-AV1: After encoding, video seeking is very bad in any video player (even HTML5)
18 June 2020, by Viktor Machnik
I am using SVT-AV1 and FFmpeg to encode videos to the AV1 video and Opus audio codecs (.webm). It works fine, except that video seeking barely works (it is extremely bad): when I seek, the CPU usage jumps up and it takes up to minutes until the seeking finishes.



Here is how I encode the videos:



- Convert any video into YUV:
  ffmpeg -i -preset veryslow -level 6.2 .yuv
- First AV1 run:
  svt-av1 -i '.yuv' -w -h --fps --rc 0 -q 30 --preset 8 -b '.\output1.ivf' --output-stat-file '.\stat_file.stat' --keyint 1 --enable-restoration-filtering 1
- Second AV1 run:
  svt-av1 -i '.yuv' -w -h --fps --rc 0 -q 30 --preset 3 -b '.\output.ivf' --input-stat-file '.\stat_file.stat' --keyint 1 --enable-restoration-filtering 1
- Get the source video's audio in the Opus codec:
  ffmpeg -i -c:a libopus -vn -preset veryslow -level 6.2 output.ogg
- Get the final .webm video:
  ffmpeg -i output.ivf -i output.ogg -c copy output.webm
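For comparison, newer FFmpeg builds can drive SVT-AV1 directly and write the final WebM in one pass, which avoids the intermediate YUV and IVF files. A rough sketch, assuming an FFmpeg build with the libsvtav1 encoder enabled and input.mp4 as a placeholder input name; the -g value sets the keyframe interval, which is what seeking depends on:

ffmpeg -i input.mp4 -c:v libsvtav1 -preset 8 -qp 30 -g 240 -c:a libopus output.webm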
I have already tried playing with the --keyint option, and also leaving it out to use the encoder default, but the results are always the same (--keyint 1 seems to work a bit better than without the option, but seeking is still very bad).
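One way to check whether the keyframes actually ended up where expected is to dump frame types from the final file with ffprobe (shipped with FFmpeg):

ffprobe -select_streams v:0 -show_frames -show_entries frame=key_frame,pict_type -of csv output.webm

If seeking is slow, this listing will usually show long stretches without key_frame=1, forcing the player to decode from far away on every seek.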


What am I doing wrong?



Extras: I am using Windows 10 with downloaded builds of SVT-AV1 and FFmpeg (I just renamed the SVT-AV1 encoder .exe file to svt-av1.exe). The CPU is a Ryzen 9 3900X.

-
Why can't an Android video player based on FFmpeg keep video time and audio time in sync?
6 July 2021, by ZSpirytus

Background


Recently, I used FFmpeg to write my first Android video player. But the video channel's time runs faster than the audio channel's time, by about 2-3 times.


Question


Why are audio and video out of sync in my Android video player? The video runs about 2-3 times faster than the audio. Thanks for reading and for your answers.


Code


In short, I use a PacketDispatcher to read AVPackets from the HTTP-FLV source:


void PacketDispatcher::RealDispatch() {
    while (GetStatus() != DISPATCHER_STOP) {
        ...

        AVPacket *av_packet = av_packet_alloc();
        int ret = av_read_frame(av_format_context, av_packet);
        if (ret < 0) {
            break; // end of stream or read error
        }

        // PacketDispatcher reads each AVPacket from the HTTP-FLV source
        // and dispatches it to the right decoder by stream index.
        decoder_map[av_packet->stream_index]->Push(av_packet);
    }
}



Then there is the Decoder, written with the producer-consumer pattern: it maintains a queue that stores all the AVPackets received from the PacketDispatcher. The code looks like this:


// write to the queue
void BaseDecoder::Push(AVPacket *av_packet) {
    ...
    av_packet_queue.push(av_packet);
    ...
}

// real decode logic
void BaseDecoder::RealDecode() {
    ...
    while (true) {
        // (elided: pop an AVPacket from av_packet_queue and feed it to
        // the codec with avcodec_send_packet)

        // read a decoded frame back from the codec
        AVFrame *av_frame = av_frame_alloc();
        int ret = avcodec_receive_frame(av_codec_ctx, av_frame);

        void *decode_result = DecodeFrame(av_frame);
        // hand the decoded frame to the render
        m_render->Render(decode_result);
    }
}



The rendering logic lives in the Render, which is also written with the producer-consumer pattern: it maintains a queue that stores the AVFrames received from the Decoder. The code looks like this:


// write AVFrame
void BaseRender::Render(void *frame_data) {
    ...
    frame_queue.push(frame_data);
    ...
}

// render to the surface or OpenSL ES
void BaseRender::RealRender() {
    while (true) {
        // (elided: pop frame_data from frame_queue)
        if (m_render_synchronizer && m_render_synchronizer->Sync(frame_data)) {
            continue; // the synchronizer asked us to drop this video frame
        }
        // (elided: draw frame_data)
    }
}



Finally, the synchronizer decides how long to sleep, or whether to drop a video frame, according to the frame PTS. The frame PTS is:


frame_data->pts = av_frame->best_effort_timestamp * av_q2d(GetTimeBase());



And the extra video delay is:


frame_data->video_extra_delay = av_frame->repeat_pict * 1.0 / fps * 2.0;



The RenderSynchronizer code looks like this:


bool RenderSynchronizer::Sync(void *frame_data) {
    auto base_frame_data = static_cast<BaseFrameData *>(frame_data);
    if (base_frame_data->media_type == AVMEDIA_TYPE_AUDIO) {
        auto pcm_data = static_cast<PCMData *>(frame_data);
        audio_pts = pcm_data->pts;
        return false;
    } else if (base_frame_data->media_type == AVMEDIA_TYPE_VIDEO) {
        auto rgba_data = static_cast<RGBAData *>(frame_data);
        video_pts = rgba_data->pts;
        return ReceiveVideoFrame(rgba_data);
    }
    return false;
}

bool RenderSynchronizer::ReceiveVideoFrame(RGBAData *rgba_data) {
    if (audio_pts <= 0 || video_pts <= 0) {
        return false;
    }

    double diff = video_pts - audio_pts;
    if (diff > 0) {
        // video is ahead of audio: sleep before displaying
        if (diff > 1) {
            av_usleep((unsigned int) (rgba_data->extra_delay * 1000000.0));
        } else {
            av_usleep((unsigned int) ((diff + rgba_data->extra_delay) * 1000000.0));
        }
        return false;
    } else if (diff < 0) {
        // video is behind audio: drop the frame
        LOGD(TAG, "drop video frame");
        return true;
    } else {
        return false;
    }
}
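For comparison, a common structure (used by ffplay, for example) treats the audio clock as the master and derives each video frame's display delay from consecutive video PTS values, clamping the correction with a sync threshold. A minimal sketch of that idea; all names here are illustrative, not taken from the code above:

#include <algorithm>

// Returns how long (in seconds) to wait before displaying the current
// video frame, given the running audio clock. last_video_pts keeps the
// previous frame's PTS between calls.
double ComputeVideoDelay(double video_pts, double audio_pts,
                         double &last_video_pts) {
    double delay = video_pts - last_video_pts;  // nominal frame duration
    if (delay <= 0.0 || delay > 1.0)
        delay = 0.04;                           // implausible: assume ~25 fps
    last_video_pts = video_pts;

    double diff = video_pts - audio_pts;        // > 0 means video is ahead
    double sync_threshold = std::max(0.01, std::min(0.1, delay));
    if (diff <= -sync_threshold)
        delay = 0.0;                            // video is late: show it now
    else if (diff >= sync_threshold)
        delay = 2.0 * delay;                    // video is early: wait longer
    return delay;
}

A drift as large as 2-3x usually points at the two PTS values being computed in different units, so it is worth logging video_pts and audio_pts side by side (both in seconds, i.e. already multiplied by their stream's av_q2d(time_base)) before tuning any sleep logic.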


-
Getting color mismatch while converting from NV12 raw data to H264 using FFmpeg
29 January 2019, by Harshil Makwana
I am trying to convert NV12 raw data to H264 using an FFmpeg hardware encoder. To pass the raw data to the encoder, I fill an AVFrame struct using the logic below:
uint8_t *buf;
buf = (uint8_t *) dequeue();
frame->data[0] = buf;
frame->data[1] = buf + size;
frame->data[2] = buf + size;
frame->pts = frameCount;
frameCount++;

But with this logic I get H264 data with mismatched colors. Can someone tell me how to pass the buffer to the AVFrame data fields?

Thanks in advance,
Harshil
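For reference, NV12 is semi-planar: a full-size Y plane followed by a single interleaved UV plane, so only data[0] and data[1] are used, and both need their linesize set. A minimal sketch of wiring such a buffer into an AVFrame, assuming frame->format is AV_PIX_FMT_NV12, no row padding, and buf holding width * height * 3 / 2 bytes (the helper name is hypothetical):

// Hypothetical helper: attach a dequeued NV12 buffer to a prepared AVFrame.
static void fill_nv12_frame(AVFrame *frame, uint8_t *buf, int width, int height) {
    frame->data[0] = buf;                   // Y plane, width * height bytes
    frame->data[1] = buf + width * height;  // interleaved UV plane, width * height / 2 bytes
    frame->data[2] = NULL;                  // unused for semi-planar NV12
    frame->linesize[0] = width;             // Y stride
    frame->linesize[1] = width;             // UV stride (U and V share each row)
}

Pointing data[1] and data[2] at the same offset, as in the snippet above, treats the buffer as planar YUV420P; reading interleaved chroma as two separate planes is a classic cause of exactly this kind of color mismatch.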