
Media (2)
-
SPIP - plugins - embed code - Example
2 September 2013
Updated: September 2013
Language: French
Type: Image
-
Publishing an image simply
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (24)
-
Updating from version 0.1 to 0.2
24 June 2013. Explanation of the notable changes made when moving from version 0.1 of MediaSPIP to version 0.3. What's new?
Software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
-
Customising by adding your logo, banner or background image
5 September 2013. Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013. Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
In the default MediaSPIP theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customise the form used to create a news item.
News item creation form: for a document of the news type, the default fields are: publication date (customise the publication date) (...)
On other sites (3176)
-
H264 decoding using ffmpeg with strange points in the decoded image
16 September 2017, by ShaoQing. I'm trying to decode an H264 stream to YUV data using FFmpeg, then convert the YUV data to RGB data with LibYUV, and finally paint it. But some strange points (dots) appear in the image. How can I fix this?
av_init_packet(&m_avpkt);
int cur_size;
uint8_t *cur_ptr;
while (true)
{
    memset(outSampleBuffer, 0, IMAGE_BUFFER);
    long ret = m_pInputPortInfo->pPacketPool->receivePacket(outSampleBuffer, info);
    cur_size = info.size;
    cur_ptr = (uint8_t *)outSampleBuffer;
    if (m_de_context == NULL)
        continue;
    while (cur_size > 0)
    {
        // Split the raw byte stream into complete packets for the decoder.
        int len = av_parser_parse2(
            pCodecParser, m_de_context,
            &m_avpkt.data, &m_avpkt.size,
            cur_ptr, cur_size,
            AV_NOPTS_VALUE, AV_NOPTS_VALUE, AV_NOPTS_VALUE);
        cur_ptr += len;
        cur_size -= len;
        if (m_avpkt.size == 0)
            continue;
        int got_picture = 0;
        int iOutLen = avcodec_decode_video2(m_de_context, m_de_frame, &got_picture, &m_avpkt);
        if (got_picture != 0)
        {
            // YUV to RGB
            // ...
        }
    }
}

In order to find which step is wrong, I save the H264 data before calling avcodec_decode_video2, then save the YUV data after decoding. All the H264 data is correct, but the YUV data is wrong: frames decoded from P frames are corrupted, while frames decoded from I frames are OK. Here is how I save the YUV data:

int got_picture = 0;
int iOutLen = avcodec_decode_video2(m_de_context, m_de_frame, &got_picture, &m_avpkt);
if (got_picture != 0)
{
    if (m_de_frame->format == AVPixelFormat::AV_PIX_FMT_YUV420P)
    {
        // For YUV420P: full-resolution Y plane, quarter-size U and V planes.
        int y = m_de_frame->width * m_de_frame->height;
        int u = y / 4;
        int v = y / 4;
        uint8_t* y_ptr = m_de_frame->data[0];
        uint8_t* u_ptr = m_de_frame->data[1];
        uint8_t* v_ptr = m_de_frame->data[2];
        int yuvbufsize = y + u + v;
        char *buf = new char[yuvbufsize];
        memcpy(buf, y_ptr, y);
        memcpy(buf + y, u_ptr, u);
        memcpy(buf + y + u, v_ptr, v);
        static int count = 0;
        char yuvimgPath[MAX_PATH] = { 0 };
        sprintf(yuvimgPath, "d:\\images\\de_%d.yuv", count);
        FILE *fp1 = fopen(yuvimgPath, "wb");
        fwrite(buf, 1, yuvbufsize, fp1);
        fclose(fp1);
        count++;
        delete[] buf;
    }
}
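A side note on this dump (an observation, not necessarily the cause of the dots): AVFrame rows can be padded, so the rows of data[i] are linesize[i] bytes apart, which is often wider than the visible width. A flat memcpy of width*height bytes then mixes padding bytes into the file even when decoding itself is correct. A minimal linesize-aware sketch, reusing the names above (write_plane is a hypothetical helper):

// Dump one plane row by row, honouring AVFrame::linesize padding.
static void write_plane(FILE* fp, const uint8_t* src, int linesize,
                        int width, int height)
{
    for (int row = 0; row < height; ++row)
        fwrite(src + row * linesize, 1, width, fp); // skip the padding tail
}

// Usage with the decoded YUV420P frame from above:
FILE* fp1 = fopen(yuvimgPath, "wb");
write_plane(fp1, m_de_frame->data[0], m_de_frame->linesize[0],
            m_de_frame->width,     m_de_frame->height);     // Y
write_plane(fp1, m_de_frame->data[1], m_de_frame->linesize[1],
            m_de_frame->width / 2, m_de_frame->height / 2); // U
write_plane(fp1, m_de_frame->data[2], m_de_frame->linesize[2],
            m_de_frame->width / 2, m_de_frame->height / 2); // V
fclose(fp1);
-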
How can I receive an RTP stream with JavaCV?
11 January 2019, by c.berger. I'm trying to stream video from an RTP server (currently VLC for testing) and decode it in Java. To do this, I'm using JavaCV to decode the incoming stream. Here's what I have so far:
try {
    grabber = new FFmpegFrameGrabber("rtp://localhost:5004/test");
    grabber.setFormat("h264");
    grabber.setFrameRate(30.0);
    grabber.start();
    Java2DFrameConverter converter = new Java2DFrameConverter();
    while (true) {
        Frame frame = grabber.grab();
        imageToDraw = frame != null ? converter.convert(frame) : null;
        // goes off to paint a widget on a window, see https://git.io/fhZSr for more context
        repaint();
    }
} catch (Exception e) {
    // TODO: Discover what circumstances cause this
    e.printStackTrace(System.out);
}

On VLC, my stream settings are set like this:
- Destination stream: RTP/TS (address localhost, port 5004, stream name test)
- Transcoding active, set to the "Video - H.264 + MP3 (TS)" preset:
  - MPEG-TS encapsulation
  - H.264 video with MPEG audio
- "Stream all elementary streams" is off.
I can get one VLC instance to stream to another with these settings (with the "client" VLC receiving from rtp://localhost:5004/test), and it works just fine. (The only issues arise from having a weak test machine not suited to transcoding high-res video.)

Switching over to Java, all I can see is gray frames with a splash of color here and there. The console is also screaming the whole way through. Some snippets (the full log is too long to be a reasonable post, but it can be found here if you really want it):
[h264 @ 0x7f6c4c3502c0] cabac decode of qscale diff failed at 8 12
[h264 @ 0x7f6c4c3502c0] error while decoding MB 8 12, bytestream 670
[h264 @ 0x7f6c4c3502c0] concealing 421 DC, 421 AC, 421 MV errors in P frame
[h264 @ 0x7f6c4c3502c0] Reference 4 >= 2
[h264 @ 0x7f6c4c3502c0] error while decoding MB 25 8, bytestream 416
[h264 @ 0x7f6c4c3502c0] concealing 556 DC, 556 AC, 556 MV errors in B frame
[h264 @ 0x7f6c4c3502c0] Reference 5 >= 4
[h264 @ 0x7f6c4c3502c0] error while decoding MB 21 1, bytestream 6042
[h264 @ 0x7f6c4c3502c0] concealing 826 DC, 826 AC, 826 MV errors in P frame
[h264 @ 0x7f6c4c3502c0] Invalid NAL unit 8, skipping.
[above line repeats 5x]
[h264 @ 0x7f6c4c3502c0] top block unavailable for requested intra mode
[h264 @ 0x7f6c4c3502c0] error while decoding MB 3 0, bytestream 730
[h264 @ 0x7f6c4c3502c0] concealing 836 DC, 836 AC, 836 MV errors in P frame

Is there something I am clearly doing wrong?
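One way to separate JavaCV from the stream itself is to probe the same URL with plain libavformat, which is what FFmpegFrameGrabber wraps. A minimal C++ probe sketch (standard FFmpeg API, assuming an FFmpeg development setup); if it reports the same h264 errors, the problem is in the transport or the sender rather than in the Java code:

extern "C" {
#include <libavformat/avformat.h>
}

int main()
{
    avformat_network_init();
    AVFormatContext* ctx = nullptr;
    const char* url = "rtp://localhost:5004/test"; // same URL the grabber uses
    if (avformat_open_input(&ctx, url, nullptr, nullptr) < 0)
        return 1;
    if (avformat_find_stream_info(ctx, nullptr) < 0)
        return 1;
    av_dump_format(ctx, 0, url, 0); // list the streams the demuxer sees

    // Pull a few hundred packets; demuxer warnings, if any, go to the FFmpeg log.
    AVPacket* pkt = av_packet_alloc();
    int packets = 0;
    while (packets < 500 && av_read_frame(ctx, pkt) >= 0) {
        ++packets;
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
    avformat_close_input(&ctx);
    return 0;
}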
-
Image decoding and rendering on the workstation performs worse than on the laptop with the same code
12 January 2024, by karma1995. I'm working on image decoding and rendering, trying to develop software to process and display 4K TIFF and DPX sequences. I develop on my laptop and test on both the laptop and a workstation. Strangely, the workstation performs worse than the laptop. My hardware and code information is below:
The information of both machines:

- Laptop
  Platform: Windows 11
  CPU: Intel i9-13900H, 14C 20T
  GPU: RTX 4060 Laptop GPU
  Mem: 64G
  Disk: SSD
- Workstation
  Platform: Windows 10
  CPU: Intel Xeon, 56C 112T
  GPU: RTX A6000
  Mem: 512G
  Disk: SSD

IDE
The IDE is Qt, version 5.14.0, on both the laptop and the workstation.
For image decoding, FFmpeg is used:


void SeqDecodeTask::run()
{
    QElapsedTimer timer;
    timer.start();
    SwsContext* swsCtx = nullptr;
    AVPixelFormat srcFmt = AV_PIX_FMT_NONE;
    AVPixelFormat dstFmt = AV_PIX_FMT_NONE;
    AVFormatContext* fmtCtx;
    const AVCodec* codec;
    AVCodecContext* codecCtx;
    AVStream* stream;
    int index = -1;
    AVPacket pkt;
    AVFrame* frame = av_frame_alloc();
    AVFrame* output = av_frame_alloc();
    fmtCtx = avformat_alloc_context();

    if (avformat_open_input(&fmtCtx, file_.toUtf8().constData(),
                            nullptr, nullptr) < 0) {
        return;
    }
    if (avformat_find_stream_info(fmtCtx, nullptr) < 0) {
        return;
    }
    // Pick the (last) video stream.
    for (unsigned int i = 0; i < fmtCtx->nb_streams; i++) {
        if (fmtCtx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
            index = static_cast<int>(i);
            continue;
        }
    }
    if (index < 0) {
        return;
    }
    stream = fmtCtx->streams[index];
    fmtCtx->streams[index]->discard = AVDISCARD_DEFAULT;
    codecCtx = avcodec_alloc_context3(nullptr);
    if (avcodec_parameters_to_context(codecCtx,
                                      fmtCtx->streams[index]->codecpar) < 0) {
        return;
    }
    codec = avcodec_find_decoder(codecCtx->codec_id);
    if (codec == nullptr) {
        return;
    }
    if (avcodec_open2(codecCtx, codec, nullptr) < 0) {
        return;
    }
    // One image per file: read a single packet and decode a single frame.
    av_read_frame(fmtCtx, &pkt);
    avcodec_send_packet(codecCtx, &pkt);
    avcodec_receive_frame(codecCtx, frame);

    if (srcFmt == AV_PIX_FMT_NONE) {
        srcFmt = static_cast<AVPixelFormat>(frame->format);
    }
    dstFmt = AV_PIX_FMT_RGBA64LE;
    cv::Mat mat(cv::Size(frame->width, frame->height), CV_16UC4);
    if (!(frame->width < 0 || frame->height < 0)) {
        if (swsCtx == nullptr) {
            swsCtx = sws_getContext(frame->width, frame->height, srcFmt,
                                    frame->width, frame->height, dstFmt,
                                    SWS_BICUBIC, nullptr, nullptr, nullptr);
        }
        // Let sws_scale write straight into the cv::Mat's buffer.
        av_image_fill_arrays(output->data, output->linesize,
                             static_cast<uint8_t*>(mat.data),
                             dstFmt, frame->width, frame->height, 1);
        sws_scale(swsCtx, static_cast<const uint8_t* const*>(frame->data),
                  frame->linesize, 0, frame->height,
                  output->data, output->linesize);
    }
    av_packet_unref(&pkt);
    av_frame_free(&output);
    av_frame_free(&frame);
    sws_freeContext(swsCtx);
    avcodec_free_context(&codecCtx);
    avformat_free_context(fmtCtx);

    buffer_->edit(true, item_, index_, file_, mat);

    emit decodeTaskFinished();
    qDebug() << "decode time " << timer.elapsed();
}
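As a rough sanity check on the conversion cost alone: assuming DCI 4K frames (4096 × 2160) and 8 bytes per pixel for AV_PIX_FMT_RGBA64LE, each sws_scale call writes about 4096 × 2160 × 8 ≈ 70 MB on a single thread, in addition to reading the decoded source frame, so per-frame times in the hundreds of milliseconds are plausibly dominated by single-threaded, memory-bound work rather than by the GPU.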


It simply gets the media information, decodes, converts the pixel format, and stores the result in the mat. It is a QRunnable class, for multithreading.
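For context, a QRunnable like this would typically be dispatched through QThreadPool; a minimal sketch (the SeqDecodeTask constructor arguments and sequenceFiles are assumed for illustration):

// Dispatch one decode task per file in the sequence on the global pool.
QThreadPool* pool = QThreadPool::globalInstance();
pool->setMaxThreadCount(QThread::idealThreadCount());
for (const QString& file : sequenceFiles) {
    auto* task = new SeqDecodeTask(file); // hypothetical constructor
    task->setAutoDelete(true);            // the pool frees the task after run()
    pool->start(task);
}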


For rendering, I have tried both QPainter and the Scene Graph.
QPainter:


void PaintedItemRender::paint(QPainter *painter)
{
    QElapsedTimer timer;
    timer.start();
    painter->setRenderHint(QPainter::Antialiasing, true);
    int width = this->width();
    int height = this->height();
//    if (defaultWidth_ == NULL || defaultHeight_ == NULL) {
//        defaultWidth_ = width;
//        defaultHeight_ = height;
//    }
    painter->setBrush(Qt::black);
    painter->drawRect(0, 0, width, height);

    // Scale the frame to the item size, preserving aspect ratio.
    QImage img = image_.scaled(QSize(width, height), Qt::KeepAspectRatio);
    /* calculate display position */
    int x = (this->width() - img.width()) / 2;
    int y = (this->height() - img.height()) / 2;

    painter->drawImage(QPoint(x, y), img);
    qDebug() << "paint time: " << timer.elapsed();
}


Scene Graph:


QSGNode *SceneGraphRender::updatePaintNode(QSGNode* oldNode,
    QQuickItem::UpdatePaintNodeData* updatePaintNodeData)
{
    Q_UNUSED(updatePaintNodeData)
    QSGSimpleTextureNode* tex = nullptr;
    QSGTransformNode* trans = nullptr;
    if (!oldNode) {
        // First call: create the texture node and its transform parent.
        tex = new QSGSimpleTextureNode;
        tex->setFlag(QSGNode::OwnsMaterial, true);
        tex->setFiltering(QSGTexture::Linear);
        tex->setTexture(window()->createTextureFromImage(image_));
        tex->setRect(0, 0, width(), height());
        trans = new QSGTransformNode();
        if (!image_.isNull()) {
            float factorW = 1;
            float factorH = 1;
            if (image_.width() > width()) {
                factorH = factorW = width() / image_.width();
            }
            else if (image_.height() > height()) {
                factorW = factorH = height() / image_.height();
            }
            else if (image_.width() < width() && image_.height() < height()) {
                if (width() - image_.width() < image_.height() - height()) {
                    factorH = factorW = width() / image_.width();
                }
                else {
                    factorH = factorW = height() / image_.height();
                }
            }
            QMatrix4x4 mat;
            float scaledW = tex->rect().width() * factorW;
            float scaledH = tex->rect().height() * factorH;
            if (width() > scaledW) {
                mat.translate((width() - tex->rect().width() * factorW) / 2, 0);
            }
            if (height() > scaledH) {
                mat.translate(0, (height() - tex->rect().height() * factorH) / 2);
            }
            mat.scale(factorW, factorH);
            trans->setMatrix(mat);
            trans->markDirty(QSGNode::DirtyMatrix);
        }
        else {
            scaled_ = true;
        }
        trans->appendChildNode(tex);
    }
    else {
        // Subsequent calls: reuse the nodes, upload the new frame as a texture.
        trans = static_cast<QSGTransformNode*>(oldNode);
        tex = static_cast<QSGSimpleTextureNode*>(trans->childAtIndex(0));
        QSGTexture* texture = tex->texture();
        tex->setTexture(window()->createTextureFromImage(image_));
        tex->setRect(0, 0, image_.width(), image_.height());
        texture->deleteLater();
        if (!image_.isNull() && scaled_) {
            float factorW = 1;
            float factorH = 1;
            if (image_.width() > width()) {
                factorH = factorW = width() / image_.width();
            }
            else if (image_.height() > height()) {
                factorW = factorH = height() / image_.height();
            }
            else if (image_.width() < width() && image_.height() < height()) {
                if (width() - image_.width() < image_.height() - height()) {
                    factorH = factorW = width() / image_.width();
                }
                else {
                    factorH = factorW = height() / image_.height();
                }
            }
            QMatrix4x4 mat;
            float scaledW = tex->rect().width() * factorW;
            float scaledH = tex->rect().height() * factorH;
            if (width() > scaledW) {
                mat.translate((width() - tex->rect().width() * factorW) / 2, 0);
            }
            if (height() > scaledH) {
                mat.translate(0, (height() - tex->rect().height() * factorH) / 2);
            }
            mat.scale(factorW, factorH);
            trans->setMatrix(mat);
            trans->markDirty(QSGNode::DirtyMatrix);
            scaled_ = false;
        }
    }
    return trans;
}
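The aspect-fit factor above is computed twice with identical logic; pulled out as a helper it is easier to read. A sketch (fitFactor is a hypothetical name; it mirrors the original tests, including the comparison in the both-smaller branch):

// Aspect-fit scale factor as used in updatePaintNode() above.
static float fitFactor(const QImage& img, float itemW, float itemH)
{
    if (img.width() > itemW)
        return itemW / img.width();
    if (img.height() > itemH)
        return itemH / img.height();
    if (img.width() < itemW && img.height() < itemH) {
        // Same comparison as the original code.
        if (itemW - img.width() < img.height() - itemH)
            return itemW / img.width();
        return itemH / img.height();
    }
    return 1.0f;
}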


Time Usage
The test image sequence is a 4K DPX sequence, 12-bit, RGB. Decoding uses only one thread:


- 

- Laptop
decoding : around 200ms per frame
rendering : around 20ms per frame
- Workstation
decoding : over 600ms per frame
rendering : around 80ms per frame






I'm trying to figure out what makes the performance different and how to fix it; any advice is appreciated, thank you.