What is the most efficient way to display decoded video frames in Qt?

Date: 2023-01-02

Problem description

What is the fastest way to display images to a Qt widget? I have decoded the video using libavformat and libavcodec, so I already have raw RGB or YCbCr 4:2:0 frames. I am currently using a QGraphicsView with a QGraphicsScene object containing a QGraphicsPixmapItem. I get the frame data into a QPixmap by constructing a QImage from a memory buffer and converting it with QPixmap::fromImage().
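
For reference, a minimal sketch of that path; the function and parameter names are illustrative stand-ins for the real decoder output.

#include <QGraphicsPixmapItem>
#include <QImage>
#include <QPixmap>

// Sketch of the QImage -> QPixmap path described above. The parameter
// names are illustrative; the real values come from libavcodec's output.
void showFrame(QGraphicsPixmapItem* item, const uchar* rgbBuffer,
               int width, int height, int bytesPerLine)
{
    // Wrap the decoder's buffer without copying the pixel data...
    QImage frame(rgbBuffer, width, height, bytesPerLine, QImage::Format_RGB32);

    // ...then pay for the (potentially expensive) conversion to QPixmap.
    item->setPixmap(QPixmap::fromImage(frame));
}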

I like the results of this and it seems relatively fast, but I can't help but think that there must be a more efficient way. I've also heard that the QImage to QPixmap conversion is expensive. I have implemented a solution that uses an SDL overlay on a widget, but I'd like to stay with just Qt since I am able to easily capture clicks and other user interaction with the video display using the QGraphicsView.

I am doing any required video scaling or colorspace conversions with libswscale, so I would just like to know if anyone has a more efficient way to display the image data after all processing has been performed.

Thanks.

Recommended answer

Thanks for the answers, but I finally revisited this problem and came up with a rather simple solution that gives good performance. It involves deriving from QGLWidget and overriding the paintEvent() function. Inside the paintEvent() function, you can call QPainter::drawImage(...) and it will perform the scaling to a specified rectangle for you using hardware if available. So it looks something like this:

#include <QGLWidget>
#include <QImage>
#include <QPainter>
#include <QPaintEvent>

class QGLCanvas : public QGLWidget
{
public:
    QGLCanvas(QWidget* parent = NULL);
    void setImage(const QImage& image);
protected:
    void paintEvent(QPaintEvent*);
private:
    QImage img;
};

QGLCanvas::QGLCanvas(QWidget* parent)
    : QGLWidget(parent)
{
}

void QGLCanvas::setImage(const QImage& image)
{
    img = image;
    update();  // schedule a repaint so the new frame gets drawn
}

void QGLCanvas::paintEvent(QPaintEvent*)
{
    QPainter p(this);

    // Set the painter to use a smooth scaling algorithm.
    p.setRenderHint(QPainter::SmoothPixmapTransform, true);

    // Scale the image to fill the widget; on a QGLWidget this scaling is
    // done by the graphics hardware when it is available.
    p.drawImage(this->rect(), img);
}
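
A minimal sketch of how the widget might be hooked up; the window size is arbitrary, and the frame delivery is only indicated in comments.

#include <QApplication>

// Hypothetical usage of QGLCanvas; the window size is arbitrary.
int main(int argc, char* argv[])
{
    QApplication app(argc, argv);

    QGLCanvas canvas;
    canvas.resize(1280, 720);
    canvas.show();

    // For each decoded frame, after converting it to RGB32:
    //     canvas.setImage(convertedFrame);
    // When decoding happens in another thread, deliver the frame to the
    // GUI thread via a queued signal/slot connection instead of calling
    // setImage() directly across threads.

    return app.exec();
}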

With this, I still have to convert the YUV 420P to RGB32, but ffmpeg has a very fast implementation of that conversion in libswscale (a sketch of that step follows the list below). The major gains come from two things:

  • No need for software scaling. Scaling is done on the video card (if available).
  • The QImage-to-QPixmap conversion, which happens inside QPainter::drawImage(), is performed at the original image resolution rather than at the upscaled fullscreen resolution.
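
For reference, the conversion step might look roughly like this. It is a minimal sketch, assuming the source planes and strides come straight from the decoded AVFrame (frame->data and frame->linesize); the function and parameter names are illustrative.

extern "C" {
#include <libswscale/swscale.h>
}
#include <cstdint>

// Sketch of the YUV 420P -> RGB32 conversion; names are illustrative.
void yuv420pToRgb32(const uint8_t* const srcData[], const int srcStride[],
                    uint8_t* rgbBuffer, int width, int height)
{
    // In production, create the context once and reuse it
    // (e.g. via sws_getCachedContext) instead of once per frame.
    SwsContext* ctx = sws_getContext(width, height, AV_PIX_FMT_YUV420P,
                                     width, height, AV_PIX_FMT_RGB32,
                                     SWS_BILINEAR, NULL, NULL, NULL);

    uint8_t* dstData[1]   = { rgbBuffer };   // single packed output plane
    int      dstStride[1] = { width * 4 };   // 4 bytes per RGB32 pixel

    sws_scale(ctx, srcData, srcStride, 0, height, dstData, dstStride);
    sws_freeContext(ctx);
}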

I was pegging my processor on just the display (decoding was being done in another thread) with my previous method. Now my display thread only uses about 8-9% of a core for fullscreen 1920x1200 30fps playback. I'm sure it could probably get even better if I could send the YUV data straight to the video card, but this is plenty good enough for now.
