I'm capturing multiple streams from IP cameras with the help of OpenCV. When I try to display these streams in an OpenCV window (cv::namedWindow(...)), it works without any problem (I have tried up to 4 streams so far).
The problem arises when I try to show these streams inside a Qt widget. Since the capturing is done in another thread, I have to use the signal/slot mechanism in order to update the QWidget (which lives in the main thread).
Basically, I emit the newly captured frame from the capture thread and a slot in the GUI thread catches it. When I open 4 streams, I cannot display the videos as smoothly as before.
Here is the emitter:
void capture::start_process() {
    m_enable = true;
    cv::Mat frame;
    while (m_enable) {
        if (!m_video_handle->read(frame)) {
            break;
        }
        cv::cvtColor(frame, frame, CV_BGR2RGB);
        qDebug() << "FRAME : " << frame.data;
        emit image_ready(QImage(frame.data, frame.cols, frame.rows, frame.step,
                                QImage::Format_RGB888));
        cv::waitKey(30);
    }
}
Here is my slot:
void widget::set_image(QImage image) {
    img = image;
    qDebug() << "PARAMETER IMAGE: " << image.scanLine(0);
    qDebug() << "MEMBER IMAGE: " << img.scanLine(0);
}
The problem seems to be the overhead of continuously copying QImages. Although QImage uses implicit sharing, when I compare the data pointers of the images via the qDebug() messages, I see different addresses.
1- Is there any way to embed an OpenCV window directly into a QWidget?
2- What is the most efficient way to handle displaying multiple videos? For example, how do video management systems show up to 32 cameras at the same time?
3- What is the way to go?
Using QImage::scanLine forces a deep copy, so at a minimum you should use constScanLine, or, better yet, change the slot's signature to:
void widget::set_image(const QImage & image);
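With a const reference, a cross-thread signal/slot connection (which defaults to Qt::QueuedConnection) copies the QImage once into the queued event, and the slot can then read it without detaching. As a minimal, hypothetical sketch of the wiring, reusing the capture and widget names from the question:

// Hypothetical wiring sketch reusing the question's class names; not from the answer.
// Across threads the connection defaults to Qt::QueuedConnection, so the QImage
// argument is copied once into the event delivered to the GUI thread.
capture cap;
QThread captureThread;
cap.moveToThread(&captureThread);

widget view;  // lives in the GUI thread
QObject::connect(&cap, &capture::image_ready, &view, &widget::set_image);

captureThread.start();
QMetaObject::invokeMethod(&cap, "start_process");  // assumes start_process is declared as a slot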
Of course, your problem then becomes something else: the QImage instance points to the data of a frame that lives in another thread, and that data can (and will) change at any moment.
There is a solution for that: use a fresh frame allocated on the heap for each capture, and have the QImage take ownership of it. A QScopedPointer prevents a memory leak until the QImage takes ownership of the frame.
static void matDeleter(void* mat) { delete static_cast<cv::Mat*>(mat); }

class capture : public QObject {
    Q_OBJECT
    bool m_enable;
    ...
public:
    Q_SIGNAL void image_ready(const QImage &);
    ...
};
void capture::start_process() {
    m_enable = true;
    while (m_enable) {
        QScopedPointer<cv::Mat> frame(new cv::Mat);
        if (!m_video_handle->read(*frame)) {
            break;
        }
        cv::cvtColor(*frame, *frame, CV_BGR2RGB);
        // Here the image instance takes ownership of the frame.
        const QImage image(frame->data, frame->cols, frame->rows, frame->step,
                           QImage::Format_RGB888, matDeleter, frame.take());
        emit image_ready(image);
        cv::waitKey(30);
    }
}
Of course, since Qt provides native message dispatch and a Qt event loop by default in a QThread, it's a simple matter to use a QObject for the capture process. Below is a complete, tested example.
The capture, conversion and viewer all run in their own threads. Since cv::Mat is an implicitly shared class with atomic, thread-safe access, it is used as such.
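For context, a minimal stand-alone sketch (not part of the original answer) illustrating what that implicit sharing means: copying a cv::Mat only copies the header and bumps an atomic reference count, so the pixel buffer itself is shared, which is what makes handing frames between threads cheap.

// Minimal sketch, an assumption added for illustration; not from the original answer.
// Copying a cv::Mat header shares the underlying pixel buffer.
#include <opencv2/opencv.hpp>
#include <cassert>

int main() {
    cv::Mat a(480, 640, CV_8UC3, cv::Scalar(0, 0, 0));
    cv::Mat b = a;              // shallow copy: shares data, bumps the refcount
    assert(a.data == b.data);   // same buffer
    cv::Mat c = a.clone();      // deep copy: new buffer
    assert(a.data != c.data);
    return 0;
}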
The converter has an option of not processing stale frames - useful if conversion is only done for display purposes.
The viewer runs in the GUI thread and correctly drops stale frames. There's never a reason for the viewer to deal with stale frames.
If you were to collect data to save to disk, you should run the capture thread at a high priority. You should also inspect the OpenCV APIs to see if there's a way of dumping the native camera data to disk.
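A minimal sketch of what that could look like with the Thread objects from the example below; the specific priority value is an assumption, any of Qt's elevated QThread::Priority values would do:

// Sketch only, reusing captureThread from the demo's main(); not in the original answer.
// QThread::start() accepts a priority hint for the OS scheduler.
captureThread.start(QThread::HighPriority);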
To speed up conversion, you could use the GPU-accelerated classes in OpenCV.
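As a rough sketch only, assuming an OpenCV build that includes the CUDA modules (cv::cuda), the resize and color conversion done in Converter::process could be offloaded along these lines:

// Sketch under the assumption that OpenCV was built with the CUDA modules
// (opencv2/cudawarping.hpp, opencv2/cudaimgproc.hpp); not part of the original answer.
#include <opencv2/cudawarping.hpp>
#include <opencv2/cudaimgproc.hpp>

void convertOnGpu(const cv::Mat &frame, cv::Mat &rgbSmall, cv::Size dst) {
    cv::cuda::GpuMat gpuFrame, gpuResized, gpuRgb;
    gpuFrame.upload(frame);                                    // host -> device
    cv::cuda::resize(gpuFrame, gpuResized, dst, 0, 0, cv::INTER_LINEAR);
    cv::cuda::cvtColor(gpuResized, gpuRgb, cv::COLOR_BGR2RGB);
    gpuRgb.download(rgbSmall);                                 // device -> host
}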
The example below makes sure that none of the memory is reallocated unless necessary for a copy: the Capture class maintains its own frame buffer that is reused for each subsequent frame, and so do the Converter and the ImageViewer.
There are two deep copies of image data made (besides whatever happens internally in cv::VideoCapture::read):
The copy into the Converter's QImage.
The copy into the ImageViewer's QImage.
Both copies are needed to ensure decoupling between the threads and to prevent data reallocation due to the need to detach a cv::Mat or QImage whose reference count is higher than 1. On modern architectures, memory copies are very fast.
Since all image buffers stay in the same memory locations, their performance is optimal - they stay paged in and cached.
The AddressTracker is used to track memory reallocations for debugging purposes.
// https://github.com/KubaO/stackoverflown/tree/master/questions/opencv-21246766
#include <QtWidgets>
#include <algorithm>
#include <opencv2/opencv.hpp>

Q_DECLARE_METATYPE(cv::Mat)

struct AddressTracker {
    const void *address = {};
    int reallocs = 0;
    void track(const cv::Mat &m) { track(m.data); }
    void track(const QImage &img) { track(img.bits()); }
    void track(const void *data) {
        if (data && data != address) {
            address = data;
            reallocs++;
        }
    }
};
The Capture class fills the internal frame buffer with the captured frame. It notifies of a frame change. The frame is the user property of the class.
class Capture : public QObject {
    Q_OBJECT
    Q_PROPERTY(cv::Mat frame READ frame NOTIFY frameReady USER true)
    cv::Mat m_frame;
    QBasicTimer m_timer;
    QScopedPointer<cv::VideoCapture> m_videoCapture;
    AddressTracker m_track;
public:
    Capture(QObject *parent = {}) : QObject(parent) {}
    ~Capture() { qDebug() << __FUNCTION__ << "reallocations" << m_track.reallocs; }
    Q_SIGNAL void started();
    Q_SLOT void start(int cam = {}) {
        if (!m_videoCapture)
            m_videoCapture.reset(new cv::VideoCapture(cam));
        if (m_videoCapture->isOpened()) {
            m_timer.start(0, this);
            emit started();
        }
    }
    Q_SLOT void stop() { m_timer.stop(); }
    Q_SIGNAL void frameReady(const cv::Mat &);
    cv::Mat frame() const { return m_frame; }
private:
    void timerEvent(QTimerEvent *ev) {
        if (ev->timerId() != m_timer.timerId()) return;
        if (!m_videoCapture->read(m_frame)) { // Blocks until a new frame is ready
            m_timer.stop();
            return;
        }
        m_track.track(m_frame);
        emit frameReady(m_frame);
    }
};
The Converter class converts the incoming frame to a scaled-down QImage user property. It notifies of the image update. The image is retained to prevent memory reallocations. The processAll property selects whether all frames will be converted, or only the most recent one, should more than one get queued up.
class Converter : public QObject {
    Q_OBJECT
    Q_PROPERTY(QImage image READ image NOTIFY imageReady USER true)
    Q_PROPERTY(bool processAll READ processAll WRITE setProcessAll)
    QBasicTimer m_timer;
    cv::Mat m_frame;
    QImage m_image;
    bool m_processAll = true;
    AddressTracker m_track;
    void queue(const cv::Mat &frame) {
        if (!m_frame.empty()) qDebug() << "Converter dropped frame!";
        m_frame = frame;
        if (!m_timer.isActive()) m_timer.start(0, this);
    }
    void process(const cv::Mat &frame) {
        Q_ASSERT(frame.type() == CV_8UC3);
        int w = frame.cols / 3.0, h = frame.rows / 3.0;
        if (m_image.size() != QSize{w, h})
            m_image = QImage(w, h, QImage::Format_RGB888);
        cv::Mat mat(h, w, CV_8UC3, m_image.bits(), m_image.bytesPerLine());
        cv::resize(frame, mat, mat.size(), 0, 0, cv::INTER_AREA);
        cv::cvtColor(mat, mat, CV_BGR2RGB);
        emit imageReady(m_image);
    }
    void timerEvent(QTimerEvent *ev) {
        if (ev->timerId() != m_timer.timerId()) return;
        process(m_frame);
        m_frame.release();
        m_track.track(m_frame);
        m_timer.stop();
    }
public:
    explicit Converter(QObject *parent = nullptr) : QObject(parent) {}
    ~Converter() { qDebug() << __FUNCTION__ << "reallocations" << m_track.reallocs; }
    bool processAll() const { return m_processAll; }
    void setProcessAll(bool all) { m_processAll = all; }
    Q_SIGNAL void imageReady(const QImage &);
    QImage image() const { return m_image; }
    Q_SLOT void processFrame(const cv::Mat &frame) {
        if (m_processAll) process(frame); else queue(frame);
    }
};
The ImageViewer widget is the equivalent of a QLabel storing a pixmap. The image is the user property of the viewer. The incoming image is deep-copied into the user property, to prevent memory reallocations.
class ImageViewer : public QWidget {
    Q_OBJECT
    Q_PROPERTY(QImage image READ image WRITE setImage USER true)
    bool painted = true;
    QImage m_img;
    AddressTracker m_track;
    void paintEvent(QPaintEvent *) {
        QPainter p(this);
        if (!m_img.isNull()) {
            setAttribute(Qt::WA_OpaquePaintEvent);
            p.drawImage(0, 0, m_img);
            painted = true;
        }
    }
public:
    ImageViewer(QWidget *parent = nullptr) : QWidget(parent) {}
    ~ImageViewer() { qDebug() << __FUNCTION__ << "reallocations" << m_track.reallocs; }
    Q_SLOT void setImage(const QImage &img) {
        if (!painted) qDebug() << "Viewer dropped frame!";
        if (m_img.size() == img.size() && m_img.format() == img.format()
                && m_img.bytesPerLine() == img.bytesPerLine())
            std::copy_n(img.bits(), img.sizeInBytes(), m_img.bits());
        else
            m_img = img.copy();
        painted = false;
        if (m_img.size() != size()) setFixedSize(m_img.size());
        m_track.track(m_img);
        update();
    }
    QImage image() const { return m_img; }
};
The demonstration instantiates the classes described above and runs the capture and conversion in dedicated threads.
class Thread final : public QThread { public: ~Thread() { quit(); wait(); } };

int main(int argc, char *argv[])
{
    qRegisterMetaType<cv::Mat>();
    QApplication app(argc, argv);
    ImageViewer view;
    Capture capture;
    Converter converter;
    Thread captureThread, converterThread;
    // Everything runs at the same priority as the gui, so it won't supply useless frames.
    converter.setProcessAll(false);
    captureThread.start();
    converterThread.start();
    capture.moveToThread(&captureThread);
    converter.moveToThread(&converterThread);
    QObject::connect(&capture, &Capture::frameReady, &converter, &Converter::processFrame);
    QObject::connect(&converter, &Converter::imageReady, &view, &ImageViewer::setImage);
    view.show();
    QObject::connect(&capture, &Capture::started, [](){ qDebug() << "Capture started."; });
    QMetaObject::invokeMethod(&capture, "start");
    return app.exec();
}
#include "main.moc"
This concludes the complete example. Note: The previous revision of this answer unnecessarily reallocated the image buffers.