Perspective Warping in OpenCV Based on a Known Camera Orientation

Date: 2022-11-19

Question

I am working on a project which attempts to remove the perspective distortion from an image based on the known orientation of the camera. My thinking is that I can create a rotational matrix based on the known X, Y, and Z orientations of the camera. I can then apply those matrices to the image via the WarpPerspective method.

In my script (written in Python) I have created three rotational matrices, each based on an orientation angle. I have gotten to a point where I am stuck on two issues. First, when I load each individual matrix into the WarpPerspective method, it doesn't seem to be working correctly. Whenever I warp an image on one axis it appears to significantly overwarp the image. The contents of the image are only recognizable if I limit the orientation angle to around 1 degree or less.

Secondly, how do I combine the three rotational matrices into a single matrix to be loaded into the WarpPerspective method? Can I import a 3x3 rotational matrix into that method, or do I have to create a 4x4 projective matrix? Below is the code that I am working on.
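On the combination question: the three 3x3 rotations compose by ordinary matrix multiplication, and the product is still 3x3, so no 4x4 matrix is needed for the rotation itself (a camera matrix is still required to get a sensible warp, as the answer explains). A minimal numpy sketch, using the standard right-handed rotation conventions, which differ in sign from the matrices in the script below:

```python
import numpy as np

x, y, z = np.radians([-14.0, 20.0, 15.0])

rX = np.array([[1, 0, 0],
               [0, np.cos(x), -np.sin(x)],
               [0, np.sin(x),  np.cos(x)]])
rY = np.array([[ np.cos(y), 0, np.sin(y)],
               [ 0,         1, 0        ],
               [-np.sin(y), 0, np.cos(y)]])
rZ = np.array([[np.cos(z), -np.sin(z), 0],
               [np.sin(z),  np.cos(z), 0],
               [0,          0,         1]])

# Compose into a single rotation; order matters (here: x first, then y, then z).
R = rZ @ rY @ rX

# Sanity check: a product of rotations is itself a rotation
# (orthonormal, determinant +1).
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```

Note that `rZ @ rY @ rX` and `rX @ rY @ rZ` are different rotations; the order must match the convention your orientation sensor reports.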

Thanks for your help.

CR

from numpy import *
import cv

#Sets angle of camera and converts to radians
x =  -14 * (pi/180)
y = 20 * (pi/180)
z =  15 * (pi/180)

#Creates the Rotational Matrices
rX = array([[1, 0, 0], [0, cos(x), -sin(x)], [0, sin(x), cos(x)]])
rY = array([[cos(y), 0, -sin(y)], [0, 1, 0], [sin(y), 0, cos(y)]])
rZ = array([[cos(z), sin(z), 0], [-sin(z), cos(z), 0], [0, 0, 1]])

#Converts to CVMat format
X = cv.fromarray(rX)
Y = cv.fromarray(rY)
Z = cv.fromarray(rZ)

#Imports image file and creates destination filespace
im = cv.LoadImage("reference_image.jpg")
dst = cv.CreateImage(cv.GetSize(im), cv.IPL_DEPTH_8U, 3)

#Warps Image
cv.WarpPerspective(im, dst, X)

#Display
cv.NamedWindow("distorted")
cv.ShowImage("distorted", im)
cv.NamedWindow("corrected")
cv.ShowImage("corrected", dst)
cv.WaitKey(0)
cv.DestroyWindow("distorted")
cv.DestroyWindow("corrected")

Answer

You are doing several things wrong. First, you can't rotate about the x or y axis without a camera model. Imagine a camera with an incredibly wide field of view. You could hold it really close to an object and see the entire thing, but if that object rotated, its edges would seem to fly towards you very quickly with strong perspective distortion. On the other hand, a small field of view (think telescope) has very little perspective distortion. A nice place to start is setting your image plane at least as far from the camera as it is wide, and putting your object right on the image plane. That is what I did in this example (C++, OpenCV).
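The field-of-view argument can be made concrete with the pinhole camera model: the focal length in pixels is f = (w/2) / tan(fov/2), so a wide field of view means a short focal length and strong perspective. A small sketch (the image width and angles are arbitrary illustration values):

```python
import math

def focal_length_px(image_width_px, fov_deg):
    # Pinhole model: half the image width subtends half the field of view.
    return (image_width_px / 2) / math.tan(math.radians(fov_deg) / 2)

wide = focal_length_px(640, 120)   # wide-angle lens: short focal length
narrow = focal_length_px(640, 10)  # telescope-like: long focal length
# The shorter the focal length relative to the image, the stronger the
# perspective distortion when the image plane is rotated.
```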

The steps are:

  1. Construct a rotation matrix
  2. Center the image at the origin
  3. Rotate the image
  4. Move the image down the z axis
  5. Multiply by the camera matrix
  6. Warp the perspective


#include <opencv2/opencv.hpp>

cv::Mat image = cv::imread("reference_image.jpg");

//1: build the rotation matrix from the orientation angles
float x = -14 * (M_PI/180);
float y =  20 * (M_PI/180);
float z =  15 * (M_PI/180);

cv::Matx31f rot_vec(x, y, z);
cv::Matx33f rot_mat;
cv::Rodrigues(rot_vec, rot_mat); //converts to a rotation matrix

//translation that will center the image on the origin
cv::Matx33f translation1(1, 0, -image.cols/2,
                         0, 1, -image.rows/2,
                         0, 0, 1);
//treat the image as the z = 0 plane: replace rot_mat's third column
rot_mat(0,2) = 0;
rot_mat(1,2) = 0;
rot_mat(2,2) = 1;

//2 and 3: center, then rotate
cv::Matx33f trans = rot_mat*translation1;
//4: move the image down the z axis
trans(2,2) += image.rows;
//camera (intrinsics) matrix, with focal length equal to the image height
cv::Matx33f camera_mat(image.rows, 0, image.rows/2,
                       0, image.rows, image.rows/2,
                       0, 0, 1);
//5
cv::Matx33f transform = camera_mat*trans;
//6
cv::Mat final;
cv::warpPerspective(image, final, cv::Mat(transform), image.size());

This code gave me this output
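For readers on the modern Python API (where the old `cv` module no longer exists), the six steps can be sketched in numpy, with OpenCV needed only for the final warp. This mirrors the C++ above, with two hedges: the rotation is composed from the three Euler matrices rather than `cv::Rodrigues` (which interprets (x, y, z) as an axis-angle vector, a slightly different parameterization), and the principal point is placed at (w/2, h/2) rather than the (rows/2, rows/2) in the C++, which was likely written for a square image:

```python
import numpy as np

def rotation_from_euler(x, y, z):
    # Rotate about x, then y, then z (angles in radians).
    cx, sx = np.cos(x), np.sin(x)
    cy, sy = np.cos(y), np.sin(y)
    cz, sz = np.cos(z), np.sin(z)
    rX = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    rY = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rZ = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return rZ @ rY @ rX

def build_homography(w, h, x, y, z):
    # 1. rotation matrix
    R = rotation_from_euler(x, y, z)
    # Treat the image as the z = 0 plane: replace R's third column.
    R[:, 2] = [0, 0, 1]
    # 2. translation that centers the image on the origin
    T = np.array([[1, 0, -w / 2],
                  [0, 1, -h / 2],
                  [0, 0, 1]], dtype=float)
    # 3. rotate the centered image
    M = R @ T
    # 4. move the plane down the z axis so it sits in front of the camera
    M[2, 2] += h
    # 5. camera (intrinsics) matrix, focal length = image height as above
    K = np.array([[h, 0, w / 2],
                  [0, h, h / 2],
                  [0, 0, 1]], dtype=float)
    return K @ M

H = build_homography(640, 480,
                     np.radians(-14), np.radians(20), np.radians(15))
# 6. warp (requires OpenCV):
#   import cv2
#   final = cv2.warpPerspective(image, H, (640, 480))
```

One property worth checking: because the image is centered before rotating and the principal point is restored by the camera matrix, the image center maps to itself under H.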

I did not see Franco's answer until I posted this. He is completely correct: using FindHomography would save you all these steps. Still, I hope this is useful.
