I am using OpenCV to calibrate images taken using cameras with fish-eye lenses.
The functions I am using are:
findChessboardCorners(...) to find the corners of the calibration pattern,
cornerSubPix(...) to refine the found corners,
fisheye::calibrate(...) to calibrate the camera matrix and the distortion coefficients, and
fisheye::undistortImage(...) to undistort the images using the camera info obtained from calibration.
While the resulting image does look good (straight lines and so on), my issue is that the function cuts away too much of the image.
This is a real problem, as I am using four cameras with 90 degrees between them, and when so much of the sides is cut off there is no overlapping area between them, which I need because I am going to stitch the images.
I looked into using fisheye::estimateNewCameraMatrixForUndistortRectify(...), but I could not get good results from it, as I do not know what to pass as the R input: the rotation output of fisheye::calibrate is 3xN (where N is the number of calibration images), while fisheye::estimateNewCameraMatrixForUndistortRectify expects a single 1x3 vector or 3x3 matrix.
The images below show my undistortion result, and an example of the kind of result I would ideally want.
Undistortion:
Example of wanted result:
As mentioned by Paul Bourke here:
a fisheye projection is not a "distorted" image, and the process isn't a "dewarping". A fisheye, like other projections, is one of many ways of mapping a 3D world onto a 2D plane; it is no more or less "distorted" than other projections, including a rectangular perspective projection.
To get a projection without cropping (assuming your camera has a ~180 degree FOV), you can project the fisheye image onto a square using something like this:
Source code:
#include <iostream>
#include <sstream>
#include <time.h>
#include <stdio.h>
#include <cmath>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/highgui/highgui.hpp>
// - compile with:
// g++ -ggdb `pkg-config --cflags --libs opencv` fist2rect.cpp -o fist2rect
// - execute:
// fist2rect input.jpg output.jpg
using namespace std;
using namespace cv;
#define PI 3.1415926536
Point2f getInputPoint(int x, int y, int srcwidth, int srcheight)
{
    Point2f pfish;
    float theta, phi, r, r2;
    Point3f psph;
    float FOV = (float)PI / 180 * 180;   // 180 degree horizontal FOV, in radians
    float FOV2 = (float)PI / 180 * 180;  // 180 degree vertical FOV, in radians
    float width = srcwidth;
    float height = srcheight;
    // Polar angles
    theta = PI * (x / width - 0.5);  // -pi/2 to pi/2
    phi = PI * (y / height - 0.5);   // -pi/2 to pi/2
    // Vector in 3D space
    psph.x = cos(phi) * sin(theta);
    psph.y = cos(phi) * cos(theta);
    psph.z = sin(phi) * cos(theta);
    // Calculate fisheye angle and radius
    theta = atan2(psph.z, psph.x);
    phi = atan2(sqrt(psph.x * psph.x + psph.z * psph.z), psph.y);
    r = width * phi / FOV;
    r2 = height * phi / FOV2;
    // Pixel in fisheye space
    pfish.x = 0.5 * width + r * cos(theta);
    pfish.y = 0.5 * height + r2 * sin(theta);
    return pfish;
}
int main(int argc, char **argv)
{
    if (argc < 3)
        return 0;
    Mat originalImage = imread(argv[1]);
    if (originalImage.empty())
    {
        cout << "Empty image" << endl;
        return 0;
    }
    // Initialize to zeros so pixels that map outside the source stay black.
    Mat outImage = Mat::zeros(originalImage.rows, originalImage.cols, CV_8UC3);
    for (int i = 0; i < outImage.cols; i++)
    {
        for (int j = 0; j < outImage.rows; j++)
        {
            Point2f inP = getInputPoint(i, j, originalImage.cols, originalImage.rows);
            Point inP2((int)inP.x, (int)inP.y);
            if (inP2.x >= originalImage.cols || inP2.y >= originalImage.rows)
                continue;
            if (inP2.x < 0 || inP2.y < 0)
                continue;
            Vec3b color = originalImage.at<Vec3b>(inP2);
            outImage.at<Vec3b>(Point(i, j)) = color;
        }
    }
    imwrite(argv[2], outImage);
    return 0;
}