OpenCV calibrateCamera - assertion failed (nimages > 0 && nimages == (int)imagePoints1.total()
Posted: 2015-07-10 12:14:06

The full error:
OpenCV Error: Assertion failed (nimages > 0 && nimages ==
(int)imagePoints1.total() && (!imgPtMat2 || nimages ==
(int)imagePoints2.total())) in collectCalibrationData, file
C:\OpenCV\sources\modules\calib3d\src\calibration.cpp, line 3164
The code:
cv::VideoCapture kalibrowanyPlik; //the video
cv::Mat frame;
cv::Mat testTwo; //undistorted
cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << 2673.579, 0, 1310.689, 0, 2673.579, 914.941, 0, 0, 1);
cv::Mat distortMat = (cv::Mat_<double>(1, 4) << -0.208143, 0.235290, 0.001005, 0.001339);
cv::Mat intrinsicMatrix = (cv::Mat_<double>(3, 3) << 1, 0, 0, 0, 1, 0, 0, 0, 1);
cv::Mat distortCoeffs = cv::Mat::zeros(8, 1, CV_64F);
//there are two sets for testing purposes. Values for the first two came from the GML camera calibration app.
std::vector<cv::Mat> rvecs;
std::vector<cv::Mat> tvecs;
std::vector<std::vector<cv::Point2f> > imagePoints;
std::vector<std::vector<cv::Point3f> > objectPoints;

kalibrowanyPlik.open("625.avi");
//cv::namedWindow("Distorted", CV_WINDOW_AUTOSIZE); //gotta see things
//cv::namedWindow("Undistorted", CV_WINDOW_AUTOSIZE);

int maxFrames = kalibrowanyPlik.get(CV_CAP_PROP_FRAME_COUNT);
int success = 0; //so we can do the calibration only after we've got a bunch

for(int i = 0; i < maxFrames - 1; i++)
{
    kalibrowanyPlik.read(frame);
    std::vector<cv::Point2f> corners; //creating these here so they're effectively reset each time
    int sizeX = kalibrowanyPlik.get(CV_CAP_PROP_FRAME_WIDTH); //imageSize
    int sizeY = kalibrowanyPlik.get(CV_CAP_PROP_FRAME_HEIGHT);

    cv::cvtColor(frame, frame, CV_BGR2GRAY); //must be gray
    cv::Size patternsize(9, 6); //interior number of corners
    bool patternfound = cv::findChessboardCorners(frame, patternsize, corners, cv::CALIB_CB_ADAPTIVE_THRESH + cv::CALIB_CB_NORMALIZE_IMAGE + cv::CALIB_CB_FAST_CHECK); //finding them corners
    if(patternfound == false) //gotta know
        qDebug() << "failure";
    if(patternfound)
        qDebug() << "success!";

    std::vector<cv::Point3f> objectCorners; //low priority issue - if I don't do this here, it becomes empty. Not sure why.
    for(int y = 0; y < 6; ++y)
        for(int x = 0; x < 9; ++x)
            objectCorners.push_back(cv::Point3f(x*28, y*28, 0)); //filling the array

    cv::cornerSubPix(frame, corners, cv::Size(11, 11), cv::Size(-1, -1),
                     cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
    cv::cvtColor(frame, frame, CV_GRAY2BGR); //I don't want gray lines

    imagePoints.push_back(corners); //filling array of arrays with pixel coord array
    objectPoints.push_back(objectCorners); //filling array of arrays with real life coord array, or rather copies of the same thing over and over

    cout << corners << endl << objectCorners;
    cout << endl << objectCorners.size() << "___" << objectPoints.size() << "___" << corners.size() << "___" << imagePoints.size() << endl;

    cv::drawChessboardCorners(frame, patternsize, cv::Mat(corners), patternfound); //drawing.

    if(success > 5)
    {
        double rms = cv::calibrateCamera(objectPoints, corners, cv::Size(sizeX, sizeY), intrinsicMatrix, distortCoeffs, rvecs, tvecs, cv::CALIB_USE_INTRINSIC_GUESS);
        //error - caused by passing CORNERS instead of IMAGEPOINTS. Also, imageSize is 640x480, and I've set the central point to 1310... etc
        cout << endl << intrinsicMatrix << endl << distortCoeffs << endl;
        cout << "\nrms - " << rms << endl;
    }
    success = success + 1;

    //cv::imshow("Distorted", frame);
    //cv::imshow("Undistorted", testTwo);
}
I've done some reading (This was an especially informative read), including a dozen or so threads created on ***, and I've found that this error is caused by mismatched imagePoints and objectPoints, or by them being partially empty, null, or zeroed (plus links to unhelpful tutorials). Neither is the case here - the output of the .size() checks is:
54___7___54___7
That is for objectCorners (the real-world coordinates), objectPoints (the number of arrays pushed in), and likewise for corners (the pixel coordinates) and imagePoints. They aren't empty either; the output is:
(...)
277.6792, 208.92903;
241.83429, 208.93048;
206.99866, 208.84637;
(...)
84, 56, 0;
112, 56, 0;
140, 56, 0;
168, 56, 0;
(...)
A sample frame:
I know it's a mess, but so far I'm trying to get the code to run rather than to get accurate readings.
Each one has exactly 54 rows. Does anyone have any idea what's causing the error? I'm using OpenCV 2.4.8 and Qt Creator 5.4 on Windows 7.
Comments:

Could you post a frame from 625.avi here so we can see what it looks like? Also, I think you'd be better off using a set of separate images instead of an avi.

Unfortunately it has to be a video. I'll include a frame in the OP.

Maybe set a breakpoint at drawChessboardCorners and check whether the elements in objectPoints and corners have the same size.

Checking the size of each element was the first thing I thought of, and yes, they're all the same size. objectPoints and imagePoints both hold the same number of arrays, and each array has 54 elements.

@YangKui I'm an idiot. An absolute idiot. I passed corners instead of imagePoints. There are new errors now, though. It invalidates my intrinsic matrix. And the distortion coefficients are really strange (-3.80871228702875; 379.970212949286; 0.4127166512419153; 0.03035805582878129; -4.178316666034348)

Answer 1:

First of all, corners and image points must be switched, as you've already noticed.
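For reference, a minimal sketch of what the corrected call might look like, reusing the variable names from the question (this is an illustration, not the poster's actual code): the per-view collections imagePoints and objectPoints are passed, not the single-frame corners vector.

    // pass the accumulated per-view vectors; 'corners' only holds the last view's 54 points
    double rms = cv::calibrateCamera(objectPoints, imagePoints,
                                     cv::Size(sizeX, sizeY),
                                     intrinsicMatrix, distortCoeffs,
                                     rvecs, tvecs,
                                     cv::CALIB_USE_INTRINSIC_GUESS);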
In most (if not all) cases, the size
One suggestion for reducing the number of images you use: take images from different viewpoints. Ten images from ten different viewpoints give better results than a hundred images from the same (or nearby) viewpoints. That is one reason why video input is a poor choice. I would guess that with your code, all the images passed to calibrateCamera come from nearby viewpoints; if so, calibration accuracy drops.
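To illustrate, here is a rough sketch (not from the original post, and only useful if the camera or board actually moves during the clip) of sampling a handful of frames spread across the whole video instead of processing consecutive frames; the count of 15 views and the file name are arbitrary assumptions:

    // rough sketch: pick ~15 frames spread evenly over the video
    cv::VideoCapture cap("625.avi");
    int maxFrames = cap.get(CV_CAP_PROP_FRAME_COUNT);
    int views = 15; // arbitrary number of views
    cv::Mat frame;
    for(int v = 0; v < views; v++)
    {
        int frameIndex = v * (maxFrames - 1) / (views - 1); // evenly spaced indices
        cap.set(CV_CAP_PROP_POS_FRAMES, frameIndex);
        if(!cap.read(frame))
            continue;
        // ...run findChessboardCorners / cornerSubPix and push_back as in the question...
    }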
Comments:
A very good answer. I'll change my code accordingly. Unfortunately it has to be a video... I think I'll add a tool that lets the user pick specific frames (or rather seek for them around chosen timestamps), have him pick 10-15 that cover the whole screen from different angles, and save the result to an XML file. 633*1/2.5 converted from inches is 2532, which is nowhere near 4620 (the most accurate result) or 5800 (the focal length given by the manufacturer). Oh well. I'll work harder on the calibration.
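For the XML part mentioned above, a minimal sketch using cv::FileStorage, reusing the matrix names from the question; the file name and node names here are made up for illustration:

    // hypothetical example: persist the calibration result to an XML file
    cv::FileStorage fs("calibration.xml", cv::FileStorage::WRITE);
    fs << "camera_matrix" << intrinsicMatrix;
    fs << "distortion_coefficients" << distortCoeffs;
    fs.release();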