Confused with OpenCV findHomography and warpPerspective


【Title】: confused with OpenCV findHomography and warpPerspective 【Posted】: 2015-08-14 08:49:19 【Question】:

First of all, sorry for my poor English; I will do my best to explain my problem. I am working on a project that involves aligning two images. What I do is detect keypoints, match them, and then estimate the transformation between the two images. Here is my code:

static void target_region_warping( 
Mat IN  template_image,
Mat IN  input_image,
Mat OUT &warped_image,
int IN  method
)
{
    vector<KeyPoint> kpt1, kpt2;
    vector<Point2f> points1, points2;
    Mat desc1, desc2;
    vector<Point2f> points, points_transformed;
    vector<vector<DMatch> > matches1, matches2;
    vector<DMatch> sym_matches, fm_matches;
    Mat im_show;
    float x, y;
    Mat fundemental;

    // To avoid NaN's when the best match has zero distance we will use the inverted ratio.
    const float minRatio = 1.0f / 1.5f;

    // match scheme, sift + ransac
    Ptr<xfeatures2d::SIFT> sift = xfeatures2d::SIFT::create( 1000, 3, 0.004, 20 );
    Ptr<flann::IndexParams> indexParams = makePtr<flann::KDTreeIndexParams>(5); // instantiate KD-tree index parameters
    Ptr<flann::SearchParams> searchParams = makePtr<flann::SearchParams>(50);       // instantiate flann search parameters
    Ptr<DescriptorMatcher> matcher = makePtr<FlannBasedMatcher>(indexParams, searchParams);

    sift->detectAndCompute( template_image, noArray(), kpt1, desc1 );
    sift->detectAndCompute( input_image, noArray(), kpt2, desc2 );

    // step1: match and remove outliers using ratio
    // KNN match will return 2 nearest matches for each query descriptor
    matcher->knnMatch( desc1, desc2, matches1, 2 );

    // for all matches
    for ( std::vector<std::vector<cv::DMatch>>::iterator matchIterator= matches1.begin(); 
          matchIterator!= matches1.end(); ++matchIterator ) 
    {
        // if 2 NN has been identified
        if (matchIterator->size() > 1) 
        {
            // check distance ratio
            if ( (*matchIterator)[0].distance /
                (*matchIterator)[1].distance > minRatio) 
            {
                matchIterator->clear(); // remove match
            }
        } 
        else { // does not have 2 neighbours
            matchIterator->clear(); // remove match
        }
    }

#ifdef TARGET_SHOW
    drawMatches( template_image, kpt1, input_image, kpt2, matches1, im_show );
    namedWindow( "SIFT matches: image1 -> image2", WINDOW_AUTOSIZE );
    imshow( "SIFT matches: image1 -> image2", im_show );
#endif

    //step2: image2 -> image1
    matcher->knnMatch( desc2, desc1, matches2, 2 );

    for ( std::vector<std::vector<cv::DMatch>>::iterator matchIterator= matches2.begin();
          matchIterator!= matches2.end(); ++matchIterator ) 
    {
        // if 2 NN has been identified
        if (matchIterator->size() > 1) 
        {
            // check distance ratio
            if ( (*matchIterator)[0].distance/
                (*matchIterator)[1].distance > minRatio) 
            {
                matchIterator->clear(); // remove match
            }
        } 
        else { // does not have 2 neighbours
            matchIterator->clear(); // remove match
        }
    }

    //step3: symmetric matching scheme
    // for all matches image 1 -> image 2
    for ( vector< vector<DMatch> >::const_iterator matchIterator1= matches1.begin();
          matchIterator1!= matches1.end(); ++matchIterator1 ) 
    {
        // ignore deleted matches
        if (matchIterator1->size() < 2)
            continue;
        // for all matches image 2 -> image 1
        for ( std::vector<std::vector<cv::DMatch>>::const_iterator matchIterator2= matches2.begin();
              matchIterator2!= matches2.end(); ++matchIterator2 ) 
        {
            // ignore deleted matches
            if (matchIterator2->size() < 2)
                continue;
            // Match symmetry test
            if ( ( *matchIterator1)[0].queryIdx == ( *matchIterator2 )[0].trainIdx &&
                ( *matchIterator2)[0].queryIdx == ( *matchIterator1 )[0].trainIdx ) 
            {
                // add symmetrical match
                sym_matches.push_back(
                    cv::DMatch( (*matchIterator1)[0].queryIdx,
                    (*matchIterator1)[0].trainIdx,
                    (*matchIterator1)[0].distance));
                break; // next match in image 1 -> image 2
            }
        }
    }

#ifdef TARGET_SHOW
    drawMatches( template_image, kpt1, input_image, kpt2, sym_matches, im_show );
    namedWindow( "SIFT matches: symmetric matching scheme", WINDOW_AUTOSIZE );
    imshow( "SIFT matches: symmetric matching scheme", im_show );
#endif

    // step4: Identify good matches using RANSAC
    // Return fundemental matrix
    // first, convert keypoints into Point2f
    for ( std::vector<cv::DMatch>::const_iterator it = sym_matches.begin();
          it!= sym_matches.end(); ++it ) 
    {
        // Get the position of left keypoints
        x = kpt1[it->queryIdx].pt.x;
        y = kpt1[it->queryIdx].pt.y;
        points1.push_back( Point2f( x,y ) );

        // Get the position of right keypoints
        x = kpt2[it->trainIdx].pt.x;
        y = kpt2[it->trainIdx].pt.y;
        points2.push_back(cv::Point2f(x,y));
    }

    // Compute F matrix using RANSAC
    std::vector<uchar> inliers(points1.size(),0);

    fundemental = findHomography(
        Mat(points1),
        Mat(points2),
        FM_RANSAC, 
        10, 
        inliers,            
        2000,           
        0.9999 );
    // extract the surviving (inliers) matches
    vector<uchar>::const_iterator itIn= inliers.begin();
    vector<DMatch>::const_iterator itM= sym_matches.begin();
    // for all matches
    for ( ;itIn!= inliers.end(); ++itIn, ++itM) 
    {
        if (*itIn) 
        { // it is a valid match
            fm_matches.push_back(*itM);
        }
    }

#ifdef TARGET_SHOW
    drawMatches( template_image, kpt1, input_image, kpt2, fm_matches, im_show );
    namedWindow( "SIFT matches: RANSAC matching scheme", WINDOW_AUTOSIZE );
    imshow( "SIFT matches: RANSAC matching scheme", im_show );
#endif

    // step5: warp image 1 to image 2
    cv::warpPerspective( input_image, // input image
        warped_image, // output image
        fundemental, // homography
        input_image.size(), // size of output image
        cv::WARP_INVERSE_MAP | cv::INTER_CUBIC );
}

There is a problem with step5 in my code. The matrix "fundemental" is obtained by estimating the transformation from template_image to input_image, so I thought the correct way to call warpPerspective should be

// let me label this call "1"
cv::warpPerspective( template_image, // input image
        warped_image, // output image
        fundemental, // homography
        input_image.size(), // size of output image
        cv::WARP_INVERSE_MAP | cv::INTER_CUBIC );

instead of

// and label this call "2"
cv::warpPerspective( input_image, // input image
        warped_image, // output image
        fundemental, // homography
        input_image.size(), // size of output image
        cv::WARP_INVERSE_MAP | cv::INTER_CUBIC );

However, when I actually test the results with absdiff like this:

// test method "1"
absdiff( warped_image, input_image, diff_image );
// test method "2"
absdiff( warped_image, template_image, diff_image );

To my surprise, the "wrong" call "2" produces the better result, i.e. the diff_image from "2" contains more zero elements than the one from "1". I don't understand what is going on; is there something wrong with my understanding of findHomography? I need some help, thanks!
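(A minimal sketch, not from the original post, of how the two results could be compared numerically rather than by eye; it assumes single-channel 8-bit images and uses the illustrative names warped_1 and warped_2 for the outputs of calls "1" and "2":)

Mat diff1, diff2;
absdiff( warped_1, input_image,    diff1 ); // residual of call "1"
absdiff( warped_2, template_image, diff2 ); // residual of call "2"
// a diff pixel is zero wherever the two images agree exactly
int zeros1 = (int)diff1.total() - countNonZero( diff1 );
int zeros2 = (int)diff2.total() - countNonZero( diff2 );
cout << "zero pixels: method 1 = " << zeros1 << ", method 2 = " << zeros2 << endl;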

【Question discussion】:

WARP_INVERSE_MAP means that the matrix you pass in is already the inverse, i.e. exactly the transformation from the input to the template ;) If you remove WARP_INVERSE_MAP you should see the result you expected. But: if you remove it, your code will be a little slower, because warping is always done the inverse way, so a matrix inversion will be performed. The flag tells OpenCV that the user has already supplied an inverse, to save that computation. You can test it: if you try warpPerspective(template_image, ... , fundemental.inv(), ... , cv::WARP_INVERSE_MAP | cv::INTER_CUBIC) you should get the expected result.

【Answer 1】:

Please try these two versions:

cv::warpPerspective( template_image, // input image
    warped_image, // output image
    fundemental, // homography
    input_image.size(), // size of output image
    cv::INTER_CUBIC );  // HERE, INVERSE FLAG IS REMOVED

cv::warpPerspective( template_image, // input image
    warped_image, // output image
    fundemental.inv(), // homography, HERE: INVERTED HOMOGRAPHY AS INPUT
    input_image.size(), // size of output image
    cv::WARP_INVERSE_MAP | cv::INTER_CUBIC ); 

The flag cv::WARP_INVERSE_MAP tells the OpenCV function that the transformation you provide is already the inverse one. Image warping is always performed in the inverse direction, because you want to make sure that every pixel of the output image gets a valid value.

So to warp from a source image to a target image, you either provide the homography from the source to the target, in which case OpenCV will invert that transformation internally, or you provide the homography from the target to the source and tell OpenCV that it is already inverted.
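To make the idea concrete, here is a minimal sketch of inverse mapping (not OpenCV's implementation; it assumes single-channel 8-bit Mats src and dst of the desired output size, a 3x3 CV_64F homography H mapping src to dst, and nearest-neighbour lookup for brevity):

// For every destination pixel, compute the source location it comes from.
// Mapping backwards like this guarantees that every output pixel gets a value.
Mat Hinv = H.inv();
for ( int y = 0; y < dst.rows; ++y )
{
    for ( int x = 0; x < dst.cols; ++x )
    {
        Mat q = ( Mat_<double>(3, 1) << x, y, 1.0 ); // destination pixel in homogeneous coordinates
        Mat p = Hinv * q;                            // corresponding source location
        double sx = p.at<double>(0) / p.at<double>(2);
        double sy = p.at<double>(1) / p.at<double>(2);
        if ( sx >= 0 && sx < src.cols && sy >= 0 && sy < src.rows )
            dst.at<uchar>( y, x ) = src.at<uchar>( cvRound(sy), cvRound(sx) );
    }
}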

http://docs.opencv.org/modules/imgproc/doc/geometric_transformations.html#void%20warpPerspective%28InputArray%20src,%20OutputArray%20dst,%20InputArray%20M,%20Size%20dsize,%20int%20flags,%20int%20borderMode,%20const%20Scalar&%20borderValue%29
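For reference, the mapping that warpPerspective evaluates, as described on that page (M is the 3x3 matrix you pass in), is:

dst(x, y) = src( (M11*x + M12*y + M13) / (M31*x + M32*y + M33),
                 (M21*x + M22*y + M23) / (M31*x + M32*y + M33) )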

This mapping is used directly when the flag WARP_INVERSE_MAP is set. Otherwise, the transformation is first inverted with invert() and then put in the formula above instead of M.

【Discussion】:
