EMGU CV SURF image match


【Title】: EMGU CV SURF image match 【Posted】: 2012-04-02 04:48:20 【Question】:

I have been working with the SURF feature detection example from the EMGU CV library.

So far it works remarkably well; I can detect matched objects between two given images, but I have run into a problem when the images do not match.

I looked for support on the forums, but they are down from where I am. Does anyone know which parameters determine whether two images are a match? When I test with two images that do not match, the code still proceeds as if there were a match, and it draws a blurred, thick red line at a random location on the image even though there is none.

If there is no match, I would like to break out of the code and not proceed any further (one way to do this is sketched after the appendix code below).

Appendix:

      // Usings needed to compile this excerpt (Emgu CV 2.x):
      //   using System; using System.Diagnostics; using System.Drawing;
      //   using Emgu.CV; using Emgu.CV.Features2D; using Emgu.CV.GPU;
      //   using Emgu.CV.Structure; using Emgu.CV.UI; using Emgu.CV.Util;
      static void Run()
      {
         Image<Gray, Byte> modelImage = new Image<Gray, byte>("HatersGonnaHate.png");
         Image<Gray, Byte> observedImage = new Image<Gray, byte>("box_in_scene.png");
         Stopwatch watch;
         HomographyMatrix homography = null;

         SURFDetector surfCPU = new SURFDetector(500, false);

         VectorOfKeyPoint modelKeyPoints;
         VectorOfKeyPoint observedKeyPoints;
         Matrix<int> indices;
         Matrix<float> dist;
         Matrix<byte> mask;

         if (GpuInvoke.HasCuda)
         {
            GpuSURFDetector surfGPU = new GpuSURFDetector(surfCPU.SURFParams, 0.01f);
            using (GpuImage<Gray, Byte> gpuModelImage = new GpuImage<Gray, byte>(modelImage))
            //extract features from the object image
            using (GpuMat<float> gpuModelKeyPoints = surfGPU.DetectKeyPointsRaw(gpuModelImage, null))
            using (GpuMat<float> gpuModelDescriptors = surfGPU.ComputeDescriptorsRaw(gpuModelImage, null, gpuModelKeyPoints))
            using (GpuBruteForceMatcher matcher = new GpuBruteForceMatcher(GpuBruteForceMatcher.DistanceType.L2))
            {
               modelKeyPoints = new VectorOfKeyPoint();
               surfGPU.DownloadKeypoints(gpuModelKeyPoints, modelKeyPoints);
               watch = Stopwatch.StartNew();

               // extract features from the observed image
               using (GpuImage<Gray, Byte> gpuObservedImage = new GpuImage<Gray, byte>(observedImage))
               using (GpuMat<float> gpuObservedKeyPoints = surfGPU.DetectKeyPointsRaw(gpuObservedImage, null))
               using (GpuMat<float> gpuObservedDescriptors = surfGPU.ComputeDescriptorsRaw(gpuObservedImage, null, gpuObservedKeyPoints))
               using (GpuMat<int> gpuMatchIndices = new GpuMat<int>(gpuObservedDescriptors.Size.Height, 2, 1))
               using (GpuMat<float> gpuMatchDist = new GpuMat<float>(gpuMatchIndices.Size, 1))
               {
                  observedKeyPoints = new VectorOfKeyPoint();
                  surfGPU.DownloadKeypoints(gpuObservedKeyPoints, observedKeyPoints);

                  // k = 2 nearest neighbours per observed descriptor
                  matcher.KnnMatch(gpuObservedDescriptors, gpuModelDescriptors, gpuMatchIndices, gpuMatchDist, 2, null);

                  indices = new Matrix<int>(gpuMatchIndices.Size);
                  dist = new Matrix<float>(indices.Size);
                  gpuMatchIndices.Download(indices);
                  gpuMatchDist.Download(dist);

                  // start with every match enabled, then let the voting steps cull
                  mask = new Matrix<byte>(dist.Rows, 1);
                  mask.SetValue(255);

                  Features2DTracker.VoteForUniqueness(dist, 0.8, mask);

                  int nonZeroCount = CvInvoke.cvCountNonZero(mask);
                  if (nonZeroCount >= 4)
                  {
                     nonZeroCount = Features2DTracker.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints, indices, mask, 1.5, 20);
                     if (nonZeroCount >= 4)
                        homography = Features2DTracker.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints, observedKeyPoints, indices, mask, 3);
                  }

                  watch.Stop();
               }
            }
         }
         else
         {
            //extract features from the object image
            modelKeyPoints = surfCPU.DetectKeyPointsRaw(modelImage, null);
            //MKeyPoint[] kpts = modelKeyPoints.ToArray();
            Matrix<float> modelDescriptors = surfCPU.ComputeDescriptorsRaw(modelImage, null, modelKeyPoints);

            watch = Stopwatch.StartNew();

            // extract features from the observed image
            observedKeyPoints = surfCPU.DetectKeyPointsRaw(observedImage, null);
            Matrix<float> observedDescriptors = surfCPU.ComputeDescriptorsRaw(observedImage, null, observedKeyPoints);

            BruteForceMatcher matcher = new BruteForceMatcher(BruteForceMatcher.DistanceType.L2F32);
            matcher.Add(modelDescriptors);

            int k = 2;
            indices = new Matrix<int>(observedDescriptors.Rows, k);
            dist = new Matrix<float>(observedDescriptors.Rows, k);
            matcher.KnnMatch(observedDescriptors, indices, dist, k, null);

            // start with every match enabled, then let the voting steps cull
            mask = new Matrix<byte>(dist.Rows, 1);
            mask.SetValue(255);

            Features2DTracker.VoteForUniqueness(dist, 0.8, mask);

            int nonZeroCount = CvInvoke.cvCountNonZero(mask);
            if (nonZeroCount >= 4)
            {
               nonZeroCount = Features2DTracker.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints, indices, mask, 1.5, 20);
               if (nonZeroCount >= 4)
                  homography = Features2DTracker.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints, observedKeyPoints, indices, mask, 3);
            }

            watch.Stop();
         }

         //Draw the matched keypoints
         Image<Bgr, Byte> result = Features2DTracker.DrawMatches(modelImage, modelKeyPoints, observedImage, observedKeyPoints,
            indices, new Bgr(255, 255, 255), new Bgr(255, 255, 255), mask, Features2DTracker.KeypointDrawType.NOT_DRAW_SINGLE_POINTS);

         #region draw the projected region on the image
         if (homography != null)
         {
            //draw a rectangle along the projected model
            Rectangle rect = modelImage.ROI;
            PointF[] pts = new PointF[] {
               new PointF(rect.Left, rect.Bottom),
               new PointF(rect.Right, rect.Bottom),
               new PointF(rect.Right, rect.Top),
               new PointF(rect.Left, rect.Top)};
            homography.ProjectPoints(pts);

            result.DrawPolyline(Array.ConvertAll<PointF, Point>(pts, Point.Round), true, new Bgr(Color.Red), 5);
         }
         #endregion

         ImageViewer.Show(result, String.Format(
            "Matched using {0} in {1} milliseconds", GpuInvoke.HasCuda ? "GPU" : "CPU", watch.ElapsedMilliseconds));
      }
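Regarding the break-out behaviour asked about above: in this sample the match decision is carried by the vote count in `mask` and by `homography`, which stays null when fewer than 4 matches survive the two voting steps. But 4 is also the bare minimum to fit a homography, so unrelated images occasionally pass, which is why a bogus red polygon gets drawn. A minimal sketch of a stricter guard, assuming the same variables as the sample; the cutoff of 10 is an illustrative value to tune against your own images, not something from the Emgu example:

      // Count the matches that survived both VoteForUniqueness and
      // VoteForSizeAndOrientation; the mask is non-zero only for survivors.
      int survivors = CvInvoke.cvCountNonZero(mask);

      // Treat "no homography" or "too few consistent matches" as no match.
      if (homography == null || survivors < 10)
      {
         Console.WriteLine("No match ({0} surviving matches)", survivors);
         return;   // stop here instead of drawing a misleading red polygon
      }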

【Discussion】:

Addendum: to be clearer, when two images do not match I want to stop execution and check another image.

Update: I think I have solved it. I just lowered the uniqueness threshold in Features2DTracker.VoteForUniqueness(dist, 0.8, mask) from 0.8 to 0.5, and it works fine.

Could you write up how you solved it? Thanks.

I was testing with two images that are very similar but not an exact match. I found that by lowering the uniqueness threshold I could detect some form of similarity. It is still a work in progress, since my goal is to improve it.

You can post your solution as an answer to your own question and accept it; then I can upvote it.
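For anyone reproducing the fix described in these comments: the second argument of Features2DTracker.VoteForUniqueness is the nearest-neighbour distance-ratio threshold, and lowering it keeps only matches whose best distance is clearly smaller than the second-best, so fewer ambiguous matches survive on unrelated images. A before/after sketch of the one-line change (0.5 is the value reported above; the right number depends on your image set):

      // Before (Emgu sample default): accept a match when the best distance
      // is at most 0.8x the second-best distance.
      //Features2DTracker.VoteForUniqueness(dist, 0.8, mask);

      // After (the fix reported above): 0.5 is stricter, so ambiguous matches
      // between non-identical images are culled before the homography vote.
      Features2DTracker.VoteForUniqueness(dist, 0.5, mask);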

【Answer 1】:

I am not sure there is a method that works for every image sequence or for all cases of geometric deformation.

I suggest you compute the PSNR between the two images and work out a tolerance threshold for your image sequence.
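PSNR is not computed anywhere in the SURF sample, so as a rough illustration of this suggestion: PSNR is derived from the mean squared error between two equal-sized images, PSNR = 10·log10(MAX²/MSE). A sketch using the same Emgu CV 2.x image types as the question; the helper name and the 30 dB cutoff are illustrative assumptions, not values from the answer:

      // Illustrative helper: PSNR between two equal-sized 8-bit grayscale images.
      static double Psnr(Image<Gray, Byte> a, Image<Gray, Byte> b)
      {
         // Mean squared error over all pixels, computed in float to avoid overflow.
         Image<Gray, float> diff = a.Convert<Gray, float>().AbsDiff(b.Convert<Gray, float>());
         double mse = diff.Pow(2.0).GetAverage().Intensity;
         if (mse <= double.Epsilon)
            return double.PositiveInfinity;              // identical images
         return 10.0 * Math.Log10(255.0 * 255.0 / mse);  // 255 = max 8-bit intensity
      }

      // Usage: higher PSNR means more similar; 30 dB is only an example cutoff.
      // bool similar = Psnr(modelImage, observedImage) > 30.0;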

