How to isolate everything inside of a contour, scale it, and test the similarity to an image?
Posted: 2020-04-11 13:24:52

【Question description】: I'm doing this project just for fun, and my goal is to play online poker and have the program recognize the cards on the table. I'm using OpenCV and Python to isolate the region where the cards are located. I've been able to capture an image of that region, grayscale and threshold it, and draw a contour around the edge of the card. I'm now stuck on how to move forward.
Here is my code so far:
import cv2
from PIL import ImageGrab
import numpy as np

def processed(image):
    grayscaled = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    thresholded = cv2.Canny(grayscaled, threshold1 = 200, threshold2 = 200)
    return thresholded

def drawcard1():
    screen = ImageGrab.grab(bbox = (770,300,850,400))
    processed_img = processed(np.array(screen))
    outside_contour, dummy = cv2.findContours(processed_img.copy(), 0, 2)
    colored = cv2.cvtColor(processed_img, cv2.COLOR_GRAY2BGR)
    cv2.drawContours(colored, outside_contour, 0, (0,255,0), 2)
    cv2.imshow('resized_card', colored)

while True:
    drawcard1()
    if cv2.waitKey(25) & 0xFF == ord('w'):
        cv2.destroyAllWindows()
        break
Here is my result so far:
I need to be able to take the inside of the contour and remove anything outside of it. The resulting image should then be just the card, which I need to scale to 49x68 pixels. Once I can do that, my plan is to get the contours of the rank and suit, fill them with white pixels, and compare the result against a set of images to determine the best match.
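Something along these lines is roughly what I'm imagining for the isolate-and-scale step, but I'm not sure it's the right approach (untested sketch; extract_card and the assumption that the largest contour is the card are just my guesses):

    # Rough, untested idea: mask out everything outside the card contour,
    # then crop to its bounding box and scale the result to 49x68.
    def extract_card(image, contours, size=(49, 68)):
        card_contour = max(contours, key=cv2.contourArea)  # assume the biggest contour is the card
        mask = np.zeros(image.shape[:2], dtype=np.uint8)
        cv2.drawContours(mask, [card_contour], -1, 255, thickness=cv2.FILLED)
        isolated = cv2.bitwise_and(image, image, mask=mask)  # black out everything outside the contour
        x, y, w, h = cv2.boundingRect(card_contour)
        return cv2.resize(isolated[y:y + h, x:x + w], size, interpolation=cv2.INTER_AREA)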
I'm very new to OpenCV and image processing, but I find this stuff really fascinating! I've been able to get this far with Google, but this time I can't find anything.
Here is the image I'm using in place of the live game for now:
Here is one of the images I'll be comparing the table cards against:
【Question comments】:
【Answer 1】: This scenario is well suited for template matching. The idea is to search for and find the location of a template image within a larger image. To perform this method, the template is slid across the input image (similar to 2D convolution), and a comparison metric is computed at each position to determine pixel similarity. That is the basic idea behind template matching. Unfortunately, this basic method has a flaw: it only works when the template image is the same size as the item you want to find in the input image. So if your template image is smaller than the region you want to find in the input image, this method will not work.
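As a minimal illustration of that basic, fixed-scale form (the file names here are only placeholders):

    import cv2

    # Basic single-scale template matching: only works when the template
    # is the same size as the card in the input image
    image = cv2.imread('1.jpg', cv2.IMREAD_GRAYSCALE)
    template = cv2.imread('template.png', cv2.IMREAD_GRAYSCALE)
    tH, tW = template.shape[:2]

    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)

    # Best-match location at the template's native scale
    top_left = max_loc
    bottom_right = (top_left[0] + tW, top_left[1] + tH)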
To get around this limitation, we can implement a scale-variant form of template matching by dynamically rescaling the image with np.linspace(). On each iteration, we resize the input image and keep track of the ratio. We keep resizing until the template image is larger than the resized image, while tracking the highest correlation value; a higher correlation value means a better match. Once we have iterated over the different scales, we take the ratio of the largest match and use it to compute the bounding-box coordinates that determine the ROI.
Using your template image:
Here is the detected card highlighted in green. To visualize the process of dynamic template matching, uncomment the marked section in the code.
Code
import cv2
import numpy as np

# Resizes an image and maintains aspect ratio
def maintain_aspect_ratio_resize(image, width=None, height=None, inter=cv2.INTER_AREA):
    # Grab the image size and initialize dimensions
    dim = None
    (h, w) = image.shape[:2]

    # Return original image if no need to resize
    if width is None and height is None:
        return image

    # We are resizing height if width is none
    if width is None:
        # Calculate the ratio of the height and construct the dimensions
        r = height / float(h)
        dim = (int(w * r), height)
    # We are resizing width if height is none
    else:
        # Calculate the ratio of the width and construct the dimensions
        r = width / float(w)
        dim = (width, int(h * r))

    # Return the resized image
    return cv2.resize(image, dim, interpolation=inter)

# Load template and convert to grayscale
template = cv2.imread('template.png')
template = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
(tH, tW) = template.shape[:2]
cv2.imshow("template", template)

# Load original image, convert to grayscale
original_image = cv2.imread('1.jpg')
gray = cv2.cvtColor(original_image, cv2.COLOR_BGR2GRAY)
found = None

# Dynamically rescale image for better template matching
for scale in np.linspace(0.1, 3.0, 20)[::-1]:

    # Resize image to scale and keep track of ratio
    resized = maintain_aspect_ratio_resize(gray, width=int(gray.shape[1] * scale))
    r = gray.shape[1] / float(resized.shape[1])

    # Stop if template image size is larger than resized image
    if resized.shape[0] < tH or resized.shape[1] < tW:
        break

    # Threshold resized image and apply template matching
    thresh = cv2.threshold(resized, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
    detected = cv2.matchTemplate(thresh, template, cv2.TM_CCOEFF)
    (_, max_val, _, max_loc) = cv2.minMaxLoc(detected)

    # Uncomment this section for visualization
    '''
    clone = np.dstack([thresh, thresh, thresh])
    cv2.rectangle(clone, (max_loc[0], max_loc[1]), (max_loc[0] + tW, max_loc[1] + tH), (0,255,0), 2)
    cv2.imshow('visualize', clone)
    cv2.waitKey(50)
    '''

    # Keep track of correlation value
    # Higher correlation means better match
    if found is None or max_val > found[0]:
        found = (max_val, max_loc, r)

# Compute coordinates of bounding box
(_, max_loc, r) = found
(start_x, start_y) = (int(max_loc[0] * r), int(max_loc[1] * r))
(end_x, end_y) = (int((max_loc[0] + tW) * r), int((max_loc[1] + tH) * r))

# Draw bounding box on ROI
cv2.rectangle(original_image, (start_x, start_y), (end_x, end_y), (0,255,0), 5)
cv2.imshow('detected', original_image)
cv2.imwrite('detected.png', original_image)
cv2.waitKey(0)
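From there, one way to get back to your original goal would be to crop the detected box, scale it to 49x68, and score it against each of your reference card images. A rough, untested sketch (the reference file names are just placeholders):

    # Crop the detected card, scale it to 49x68, and pick the closest
    # reference image by normalized template-matching score.
    card = gray[start_y:end_y, start_x:end_x]
    card = cv2.resize(card, (49, 68), interpolation=cv2.INTER_AREA)

    best_name, best_score = None, -np.inf
    for name in ['ace_of_spades.png', 'king_of_hearts.png']:  # placeholder reference set
        reference = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
        reference = cv2.resize(reference, (49, 68), interpolation=cv2.INTER_AREA)
        # Same-size images give a 1x1 result matrix; [0][0] extracts the score
        score = cv2.matchTemplate(card, reference, cv2.TM_CCOEFF_NORMED)[0][0]
        if score > best_score:
            best_name, best_score = name, score

    print(best_name, best_score)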
【Comments】:
Wow, this is much more involved than I expected. Thank you!