Detecting/extracting the largest difference between images in OpenCV Python
I am working on a shooting-simulator project and have to detect bullet holes in images. I tried taking the difference of two images so that I could detect the new hole that appears between them, but it is not working as expected. Between the two images, the existing bullet holes shift slightly because of small camera movement between frames.
My first image is here:
before.png
The second one is here:
after.png
I tried this code to check for the difference:
import cv2
import numpy as np
before = cv2.imread("before.png")
after = cv2.imread("after.png")
result = after - before
cv2.imwrite("result.png", result)
The result I got is the image below:
result.png
But this is not what I expected. I only want to detect the new hole, yet the result also shows differences at some pixels of the previous holes. The result I was expecting is:
expected.png
Please help me figure out how to detect only the significant difference.
Thanks in advance.
Any new ideas will be appreciated.
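(A quick aside on the snippet above: with uint8 arrays, after - before wraps around wherever the difference would be negative, so darkened pixels show up as bright noise. cv2.absdiff avoids the wrap-around; a minimal sketch of that first pass, assuming before.png and after.png are roughly aligned and the threshold value is just a starting guess:)
import cv2
before = cv2.imread("before.png")
after = cv2.imread("after.png")
# Absolute per-pixel difference avoids the uint8 wrap-around of plain subtraction
diff = cv2.absdiff(after, before)
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
# Keep only clearly changed pixels; tune the threshold for your lighting and camera
_, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
cv2.imwrite("absdiff_mask.png", mask)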
To find the differences between two images, you can use the Structural Similarity Index (SSIM), introduced in Image Quality Assessment: From Error Visibility to Structural Similarity. This method is implemented in the scikit-image library for image processing. You can install scikit-image with pip install scikit-image.
The compare_ssim() function from scikit-image returns a score and a difference image, diff. The score represents the structural similarity index between the two input images and falls in the range [-1, 1], with values closer to 1 indicating higher similarity. But since you are only interested in where the two images differ, the diff image is what you are looking for: it contains the actual per-pixel differences between the two images.
Next, we use cv2.findContours() to find all contours and filter for the largest one. The largest contour should represent the newly detected difference, because the slight shifts caused by camera movement should be smaller than the added bullet hole.
Here is the actual difference between the two images. Notice how all of the differences were captured, but since the new bullet is most likely the largest contour, we can filter out all the other slight movements between camera frames.
Note: this method works well if we assume that the new bullet will have the largest contour in the diff image. If the newest hole were smaller, you might have to mask out the existing regions instead, so that any new contour in the new image is the new hole (assuming the image is a uniform black background with white holes); see the sketch after the code below.
from skimage.measure import compare_ssim  # scikit-image < 0.18; in newer versions use: from skimage.metrics import structural_similarity
import cv2
before = cv2.imread('before.png')
after = cv2.imread('after.png')
# Convert images to grayscale
before_gray = cv2.cvtColor(before, cv2.COLOR_BGR2GRAY)
after_gray = cv2.cvtColor(after, cv2.COLOR_BGR2GRAY)
# Compute SSIM between two images
(score, diff) = compare_ssim(before_gray, after_gray, full=True)
# The diff image contains the actual image differences between the two images
# and is represented as a floating point data type in the range [0,1]
# so we must convert the array to 8-bit unsigned integers in the range
# [0,255] before we can use it with OpenCV
diff = (diff * 255).astype("uint8")
# Threshold the difference image, followed by finding contours to
# obtain the regions of the two input images that differ
thresh = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
contours = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
contour_sizes = [(cv2.contourArea(contour), contour) for contour in contours]
# The largest contour should be the new detected difference
if len(contour_sizes) > 0:
    largest_contour = max(contour_sizes, key=lambda x: x[0])[1]
    x, y, w, h = cv2.boundingRect(largest_contour)
    cv2.rectangle(before, (x, y), (x + w, y + h), (36, 255, 12), 2)
    cv2.rectangle(after, (x, y), (x + w, y + h), (36, 255, 12), 2)
cv2.imshow('before', before)
cv2.imshow('after', after)
cv2.imshow('diff',diff)
cv2.waitKey(0)
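A rough sketch of the masking idea mentioned in the note above (my own illustration, not part of the original answer; it assumes the frames are already aligned and the holes appear as bright blobs on a dark background): threshold both frames, dilate the before mask a little so small camera shifts are absorbed, and treat anything white in after but not in the dilated before mask as a new hole.
import cv2
import numpy as np
before_gray = cv2.cvtColor(cv2.imread('before.png'), cv2.COLOR_BGR2GRAY)
after_gray = cv2.cvtColor(cv2.imread('after.png'), cv2.COLOR_BGR2GRAY)
# Assumes holes are bright blobs on a dark background; Otsu picks the split automatically
_, before_mask = cv2.threshold(before_gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
_, after_mask = cv2.threshold(after_gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
# Grow the old holes slightly so camera jitter does not leave thin crescents behind
kernel = np.ones((9, 9), np.uint8)
before_mask = cv2.dilate(before_mask, kernel, iterations=1)
# Anything white in 'after' that is not covered by the dilated 'before' mask is new
new_only = cv2.bitwise_and(after_mask, cv2.bitwise_not(before_mask))
contours = cv2.findContours(new_only, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
for c in contours:
    if cv2.contourArea(c) > 20:  # ignore speckle noise; tune for your images
        x, y, w, h = cv2.boundingRect(c)
        print('new hole at', x, y, w, h)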
Here is another example with different input images. SSIM is quite good at detecting differences between images.
Here is my approach: after we subtract one image from the other, some noise remains, so I just tried to remove that noise. I divide the image into tiles whose size is a percentage of the image size, and for each small section I compare before and after, so that only significant chunks of white pixels remain. The algorithm lacks precision when there is occlusion, that is, whenever the new shot overlaps an existing one.
import cv2
import numpy as np
# Fraction of the width/height used for each tile; keep it small (e.g. 0.01 to 0.1)
percent = 0.01
before = cv2.imread("before.png")
after = cv2.imread("after.png")
result = after - before  # Raw difference between the frames; the new shot plus some noise remains
h, w, _ = result.shape
hPercent = percent * h
wPercent = percent * w
def isBlack(crop):  # Tells whether the crop contains only black pixels
    mask = np.zeros(crop.shape, dtype=int)
    return not (np.bitwise_or(crop, mask)).any()
for wFrom in range(0, w, int(wPercent)):  # Walk over the image tile by tile to remove the noise
    for hFrom in range(0, h, int(hPercent)):
        wTo = int(wFrom + wPercent)
        hTo = int(hFrom + hPercent)
        crop = result[hFrom:hTo, wFrom:wTo]  # Crop the tile (rows first, then columns)
        if isBlack(crop):  # If it is black, there is no shot in it
            continue  # No need to go on with this tile
        beforeCrop = before[hFrom:hTo, wFrom:wTo]  # The same tile in the before image
        if not isBlack(beforeCrop):  # If the before tile is not black, there was a shot there already
            result[hFrom:hTo, wFrom:wTo] = [0, 0, 0]  # So we erase it from the result
cv2.imshow("result",result )
cv2.imshow("before", before)
cv2.imshow("after", after)
cv2.waitKey(0)
As you can see, it works for the use case you provided. The next step would be to keep a list of the shot positions so that you can …
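A minimal sketch of that next step (my own illustration with assumed names, not from the original answer): keep the centroids of the shots detected so far and only report a detection whose centroid is not close to any known shot.
import numpy as np
known_shots = []  # centroids (x, y) of shots that have already been reported
def register_new_shots(centroids, min_dist=15):
    # Return only the centroids that are far enough from every known shot
    new_shots = []
    for (cx, cy) in centroids:
        if all(np.hypot(cx - kx, cy - ky) > min_dist for (kx, ky) in known_shots):
            known_shots.append((cx, cy))
            new_shots.append((cx, cy))
    return new_shots
# Example: centroids would come from cv2.moments or cv2.boundingRect of the detected contours
print(register_new_shots([(120, 80)]))  # first shot -> reported
print(register_new_shots([(122, 79)]))  # same hole, slightly shifted -> ignored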
My code:
from skimage.measure import compare_ssim
import argparse
import imutils
import cv2
import numpy as np
# load the two input images
imageA = cv2.imread('./Input_1.png')
cv2.imwrite("./org.jpg", imageA)
# imageA = cv2.medianBlur(imageA,29)
imageB = cv2.imread('./Input_2.png')
cv2.imwrite("./test.jpg", imageB)
# imageB = cv2.medianBlur(imageB,29)
# convert the images to grayscale
grayA = cv2.cvtColor(imageA, cv2.COLOR_BGR2GRAY)
grayB = cv2.cvtColor(imageB, cv2.COLOR_BGR2GRAY)
##########################################################################################################
difference = cv2.subtract(grayA,grayB)
result = not np.any(difference)  # True if the two grayscale images are identical
if result:
    print("Pictures are the same")
else:
    cv2.imwrite("./open_cv_subtract.jpg", difference)
    print("Pictures are different, the difference is stored.")
##########################################################################################################
diff = cv2.absdiff(grayA, grayB)
cv2.imwrite("./tabsdiff.png", diff)
##########################################################################################################
grayB=cv2.resize(grayB,(grayA.shape[1],grayA.shape[0]))
(score, diff) = compare_ssim(grayA, grayB, full=True)
diff = (diff * 255).astype("uint8")
print("SSIM: {}".format(score))
#########################################################################################################
thresh = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
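A possible continuation of the listing (my sketch, not the original author's code), mirroring the contour and bounding-box step from the first answer and reusing thresh, imageA, and imageB from above:
contours = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = imutils.grab_contours(contours)  # imutils is already imported above
for c in contours:
    if cv2.contourArea(c) < 40:  # skip tiny noise blobs; the threshold is a guess
        continue
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(imageA, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.rectangle(imageB, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite("./ssim_boxes.jpg", imageB)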