Differences between two images with slightly different point of view and lighting conditions with OpenCV

Posted: 2021-08-16 13:07:32

【Question】:

Using the approach presented in CV - Extract differences between two images, we can identify the differences between two aligned images.
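For reference, the core of that aligned-images approach boils down to something like the following (a minimal sketch of the idea, not the linked answer verbatim; the file names and the threshold value of 30 are placeholders of mine):

import cv2

# two already-aligned, same-size images (placeholder file names)
before = cv2.imread("before.png")
after = cv2.imread("after.png")

# per-pixel absolute difference, collapsed to grayscale,
# then thresholded into a binary "change" mask
diff = cv2.absdiff(before, after)
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)

cv2.imshow("difference mask", mask)
cv2.waitKey(0)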

How can this be done with OpenCV when the camera angle (point of view) and the lighting conditions are slightly different?

The code from How to match and align two images using SURF features (Python OpenCV)? helps to rotate/align the two images, but since the result of the perspective transform ("homography") is not perfect, the "difference" algorithm does not work well here.

As an example: how can we get only the green sticker (i.e. the difference) from these two photos?

【Comments】:

I think your approach is the way to go. Why isn't the result of the perspective transform perfect? Can you provide an image?
It isn't perfect because the exposure differs (the second image is darker). Also, the camera moved (its optical center shifted). A homography cannot map this perfectly in mathematical terms. One picture simply does not contain the information that would be needed to correct the other.
@ChristophRackwitz Yes. Which algorithm would give us the best result?
Perhaps least-squares matching/correlation could solve the first step of aligning the images. I haven't found much literature on this online. An example can be found here or in Luhmann et al.
Hi. As you know, I provided an answer, but I actually have another approach that works. So if the existing answer is not sufficient, you can let me know.
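(Regarding the least-squares matching/correlation idea from the comments: one ready-made OpenCV option in that spirit is ECC image alignment, which maximizes an intensity correlation coefficient and therefore tolerates the exposure difference between the shots. A minimal sketch, my suggestion rather than anything from the thread; the file names follow the first answer below:)

import cv2
import numpy as np

# intensity-based (correlation) alignment with ECC
img1 = cv2.imread("bar1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("bar2.jpg", cv2.IMREAD_GRAYSCALE)

warp = np.eye(3, 3, dtype=np.float32)  # initial guess: identity homography
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
cc, warp = cv2.findTransformECC(img1, img2, warp, cv2.MOTION_HOMOGRAPHY,
                                criteria, None, 5)

# warp img2 into img1's frame (WARP_INVERSE_MAP, as in the OpenCV docs example)
h, w = img1.shape
aligned = cv2.warpPerspective(img2, warp, (w, h),
                              flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)

【Answer 1】: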

Concept

Using part of the code from the link you provided, we can get enough alignment between the images to subtract them, convert the result to a binary threshold image, and detect the largest contour in that binary image to draw onto a blank canvas, which will become the mask for the warped image.

Then we can warp the mask back so it corresponds to the second image in its original state, using the inverse of the matrix that was used to warp the second image into alignment with the first.

Code

import cv2
import numpy as np

def get_matrix(img1, img2, pts):
    sift = cv2.xfeatures2d.SIFT_create()
    matcher = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5})
    kpts1, descs1 = sift.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
    kpts2, descs2 = sift.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)
    matches = sorted(matcher.knnMatch(descs1, descs2, 2), key=lambda x: x[0].distance)
    good = [m1 for m1, m2 in matches if m1.distance < 0.7 * m2.distance]
    src_pts = np.float32([[kpts1[m.queryIdx].pt] for m in good])
    dst_pts = np.float32([[kpts2[m.trainIdx].pt] for m in good])
    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5)
    dst = cv2.perspectiveTransform(pts, M).astype('float32')
    return cv2.getPerspectiveTransform(dst, pts)

def get_mask(img):
    mask = np.zeros(img.shape[:2], 'uint8')
    img_canny = cv2.Canny(img, 0, 0)
    img_dilate = cv2.dilate(img_canny, None, iterations=2)
    img_erode = cv2.erode(img_dilate, None, iterations=3)
    contours, _ = cv2.findContours(img_erode, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    cnt = cv2.convexHull(max(contours, key=cv2.contourArea))
    cv2.drawContours(mask, [cnt], -1, 255, -1)
    return mask

img1 = cv2.imread("bar1.jpg")
img2 = cv2.imread("bar2.jpg")

h, w, _ = img1.shape

pts = np.float32([[[0, 0]], [[0, h - 1]], [[w - 1, h - 1]], [[w - 1, 0]]])
perspectiveM = get_matrix(img1, img2, pts)

warped = cv2.warpPerspective(img2, perspectiveM, (w, h))

_, thresh = cv2.threshold(cv2.subtract(warped, img1), 40, 255, cv2.THRESH_BINARY)
mask = get_mask(thresh)

perspectiveM = cv2.warpPerspective(mask, np.linalg.inv(perspectiveM), (w, h))
res = cv2.bitwise_and(img2, img2, mask=perspectiveM)

cv2.imshow("Images", np.hstack((img1, img2, res)))
cv2.waitKey(0)

Output

Explanation

1. Import the necessary libraries:
import cv2
import numpy as np
2. Define a function, get_matrix, that takes in two image arrays, img1 and img2, and a set of points, pts, and returns the matrix that warps img2 into alignment with img1. Part of the code is from the link you provided:
def get_matrix(img1, img2, pts):
    sift = cv2.xfeatures2d.SIFT_create()
    matcher = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5})
    kpts1, descs1 = sift.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
    kpts2, descs2 = sift.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)
    matches = sorted(matcher.knnMatch(descs1, descs2, 2), key=lambda x: x[0].distance)
    good = [m1 for m1, m2 in matches if m1.distance < 0.7 * m2.distance]
    src_pts = np.float32([[kpts1[m.queryIdx].pt] for m in good])
    dst_pts = np.float32([[kpts2[m.trainIdx].pt] for m in good])
    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5)
    dst = cv2.perspectiveTransform(pts, M).astype('float32')
    return cv2.getPerspectiveTransform(dst, pts)
3. Define a function, get_mask, that takes in an image array, img, where img is the thresholded subtraction of the two images aligned via the get_matrix function defined before, and returns a mask. The mask is a blank canvas with the largest contour detected in img drawn onto it:
def get_mask(img):
    mask = np.zeros(img.shape[:2], 'uint8')
    img_canny = cv2.Canny(img, 0, 0)
    img_dilate = cv2.dilate(img_canny, None, iterations=2)
    img_erode = cv2.erode(img_dilate, None, iterations=3)
    contours, _ = cv2.findContours(img_erode, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    cnt = cv2.convexHull(max(contours, key=cv2.contourArea))
    cv2.drawContours(mask, [cnt], -1, 255, -1)
    return mask
4. Read in the two images and get their dimensions (in this case we only need one image's dimensions, since both images are the same size):
img1 = cv2.imread("bar1.jpg")
img2 = cv2.imread("bar2.jpg")

h, w, _ = img1.shape
5. Define the set of points to pass into the get_matrix function, use the function to obtain the matrix, and warp img2 with it:
pts = np.float32([[[0, 0]], [[0, h - 1]], [[w - 1, h - 1]], [[w - 1, 0]]])
perspectiveM = get_matrix(img1, img2, pts)

warped = cv2.warpPerspective(img2, perspectiveM, (w, h))
6. Subtract the two aligned images and use cv2.threshold() with the cv2.THRESH_BINARY mode to obtain the differences between the two images in black and white. From the binary image, get the mask with the get_mask function defined earlier:
_, thresh = cv2.threshold(cv2.subtract(warped, img1), 40, 255, cv2.THRESH_BINARY)
mask = get_mask(thresh)
7. Since we want the green sticker on the original second image, and what we currently have is a mask that extracts the green sticker from the warped second image, we need to warp the mask back using the inverse of the matrix that was used to warp the image in the first place:
perspectiveM = cv2.warpPerspective(mask, np.linalg.inv(perspectiveM), (w, h))
res = cv2.bitwise_and(img2, img2, mask=perspectiveM)
8. Finally, show the images. I used the np.hstack() method to display all three images in one window:
cv2.imshow("Images", np.hstack((img1, img2, res)))

cv2.waitKey(0)

【Comments】:

【Answer 2】:

For the alignment of the two images you can use an affine transformation. For that, you need three point pairs from both images. To obtain such points, I will use the object corners. Here are the steps I followed to get the corners.

1. Background subtraction (object extraction) with a Gaussian mixture model
2. Denoising the output of the first step
3. Getting the corners using contours

I will use the opencv library for these functions (plus scikit-learn for the Gaussian mixture model).

import cv2
from sklearn.mixture import GaussianMixture as GMM
import matplotlib.pyplot as plt
import numpy as np
import math


def extract_object(img):

    img2 = img.reshape((-1,3))

    n_components = 2

    #covariance choices: full, tied, diag, spherical
    gmm = GMM(n_components=n_components, covariance_type='tied')
    gmm.fit(img2)
    gmm_prediction = gmm.predict(img2)

    #Put numbers back to original shape so we can reconstruct segmented image
    original_shape = img.shape
    segmented_img = gmm_prediction.reshape(original_shape[0], original_shape[1])

    # set background always to 0
    if segmented_img[0,0] != 0:
        segmented_img = 1 - segmented_img  # flip the 0/1 GMM labels (cv2.bitwise_not does not accept the int labels sklearn returns)
    return segmented_img


def remove_noise(img):
    img_no_noise = np.zeros_like(img)

    labels, stats = cv2.connectedComponentsWithStats(img.astype(np.uint8), connectivity=4)[1:3]

    largest_area_label = np.argmax(stats[1:, cv2.CC_STAT_AREA]) +1

    img_no_noise[labels==largest_area_label] = 1
    return img_no_noise

def get_box_points(img):
    contours, _ = cv2.findContours(img.astype(np.uint8), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    cnt = contours[0]
    rect = cv2.minAreaRect(cnt)
    box_points = cv2.boxPoints(rect)
    box_points = box_points.astype(np.int64)  # np.int0 was removed in NumPy 2.0

    return box_points

img = cv2.imread('choco.jpg',1)
img_paper = cv2.imread('choco_with_paper.jpg',1)

# remove background
img_bg_removed = extract_object(img) 
img_paper_bg_removed = extract_object(img_paper)

img_no_noise = remove_noise(img_bg_removed)
img_paper_no_noise = remove_noise(img_paper_bg_removed)

img_box_points = get_box_points(img_no_noise)
img_paper_box_points = get_box_points(img_paper_no_noise)

The corners of the image are slightly off, but they are good enough for this task. I am sure there is a better way to detect the corners, but this was the fastest solution for me :)
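(One option for slightly tighter corners is to refine them to sub-pixel accuracy with cv2.cornerSubPix. This is a hedged sketch of my own, not part of the answer above; it reuses img and img_box_points from the code above and assumes the box corners land near real intensity corners:)

import cv2
import numpy as np

# refine the minAreaRect corners on the grayscale image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
corners = img_box_points.astype(np.float32).reshape(-1, 1, 2)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
refined = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)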

Next, I apply an affine transformation to register the original image to the one with the piece of paper.

# Affine transformation matrix
M = cv2.getAffineTransform(img_box_points[0:3].astype(np.float32), img_paper_box_points[0:3].astype(np.float32))

# apply M to the original binary image
img_registered = cv2.warpAffine(img_no_noise.astype(np.float32), M, dsize=(img_paper_no_noise.shape[1],img_paper_no_noise.shape[0]))

# get the difference
dif = img_registered-img_paper_no_noise

# remove minus values
dif[dif<1]=0

This is the difference between the paper image and the registered original image.

All I have to do now is take the largest component among these regions (i.e. the piece of paper) and apply a convex hull to cover most of it.

dif = remove_noise(dif) # get the largest component
contours, _ = cv2.findContours(dif.astype(np.uint8), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
drawing = dif.copy().astype(np.uint8)

hull = [cv2.convexHull(contours[0])]

cv2.drawContours(drawing, hull, 0, 255,-1)

img_paper_extracted = cv2.bitwise_and(img_paper,img_paper,mask=drawing)

This is my final result.

【Comments】:

【Answer 3】:

The blues and greens in these images are very close in terms of color ([80, 95] vs [97, 101] on the hue channel). Unfortunately, light blue and green sit right next to each other as colors. I tried both the HSV and LAB color spaces to see whether I could get better separation in one over the other.
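(A quick way to check that kind of number yourself, a throwaway snippet of mine; the two ROI rectangles are placeholders for a candy patch and a sticker patch:)

import cv2

img = cv2.imread("right.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)

candy = (slice(100, 130), slice(100, 130))    # assumed blue-candy region
sticker = (slice(300, 330), slice(400, 430))  # assumed green-sticker region

# compare mean channel values of the two patches in each space
for name, chan in (("HSV hue", hsv[..., 0]), ("LAB a", lab[..., 1]), ("LAB b", lab[..., 2])):
    print(name, "candy:", chan[candy].mean(), "sticker:", chan[sticker].mean())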

I used the feature matching you mentioned to align the images. We can see that the difference in perspective causes small bits of the candy to stick out (the little blue bits).

I made a mask based on the pixel-wise color difference between the two.

There are a lot of bits sticking out because the images do not line up perfectly. To help deal with this, we can also check a square region around each pixel to see if any of its nearby neighbors match its color. If one does, we remove that pixel from the mask.

We can use this to paint on the original image and mark what is different.

Here is the result from the LAB version of the code.

I will include both versions of the code here. They are interactive with 'WASD' to change the two parameters (color margin and fuzz margin). The color_margin represents how different two colors must be before they are no longer considered the same. The fuzz_margin is how far around a pixel to look for a matching color.

lab_version.py

import cv2
import numpy as np

# returns the difference mask between two single-channel images
def diffChannel(one, two, margin):
    # get the largest difference per pixel
    diff = np.maximum(cv2.subtract(one, two), cv2.subtract(two, one));

    # mask on margin
    mask = cv2.inRange(diff, margin, 255);
    return mask;

# returns difference between colors of two image in the LAB colorspace
# (ignores the L channel) <- the 'L' channel holds how bright the image is
def labDiff(one, two, margin):
    # split
    l1,a1,b1 = cv2.split(one);
    l2,a2,b2 = cv2.split(two);

    # do a diff on the 'a' and 'b' channels
    a_mask = diffChannel(a1, a2, margin);
    b_mask = diffChannel(b1, b2, margin);

    # combine masks
    mask = cv2.bitwise_or(a_mask, b_mask);
    return mask;

# add/remove margin to all sides of an image
def addMargin(img, margin):
    return cv2.copyMakeBorder(img, margin, margin, margin, margin, cv2.BORDER_CONSTANT, 0);
def removeMargin(img, margin):
    return img[margin:-margin, margin:-margin];

# fuzzy match the masked pixels to clean up small differences in the image
def fuzzyMatch(src, dst, mask, margin, radius):
    # add margins to prevent out-of-bounds error
    src = addMargin(src, radius);
    dst = addMargin(dst, radius);
    mask = addMargin(mask, radius);

    # do a search on a square window
    size = radius * 2 + 1;

    # get mask points
    temp = np.where(mask == 255);
    points = [];
    for a in range(len(temp[0])):
        y = temp[0][a];
        x = temp[1][a];
        points.append([x,y]);

    # do a fuzzy match on each position
    for point in points:
        # unpack
        x,y = point;

        # calculate slice positions
        left = x - radius;
        right = x + radius + 1;
        top = y - radius;
        bottom = y + radius + 1;

        # make color window
        color_window = np.zeros((size, size, 3), np.uint8);
        color_window[:] = src[y,x];

        # do a lab diff with dest
        dst_slice = dst[top:bottom, left:right];
        diff = labDiff(color_window, dst_slice, margin);

        # if any part of the diff is false, erase from mask
        if np.any(diff != 255):
            mask[y,x] = 0;

    # remove margins
    src = removeMargin(src, radius);
    dst = removeMargin(dst, radius);
    mask = removeMargin(mask, radius);
    return mask;

# params
color_margin = 15;
fuzz_margin = 5;

# load images
left = cv2.imread("left.jpg");
right = cv2.imread("right.jpg");

# align
# get keypoints
sift = cv2.SIFT_create();
kp1, des1 = sift.detectAndCompute(left, None);
kp2, des2 = sift.detectAndCompute(right, None);

# match
bfm = cv2.BFMatcher();
matches = bfm.knnMatch(des1, des2, k=2); # only get two possible matches

# ratio test (reject matches that are close together)
# these features are typically repetitive, and close together (like teeth on a comb)
# and are very likely to match onto the wrong one causing misalignment
cleaned = [];
for a,b in matches:
    if a.distance < 0.7 * b.distance:
        cleaned.append(a);

# calculate homography
src = np.float32([kp1[a.queryIdx].pt for a in cleaned]).reshape(-1,1,2);
dst = np.float32([kp2[a.trainIdx].pt for a in cleaned]).reshape(-1,1,2);
hmat, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0);

# warp left
h,w = left.shape[:2];
left = cv2.warpPerspective(left, hmat, (w,h));

# mask left
mask = np.zeros((h,w), np.uint8);
mask[:] = 255;
warp_mask = cv2.warpPerspective(mask, hmat, (w,h));

# difference check
# change to a less light-sensitive color space
left_lab = cv2.cvtColor(left, cv2.COLOR_BGR2LAB);
right_lab = cv2.cvtColor(right, cv2.COLOR_BGR2LAB);

# tweak params
done = False;
while not done:
    diff_mask = labDiff(left_lab, right_lab, color_margin);

    # combine with warp mask (get rid of the blank space after the warp)
    diff_mask = cv2.bitwise_and(diff_mask, warp_mask);

    # do fuzzy matching to clean up mask pixels
    before = np.copy(diff_mask);
    diff_mask = fuzzyMatch(left_lab, right_lab, diff_mask, color_margin, fuzz_margin);

    # open (erode + dilate) to clean up small dots
    kernel = np.ones((5,5), np.uint8);
    diff_mask = cv2.morphologyEx(diff_mask, cv2.MORPH_OPEN, kernel);

    # pull just the diff
    just_diff = np.zeros_like(right);
    just_diff[diff_mask == 255] = right[diff_mask == 255];
    copy = np.copy(right);
    copy[diff_mask == 255] = (0,255,0);

    # show
    cv2.imshow("Right", copy);
    cv2.imshow("Before Fuzz", before);
    cv2.imshow("After Fuzz", diff_mask);
    cv2.imshow("Just the Diff", just_diff);
    key = cv2.waitKey(0);

    cv2.imwrite("mark2.png", copy);

    # check key
    done = key == ord('q');
    change = False;
    if key == ord('d'):
        color_margin += 1;
        change = True;
    if key == ord('a'):
        color_margin -= 1;
        change = True;
    if key == ord('w'):
        fuzz_margin += 1;
        change = True;
    if key == ord('s'):
        fuzz_margin -= 1;
        change = True;

    # print vals
    if change:
        print("Color: " + str(color_margin) + " || Fuzz: " + str(fuzz_margin));

hsv_version.py

import cv2
import numpy as np

# returns the difference mask between two single-channel images
def diffChannel(one, two, margin):
    # get the largest difference per pixel
    diff = np.maximum(cv2.subtract(one, two), cv2.subtract(two, one));

    # mask on margin
    mask = cv2.inRange(diff, margin, 255);
    return mask;

# returns difference between colors of two images in the LAB colorspace
# (ignores the L channel) <- the 'L' channel holds how bright the image is
def labDiff(one, two, margin):
    # split
    l1,a1,b1 = cv2.split(one);
    l2,a2,b2 = cv2.split(two);

    # do a diff on the 'a' and 'b' channels
    a_mask = diffChannel(a1, a2, margin);
    b_mask = diffChannel(b1, b2, margin);

    # combine masks
    mask = cv2.bitwise_or(a_mask, b_mask);
    return mask;

# returns the difference between colors of two images in the HSV colorspace
# the 'H' channel is hue (color)
def hsvDiff(one, two, margin):
    # split
    h1,s1,v1 = cv2.split(one);
    h2,s2,v2 = cv2.split(two);

    # do a diff on the 'h' channel
    h_mask = diffChannel(h1, h2, margin);
    return h_mask;

# add/remove margin to all sides of an image
def addMargin(img, margin):
    return cv2.copyMakeBorder(img, margin, margin, margin, margin, cv2.BORDER_CONSTANT, 0);
def removeMargin(img, margin):
    return img[margin:-margin, margin:-margin];

# fuzzy match the masked pixels to clean up small differences in the image
def fuzzyMatch(src, dst, mask, margin, radius):
    # add margins to prevent out-of-bounds error
    src = addMargin(src, radius);
    dst = addMargin(dst, radius);
    mask = addMargin(mask, radius);

    # do a search on a square window
    size = radius * 2 + 1;

    # get mask points
    temp = np.where(mask == 255);
    points = [];
    for a in range(len(temp[0])):
        y = temp[0][a];
        x = temp[1][a];
        points.append([x,y]);
    print("Num Points in Mask: " + str(len(points)));

    # do a fuzzy match on each position
    for point in points:
        # unpack
        x,y = point;

        # calculate slice positions
        left = x - radius;
        right = x + radius + 1;
        top = y - radius;
        bottom = y + radius + 1;

        # make color window
        color_window = np.zeros((size, size, 3), np.uint8);
        color_window[:] = src[y,x];

        # do a lab diff with dest
        dst_slice = dst[top:bottom, left:right];
        diff = hsvDiff(color_window, dst_slice, margin);
        # diff = labDiff(color_window, dst_slice, margin);

        # if any part of the diff is false, erase from mask
        if np.any(diff != 255):
            mask[y,x] = 0;

    # remove margins
    src = removeMargin(src, radius);
    dst = removeMargin(dst, radius);
    mask = removeMargin(mask, radius);
    return mask;

# params
color_margin = 15;
fuzz_margin = 5;

# load images
left = cv2.imread("left.jpg");
right = cv2.imread("right.jpg");

# align
# get keypoints
sift = cv2.SIFT_create();
kp1, des1 = sift.detectAndCompute(left, None);
kp2, des2 = sift.detectAndCompute(right, None);

# match
bfm = cv2.BFMatcher();
matches = bfm.knnMatch(des1, des2, k=2); # only get two possible matches

# ratio test (reject matches that are close together)
# these features are typically repetitive, and close together (like teeth on a comb)
# and are very likely to match onto the wrong one causing misalignment
cleaned = [];
for a,b in matches:
    if a.distance < 0.7 * b.distance:
        cleaned.append(a);

# calculate homography
src = np.float32([kp1[a.queryIdx].pt for a in cleaned]).reshape(-1,1,2);
dst = np.float32([kp2[a.trainIdx].pt for a in cleaned]).reshape(-1,1,2);
hmat, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0);

# warp left
h,w = left.shape[:2];
left = cv2.warpPerspective(left, hmat, (w,h));

# mask left
mask = np.zeros((h,w), np.uint8);
mask[:] = 255;
warp_mask = cv2.warpPerspective(mask, hmat, (w,h));

# difference check
# change to a less light-sensitive color space
left_hsv = cv2.cvtColor(left, cv2.COLOR_BGR2HSV);
right_hsv = cv2.cvtColor(right, cv2.COLOR_BGR2HSV);

# loop
done = False;
color_margin = 5;
fuzz_margin = 5;
while not done:
    diff_mask = hsvDiff(left_hsv, right_hsv, color_margin);

    # combine with warp mask (get rid of the blank space after the warp)
    diff_mask = cv2.bitwise_and(diff_mask, warp_mask);

    # do fuzzy matching to clean up mask pixels
    before = np.copy(diff_mask);
    diff_mask = fuzzyMatch(left_hsv, right_hsv, diff_mask, color_margin, fuzz_margin);

    # open (erode + dilate) to clean up small dots
    kernel = np.ones((5,5), np.uint8);
    diff_mask = cv2.morphologyEx(diff_mask, cv2.MORPH_OPEN, kernel);

    # get channel
    h1,_,_ = cv2.split(left_hsv);
    h2,_,_ = cv2.split(right_hsv);

    # copy
    copy = np.copy(right);
    copy[diff_mask == 255] = (0,255,0);

    # show
    cv2.imshow("Left hue", h1);
    cv2.imshow("Right hue", h2);
    cv2.imshow("Mark", copy);
    cv2.imshow("Before", before);
    cv2.imshow("Diff", diff_mask);
    key = cv2.waitKey(0);

    cv2.imwrite("mark1.png", copy);

    # check key
    done = key == ord('q');
    change = False;
    if key == ord('d'):
        color_margin += 1;
        change = True;
    if key == ord('a'):
        color_margin -= 1;
        change = True;
    if key == ord('w'):
        fuzz_margin += 1;
        change = True;
    if key == ord('s'):
        fuzz_margin -= 1;
        change = True;

    # print vals
    if change:
        print("Color: " + str(color_margin) + " || Fuzz: " + str(fuzz_margin));

【Comments】:
