Uniform Circular LBP face recognition implementation

Posted: 2014-01-02 02:14:30

Problem description:

I am trying to implement a basic face recognition system using uniform circular LBP (8 sample points on a neighbourhood of radius 1). I take an image, resize it to 200 x 200 pixels, and split it into an 8x8 grid of sub-images. I then compute the histogram of each sub-image and obtain a list of histograms. To compare two images, I compute the chi-square distance between corresponding histograms and produce a score.
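
To make the comparison step concrete, here is roughly what I mean (a simplified sketch rather than my exact code; the names block_histograms and chi_square_score are only illustrative):

import numpy as np

def block_histograms(lbp_img, grid=8, bins=59):
    # split the LBP-labelled image into a grid x grid layout of blocks
    # and return one `bins`-bin histogram per block
    h, w = lbp_img.shape
    bh, bw = h // grid, w // grid
    hists = []
    for by in range(grid):
        for bx in range(grid):
            block = lbp_img[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            hist, _ = np.histogram(block, bins=bins, range=(0, bins))
            hists.append(hist.astype(np.float64))
    return hists

def chi_square_score(hists_a, hists_b, eps=1e-10):
    # sum of chi-square distances between corresponding block histograms;
    # a smaller score means a better match
    return sum(np.sum((ha - hb) ** 2 / (ha + hb + eps))
               for ha, hb in zip(hists_a, hists_b))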

Here is my uniform LBP implementation:

import numpy as np
import math
# maps each of the 256 raw LBP codes to one of the 58 uniform-pattern labels (0-57);
# all non-uniform patterns are collapsed into label 58
uniform = {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 58, 6: 5, 7: 6, 8: 7, 9: 58, 10: 58, 11: 58, 12: 8, 13: 58, 14: 9, 15: 10, 16: 11, 17: 58, 18: 58, 19: 58, 20: 58, 21: 58, 22: 58, 23: 58, 24: 12, 25: 58, 26: 58, 27: 58, 28: 13, 29: 58, 30: 14, 31: 15, 32: 16, 33: 58, 34: 58, 35: 58, 36: 58, 37: 58, 38: 58, 39: 58, 40: 58, 41: 58, 42: 58, 43: 58, 44: 58, 45: 58, 46: 58, 47: 58, 48: 17, 49: 58, 50: 58, 51: 58, 52: 58, 53: 58, 54: 58, 55: 58, 56: 18, 57: 58, 58: 58, 59: 58, 60: 19, 61: 58, 62: 20, 63: 21, 64: 22, 65: 58, 66: 58, 67: 58, 68: 58, 69: 58, 70: 58, 71: 58, 72: 58, 73: 58, 74: 58, 75: 58, 76: 58, 77: 58, 78: 58, 79: 58, 80: 58, 81: 58, 82: 58, 83: 58, 84: 58, 85: 58, 86: 58, 87: 58, 88: 58, 89: 58, 90: 58, 91: 58, 92: 58, 93: 58, 94: 58, 95: 58, 96: 23, 97: 58, 98: 58, 99: 58, 100: 58, 101: 58, 102: 58, 103: 58, 104: 58, 105: 58, 106: 58, 107: 58, 108: 58, 109: 58, 110: 58, 111: 58, 112: 24, 113: 58, 114: 58, 115: 58, 116: 58, 117: 58, 118: 58, 119: 58, 120: 25, 121: 58, 122: 58, 123: 58, 124: 26, 125: 58, 126: 27, 127: 28, 128: 29, 129: 30, 130: 58, 131: 31, 132: 58, 133: 58, 134: 58, 135: 32, 136: 58, 137: 58, 138: 58, 139: 58, 140: 58, 141: 58, 142: 58, 143: 33, 144: 58, 145: 58, 146: 58, 147: 58, 148: 58, 149: 58, 150: 58, 151: 58, 152: 58, 153: 58, 154: 58, 155: 58, 156: 58, 157: 58, 158: 58, 159: 34, 160: 58, 161: 58, 162: 58, 163: 58, 164: 58, 165: 58, 166: 58, 167: 58, 168: 58, 169: 58, 170: 58, 171: 58, 172: 58, 173: 58, 174: 58, 175: 58, 176: 58, 177: 58, 178: 58, 179: 58, 180: 58, 181: 58, 182: 58, 183: 58, 184: 58, 185: 58, 186: 58, 187: 58, 188: 58, 189: 58, 190: 58, 191: 35, 192: 36, 193: 37, 194: 58, 195: 38, 196: 58, 197: 58, 198: 58, 199: 39, 200: 58, 201: 58, 202: 58, 203: 58, 204: 58, 205: 58, 206: 58, 207: 40, 208: 58, 209: 58, 210: 58, 211: 58, 212: 58, 213: 58, 214: 58, 215: 58, 216: 58, 217: 58, 218: 58, 219: 58, 220: 58, 221: 58, 222: 58, 223: 41, 224: 42, 225: 43, 226: 58, 227: 44, 228: 58, 229: 58, 230: 58, 231: 45, 232: 58, 233: 58, 234: 58, 235: 58, 236: 58, 237: 58, 238: 58, 239: 46, 240: 47, 241: 48, 242: 58, 243: 49, 244: 58, 245: 58, 246: 58, 247: 50, 248: 51, 249: 52, 250: 58, 251: 53, 252: 54, 253: 55, 254: 56, 255: 57}


def bilinear_interpolation(i, j, y, x, img):
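    # (i, j): integer coordinates of the centre pixel; (y, x): fractional offset of the sample point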
    fy, fx = int(y), int(x)
    cy, cx = math.ceil(y), math.ceil(x)

    # calculate the fractional parts
    ty = y - fy
    tx = x - fx

    w1 = (1 - tx) * (1 - ty)
    w2 =      tx  * (1 - ty)
    w3 = (1 - tx) *      ty
    w4 =      tx  *      ty

    return w1 * img[i + fy, j + fx] + w2 * img[i + fy, j + cx] + \
           w3 * img[i + cy, j + fx] + w4 * img[i + cy, j + cx]

def thresholded(center, pixels):
    out = []
    for a in pixels:
        if a > center:
            out.append(1)
        else:
            out.append(0)
    return out


def uniform_circular(img, P, R):
    ysize, xsize = img.shape
    transformed_img = np.zeros((ysize - 2 * R,xsize - 2 * R), dtype=np.uint8)
    for y in range(R, len(img) - R):
        for x in range(R, len(img[0]) - R):
            center = img[y,x]
            pixels = []
            for point in range(0, P):
                r = R * math.cos(2 * math.pi * point / P)
                c = R * math.sin(2 * math.pi * point / P)
                pixels.append(bilinear_interpolation(y, x, r, c, img))

            values = thresholded(center, pixels)
            res = 0
            for a in range(0, P):
                    res += values[a] << a
            transformed_img.itemset((y - R,x - R), uniform[res])

    transformed_img = transformed_img[R:-R,R:-R]
    return transformed_img
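
For reference, this is roughly how I call it (a usage sketch; the file name is only an example and cv2 is assumed to be available):

import cv2

img = cv2.imread("face.pgm", cv2.IMREAD_GRAYSCALE)               # hypothetical example image
img = cv2.resize(img, (200, 200), interpolation=cv2.INTER_CUBIC)

lbp = uniform_circular(img, P=8, R=1)   # image of uniform-pattern labels (0..58)
print(lbp.shape)                        # border pixels are dropped, so slightly smaller than 200x200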

I ran an experiment on the AT&T face database, taking 2 gallery images and 8 probe images per subject. The ROC for the experiment:

In the ROC above, the x-axis is the false acceptance rate and the y-axis is the true acceptance rate. The accuracy seems poor for uniform LBP, so I am sure something is wrong with my implementation. It would be great if someone could help me with it. Thanks for reading.
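
The ROC itself is computed along these lines (again only a sketch; genuine_scores and impostor_scores stand for the chi-square scores of same-subject and different-subject comparisons):

import numpy as np

def roc_points(genuine_scores, impostor_scores, n_thresholds=200):
    # a lower chi-square distance means a better match, so a comparison
    # is "accepted" when its score falls below the threshold
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    lo = min(genuine.min(), impostor.min())
    hi = max(genuine.max(), impostor.max())
    thresholds = np.linspace(lo, hi, n_thresholds)
    far = [(impostor < t).mean() for t in thresholds]   # false acceptance rate
    tar = [(genuine < t).mean() for t in thresholds]    # true acceptance rate
    return far, tar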

Edit:

I think I made a mistake in the code above. I go clockwise around the circle, whereas the papers on LBP suggest going counter-clockwise when assigning the weights. The line c = R * math.sin(2 * math.pi * point / P) should therefore be c = -R * math.sin(2 * math.pi * point / P). The results after this edit were worse, which suggests something else is wrong with my code. My guess is that the way I pick the interpolation coordinates is messed up.
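
One thing I am double-checking in this context (just an observation about the rounding I rely on, which may or may not be the actual problem): with the counter-clockwise convention some offsets become negative, and Python's int() truncates towards zero instead of flooring:

import math

offset = -math.sin(2 * math.pi * 1 / 8)        # ~ -0.707 for the second sample point
print(int(offset), math.ceil(offset))          # 0 0   -> both ends of the bracket land on the same pixel
print(math.floor(offset), math.ceil(offset))   # -1 0  -> a proper floor/ceil pair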

Edit: Next, I tried to replicate @bytefish's code here and used the uniform hashmap to implement uniform circular LBP.

def uniform_circular(img, P, R):
    ysize, xsize = img.shape
    transformed_img = np.zeros((ysize - 2 * R,xsize - 2 * R), dtype=np.uint8)
    for point in range(0, P):
        x = R * math.cos(2 * math.pi * point / P)
        y = -R * math.sin(2 * math.pi * point / P)
        fy, fx = int(y), int(x)
        cy, cx = math.ceil(y), math.ceil(x)

        # calculate the fractional parts
        ty = y - fy
        tx = x - fx

        w1 = (1 - tx) * (1 - ty)
        w2 =      tx  * (1 - ty)
        w3 = (1 - tx) *      ty
        w4 =      tx  *      ty 
        for i in range(R, ysize - R):
            for j in range(R, xsize - R):
                t = w1 * img[i + fy, j + fx] + w2 * img[i + fy, j + cx] + \
                    w3 * img[i + cy, j + fx] + w4 * img[i + cy, j + cx]
                center = img[i,j]
                transformed_img[i - R,j - R] += (t > center) << point

    # map the raw 8-bit patterns to uniform-pattern labels
    for i in range(R, ysize - R):
        for j in range(R, xsize - R):
            transformed_img[i - R,j - R] = uniform[transformed_img[i - R,j - R]]

    return transformed_img

Here is the ROC for this version:

I also tried implementing the same code in C++. Here is the code:

#include <stdio.h>
#include <stdlib.h>
#include <cmath>
#include <limits>
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;

int* uniform_circular_LBP_histogram(Mat& src) {
  int i, j;
  int radius = 1;
  int neighbours = 8;
  Size size = src.size();
  int *hist_array = (int *)calloc(59, sizeof(int));
  // maps each raw 8-bit LBP code to one of the 58 uniform-pattern labels; non-uniform codes map to 58
  int uniform[] = {0,1,2,3,4,58,5,6,7,58,58,58,8,58,9,10,11,58,58,58,58,58,58,58,12,58,58,58,13,58,14,15,16,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,17,58,58,58,58,58,58,58,18,58,58,58,19,58,20,21,22,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,23,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,24,58,58,58,58,58,58,58,25,58,58,58,26,58,27,28,29,30,58,31,58,58,58,32,58,58,58,58,58,58,58,33,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,34,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,35,36,37,58,38,58,58,58,39,58,58,58,58,58,58,58,40,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,41,42,43,58,44,58,58,58,45,58,58,58,58,58,58,58,46,47,48,58,49,58,58,58,50,51,52,58,53,54,55,56,57};
  Mat dst = Mat::zeros(size.height - 2 * radius, size.width - 2 * radius, CV_8UC1);

  for (int n = 0; n < neighbours; n++) {
    // coordinates of the n-th sample point on the circle
    float x = static_cast<float>(radius) *  cos(2.0 * M_PI * n / static_cast<float>(neighbours));
    float y = static_cast<float>(radius) * -sin(2.0 * M_PI * n / static_cast<float>(neighbours));

    int fx = static_cast<int>(floor(x));
    int fy = static_cast<int>(floor(y));
    int cx = static_cast<int>(ceil(x));
    int cy = static_cast<int>(ceil(x));

    // fractional parts and bilinear interpolation weights
    float ty = y - fy;
    float tx = y - fx;

    float w1 = (1 - tx) * (1 - ty);
    float w2 =      tx  * (1 - ty);
    float w3 = (1 - tx) *      ty;
    float w4 = 1 - w1 - w2 - w3;

    for (i = 0; i < 59; i++) {
      hist_array[i] = 0;
    }

    for (i = radius; i < size.height - radius; i++) {
      for (j = radius; j < size.width - radius; j++) {
        // interpolated grey value at the sample point, compared against the centre pixel
        float t = w1 * src.at<uchar>(i + fy, j + fx) +
                  w2 * src.at<uchar>(i + fy, j + cx) +
                  w3 * src.at<uchar>(i + cy, j + fx) +
                  w4 * src.at<uchar>(i + cy, j + cx);
        dst.at<uchar>(i - radius, j - radius) += ((t > src.at<uchar>(i, j)) &&
                                                  (abs(t - src.at<uchar>(i, j)) > std::numeric_limits<float>::epsilon())) << n;
      }
    }
  }

  for (i = radius; i < size.height - radius; i++) {
    for (j = radius; j < size.width - radius; j++) {
      // relabel with the uniform mapping and accumulate the 59-bin histogram
      int val = uniform[dst.at<uchar>(i - radius, j - radius)];
      dst.at<uchar>(i - radius, j - radius) = val;
      hist_array[val] += 1;
    }
  }
  return hist_array;
}

int main(int argc, char** argv) {
  Mat src;

  int i, j;
  if (argc != 2) {
    printf("No image data \n");
    return -1;
  }
  src = imread(argv[1], 0);
  if (!src.data) {
    printf("No image data \n");
    return -1;
  }

  const int width = 200;
  const int height = 200;
  Size size = src.size();
  Size new_size = Size();
  new_size.height = 200;
  new_size.width  = 200;
  Mat resized_src;
  resize(src, resized_src, new_size, 0, 0, INTER_CUBIC);

  int count = 1;
  for (i = 0; i <= width - 8; i += 25) {
    for (j = 0; j <= height - 8; j += 25) {
      Mat new_mat = resized_src.rowRange(i, i + 25).colRange(j, j + 25);
      int *hist = uniform_circular_LBP_histogram(new_mat);
      int z;
      for (z = 0; z < 58; z++) {
        std::cout << hist[z] << ",";
      }
      std::cout << hist[z] << "\n";
      free(hist);
      count += 1;
    }
  }
  return 0;
}
The ROC was the same:

I also ran a rank-based experiment and obtained this CMC curve.

Some details on the CMC curve: the x-axis is the rank (1-10) and the y-axis is the accuracy (0-1). So I get 80%+ rank-1 accuracy.
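
For completeness, the CMC values are computed roughly like this (a sketch; dist is the probes x gallery matrix of chi-square scores, the label lists are only illustrative, and every probe subject is assumed to appear in the gallery):

import numpy as np

def cmc_curve(dist, probe_labels, gallery_labels, max_rank=10):
    # fraction of probes whose correct identity appears among the top-k gallery matches
    hits = np.zeros(max_rank)
    for p in range(dist.shape[0]):
        order = np.argsort(dist[p])                  # best (smallest) chi-square score first
        ranked = [gallery_labels[g] for g in order]
        rank = ranked.index(probe_labels[p]) + 1     # 1-based rank of the correct subject
        if rank <= max_rank:
            hits[rank - 1:] += 1                     # counts towards every rank >= this one
    return hits / dist.shape[0]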

Comments:

Splitting the image into an 8x8 grid of sub-images is not optimal. I came across a paper that discusses the optimal sub-image size when using LBP; I can look it up for you if you like. Also, keep in mind that you need to crop the image so that only the face region is included (I can also point you to a paper that discusses how to align and crop faces).

@GilLevi I would love to see that paper on the optimal sub-image size when using LBP. Also, the AT&T database I am using contains only faces, so I don't need to take care of cropping or anything. Thanks!

Ok, email me at gil.levi100 "at" gmail.com and I'll send you the paper.

Answer 1:

I don't know Python, but most likely your code is broken.

My suggestion is to follow these two links and try to port one of the C++ implementations to Python. The first link also contains some background on LBP.

http://www.bytefish.de/blog/local_binary_patterns/

https://github.com/berak/uniform-lbp

One more thing: you say you are resizing the images to 200x200. Why? As far as I remember, the AT&T images are smaller than that, so you are only making them bigger; I don't think that will help you, and it may hurt performance.

Comments:

In my third attempt I did try to replicate bytefish's code exactly, but that didn't affect my accuracy much. I will look at berak's code now. To answer the resizing part: I resize the image so that its size is a multiple of 8, which lets me easily split it into an 8x8 grid of sub-images.

I understand you did it for that reason, but if the original size is something like 73x73, scaling it all the way up to 200x200 probably isn't appropriate; 64x64 or 80x80 might be enough.

I tried writing C++ code similar to bytefish's, but it still didn't show much improvement.

Answer 2:

A few things I would suggest trying:

(1) Plot the ROC curve between the false acceptance rate and the false rejection rate.

The false acceptance rate is fine, but what you describe above should be the genuine rejection rate, not the genuine acceptance rate.

Check the parameters you use to plot the curve. I can't help much with the code; I don't know Python.

(2) If you still don't get better results, check whether LBP is actually effective for the problem you are trying to solve. LBP is mainly used for texture analysis.

