Face Verification and Recognition: From Model Training to Project Deployment

Posted by 知来者逆


Preface

1. Face verification is actually one form of face recognition. It performs a 1:1 check: the algorithm compares the current face against one other face and produces a score, and that score decides whether the two faces match. The most common scenarios are face unlock on phones and the identity check at ticket gates in high-speed rail stations.
2. Face recognition (identification) performs a 1:N comparison: the currently captured face is searched against the faces previously enrolled in a database to find the image that matches the current user, i.e. the system learns who you are. Typical scenarios are residential-compound access control and company face-scan clock-in (the sketch after this list makes the 1:1 vs 1:N distinction concrete).
3. A complete face verification or recognition system basically involves three steps: face detection (is there a face in the current frame), silent liveness detection (is the detected face a spoof), and face recognition (does the detected face match the target face).
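
To make the 1:1 vs 1:N distinction concrete, here is a minimal sketch (assuming face embeddings have already been extracted as float vectors by a recognition network; every name in it is illustrative rather than taken from the code later in this post):

#include <cmath>
#include <utility>
#include <vector>

// Cosine similarity between two embeddings of equal length.
static float cosine_similarity(const std::vector<float>& a, const std::vector<float>& b)
{
    float dot = 0.f, na = 0.f, nb = 0.f;
    for (size_t i = 0; i < a.size(); i++)
    {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return dot / (std::sqrt(na) * std::sqrt(nb) + 1e-6f);
}

// 1:1 verification: a single comparison against a threshold.
static bool verify(const std::vector<float>& probe, const std::vector<float>& reference, float threshold)
{
    return cosine_similarity(probe, reference) >= threshold;
}

// 1:N identification: search the whole enrolled gallery, keep the best match.
static std::pair<int, float> identify(const std::vector<float>& probe,
                                      const std::vector<std::vector<float> >& gallery)
{
    int best = -1;
    float best_score = -1.f;
    for (size_t i = 0; i < gallery.size(); i++)
    {
        float s = cosine_similarity(probe, gallery[i]);
        if (s > best_score)
        {
            best_score = s;
            best = (int)i;
        }
    }
    return std::make_pair(best, best_score); // the caller still thresholds best_score
}

Verification therefore costs one comparison, while identification is a linear search over the gallery followed by the same thresholding of the best score.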

I. Environment

1. Training environment: Windows 10, Anaconda 3.5, Python 3.7, PyTorch 1.6, an RTX 3080 GPU, CUDA 10.2, and cuDNN 7.1.
2. Deployment environment: on PC, Visual Studio 2019 and OpenCV 4.5, with ncnn (version 20220216) as the inference acceleration library.

II. Face Detection

1. Before doing face recognition, the first and most important step is detecting whether a face is present in the current image at all. This falls under face detection. There are many open-source face detection algorithms and models, and OpenCV itself ships with face detection, but face verification and recognition also require face alignment, which means detecting not just the face box but also the facial landmarks (alignment is shown in the usage sketch after the inference code below).
Detection without facial landmarks:

Detection with facial landmarks:

2. Because the target is mobile or edge devices, I use the yolov5-face algorithm here (GitHub: https://github.com/deepcam-cn/yolov5-face). It is trained on the WIDER FACE dataset, and the label files are provided, so if you want to optimize for your own scenario you can add your own data in the same format. Its detection results and some parameters:


3. After training, the model is saved as a .pt file. First convert it to an onnx model; an onnx model can already be run in C++ with onnxruntime or OpenCV's dnn module. Here I convert the onnx model one step further into an ncnn model; for the conversion procedure, see my earlier blog posts or the official ncnn documentation. A quick load check of the exported onnx model is sketched below.
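
As a sanity check that the exported onnx model loads and runs under OpenCV's dnn module, here is a minimal sketch (the file names and the 640×640 input size are illustrative assumptions, not values taken from this post):

#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main()
{
    // File names here are illustrative.
    cv::dnn::Net net = cv::dnn::readNetFromONNX("yolov5-face.onnx");
    cv::Mat img = cv::imread("test.jpg");

    // yolov5-style preprocessing: swap BGR to RGB, scale to 0-1, fixed input size.
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0 / 255.0, cv::Size(640, 640),
                                          cv::Scalar(), /*swapRB=*/true, /*crop=*/false);
    net.setInput(blob);

    std::vector<cv::Mat> outs;
    net.forward(outs, net.getUnconnectedOutLayersNames());
    // outs now holds the raw prediction tensors, to be decoded the same way as in the ncnn code below.
    return 0;
}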

4. ncnn model inference code:

#include "yoloface.h"
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

#define clip(x, y) (x < 0 ? 0 : (x > y ? y : x))

static inline float intersection_area(const Object& a, const Object& b)

    cv::Rect_<float> inter = a.rect & b.rect;
    return inter.area();


static void qsort_descent_inplace(std::vector<Object>& faceobjects, int left, int right)
{
    int i = left;
    int j = right;
    float p = faceobjects[(left + right) / 2].prob;

    while (i <= j)
    {
        while (faceobjects[i].prob > p)
            i++;

        while (faceobjects[j].prob < p)
            j--;

        if (i <= j)
        {
            // swap
            std::swap(faceobjects[i], faceobjects[j]);

            i++;
            j--;
        }
    }

    #pragma omp parallel sections
    {
        #pragma omp section
        {
            if (left < j) qsort_descent_inplace(faceobjects, left, j);
        }
        #pragma omp section
        {
            if (i < right) qsort_descent_inplace(faceobjects, i, right);
        }
    }
}

static void qsort_descent_inplace(std::vector<Object>& faceobjects)
{
    if (faceobjects.empty())
        return;

    qsort_descent_inplace(faceobjects, 0, faceobjects.size() - 1);
}

static void nms_sorted_bboxes(const std::vector<Object>& faceobjects, std::vector<int>& picked, float nms_threshold)
{
    picked.clear();

    const int n = faceobjects.size();

    std::vector<float> areas(n);
    for (int i = 0; i < n; i++)
    {
        areas[i] = faceobjects[i].rect.area();
    }

    for (int i = 0; i < n; i++)
    {
        const Object& a = faceobjects[i];

        int keep = 1;
        for (int j = 0; j < (int)picked.size(); j++)
        {
            const Object& b = faceobjects[picked[j]];

            // intersection over union
            float inter_area = intersection_area(a, b);
            float union_area = areas[i] + areas[picked[j]] - inter_area;
            // float IoU = inter_area / union_area
            if (inter_area / union_area > nms_threshold)
                keep = 0;
        }

        if (keep)
            picked.push_back(i);
    }
}

static inline float sigmoid(float x)
{
    return static_cast<float>(1.f / (1.f + exp(-x)));
}

static void generate_proposals(const ncnn::Mat& anchors, int stride, const ncnn::Mat& in_pad, const ncnn::Mat& feat_blob, float prob_threshold, std::vector<Object>& objects)
{
    const int num_grid = feat_blob.h;

    int num_grid_x;
    int num_grid_y;
    if (in_pad.w > in_pad.h)
    {
        num_grid_x = in_pad.w / stride;
        num_grid_y = num_grid / num_grid_x;
    }
    else
    {
        num_grid_y = in_pad.h / stride;
        num_grid_x = num_grid / num_grid_y;
    }

    // each row holds 4 box values + 1 objectness + 10 landmark values (5 points x/y) + class scores
    const int num_class = feat_blob.w - 5 - 10;

    const int num_anchors = anchors.w / 2;

    for (int q = 0; q < num_anchors; q++)
    {
        const float anchor_w = anchors[q * 2];
        const float anchor_h = anchors[q * 2 + 1];

        const ncnn::Mat feat = feat_blob.channel(q);

        for (int i = 0; i < num_grid_y; i++)
        {
            for (int j = 0; j < num_grid_x; j++)
            {
                const float* featptr = feat.row(i * num_grid_x + j);

                // find class index with max class score (class scores come after the 10 landmark values)
                int class_index = 0;
                float class_score = -FLT_MAX;
                for (int k = 0; k < num_class; k++)
                {
                    float score = featptr[5 + 10 + k];
                    if (score > class_score)
                    {
                        class_index = k;
                        class_score = score;
                    }
                }

                float box_score = featptr[4];

                float confidence = sigmoid(box_score); //* sigmoid(class_score);

                if (confidence >= prob_threshold)
                {
                    // yolov5-style box decoding
                    float dx = sigmoid(featptr[0]);
                    float dy = sigmoid(featptr[1]);
                    float dw = sigmoid(featptr[2]);
                    float dh = sigmoid(featptr[3]);

                    float pb_cx = (dx * 2.f - 0.5f + j) * stride;
                    float pb_cy = (dy * 2.f - 0.5f + i) * stride;

                    float pb_w = pow(dw * 2.f, 2) * anchor_w;
                    float pb_h = pow(dh * 2.f, 2) * anchor_h;

                    float x0 = pb_cx - pb_w * 0.5f;
                    float y0 = pb_cy - pb_h * 0.5f;
                    float x1 = pb_cx + pb_w * 0.5f;
                    float y1 = pb_cy + pb_h * 0.5f;

                    Object obj;
                    obj.rect.x = x0;
                    obj.rect.y = y0;
                    obj.rect.width = x1 - x0;
                    obj.rect.height = y1 - y0;
                    obj.label = class_index;
                    obj.prob = confidence;

                    // decode the 5 facial landmarks
                    for (int l = 0; l < 5; l++)
                    {
                        float x = featptr[2 * l + 5] * anchor_w + j * stride;
                        float y = featptr[2 * l + 1 + 5] * anchor_h + i * stride;
                        obj.pts.push_back(cv::Point2f(x, y));
                    }
                    objects.push_back(obj);
                }
            }
        }
    }
}



YoloFace::YoloFace()
{
}

int YoloFace::loadModel(std::string model, bool use_gpu)
{
    bool has_gpu = false;
    face_net.clear();

    face_net.opt = ncnn::Option();

#if NCNN_VULKAN
    ncnn::create_gpu_instance();
    has_gpu = ncnn::get_gpu_count() > 0;
#endif
    bool to_use_gpu = has_gpu && use_gpu;
    face_net.opt.use_vulkan_compute = to_use_gpu;
    face_net.load_param((model + ".param").c_str());
    face_net.load_model((model + ".bin").c_str());

    return 0;
}



int YoloFace::detection(const cv::Mat& rgb, std::vector<Object>& objects, float prob_threshold, float nms_threshold)
{
    int img_w = rgb.cols;
    int img_h = rgb.rows;

    // letterbox pad to multiple of 32
    int w = img_w;
    int h = img_h;
    float scale = 1.f;
    if (w > h)
    {
        scale = (float)target_size / w;
        w = target_size;
        h = h * scale;
    }
    else
    {
        scale = (float)target_size / h;
        h = target_size;
        w = w * scale;
    }

    ncnn::Mat in = ncnn::Mat::from_pixels_resize(rgb.data, ncnn::Mat::PIXEL_RGB, img_w, img_h, w, h);

    int wpad = (w + 31) / 32 * 32 - w;
    int hpad = (h + 31) / 32 * 32 - h;
    ncnn::Mat in_pad;
    ncnn::copy_make_border(in, in_pad, hpad / 2, hpad - hpad / 2, wpad / 2, wpad - wpad / 2, ncnn::BORDER_CONSTANT, 114.f);

    in_pad.substract_mean_normalize(0, norm_vals);

    ncnn::Extractor ex = face_net.create_extractor();

    ex.input("data", in_pad);

    std::vector<Object> proposals;

    // anchor setting from yolov5/models/yolov5s.yaml

    // stride 8
    {
        ncnn::Mat out;
        ex.extract("981", out);

        ncnn::Mat anchors(6);
        anchors[0] = 4.f;
        anchors[1] = 5.f;
        anchors[2] = 8.f;
        anchors[3] = 10.f;
        anchors[4] = 13.f;
        anchors[5] = 16.f;

        std::vector<Object> objects8;
        generate_proposals(anchors, 8, in_pad, out, prob_threshold, objects8);

        proposals.insert(proposals.end(), objects8.begin(), objects8.end());
    }

    // stride 16
    {
        ncnn::Mat out;
        ex.extract("983", out);

        ncnn::Mat anchors(6);
        anchors[0] = 23.f;
        anchors[1] = 29.f;
        anchors[2] = 43.f;
        anchors[3] = 55.f;
        anchors[4] = 73.f;
        anchors[5] = 105.f;

        std::vector<Object> objects16;
        generate_proposals(anchors, 16, in_pad, out, prob_threshold, objects16);

        proposals.insert(proposals.end(), objects16.begin(), objects16.end());
    }

    // stride 32
    {
        ncnn::Mat out;
        ex.extract("985", out);

        ncnn::Mat anchors(6);
        anchors[0] = 146.f;
        anchors[1] = 217.f;
        anchors[2] = 231.f;
        anchors[3] = 300.f;
        anchors[4] = 335.f;
        anchors[5] = 433.f;

        std::vector<Object> objects32;
        generate_proposals(anchors, 32, in_pad, out, prob_threshold, objects32);

        proposals.insert(proposals.end(), objects32.begin(), objects32.end());
    }

    // sort all proposals by score from highest to lowest
    qsort_descent_inplace(proposals);

    // apply nms with nms_threshold
    std::vector<int> picked;
    nms_sorted_bboxes(proposals, picked, nms_threshold);

    int count = picked.size();

    objects.resize(count);
    for (int i = 0; i < count; i++)
    {
        objects[i] = proposals[picked[i]];

        // adjust offset back to the original unpadded image and clip to its bounds
        float x0 = (objects[i].rect.x - (wpad / 2)) / scale;
        float y0 = (objects[i].rect.y - (hpad / 2)) / scale;
        float x1 = (objects[i].rect.x + objects[i].rect.width - (wpad / 2)) / scale;
        float y1 = (objects[i].rect.y + objects[i].rect.height - (hpad / 2)) / scale;

        x0 = clip(x0, (float)(img_w - 1));
        y0 = clip(y0, (float)(img_h - 1));
        x1 = clip(x1, (float)(img_w - 1));
        y1 = clip(y1, (float)(img_h - 1));

        objects[i].rect.x = x0;
        objects[i].rect.y = y0;
        objects[i].rect.width = x1 - x0;
        objects[i].rect.height = y1 - y0;

        // map the 5 landmarks back to the original image the same way
        for (size_t k = 0; k < objects[i].pts.size(); k++)
        {
            objects[i].pts[k].x = clip((objects[i].pts[k].x - (wpad / 2)) / scale, (float)(img_w - 1));
            objects[i].pts[k].y = clip((objects[i].pts[k].y - (hpad / 2)) / scale, (float)(img_h - 1));
        }
    }

    return 0;
}
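
The code above only covers detection. As a minimal usage sketch, the detector can be followed by landmark-based alignment to produce the crop that a recognition network expects (assumptions: the converted model files are named yolov5face.param/yolov5face.bin, Object and YoloFace come from yoloface.h above, and the 112×112 five-point template is the commonly used ArcFace layout rather than something defined in this post):

#include "yoloface.h"
#include <opencv2/opencv.hpp>

int main()
{
    // Input image and model path are illustrative.
    cv::Mat bgr = cv::imread("face.jpg");
    cv::Mat rgb;
    cv::cvtColor(bgr, rgb, cv::COLOR_BGR2RGB); // detection() expects RGB pixels

    YoloFace detector;
    detector.loadModel("yolov5face", /*use_gpu=*/false);

    std::vector<Object> faces;
    detector.detection(rgb, faces, 0.5f, 0.45f);
    if (faces.empty())
        return 0;

    // Canonical 5-point template for a 112x112 aligned crop:
    // left eye, right eye, nose tip, left mouth corner, right mouth corner.
    std::vector<cv::Point2f> dst;
    dst.push_back(cv::Point2f(38.2946f, 51.6963f));
    dst.push_back(cv::Point2f(73.5318f, 51.5014f));
    dst.push_back(cv::Point2f(56.0252f, 71.7366f));
    dst.push_back(cv::Point2f(41.5493f, 92.3655f));
    dst.push_back(cv::Point2f(70.7299f, 92.2041f));

    // Similarity transform from the detected landmarks to the template, then warp.
    cv::Mat M = cv::estimateAffinePartial2D(faces[0].pts, dst);
    cv::Mat aligned;
    cv::warpAffine(bgr, aligned, M, cv::Size(112, 112));
    cv::imwrite("aligned.jpg", aligned);
    return 0;
}

The aligned 112×112 crop is what would then go into the liveness-detection and recognition stages mentioned in the preface.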
