Using a "truncated" large 68-point model for face encoding
Posted: 2021-06-27 18:11:14

face_recognition version: 1.3.0
Python version: 3.9.2
Operating system: Windows 10, 64-bit
说明 我试图解决“蒙面”的人脸识别和识别。一个想法是使用截断的面部特征集仅比较面部的某些部分(眼睛、眉毛、鼻子的一部分)。因此,我定义了一个新的 dlib 类,它返回上述面部部分的“原始”地标和( 0. 0) – 剩下的 68 点大模型。我还定义了一个新方法 face_encodings_masked() 来处理新类:
What I did:
import face_recognition
from face_recognition import api as frapi
import numpy as np
import dlib


class Full_object_detection_masked(dlib.full_object_detection):
    def part(self, idx: int):
        # Zero out the jaw sides (2-14), lower nose (29-35) and mouth (48-67).
        # Note: range(29, 36) was missing here in the original post, which made
        # part() inconsistent with parts() below; it also returned a plain
        # tuple instead of a dlib.point.
        if idx in range(2, 15) or idx in range(29, 36) or idx in range(48, 68):
            return dlib.point(0, 0)
        return super().part(idx)

    def parts(self):
        lst = dlib.points()
        for idx in range(0, 68):
            if idx in range(2, 15) or idx in range(29, 36) or idx in range(48, 68):
                # Masked-out landmark: replace with (0, 0)
                lst.insert(idx, dlib.point(0, 0))
            else:
                # Kept landmark: copy the original coordinates
                old_x = super().part(idx).x
                old_y = super().part(idx).y
                lst.insert(idx, dlib.point(old_x, old_y))
        return lst
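As an aside, the kept/zeroed index ranges used by parts() above can be checked in isolation. A minimal pure-Python sketch (no dlib required) of the same masking rule, annotated with the standard iBUG 68-point region layout:

```python
# Standard 68-point layout: jaw 0-16, eyebrows 17-26,
# nose 27-35, eyes 36-47, mouth 48-67.
# Ranges zeroed out by parts() above; everything else is kept.
ZEROED = [range(2, 15), range(29, 36), range(48, 68)]

def is_zeroed(idx: int) -> bool:
    """True if landmark idx is replaced by (0, 0)."""
    return any(idx in r for r in ZEROED)

kept = [i for i in range(68) if not is_zeroed(i)]
print(kept)  # jaw corners 0-1 and 15-16, brows, nose bridge, eyes
```

This shows that 28 of the 68 points survive the masking: indices 0-1, 15-28 and 36-47.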
def face_encodings_masked(face_image, known_face_locations=None, num_jitters=1, model="large"):
    """
    Given an image, return the 128-dimension face encoding for each face in the image.

    :param face_image: The image that contains one or more faces
    :param known_face_locations: Optional - the bounding boxes of each face if you already know them.
    :param num_jitters: How many times to re-sample the face when calculating encoding. Higher is more accurate, but slower (i.e. 100 is 100x slower)
    :param model: Optional - which model to use. "large" (default) or "small" which only returns 5 points but is faster.
    :return: A list of 128-dimensional face encodings (one for each face in the image)
    """
    raw_landmarks = frapi._raw_face_landmarks(face_image,
                                              known_face_locations,
                                              model)
    masked_raw_landmarks = []
    for lm in raw_landmarks:
        masked_raw_landmarks.append(Full_object_detection_masked(lm.rect, lm.parts()))
    return [np.array(
        frapi.face_encoder.compute_face_descriptor(
            face_image,
            raw_landmark_set,
            num_jitters)) for raw_landmark_set in masked_raw_landmarks]
image = face_recognition.load_image_file("1.jpg")

# calling modified face_encodings with truncated face landmarks
enc_masked = face_encodings_masked(image, known_face_locations=None, num_jitters=1, model="large")

# calling standard face_encodings with normal face landmarks
enc_full = frapi.face_encodings(image, known_face_locations=None, num_jitters=1, model="large")

print("Difference between face encodings with partial (masked) and full landmarks:")
print(enc_full[0] - enc_masked[0])
I expected to see a difference... but there is none:
Difference between face encodings with partial (masked) and full landmarks:
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0.]
I tried exploring the source code of the face_recognition and dlib modules, but found that every relevant method takes the face landmarks as a parameter... So, if the face landmarks differ, the face encodings should differ too, right? But that is not the case. Any ideas?
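For reference, the zero vector printed above means the two encodings are byte-for-byte identical. A quick way to quantify how far apart two encodings are is their Euclidean distance, which is the same metric face_recognition's face_distance() uses; a minimal sketch with placeholder vectors standing in for real encodings:

```python
import numpy as np

def encoding_distance(enc_a: np.ndarray, enc_b: np.ndarray) -> float:
    """Euclidean distance between two 128-d face encodings
    (the metric used by face_recognition.face_distance)."""
    return float(np.linalg.norm(enc_a - enc_b))

# Placeholder encodings: two identical vectors give distance 0.0,
# which is exactly the symptom observed above.
enc_full = np.zeros(128)
enc_masked = np.zeros(128)
print(encoding_distance(enc_full, enc_masked))  # prints 0.0
```

A distance of exactly 0.0 between a masked and an unmasked encoding of the same image confirms the masked landmarks never influenced the descriptor computation.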
Answer 1:

I decided to go another way: apply virtual masks (5 different types) to the base face, and then also store additional encodings for the "masked" face. ...Well, it works. Not 100%, but it does. How it works - at Github
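Storing several encodings per person (one per virtual mask type) implies matching a probe image against the closest stored encoding. A hedged sketch of that matching step, with hypothetical synthetic encodings; the 0.6 tolerance is face_recognition's documented default for compare_faces():

```python
import numpy as np

def best_match(known: dict, probe: np.ndarray, tolerance: float = 0.6):
    """Return (name, distance) of the closest stored encoding across
    all people, or (None, distance) if nothing is within tolerance.

    known maps each person's name to a list of 128-d encodings
    (bare face plus one per virtual mask type)."""
    best_name, best_dist = None, float("inf")
    for name, encodings in known.items():
        dist = min(np.linalg.norm(e - probe) for e in encodings)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return (best_name if best_dist <= tolerance else None), best_dist

# Hypothetical data: one person with a bare-face and a "masked" encoding.
rng = np.random.default_rng(0)
base = rng.normal(size=128) * 0.05
known = {"alice": [base, base + 0.01]}

name, dist = best_match(known, base + 0.005)
print(name, round(dist, 3))  # matches "alice", small distance
```

Taking the minimum distance over all stored variants is what lets the masked probe still land on the right person even when the bare-face encoding alone would be too far away.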