Face clustering using the Chinese Whispers algorithm
Posted: 2017-11-05 20:52:37

Question: I am trying to do face clustering with the Chinese Whispers algorithm. I use dlib and Python to extract the features of each face and map them into a 128-D vector, as described by davisking in https://github.com/davisking/dlib/blob/master/examples/dnn_face_recognition_ex.cpp.
I then built a graph following the instructions given there, implemented the Chinese Whispers algorithm, and applied it to this graph. Can anyone tell me what mistake I have made? Could anyone post Python code for face clustering using the Chinese Whispers algorithm? Here is my Chinese Whispers code:
import networkx as nx
import random
from random import shuffle
import math

def chinese_whispers(nodes, edges, iterations):
    G = nx.Graph()
    G.add_nodes_from(nodes)
    #print(G.node)
    for n, v in enumerate(nodes):
        G.node[n]['class'] = v
        #print(n,v)
    G.add_edges_from(edges)
    #gn=G.nodes()
    #for node in gn:
    #    print((node,G[node],G.node,G.node[node]))
    #(0, {16: {'weight': 0.49846761956907698}, 14: {'weight': 0.55778036559581601}, 7: {'weight': 0.43902511314524784}}, {'class': 0})
    for z in range(0, iterations):
        gn = G.nodes()
        # I randomize the nodes to give me an arbitrary start point
        shuffle(gn)
        for node in gn:
            neighs = G[node]
            classes = {}
            # do an inventory of the given node's neighbours and edge weights
            for ne in neighs:
                if isinstance(ne, int):
                    key = G.node[ne]['class']
                    if key in classes:
                        classes[key] += G[node][ne]['weight']
                    else:
                        classes[key] = G[node][ne]['weight']
            # find the class with the highest edge weight sum
            max = 0
            maxclass = 0
            for c in classes:
                if classes[c] > max:
                    max = classes[c]
                    maxclass = c
            # set the class of target node to the winning local class
            G.node[node]['class'] = maxclass
    n_clusters = []
    for node in G.nodes():
        n_clusters.append(G.node[node]['class'])
    return n_clusters
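For reference, here is a minimal sketch of how this function could be exercised on a tiny hand-made graph (the node/edge format mirrors what the graph-construction code below produces; the nodes, weights, and expected grouping are made up purely for illustration):

# Toy graph: nodes 0-1 and 2-3 are strongly connected within each pair,
# with only a weak edge between the two pairs.
nodes = [0, 1, 2, 3]
edges = [(0, 1, {'weight': 0.9}),
         (2, 3, {'weight': 0.8}),
         (1, 2, {'weight': 0.1})]
labels = chinese_whispers(nodes, edges, iterations=10)
print(labels)  # expected: 0 and 1 share one label, 2 and 3 another (exact values depend on the random shuffle)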
Here is the code that extracts the face features, encodes each face as a 128-D vector, and builds the graph on which Chinese Whispers is applied:
from sklearn import cluster
import cv2
import sys
import os
import dlib
import glob
from skimage import io
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
from matplotlib import pyplot as plt
import chinese
from chinese import chinese_whispers

predictor_path = "/home/deeplearning/Desktop/face_recognition examples/shape_predictor_68_face_landmarks.dat"
face_rec_model_path = "/home/deeplearning/Desktop/face_recognition examples/dlib_face_recognition_resnet_model_v1.dat"
faces_folder_path = "/home/deeplearning/Desktop/face_recognition examples/test11/"

# Load all the models we need: a detector to find the faces, a shape predictor
# to find face landmarks so we can precisely localize the face, and finally the
# face recognition model.
detector = dlib.get_frontal_face_detector()
#print (detector)
sp = dlib.shape_predictor(predictor_path)
facerec = dlib.face_recognition_model_v1(face_rec_model_path)
#win = dlib.image_window()

# Now process all the images
dict = {}
for f in glob.glob(os.path.join(faces_folder_path, "*.jpg")):
    print("Processing file: {}".format(f))
    img = io.imread(f)
    dets = detector(img, 3)
    for k, d in enumerate(dets):
        shape = sp(img, d)
        face_descriptor = facerec.compute_face_descriptor(img, shape)
        a = np.array(face_descriptor)
        dict[(f, d)] = (a, f)

answ = np.array(list(dict.values()))
tmp = answ.shape[0]
ans = np.zeros((tmp, 128))
for i in range(tmp):
    ans[i] = np.array(answ[i][0])

nodes = []
for i in range(tmp):
    nodes.append(i)

edges = []
for i in range(tmp):
    for j in range(i + 1, tmp):
        dist = np.sqrt(np.sum((ans[i] - ans[j]) ** 2))
        if dist < 0.6:
            edges.append((i, j, {'weight': dist}))

iterations = 10
cluster = chinese_whispers(nodes, edges, iterations)
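Once `cluster` comes back, one simple way to inspect the result is to group the image file names by the label assigned to each descriptor. This is only a minimal sketch under the assumption that the variables above are unchanged (in particular that `answ[i][1]` still holds the file name stored next to descriptor `i`):

from collections import defaultdict

groups = defaultdict(list)
for i, label in enumerate(cluster):
    groups[label].append(answ[i][1])  # file name stored alongside descriptor i

for label, files in groups.items():
    print("Cluster {}: {} face(s)".format(label, len(files)))
    for name in files:
        print("    {}".format(name))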
I don't understand what I am doing wrong. Can someone help me with this? Thanks in advance.
Comments:

Welcome to Stack Overflow. Please take the time to read The Tour and refer to the material in the Help Center about what you can ask here and how to ask it. "But I am not getting good accuracy" is not a very helpful problem description. Stack Overflow works best with specific problems, and questions like this …

Answer 1: I have used dlib for face clustering before.
Sorry if I have not read your question correctly: are you getting an error, or are you just not getting accurate results?
Assuming you are not getting correct results, I would suggest using shape_predictor_5_face_landmarks.dat rather than the 68-point face landmark model, as it tends to give better results when clustering with the Chinese Whispers algorithm.
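If you go that route, it is a one-line change in your script; a minimal sketch, assuming the model from http://dlib.net/files/shape_predictor_5_face_landmarks.dat.bz2 has been downloaded and extracted next to the script (the local path shown is hypothetical):

# Swap the 68-point landmark model for the lighter 5-point one.
sp = dlib.shape_predictor("shape_predictor_5_face_landmarks.dat")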
You can also try the Chinese Whispers clustering function that ships with dlib and see whether it gives better results.
Example - face_clustering.py
#!/usr/bin/python
# The contents of this file are in the public domain. See LICENSE_FOR_EXAMPLE_PROGRAMS.txt
#
# This example shows how to use dlib's face recognition tool for clustering using chinese_whispers.
# This is useful when you have a collection of photographs which you know are linked to
# a particular person, but the person may be photographed with multiple other people.
# In this example, we assume the largest cluster will contain photos of the common person in the
# collection of photographs. Then, we save extracted images of the face in the largest cluster in
# a 150x150 px format which is suitable for jittering and loading to perform metric learning (as shown
# in the dnn_metric_learning_on_images_ex.cpp example).
# https://github.com/davisking/dlib/blob/master/examples/dnn_metric_learning_on_images_ex.cpp
#
# COMPILING/INSTALLING THE DLIB PYTHON INTERFACE
# You can install dlib using the command:
# pip install dlib
#
# Alternatively, if you want to compile dlib yourself then go into the dlib
# root folder and run:
# python setup.py install
#
# Compiling dlib should work on any operating system so long as you have
# CMake installed. On Ubuntu, this can be done easily by running the
# command:
# sudo apt-get install cmake
#
# Also note that this example requires Numpy which can be installed
# via the command:
# pip install numpy
import sys
import os
import dlib
import glob
if len(sys.argv) != 5:
    print(
        "Call this program like this:\n"
        "   ./face_clustering.py shape_predictor_5_face_landmarks.dat dlib_face_recognition_resnet_model_v1.dat ../examples/faces output_folder\n"
        "You can download a trained facial shape predictor and recognition model from:\n"
        "    http://dlib.net/files/shape_predictor_5_face_landmarks.dat.bz2\n"
        "    http://dlib.net/files/dlib_face_recognition_resnet_model_v1.dat.bz2")
    exit()

predictor_path = sys.argv[1]
face_rec_model_path = sys.argv[2]
faces_folder_path = sys.argv[3]
output_folder_path = sys.argv[4]

# Load all the models we need: a detector to find the faces, a shape predictor
# to find face landmarks so we can precisely localize the face, and finally the
# face recognition model.
detector = dlib.get_frontal_face_detector()
sp = dlib.shape_predictor(predictor_path)
facerec = dlib.face_recognition_model_v1(face_rec_model_path)

descriptors = []
images = []

# Now find all the faces and compute 128D face descriptors for each face.
for f in glob.glob(os.path.join(faces_folder_path, "*.jpg")):
    print("Processing file: {}".format(f))
    img = dlib.load_rgb_image(f)

    # Ask the detector to find the bounding boxes of each face. The 1 in the
    # second argument indicates that we should upsample the image 1 time. This
    # will make everything bigger and allow us to detect more faces.
    dets = detector(img, 1)
    print("Number of faces detected: {}".format(len(dets)))

    # Now process each face we found.
    for k, d in enumerate(dets):
        # Get the landmarks/parts for the face in box d.
        shape = sp(img, d)

        # Compute the 128D vector that describes the face in img identified by
        # shape.
        face_descriptor = facerec.compute_face_descriptor(img, shape)
        descriptors.append(face_descriptor)
        images.append((img, shape))

# Now let's cluster the faces.
labels = dlib.chinese_whispers_clustering(descriptors, 0.5)
num_classes = len(set(labels))
print("Number of clusters: {}".format(num_classes))

# Find biggest class
biggest_class = None
biggest_class_length = 0
for i in range(0, num_classes):
    class_length = len([label for label in labels if label == i])
    if class_length > biggest_class_length:
        biggest_class_length = class_length
        biggest_class = i

print("Biggest cluster id number: {}".format(biggest_class))
print("Number of faces in biggest cluster: {}".format(biggest_class_length))

# Find the indices for the biggest class
indices = []
for i, label in enumerate(labels):
    if label == biggest_class:
        indices.append(i)

print("Indices of images in the biggest cluster: {}".format(str(indices)))

# Ensure output directory exists
if not os.path.isdir(output_folder_path):
    os.makedirs(output_folder_path)

# Save the extracted faces
print("Saving faces in largest cluster to output folder...")
for i, index in enumerate(indices):
    img, shape = images[index]
    file_path = os.path.join(output_folder_path, "face_" + str(i))
    # The size and padding arguments are optional with default size=150x150 and padding=0.25
    dlib.save_face_chip(img, shape, file_path, size=150, padding=0.25)
You can also change the threshold and the number of iterations to see whether that gives you better results.
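For instance, a quick way to see how sensitive the result is to the threshold is to sweep a few values and compare the number of clusters each produces; a minimal sketch, assuming the `descriptors` list built in the example above (the threshold values are arbitrary):

# Try a few distance thresholds and report how many clusters each one yields.
for threshold in (0.4, 0.5, 0.6):
    labels = dlib.chinese_whispers_clustering(descriptors, threshold)
    print("threshold {} -> {} clusters".format(threshold, len(set(labels))))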
Hope this helps.