Scikit-learn: Fewer points plotted than initial data samples after clustering with DBSCAN
Posted: 2018-12-09 23:33:34

【Problem description】: While using the DBSCAN implementation from the scikit-learn library, I noticed that fewer points end up plotted than the number of initial samples. In particular, the official DBSCAN demo at http://scikit-learn.org/stable/auto_examples/cluster/plot_dbscan.html generates 750 samples. However, when I print how many points each cluster contains and how many outliers there are, the result is:

CLUSTER 1: 224,
CLUSTER 2: 228,
CLUSTER 3: 227,
OUTLIERS: 18,
--> TOTAL = 697

As you can see from the code below, I only added a few lines to the original demo code in order to print the number of points in each cluster and the number of outliers. I am confused by this and would like to know why it happens and where the missing points are. Thanks in advance for your answers!
print(__doc__)
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn import metrics
from sklearn.datasets.samples_generator import make_blobs
from sklearn.preprocessing import StandardScaler
# #############################################################################
# Generate sample data
centers = [[1, 1], [-1, -1], [1, -1]]
X, labels_true = make_blobs(n_samples=750, centers=centers, cluster_std=0.4,
                            random_state=0)
X = StandardScaler().fit_transform(X)
# #############################################################################
# Compute DBSCAN
db = DBSCAN(eps=0.3, min_samples=10).fit(X)
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print('Estimated number of clusters: %d' % n_clusters_)
print("Homogeneity: %0.3f" % metrics.homogeneity_score(labels_true, labels))
print("Completeness: %0.3f" % metrics.completeness_score(labels_true, labels))
print("V-measure: %0.3f" % metrics.v_measure_score(labels_true, labels))
print("Adjusted Rand Index: %0.3f"
% metrics.adjusted_rand_score(labels_true, labels))
print("Adjusted Mutual Information: %0.3f"
% metrics.adjusted_mutual_info_score(labels_true, labels))
print("Silhouette Coefficient: %0.3f"
% metrics.silhouette_score(X, labels))
# #############################################################################
# Plot result
import matplotlib.pyplot as plt
unique_labels = set(labels)
i = 1
for k in zip(unique_labels):
    class_member_mask = (labels == k)
    if k == (-1,):
        xy = X[class_member_mask & ~core_samples_mask]
        current_outliers = len(xy)
        print "OUTLIERS :", current_outliers
    else:
        xy = X[class_member_mask & core_samples_mask]
        print "CLUSTER", i, " :", len(xy)
        i += 1
colors = [plt.cm.Spectral(each)
          for each in np.linspace(0, 1, len(unique_labels))]
for k, col in zip(unique_labels, colors):
    if k == -1:
        col = [0, 0, 0, 1]

    class_member_mask = (labels == k)

    xy = X[class_member_mask & core_samples_mask]
    plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
             markeredgecolor='k', markersize=14)

    xy = X[class_member_mask & ~core_samples_mask]
    plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
             markeredgecolor='k', markersize=6)

plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
【Comments】:
【Answer 1】: Only the core samples are being counted in your per-cluster totals. If you want to take all points into account, remove the constraint on core_samples_mask:
if k == (-1,):
    xy = X[class_member_mask]
    current_outliers = len(xy)
    print "OUTLIERS :", current_outliers
else:
    xy = X[class_member_mask]
    print "CLUSTER", i, " :", len(xy)
    i += 1
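
For what it's worth, here is a minimal sanity-check sketch (my own addition, not part of the original answer), assuming labels and core_samples_mask have been computed as in the question's code. It splits the 750 samples into core, border, and noise points; the 53 points "missing" from the question's total (750 − 697) are the border points, i.e. points assigned to a cluster that are not core samples and were therefore skipped by the original print statements:

import numpy as np

n_noise = int(np.sum(labels == -1))
n_core = int(np.sum((labels != -1) & core_samples_mask))
n_border = int(np.sum((labels != -1) & ~core_samples_mask))
# Every sample falls into exactly one of the three groups,
# so the counts add back up to the 750 generated samples.
print("core: %d, border: %d, noise: %d, total: %d"
      % (n_core, n_border, n_noise, n_core + n_border + n_noise))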
【Discussion】: