How to properly implement bagging on a decision tree with a for loop?
Posted: 2019-04-12 15:54:25
Question: I'm trying to implement bagging and voting using decision trees and a for loop, resampling with sklearn's resample. However, I get the error Number of labels=97 does not match number of samples=77. I can see why this happens, but I'm not sure how to fix it.
The dataset has 150 samples and 150 labels. After the split, the training set has 150 × 0.65 ≈ 97 samples, and int(0.8 × 97) = 77. X is a feature matrix of length 150 and y is a label vector of length 150.
Below is my code:
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.utils import resample

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.35, random_state=3)

predictions = []
for i in range(1, 20):
    bootstrap_size = int(0.8 * len(X_train))
    bag = resample(X_train, n_samples=bootstrap_size, random_state=i, replace=True)
    Base_DecisionTree = DecisionTreeClassifier(random_state=3)
    Base_DecisionTree.fit(bag, y_train)
    y_predict = Base_DecisionTree.predict(X_test)
    accuracy = accuracy_score(y_test, y_predict)
    predictions.append(accuracy)
Comments:
Answer 1: You should also resample the labels and pass them to fit():
x_bag, y_bag = resample(X_train, y_train, n_samples=bootstrap_size, random_state=i, replace=True)
tree = DecisionTreeClassifier(random_state=3)
tree.fit(x_bag, y_bag)
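Note that the question's loop only collects per-tree accuracies; it never actually combines the trees. For the "voting" part of bagging, each tree's predictions on the test set can be aggregated by majority vote. Below is a minimal sketch of that idea; since the asker's original X and y aren't shown, make_classification stands in for their 150-sample dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample
from sklearn.metrics import accuracy_score

# Stand-in for the asker's 150-sample dataset
X, y = make_classification(n_samples=150, n_features=4, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.35, random_state=3)

bootstrap_size = int(0.8 * len(X_train))
all_preds = []
for i in range(1, 20):
    # Resample features AND labels together so they stay aligned
    x_bag, y_bag = resample(X_train, y_train, n_samples=bootstrap_size,
                            random_state=i, replace=True)
    tree = DecisionTreeClassifier(random_state=3)
    tree.fit(x_bag, y_bag)
    all_preds.append(tree.predict(X_test))

# Majority vote across the 19 trees for each test sample
all_preds = np.array(all_preds)               # shape (19, n_test)
voted = np.apply_along_axis(lambda col: np.bincount(col).argmax(),
                            axis=0, arr=all_preds)
print(accuracy_score(y_test, voted))
```

The vote here is a simple per-column bincount over class labels; sklearn's BaggingClassifier implements the same bootstrap-and-vote scheme in one estimator, so hand-rolling the loop is mainly useful for learning.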
Discussion:
Yes, that works. But why are all the accuracies the same? predictions is a list of 19 entries, every one equal to 0.8301886792452831.
Silly me, I had to restart the kernel in Jupyter. Thanks for your help.