Decision Tree Classifier: I keep getting a NaN error

Posted: 2019-10-12 14:07:53

Question:

I have a small decision tree script. I believe I have converted everything to int, and I have already checked my training/test data with isnan, max, etc.

I really don't know why it is giving this error.

I am trying to pass the MNIST dataset through a decision tree, and then I will run an attack on it using a class.

Here is the code:

    # Assumed imports (not shown in the original snippet): MNIST via keras.datasets, plus numpy
    import numpy as np
    from keras.datasets import mnist

    from AttackUtils import Attack
    from AttackUtils import calc_output_weighted_weights, targeted_gradient, non_targeted_gradient, non_targeted_sign_gradient

    (X_train_woae, y_train_woae), (X_test_woae, y_test_woae) = mnist.load_data()
    # Flatten each 28x28 image into a 784-dimensional row vector
    X_train_woae = X_train_woae.reshape((len(X_train_woae), np.prod(X_train_woae.shape[1:])))
    X_test_woae = X_test_woae.reshape((len(X_test_woae), np.prod(X_test_woae.shape[1:])))

    from sklearn import tree
    #model_woae = LogisticRegression(multi_class='multinomial', solver='lbfgs', fit_intercept=False)
    model_woae = tree.DecisionTreeClassifier(class_weight='balanced')
    model_woae.fit(X_train_woae, y_train_woae)
    #model_woae.coef_ = model_woae.feature_importances_
    coef_int = np.round(model_woae.tree_.compute_feature_importances(normalize=False) * X_train_woae.size).astype(int)

    attack_woae = Attack(model_woae)
    attack_woae.prepare(X_train_woae, y_train_woae, X_test_woae, y_test_woae)
    weights_woae = attack_woae.weights
    num_classes_woae = len(np.unique(y_train_woae))
    attack_woae.create_one_hot_targets(y_test_woae)
    attack_woae.attack_to_max_epsilon(non_targeted_gradient, 50)
    non_targeted_scores_woae = attack_woae.scores

So the attack class does the perturbation and the non-targeted gradient attack. Here is the attack class:

import numpy as np
from sklearn.metrics import accuracy_score


def calc_output_weighted_weights(output, w):
    # Sum of the per-class weight vectors, weighted by the predicted class probabilities
    for c in range(len(output)):
        if c == 0:
            weighted_weights = output[c] * w[c]
        else:
            weighted_weights += output[c] * w[c]
    return weighted_weights


def targeted_gradient(foolingtarget, output, w):
    ww = calc_output_weighted_weights(output, w)
    for k in range(len(output)):
        if k == 0:
            gradient = foolingtarget[k] * (w[k]-ww)
        else:
            gradient += foolingtarget[k] * (w[k]-ww)
    return gradient


def non_targeted_gradient(target, output, w):
    # Gradient that moves the prediction away from the true (one-hot) target class
    ww = calc_output_weighted_weights(output, w)
    for k in range(len(target)):
        if k == 0:
            gradient = (1-target[k]) * (w[k]-ww)
        else:
            gradient += (1-target[k]) * (w[k]-ww)
    return gradient


def non_targeted_sign_gradient(target, output, w):
    gradient = non_targeted_gradient(target, output, w)
    return np.sign(gradient)


class Attack:

    def __init__(self, model):
        self.fooling_targets = None
        self.model = model

    def prepare(self, X_train, y_train, X_test, y_test):
        self.images = X_test
        self.true_targets = y_test
        self.num_samples = X_test.shape[0]
        self.train(X_train, y_train)
        print("Model training finished.")
        self.test(X_test, y_test)
        print("Model testing finished. Initial accuracy score: " + str(self.initial_score))

    def set_fooling_targets(self, fooling_targets):
        self.fooling_targets = fooling_targets

    def train(self, X_train, y_train):
        self.model.fit(X_train, y_train)
        # NOTE: assumes a linear model that exposes coef_ (shape: n_classes x n_features)
        self.weights = self.model.coef_
        self.num_classes = self.weights.shape[0]

    def test(self, X_test, y_test):
        self.preds = self.model.predict(X_test)
        self.preds_proba = self.model.predict_proba(X_test)
        self.initial_score = accuracy_score(y_test, self.preds)

    def create_one_hot_targets(self, targets):
        self.one_hot_targets = np.zeros(self.preds_proba.shape)
        for n in range(targets.shape[0]):
            self.one_hot_targets[n, targets[n]] = 1

    def attack(self, attackmethod, epsilon):
        perturbed_images, highest_epsilon = self.perturb_images(epsilon, attackmethod)
        perturbed_preds = self.model.predict(perturbed_images)
        score = accuracy_score(self.true_targets, perturbed_preds)
        return perturbed_images, perturbed_preds, score, highest_epsilon

    def perturb_images(self, epsilon, gradient_method):
        perturbed = np.zeros(self.images.shape)
        max_perturbations = []
        for n in range(self.images.shape[0]):
            perturbation = self.get_perturbation(epsilon, gradient_method, self.one_hot_targets[n], self.preds_proba[n])
            perturbed[n] = self.images[n] + perturbation
            max_perturbations.append(np.max(perturbation))
        highest_epsilon = np.max(np.array(max_perturbations))
        return perturbed, highest_epsilon

    def get_perturbation(self, epsilon, gradient_method, target, pred_proba):
        gradient = gradient_method(target, pred_proba, self.weights)
        # Rescale so the largest gradient entry equals epsilon; if np.max(gradient) is 0, this division produces inf/NaN
        inf_norm = np.max(gradient)
        perturbation = epsilon / inf_norm * gradient
        return perturbation

    def attack_to_max_epsilon(self, attackmethod, max_epsilon):
        self.max_epsilon = max_epsilon
        self.scores = []
        self.epsilons = []
        self.perturbed_images_per_epsilon = []
        self.perturbed_outputs_per_epsilon = []
        for epsilon in range(0, self.max_epsilon):
            perturbed_images, perturbed_preds, score, highest_epsilon = self.attack(attackmethod, epsilon)
            self.epsilons.append(highest_epsilon)
            self.scores.append(score)
            self.perturbed_images_per_epsilon.append(perturbed_images)
            self.perturbed_outputs_per_epsilon.append(perturbed_preds)

Here is the traceback it gives:

ValueError                                Traceback (most recent call last)
      4 num_classes_woae = len(np.unique(y_train_woae))
      5 attack_woae.create_one_hot_targets(y_test_woae)
----> 6 attack_woae.attack_to_max_epsilon(non_targeted_gradient, 50)
      7 non_targeted_scores_woae = attack_woae.scores

~\MULTIATTACK\AttackUtils.py in attack_to_max_epsilon(self, attackmethod, max_epsilon)
    106         self.perturbed_outputs_per_epsilon = []
    107         for epsilon in range(0, self.max_epsilon):
--> 108             perturbed_images, perturbed_preds, score, highest_epsilon = self.attack(attackmethod, epsilon)
    109             self.epsilons.append(highest_epsilon)
    110             self.scores.append(score)

~\MULTIATTACK\AttackUtils.py in attack(self, attackmethod, epsilon)
     79     def attack(self, attackmethod, epsilon):
     80         perturbed_images, highest_epsilon = self.perturb_images(epsilon, attackmethod)
---> 81         perturbed_preds = self.model.predict(perturbed_images)
     82         score = accuracy_score(self.true_targets, perturbed_preds)
     83         return perturbed_images, perturbed_preds, score, highest_epsilon

...\appdata\local\programs\python\python35\lib\site-packages\sklearn\tree\tree.py in predict(self, X, check_input)
    413
    414
--> 415         X = self._validate_X_predict(X, check_input)
    416
    417         n_samples = X.shape[0]

...\appdata\local\programs\python\python35\lib\site-packages\sklearn\tree\tree.py in _validate_X_predict(self, X, check_input)
    374
    375         if check_input:
--> 376             X = check_array(X, dtype=DTYPE, accept_sparse="csr")
    377             if issparse(X) and (X.indices.dtype != np.intc or
    378

...\appdata\local\programs\python\python35\lib\site-packages\sklearn\utils\validation.py in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, warn_on_dtype, estimator)
    566
    567
--> 568                                 allow_nan=force_all_finite == 'allow-nan')
    569
    570

...\appdata\local\programs\python\python35\lib\site-packages\sklearn\utils\validation.py in _assert_all_finite(X, allow_nan)
     54             not allow_nan and not np.isfinite(X).all()):
     55         type_err = 'infinity' if allow_nan else 'NaN, infinity'
---> 56         raise ValueError(msg_err.format(type_err, X.dtype))
     57
     58

ValueError: Input contains NaN, infinity or a value too large for dtype('float32').

EDIT:

I have added the coefficient numbers as 0, and now it gives the same error below this line: attack.attack_to_max_epsilon(non_targeted_gradient, epsilon_number)
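
For reference, a minimal diagnostic sketch (hypothetical, not part of the original post; it reuses the attack_woae objects from the snippet above) to see which array first goes non-finite before predict() is reached:

import numpy as np

# Check the ingredients of the perturbation right before attack_to_max_epsilon is called
print("weights finite:   ", np.isfinite(attack_woae.weights).all())
print("pred_proba finite:", np.isfinite(attack_woae.preds_proba).all())

# get_perturbation divides by np.max(gradient); if that maximum is 0,
# epsilon / inf_norm yields inf (or NaN when epsilon is also 0),
# which is exactly what sklearn's check_array later rejects.
g = non_targeted_gradient(attack_woae.one_hot_targets[0],
                          attack_woae.preds_proba[0],
                          attack_woae.weights)
print("max gradient entry:", np.max(g))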

Comments:

Maybe it's just a float32 overflow, but lol that number is huge.

@user8426627 I made them smaller, but it's still the same... You mean the coef_int numbers, right?

Try applying one-hot encoding to your targets or labels before training the clf.

@FreddyDaniel Can you give more detail? I'm not sure I fully understand.

I think you are new to machine learning; have a look at what one-hot encoding is (machinelearningmastery.com/…) and then look into dataset normalization for training machine-learning algorithms.

Answer 1:

Try applying one-hot encoding to your labels before training.

from sklearn.preprocessing import LabelEncoder

mylabels = ["label1", "label2", "label3"]  # ... all n of your label strings
le = LabelEncoder()
labels = le.fit_transform(mylabels)

Then try splitting your data:

from sklearn.model_selection import train_test_split

# 'data' here stands for your feature matrix
(x_train, x_test, y_train, y_test) = train_test_split(data,
                                                       labels,
                                                       test_size=0.25)

Your labels will now be encoded as numbers, which helps when training machine-learning algorithms.
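
Note that LabelEncoder produces integer labels rather than true one-hot vectors; if genuine one-hot targets are needed, scikit-learn's OneHotEncoder can build them. A minimal sketch, assuming integer class labels like MNIST's:

import numpy as np
from sklearn.preprocessing import OneHotEncoder

y = np.array([0, 2, 1, 2])                        # integer class labels
one_hot = OneHotEncoder().fit_transform(y.reshape(-1, 1)).toarray()
print(one_hot)                                    # one row per sample, a single 1 per row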

Discussion:

The thing is, the algorithm works fine with the other classifiers, and I'm using MNIST, so afaik the labels come with it. The problem is with the DecisionTreeClassifier. I have edited the question.
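
This observation fits how Attack.train reads the model: it uses self.model.coef_, which linear models such as LogisticRegression expose after fitting but tree-based models do not. A small illustrative check (a sketch only, using sklearn's built-in digits data rather than MNIST):

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)

lr = LogisticRegression(max_iter=1000).fit(X, y)
dt = DecisionTreeClassifier().fit(X, y)

print(hasattr(lr, "coef_"))   # True  -> Attack.train can use these weights directly
print(hasattr(dt, "coef_"))   # False -> a tree needs some substitute for per-class weights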
