INVALID_ARGUMENT: assertion failed: [predictions must be <= 1] [Condition x <= y did not hold element-wise:]

Posted: 2022-01-12 01:06:12

Question:

I have the following model, and I would like to use the standard metric functions to report true/false positives and true/false negatives.

import tensorflow as tf
from transformers import TFRobertaForSequenceClassification

model = TFRobertaForSequenceClassification.from_pretrained('roberta-base', num_labels=1)

optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
model.compile(
    optimizer=optimizer, 
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=False),
    metrics = [
      'accuracy',
      tf.keras.metrics.TruePositives(),
      tf.keras.metrics.TrueNegatives(),
      tf.keras.metrics.FalseNegatives(),
      tf.keras.metrics.FalsePositives()
    ]) # can also use any keras loss fn
# batch size is set on the dataset itself; passing batch_size to fit() as well is invalid for dataset inputs
history = model.fit(train_dataset.shuffle(1000).batch(16), epochs=10, validation_data=test_dataset.batch(1))

But I get the following error and I am not sure how to troubleshoot it. How can any of the predictions be greater than 1?

INVALID_ARGUMENT:  assertion failed: [predictions must be <= 1] [Condition x <= y did not hold element-wise:] [x (tf_roberta_for_sequence_classification_5/classifier/out_proj/BiasAdd:0) = ] [[0.375979185][0.340960771][0.41201663]...] [y (Cast_9/x:0) = ] [1]
     [[node assert_less_equal/Assert/AssertGuard/Assert
 (defined at /usr/local/lib/python3.7/dist-packages/keras/utils/metrics_utils.py:615)


Answer 1:

This is a known issue with these metrics, caused by their predefined thresholds and the fact that y_pred is not squashed between 0 and 1. See this issue for more information. The root cause is that the num_labels=1 classification head returns raw, unbounded logits; a minimal illustration (a sketch; the printed values are illustrative):
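from transformers import RobertaTokenizer, TFRobertaForSequenceClassification
import tensorflow as tf

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = TFRobertaForSequenceClassification.from_pretrained('roberta-base', num_labels=1)

inputs = tokenizer(['You are fishy'], return_tensors='tf')
logits = model(inputs).logits   # raw scores from the classification head, not confined to [0, 1]
probs = tf.nn.sigmoid(logits)   # squashed into [0, 1], safe for thresholded metrics
print(logits.numpy(), probs.numpy())

Here is a simple working example based on the workaround posted in the linked issue.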

from transformers import RobertaTokenizer, TFRobertaForSequenceClassification
import tensorflow as tf
import pandas as pd
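
# Each wrapper below applies an optional sigmoid to y_pred before delegating
# to the stock Keras metric, so raw logits never reach its [0, 1] range check.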

class TruePositives(tf.keras.metrics.TruePositives):
    def __init__(self, from_logits=False, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._from_logits = from_logits

    def update_state(self, y_true, y_pred, sample_weight=None):
        if self._from_logits:
            super(TruePositives, self).update_state(y_true, tf.nn.sigmoid(y_pred), sample_weight)
        else:
            super(TruePositives, self).update_state(y_true, y_pred, sample_weight)


class FalsePositives(tf.keras.metrics.FalsePositives):
    def __init__(self, from_logits=False, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._from_logits = from_logits

    def update_state(self, y_true, y_pred, sample_weight=None):
        if self._from_logits:
            super(FalsePositives, self).update_state(y_true, tf.nn.sigmoid(y_pred), sample_weight)
        else:
            super(FalsePositives, self).update_state(y_true, y_pred, sample_weight)


class TrueNegatives(tf.keras.metrics.TrueNegatives):
    def __init__(self, from_logits=False, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._from_logits = from_logits

    def update_state(self, y_true, y_pred, sample_weight=None):
        if self._from_logits:
            super(TrueNegatives, self).update_state(y_true, tf.nn.sigmoid(y_pred), sample_weight)
        else:
            super(TrueNegatives, self).update_state(y_true, y_pred, sample_weight)


class FalseNegatives(tf.keras.metrics.FalseNegatives):
    def __init__(self, from_logits=False, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._from_logits = from_logits

    def update_state(self, y_true, y_pred, sample_weight=None):
        if self._from_logits:
            super(FalseNegatives, self).update_state(y_true, tf.nn.sigmoid(y_pred), sample_weight)
        else:
            super(FalseNegatives, self).update_state(y_true, y_pred, sample_weight)


d = {'Text': ['You are fishy', 'Fishy people are fishy'], 'Label': [1, 0]}
train = pd.DataFrame(data=d)
train_text = list(train['Text'].values)
train_label = list(train['Label'].values)

val = pd.DataFrame(data=d)
val_text = list(val['Text'].values)
val_label = list(val['Label'].values)

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')

train_encodings = tokenizer(train_text, truncation=True, padding=True)
val_encodings = tokenizer(val_text, truncation=True, padding=True)

train_dataset = tf.data.Dataset.from_tensor_slices((
    dict(train_encodings),
    train_label
))
val_dataset = tf.data.Dataset.from_tensor_slices((
    dict(val_encodings),
    val_label
))
model = TFRobertaForSequenceClassification.from_pretrained('roberta-base', num_labels=1)

optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
model.compile(
    optimizer=optimizer, 
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=False),
    metrics = [
      'accuracy',
      TruePositives(from_logits=True),
      TrueNegatives(from_logits=True),
      FalseNegatives(from_logits=True),
      FalsePositives(from_logits=True)
    ]) # can also use any keras loss fn
history = model.fit(train_dataset.shuffle(2).batch(1), epochs=2, validation_data = val_dataset.batch(1))
Epoch 1/2
2/2 [==============================] - 81s 6s/step - loss: 7.7125 - accuracy: 0.5000 - true_positives_16: 0.0000e+00 - true_negatives_15: 1.0000 - false_negatives_15: 1.0000 - false_positives_15: 0.0000e+00 - val_loss: 7.7125 - val_accuracy: 0.5000 - val_true_positives_16: 0.0000e+00 - val_true_negatives_15: 1.0000 - val_false_negatives_15: 1.0000 - val_false_positives_15: 0.0000e+00
Epoch 2/2
2/2 [==============================] - 3s 1s/step - loss: 7.7125 - accuracy: 0.5000 - true_positives_16: 0.0000e+00 - true_negatives_15: 1.0000 - false_negatives_15: 1.0000 - false_positives_15: 0.0000e+00 - val_loss: 7.7125 - val_accuracy: 0.5000 - val_true_positives_16: 0.0000e+00 - val_true_negatives_15: 1.0000 - val_false_negatives_15: 1.0000 - val_false_positives_15: 0.0000e+00 
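Note that the loss above is still constructed with from_logits=False; for consistency with the logits-producing head, from_logits=True would be the appropriate setting there as well.

Since the four wrappers differ only in their base class, the duplication can be trimmed with a small class factory. This is only a sketch of a possible refactor, not part of the original workaround; with_logits_support is a helper name introduced here:

import tensorflow as tf

def with_logits_support(metric_cls):
    """Wrap a confusion-matrix metric so it can also accept raw logits."""
    class LogitsMetric(metric_cls):
        def __init__(self, from_logits=False, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self._from_logits = from_logits

        def update_state(self, y_true, y_pred, sample_weight=None):
            if self._from_logits:
                y_pred = tf.nn.sigmoid(y_pred)  # squash logits into [0, 1]
            return super().update_state(y_true, y_pred, sample_weight)

    LogitsMetric.__name__ = metric_cls.__name__
    return LogitsMetric

# Drop-in replacements for the four hand-written subclasses above:
TruePositives = with_logits_support(tf.keras.metrics.TruePositives)
TrueNegatives = with_logits_support(tf.keras.metrics.TrueNegatives)
FalsePositives = with_logits_support(tf.keras.metrics.FalsePositives)
FalseNegatives = with_logits_support(tf.keras.metrics.FalseNegatives)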

