Loss nan when trying to work with tensorflow feature columns
Posted: 2021-10-05 18:02:35

Problem description:

I have this dataframe.
I am trying to follow this example.
The target value I want to predict is zg500. The other feature I want to use is tas.
I want to create feature columns that combine latitude and longitude:
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import feature_column

df = pd.read_csv('./df.csv')

# if an unnamed index column exists
# df.drop(['Unnamed: 0'], axis=1, inplace=True)

df.dropna(inplace=True)
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
    dataframe = dataframe.copy()
    labels = dataframe.pop('zg500')
    ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
    if shuffle:
        ds = ds.shuffle(buffer_size=len(dataframe))
    ds = ds.batch(batch_size)
    return ds
batch_size = 16
train_ds = df_to_dataset(df, batch_size=batch_size)
feature_columns = []
tas = feature_column.numeric_column("tas")
latitude = feature_column.numeric_column("lats")
longitude = feature_column.numeric_column("lons")
bucketized_lat = feature_column.bucketized_column(latitude, boundaries=[0, 20, 40, 70])
bucketized_lon = feature_column.bucketized_column(longitude, boundaries=[-45, -20, 0, 20, 60])
feature_columns.append(tas)
feature_columns.append(bucketized_lat)
feature_columns.append(bucketized_lon)
lat_lon = feature_column.crossed_column([bucketized_lat, bucketized_lon], 1000)
lat_lon = feature_column.indicator_column(lat_lon)
feature_columns.append(lat_lon)
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
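As a quick sanity check (an assumed debugging step, not part of the original question), you can look at what the DenseFeatures layer produces for a single batch:

example_features, _ = next(iter(train_ds))    # one (features_dict, labels) batch
print(feature_layer(example_features).shape)  # (batch_size, number of transformed feature values)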
Creating the model:
model = tf.keras.Sequential([
    feature_layer,
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(1)
])

model.compile(optimizer='adam',
              loss='mse')
history = model.fit(train_ds, epochs=2)
Now I get a nan loss:
Epoch 1/2
10918/10918 [==============================] - 10s 861us/step - loss: nan
Epoch 2/2
10918/10918 [==============================] - 10s 857us/step - loss: nan
Also, I would like to know why fitting with the df dataframe instead of train_ds:
history = model.fit(df.iloc[:, [0, 2, 3]].values,
                    df.iloc[:, 1].values,
                    epochs=2)
produces:
ValueError: ('We expected a dictionary here. Instead we got: ', <tf.Tensor 'IteratorGetNext:0' shape=(32, 3) dtype=float32>)
Comments:
You can trace back where the first NaN value comes from; it is computed from other values that you can inspect. For example, if the loss is not NaN before the first training step and then becomes NaN, that is usually caused by a learning rate that is too high. See also ***.com/questions/40050397/… and similar posts.
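A minimal sketch of that suggestion (an assumption, not code from the thread): pass an explicit, smaller learning rate to Adam and check whether the loss still becomes NaN:

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # smaller step than the default 1e-3
              loss='mse')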
Answer 1:

The reason you get nan in the loss is that your target values are extreme: they range from around e-32 up to e+31. You can see this easily:
df['zg500']
'''
0 -3.996248e-29
1 2.476790e+11
2 -1.010202e+08
3 -1.407987e-02
4 2.240596e-32
...
1742 -1.682389e+11
1743 -4.802401e+00
1744 -3.480795e+31
1745 1.026754e+21
1746 1.790822e+23
Name: zg500, Length: 1739, dtype: float64
'''
The workaround is to scale the targets. Although this is not generally recommended, there is little choice here. Below is a slight modification that uses StandardScaler to scale the target.
from sklearn.preprocessing import StandardScaler

ss = StandardScaler()
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
    dataframe = dataframe.copy()
    labels = ss.fit_transform(dataframe['zg500'].values.reshape(-1, 1))
    ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
    if shuffle:
        ds = ds.shuffle(buffer_size=len(dataframe))
    ds = ds.batch(batch_size)
    return ds
After doing this, here are the results of training the model:
history = model.fit(train_ds, epochs=10)
'''
Epoch 1/10
Consider rewriting this model with the Functional API.
109/109 [==============================] - 1s 804us/step - loss: 27.0520
Epoch 2/10
109/109 [==============================] - 0s 769us/step - loss: 1.0166
Epoch 3/10
109/109 [==============================] - 0s 753us/step - loss: 1.0148
Epoch 4/10
109/109 [==============================] - 0s 779us/step - loss: 1.0115
Epoch 5/10
109/109 [==============================] - 0s 775us/step - loss: 1.0107
Epoch 6/10
109/109 [==============================] - 0s 915us/step - loss: 1.0107
Epoch 7/10
109/109 [==============================] - 0s 1ms/step - loss: 1.0034
Epoch 8/10
109/109 [==============================] - 0s 784us/step - loss: 1.0092
Epoch 9/10
109/109 [==============================] - 0s 735us/step - loss: 1.0151
Epoch 10/10
109/109 [==============================] - 0s 803us/step - loss: 1.0105
'''
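If predictions are needed back in the original zg500 units, the same scaler can undo the transformation (a usage sketch, not part of the original answer):

preds_scaled = model.predict(train_ds)       # predictions in standardized space
preds = ss.inverse_transform(preds_scaled)   # back to the original zg500 scale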
Discussion:
Any thoughts on my last question? Using the df dataframe instead of train_ds? Thanks! (upvoted)
@George I'm not quite sure. I'll try to get it working anyway, and if I do, I'll update my answer.
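One possible explanation for that last error (a sketch inferred from the error message, not confirmed in the thread): because the first layer is DenseFeatures, model.fit expects a dictionary mapping column names to arrays rather than a plain 2-D array, so something along these lines may work:

# Hypothetical fix: feed the named features as a dict instead of df.values.
features = {name: df[name].values for name in ['tas', 'lats', 'lons']}
labels = df['zg500'].values   # may still need the StandardScaler from the answer to avoid nan loss
history = model.fit(features, labels, epochs=2)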