Learning Keras by Example

Posted by zhuo木鸟

When a CNN has two or more hidden layers, ReLU is usually the best choice of activation function. (by zhuo木鸟)
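
As a minimal sketch of that rule of thumb (not from the original article; the layer sizes, the (28,28,1) input shape, and the 10-class output are assumptions for illustration), a small CNN whose two hidden layers both use ReLU might look like:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# hypothetical two-hidden-layer CNN; both hidden layers use ReLU
cnn = Sequential()
cnn.add(Conv2D(filters=16, kernel_size=(3,3), activation='relu',
               input_shape=(28,28,1)))   # assumed grayscale image input
cnn.add(MaxPooling2D(pool_size=(2,2)))
cnn.add(Conv2D(filters=32, kernel_size=(3,3), activation='relu'))
cnn.add(Flatten())
cnn.add(Dense(units=10, activation='softmax'))   # assumed 10 classes
cnn.compile(loss='categorical_crossentropy', optimizer='rmsprop')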

Grid search for hyperparameters (number of layers, units per layer, compilation options, etc.)

Take a neural network on the Iris dataset as an example:

from sklearn.datasets import load_iris
import numpy as np
from sklearn.metrics import make_scorer,f1_score,accuracy_score
from sklearn.linear_model import LogisticRegression
from keras.models import Sequential,model_from_json,model_from_yaml
from keras.layers import Dense
from keras.utils import to_categorical   # one-hot encoding; not used here
from keras.callbacks import ModelCheckpoint
from keras.wrappers.scikit_learn import KerasClassifier
# import matplotlib.pyplot as plt
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import KFold
from sklearn.preprocessing import LabelEncoder   # import the LabelEncoder class
le = LabelEncoder()   # instantiate a LabelEncoder
X,y = load_iris(return_X_y=True)    # load the dataset
y = le.fit_transform(y)   # encode the labels
span = list(set(y))    # the range of encoded label values
le.inverse_transform(span)    # inverse-transform to see what each number means

seed = 7
np.random.seed(seed)

def create_ann(units_list=[10],optimizer='rmsprop',init='glorot_uniform'):
    ann = Sequential()
    units=units_list[0]
    ann.add(Dense(units=units,activation='relu',
                    input_shape=(4,),kernel_initializer=init))
    for units in units_list[1:]:
        ann.add(Dense(units=units,activation='relu',
                        kernel_initializer=init))
    ann.add(Dense(units=3,activation='softmax',kernel_initializer=init))   # softmax (not sigmoid) for multi-class output
    ann.compile(loss='categorical_crossentropy',optimizer=optimizer)
    
    return ann

# ann = create_ann()
# ann.fit(X,y,batch_size=5,epochs=100,verbose=0)

model = KerasClassifier(build_fn=create_ann,epochs=100,batch_size=5,
                        verbose=0)    # note: the fit parameters are passed in here as well
# If the model is grid-searched below, though, those settings are unnecessary; this is enough:
# model = KerasClassifier(build_fn=create_ann,verbose=0)
grid = {}
grid['units_list']=[[10],[4,6],[3,4,3]]
grid['optimizer']=['rmsprop','adam']
grid['init']=['glorot_uniform','normal']
grid['epochs']=[50,25]
grid['batch_size']=[5,3]

kfold = KFold(n_splits=5,shuffle=True,random_state=seed)
scorer = make_scorer(f1_score,average='macro')  # couldn't get this to work, not sure why...
# acc_scorer = make_scorer(accuracy_score)
grid_search = GridSearchCV(estimator=model,param_grid=grid,scoring=scorer,cv=kfold)
results = grid_search.fit(X,y)

print('Best:%f using %s'%(results.best_score_,results.best_params_))

means = results.cv_results_['mean_test_score']
stds = results.cv_results_['std_test_score']
params = results.cv_results_['params']
for mean,std,param in zip(means,stds,params):
    print('%f(+-%f) with: %r'%(mean,std,param))

Model selection: cross-validation

Suppose we have two models: the neural network above and a logistic regression. How do we pick the better one?

To select a model we first have to evaluate it, and cross-validation is one way to do that:

from sklearn.linear_model import LogisticRegression 
from sklearn.model_selection import cross_val_score
ann = KerasClassifier(build_fn=create_ann,epochs=100,batch_size=5,
                        units_list=[4,6],init='normal',
                        verbose=0)

lg = LogisticRegression(penalty='none')
S_lg_i = cross_val_score(lg,X,y,scoring=scorer,cv=kfold)    # per-fold scores for logistic regression
S_ann_i = cross_val_score(ann,X,y,scoring=scorer,cv=kfold)    # per-fold scores for the ANN
print('Logistic regression baseline f1_score: %.2f (+-%.2f)'%(S_lg_i.mean(),S_lg_i.std()))
print('Neural network baseline f1_score: %.2f (+-%.2f)'%(S_ann_i.mean(),S_ann_i.std()))

Regularization: Dropout and the max-norm constraint

from keras.layers import Dropout
from keras.constraints import maxnorm

def create_ann(optimizer='rmsprop',init='glorot_uniform'):
    ann = Sequential()
    ann.add(Dense(units=10,activation='relu',
                    input_shape=(4,),kernel_initializer=init))
    ann.add(Dropout(rate=0.2))
    ann.add(Dense(units=6,activation='relu',
                        kernel_initializer=init))
    ann.add(Dropout(rate=0.5))   # a rate in the 0.2-0.5 range works best
    ann.add(Dense(units=6,activation='relu',
                        kernel_initializer=init,
                        kernel_constraint=maxnorm(3)))
    ann.add(Dense(units=3,activation='softmax',kernel_initializer=init))   # softmax for multi-class output
    ann.compile(loss='categorical_crossentropy',optimizer=optimizer)
    
    return ann

Learning-rate schedules

Linear decay

The decay function:
$lr_{k+1} = lr_k \times \frac{1}{1 + decay \times epochs}$

from keras.optimizers import SGD
learningRate = 0.1   # large initial learning rate
momentum = 0.9   # large momentum, 0.9-0.99
decay_rate = 0.005
sgd = SGD(lr=learningRate,momentum=momentum,decay=decay_rate,nesterov=False)
model = KerasClassifier(build_fn=create_ann,epochs=100,batch_size=5,optimizer=sgd,
                        verbose=0)    

Exponential decay

$lr = lr_0 \times dropRate^{\lfloor (1 + epoch) / epochDrops \rfloor}$

from keras.callbacks import LearningRateScheduler
from math import pow,floor
def step_decay(epoch):
	init_lrate = 0.1
	dropRate = 0.5
	epochDrops = 10
	lrate = init_lrate*pow(dropRate,floor((1+epoch)/epochDrops))
	return lrate
	
learningRate = 0.1   # large initial learning rate
momentum = 0.9   # large momentum, 0.9-0.99
decay_rate = 0
sgd = SGD(lr=learningRate,momentum=momentum,decay=decay_rate,nesterov=False)
lrate = LearningRateScheduler(step_decay)
model = KerasClassifier(build_fn=create_ann,epochs=100,batch_size=5,optimizer=sgd,
                        verbose=0,callbacks=[lrate])  

Visualizing the results

from sklearn.datasets import load_iris
import numpy as np
from sklearn.linear_model import LogisticRegression
X,y = load_iris(return_X_y=True)    # load the dataset
from keras.models import Sequential,model_from_json,model_from_yaml
from keras.layers import Dense
from keras.utils import to_categorical
from keras.callbacks import ModelCheckpoint
import matplotlib.pyplot as plt

y = to_categorical(y,num_classes=3)

seed = 7
np.random.seed(seed)

def create_model(optimizer='rmsprop',init='glorot_uniform'):
    model = Sequential()
    model.add(Dense(units=4,activation='relu',input_dim=4,kernel_initializer=init))
    model.add(Dense(units=6,activation='relu',kernel_initializer=init))
    model.add(Dense(units=3,activation='softmax',kernel_initializer=init))   # softmax for multi-class output
    model.compile(loss='categorical_crossentropy',optimizer=optimizer,metrics=['acc'])
    
    return model

model = create_model()

filepath = 'weights-best.h5'
checkpoint = ModelCheckpoint(filepath=filepath,monitor='val_acc',
                              verbose=1,save_best_only=True,mode='max')
callback_list=[checkpoint]

history = model.fit(X,y,epochs=50,batch_size=5,verbose=0,callbacks=callback_list,validation_split=0.3)
print(history.history.keys())

font1 = {'family' : 'Times New Roman',
         'weight' : 'normal',
         'size'   : 20,
         }

plt.rcParams['font.sans-serif']=['SimHei']
plt.rcParams['axes.unicode_minus'] = False


plt.figure(figsize=(12,4))

plt.subplots_adjust(left=0.125, bottom=None, right=0.9, top=None,
                wspace=0.3, hspace=None)
plt.subplot(1,2,1)
plt.plot(history.history['loss'],linewidth=3,label='Train')
plt.plot(history.history['val_loss'],linewidth=3,linestyle='dashed',label='Test')
plt.xlabel('Epoch',fontsize=20)
plt.ylabel('loss',fontsize=20)
plt.legend(prop=font1)

plt.subplot(1,2,2)
plt.plot(history.history['acc'],linewidth=3,label='Train')
plt.plot(history.history['val_acc'],linewidth=3,linestyle='dashed',label='Test')
plt.xlabel('Epoch',fontsize=20)
plt.ylabel('Acc',fontsize=20)
plt.legend(prop=font1)

Two ways to train a model (important)

1. Build the model directly from the create_model function: ann = create_model(), then call ann.fit. When calling fit, set the fit parameters there: validation_data, validation_split, epochs, batch_size, verbose, and so on.

2. Wrap it with KerasClassifier: ann = KerasClassifier(build_fn = create_model, arg*). These arguments can set not only the fit parameters but also the build_fn parameters. The build_fn parameters are mainly compile-time arguments: metrics, loss, and optimizer. Note that metrics cannot be replaced with an sklearn scorer; only built-in Keras metrics such as acc or mse can go there. Of course, build_fn may also take parameters such as units_list to adjust the network topology, or rate and maxnorm to adjust the regularization. Both styles are sketched below.
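
A minimal sketch contrasting the two styles (assuming the create_model function, X, and the one-hot y from the visualization section are in scope; all parameter values are illustrative):

# Style 1: build directly, then call fit yourself, passing the fit
# parameters (validation_split, epochs, batch_size, verbose) there.
# y must already be one-hot encoded to match the categorical loss.
ann = create_model()
ann.fit(X, y, validation_split=0.3, epochs=50, batch_size=5, verbose=0)

# Style 2: wrap with KerasClassifier; the fit parameters (epochs,
# batch_size, verbose) and the build_fn parameters (optimizer, init)
# all go into the constructor, and the wrapper routes each one to the
# right place. The wrapper also accepts integer class labels and
# one-hot encodes them itself when the loss is categorical_crossentropy.
ann = KerasClassifier(build_fn=create_model, optimizer='adam', init='normal',
                      epochs=50, batch_size=5, verbose=0)
ann.fit(X, y)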

Managing model files

Saving as a pickle file

pickle can be used not only to save models (including their weights) but also to save data:

from sklearn.externals import joblib   # in newer scikit-learn: import joblib
joblib.dump(variable_to_save, r'D:\xxx\xxx\data.pkl')

Loading a pickle file:

data = joblib.load(r'D:\xxx\xxx\data.pkl')

Of course, a model can be saved as a pickle file in the same way:

joblib.dump(model_to_save, r'D:\xxx\xxx\model.pkl')

Saving as an HDF5 file

Note that saving as HDF5, JSON, or YAML only works for Keras models; sklearn models cannot be saved this way.

from keras.models import Sequential
from keras.layers import Dense,Activation    # layer classes
from keras.utils import to_categorical    # one-hot encoding
from sklearn.datasets import load_boston
from keras.models import load_model
X,y = load_boston(return_X_y=True)
ANN = Sequential()    # a Sequential model to build the network on
ANN.add(Dense(units=64,activation='relu',input_shape=(len(X[1,:]),)))
ANN.add(Dense(units=1,activation='linear'))    # output layer; units = number of nodes
ANN.compile(optimizer='adam',loss='mse')
# compile the model
ANN.fit(X,y,epochs=100,batch_size=50)
# train the model
ANN.save('model.h5')    # save the model to model.h5
model = load_model('model.h5')    # load model.h5 back into a model

Saving as JSON and YAML

from sklearn.datasets import load_iris
import numpy as np
from sklearn.linear_model import LogisticRegression
X,y = load_iris(return_X_y=True)    # load the dataset
from keras.models import Sequential,model_from_json,model_from_yaml
from keras.layers import Dense
from keras.utils import to_categorical

y = to_categorical(y,num_classes=3)

seed = 7
np.random.seed(seed)

def create_model(optimizer='rmsprop',init='glorot_uniform'):
    model = Sequential()
    model.add(Dense(units=4,activation='relu',input_dim=4,kernel_initializer=init))
    model.add(Dense(units=6,activation='relu',kernel_initializer=init))
    model.add(Dense(units=3,activation='softmax',kernel_initializer=init))   # softmax for multi-class output
    model.compile(loss='categorical_crossentropy',optimizer=optimizer,metrics=['accuracy'])
    
    return model

model = create_model()
model.fit(X,y,epochs=50,batch_size=5,verbose=0)

model_json = model.to_json()
model_yaml = model.to_yaml()
with open('model.json','w') as file:
    file.write(model_json)
with open('model.yaml','w') as file:
    file.write(model_yaml)    
    
model.save_weights('model_weights.h5')

with open('model.json','r') as file:
    model_json_load = file.read()
with open('model.yaml','r') as file:
    model_yaml_load = file.read()
        
model_load1 = model_from_json(model_json_load)
model_load2 = model_from_yaml(model_yaml_load)
model_load1.load_weights('model_weights.h5')
model_load2.load_weights('model_weights.h5')

Saving a model this way stores the network topology as JSON or YAML. Unlike saving to HDF5 or pickle, which stores both the topology and the weights, JSON and YAML store only the model's "shape".

Of course, you can use model.save_weights to save the weights separately as an HDF5 file, and then save the network topology as a JSON or YAML file.

Updating the model

Generally, to keep a model up to date, it needs to be retrained periodically, typically every 1-2 months, or every 3-6 months. The update can be either a full retrain or an incremental update.

Suppose the new data increment is X_increment, y_increment. An incremental update can then be as simple as:

model.compile(loss='categorical_crossentropy',optimizer='rmsprop',metrics=['acc'])
model.fit(X_increment,y_increment,epochs=10,batch_size=5,verbose=2)   # verbose controls the logging output

In other words, an incremental update gives us the chance to recompile the model first (though we can skip that). A full update, by contrast, merges the newly added data into the original training set and retrains the model from scratch, as sketched below.
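
A sketch of the full-update alternative (X_orig and y_orig are hypothetical names for the original training set; the fit parameters are illustrative):

import numpy as np

# full update: merge the increment into the original training set
# and retrain a freshly initialized model on everything
X_full = np.concatenate([X_orig, X_increment], axis=0)
y_full = np.concatenate([y_orig, y_increment], axis=0)
model = create_model()   # rebuild, discarding the old weights
model.fit(X_full, y_full, epochs=50, batch_size=5, verbose=2)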
