Python VGG16 + BatchNormalization
This post presents a Keras implementation of VGG16 with a BatchNormalization layer after every convolutional and fully connected layer, configured with a single sigmoid output for binary classification.
from keras import models
from keras.layers import (BatchNormalization, Conv2D, Dense, Dropout,
                          Flatten, MaxPooling2D)


def VGG_16_BN(input_shape):
    """VGG16-style network with BatchNormalization after every convolutional
    and fully connected layer. Biases are disabled (use_bias=False) because
    the learned offset (beta) of BatchNormalization makes them redundant."""
    model = models.Sequential()

    # Block 1: two 3x3 conv layers with 64 filters, then 2x2 max pooling
    model.add(Conv2D(64, (3, 3), input_shape=input_shape, activation='relu', padding='same', use_bias=False))
    model.add(BatchNormalization())
    model.add(Conv2D(64, (3, 3), activation='relu', padding='same', use_bias=False))
    model.add(BatchNormalization())
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))

    # Block 2: two conv layers with 128 filters
    model.add(Conv2D(128, (3, 3), activation='relu', padding='same', use_bias=False))
    model.add(BatchNormalization())
    model.add(Conv2D(128, (3, 3), activation='relu', padding='same', use_bias=False))
    model.add(BatchNormalization())
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))

    # Block 3: three conv layers with 256 filters
    model.add(Conv2D(256, (3, 3), activation='relu', padding='same', use_bias=False))
    model.add(BatchNormalization())
    model.add(Conv2D(256, (3, 3), activation='relu', padding='same', use_bias=False))
    model.add(BatchNormalization())
    model.add(Conv2D(256, (3, 3), activation='relu', padding='same', use_bias=False))
    model.add(BatchNormalization())
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))

    # Block 4: three conv layers with 512 filters
    model.add(Conv2D(512, (3, 3), activation='relu', padding='same', use_bias=False))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), activation='relu', padding='same', use_bias=False))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), activation='relu', padding='same', use_bias=False))
    model.add(BatchNormalization())
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))

    # Block 5: three more conv layers with 512 filters
    model.add(Conv2D(512, (3, 3), activation='relu', padding='same', use_bias=False))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), activation='relu', padding='same', use_bias=False))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), activation='relu', padding='same', use_bias=False))
    model.add(BatchNormalization())
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))

    # Classifier head: two 4096-unit dense layers with dropout,
    # ending in a single sigmoid unit for binary classification
    model.add(Flatten())
    model.add(Dense(4096, activation='relu', use_bias=False))
    model.add(BatchNormalization())
    model.add(Dropout(0.5))
    model.add(Dense(4096, activation='relu', use_bias=False))
    model.add(BatchNormalization())
    model.add(Dropout(0.5))
    model.add(Dense(1, activation='sigmoid'))
    return model
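A minimal usage sketch follows. The 224x224x3 input shape, the Adam optimizer, and the binary cross-entropy loss are illustrative assumptions, not part of the original post; any input shape that survives the five 2x2 poolings will work.

# Build and compile the model for a binary classification task.
# Assumed values: 224x224 RGB input, Adam optimizer, binary cross-entropy.
model = VGG_16_BN(input_shape=(224, 224, 3))
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.summary()

Note on the layer ordering: this variant applies ReLU inside each conv/dense layer and BatchNormalization afterwards. The other common arrangement is conv, then BatchNormalization, then the activation; both orderings are used in practice.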