TensorFlow 1.3 API Study Notes 1
Posted by 刘二毛
tf.layers.conv2d (convolution layer)
https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/layers/conv2d
conv2d(
inputs,
filters,
kernel_size,
strides=(1, 1),
padding='valid',
data_format='channels_last',
dilation_rate=(1, 1),
activation=None,
use_bias=True,
kernel_initializer=None,
bias_initializer=tf.zeros_initializer(),
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
trainable=True,
name=None,
reuse=None
)
Parameters:
- inputs: Input tensor.
- filters: Integer, the number of filters in the convolution (the dimensionality of the output space), e.g. filters=32. Note that unlike tf.nn.conv2d, this is a plain integer, not a 4-D [filter_height, filter_width, in_channels, out_channels] kernel tensor.
- kernel_size: An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
- strides: An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
- padding: One of "valid" or "same" (case-insensitive).
- data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width).
- dilation_rate: An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1.
- activation: Activation function. Set it to None to maintain a linear activation.
- use_bias: Boolean, whether the layer uses a bias.
- kernel_initializer: An initializer for the convolution kernel.
- bias_initializer: An initializer for the bias vector. If None, no bias will be applied.
- kernel_regularizer: Optional regularizer for the convolution kernel.
- bias_regularizer: Optional regularizer for the bias vector.
- activity_regularizer: Regularizer function for the output.
- trainable: Boolean, if True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
- name: A string, the name of the layer.
- reuse: Boolean, whether to reuse the weights of a previous layer by the same name.
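A minimal usage sketch (my own example, not from the official docs; the 28x28 RGB input shape and the layer settings are assumptions):

import tensorflow as tf

# A batch of 28x28 RGB images
x = tf.placeholder(tf.float32, [None, 28, 28, 3])

# 32 filters of size 5x5; 'same' padding keeps the spatial size at 28x28
conv1 = tf.layers.conv2d(
    inputs=x,
    filters=32,
    kernel_size=[5, 5],
    strides=(1, 1),
    padding='same',
    activation=tf.nn.relu)

print(conv1.shape)  # (?, 28, 28, 32)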
tf.layers.max_pooling2d
https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/layers/max_pooling2d
max_pooling2d(
inputs,
pool_size,
strides,
padding='valid',
data_format='channels_last',
name=None
)
Parameters:
- inputs: The tensor to pool over.
- pool_size: The size of the pooling window, (pool_height, pool_width), e.g. [3, 3]. If height and width are equal, it can also be a single integer, e.g. pool_size=3.
- strides: The pooling stride. Can be a list of 2 integers such as [1, 1], or a single integer such as strides=2.
- padding: Edge padding, one of 'same' or 'valid'. Defaults to 'valid'.
- data_format: Input data format, defaults to channels_last, i.e. (batch, height, width, channels); can also be set to channels_first, corresponding to (batch, channels, height, width).
- name: The name of the layer.
Example:
pool1 = tf.layers.max_pooling2d(inputs=x, pool_size=[2, 2], strides=1)
# tf.layers.average_pooling2d is the average-pooling counterpart
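A quick sketch of how the padding mode affects the output shape (my own example; the 28x28 input is an assumption):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 3])

# 'valid' drops the incomplete window: floor((28 - 3) / 2) + 1 = 13
p_valid = tf.layers.max_pooling2d(x, pool_size=3, strides=2, padding='valid')
# 'same' pads the edges: ceil(28 / 2) = 14
p_same = tf.layers.max_pooling2d(x, pool_size=3, strides=2, padding='same')

print(p_valid.shape)  # (?, 13, 13, 3)
print(p_same.shape)   # (?, 14, 14, 3)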
tf.layers.dense (fully connected layer)
https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/layers/dense
dense(
inputs,
units,
activation=None,
use_bias=True,
kernel_initializer=None,
bias_initializer=tf.zeros_initializer(),
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
trainable=True,
name=None,
reuse=None
)
Parameters:
- inputs: Input data, typically a 2-D tensor of shape (batch, features).
- units: The number of neurons (output units) in this layer.
- activation: Activation function.
- use_bias: Boolean, whether to use a bias term.
- kernel_initializer: Initializer for the weight matrix (for a dense layer, the "kernel" is the weight matrix, not a convolution kernel).
- bias_initializer: Initializer for the bias vector; defaults to zeros.
- kernel_regularizer: Optional regularizer for the weight matrix.
- bias_regularizer: Optional regularizer for the bias vector.
- activity_regularizer: Regularizer function for the output.
- trainable: Boolean, whether the layer's parameters are trained. If True, the variables are added to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
- name: The name of the layer.
- reuse: Boolean, whether to reuse the weights of a previous layer by the same name.
The fully connected layer computes outputs = activation(matmul(inputs, kernel) + bias).
If you do not want an activation applied to the result, set activation=None.
Example:
# two fully connected layers
dense1 = tf.layers.dense(inputs=pool3, units=512, activation=tf.nn.relu)
dense2 = tf.layers.dense(inputs=dense1, units=512, activation=tf.nn.relu)
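Putting the three layers together, a minimal end-to-end sketch (my own example; the input size, filter counts, and unit counts are arbitrary choices, e.g. an MNIST-sized input):

import tensorflow as tf

# conv -> pool -> conv -> pool -> flatten -> dense -> logits
x = tf.placeholder(tf.float32, [None, 28, 28, 1])
conv1 = tf.layers.conv2d(x, filters=32, kernel_size=5, padding='same',
                         activation=tf.nn.relu)                  # (?, 28, 28, 32)
pool1 = tf.layers.max_pooling2d(conv1, pool_size=2, strides=2)   # (?, 14, 14, 32)
conv2 = tf.layers.conv2d(pool1, filters=64, kernel_size=5, padding='same',
                         activation=tf.nn.relu)                  # (?, 14, 14, 64)
pool2 = tf.layers.max_pooling2d(conv2, pool_size=2, strides=2)   # (?, 7, 7, 64)
flat = tf.reshape(pool2, [-1, 7 * 7 * 64])                       # flatten before the dense layer
dense1 = tf.layers.dense(flat, units=512, activation=tf.nn.relu)
logits = tf.layers.dense(dense1, units=10, activation=None)      # linear output for 10 classes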