TensorFlow Convolution/Deconvolution and Pooling/Unpooling Operations Explained
Posted by ranjiewen
- Please see the answer below for a detailed, working example of how tf.nn.conv2d_backprop_input and tf.nn.conv2d_backprop_filter are used.
In tf.nn, there are 4 closely related 2D convolution functions:
- tf.nn.conv2d
- tf.nn.conv2d_backprop_filter
- tf.nn.conv2d_backprop_input
- tf.nn.conv2d_transpose
Given out = conv2d(x, w) and the output gradient d_out:
- Use tf.nn.conv2d_backprop_filter to compute the filter gradient d_w
- Use tf.nn.conv2d_backprop_input to compute the input gradient d_x
- tf.nn.conv2d_backprop_input can be implemented by tf.nn.conv2d_transpose
- All 4 functions above can be implemented by tf.nn.conv2d
- In practice, using TF's autodiff is the fastest way to compute gradients
Long Answer
Now, let's give an actual working code example of how to use the 4 functions above to compute d_x and d_w given d_out. This shows how conv2d, conv2d_backprop_filter, conv2d_backprop_input, and conv2d_transpose are related to each other. Please find the full scripts here.
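The snippets below are excerpted from the full scripts and rely on some setup. A minimal sketch of one possible setup is shown here, written against the TF 1.x API used in this post; the shapes, the placeholder-based construction, and the scalar loss f are illustrative assumptions, not the original scripts.

import tensorflow as tf

# Illustrative shapes (assumed): NHWC input, HWIO filter
x_shape = [1, 5, 5, 1]
w_shape = [3, 3, 1, 1]
w_size = 3                      # spatial size of the (square) filter
strides = [1, 1, 1, 1]

x = tf.placeholder(tf.float32, shape=x_shape)
w = tf.placeholder(tf.float32, shape=w_shape)

out = tf.nn.conv2d(input=x, filter=w, strides=strides, padding='VALID')
f = tf.reduce_sum(out)          # a scalar loss so tf.gradients is well-defined
d_out = tf.gradients(f, out)[0] # upstream gradient flowing into conv2d

With this setup, d_out has spatial size 3x3, and all of the d_x expressions below produce a tensor of shape x_shape.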
Computing d_x in 4 different ways:
# Method 1: TF's autodiff
d_x = tf.gradients(f, x)[0]

# Method 2: manually using conv2d
d_x_manual = tf.nn.conv2d(input=tf_pad_to_full_conv2d(d_out, w_size),
                          filter=tf_rot180(w),
                          strides=strides,
                          padding='VALID')

# Method 3: conv2d_backprop_input
d_x_backprop_input = tf.nn.conv2d_backprop_input(input_sizes=x_shape,
                                                 filter=w,
                                                 out_backprop=d_out,
                                                 strides=strides,
                                                 padding='VALID')

# Method 4: conv2d_transpose
d_x_transpose = tf.nn.conv2d_transpose(value=d_out,
                                       filter=w,
                                       output_shape=x_shape,
                                       strides=strides,
                                       padding='VALID')
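The helpers tf_rot180 and tf_pad_to_full_conv2d used in Method 2 are defined in the full scripts. A plausible sketch of them, written as an assumption for a square w_size x w_size filter with a single input and output channel, is:

def tf_rot180(w):
    # Rotate an HWIO filter by 180 degrees in its spatial (H, W) dimensions
    return tf.reverse(w, axis=[0, 1])

def tf_pad_to_full_conv2d(d_out, w_size):
    # Zero-pad d_out spatially by (w_size - 1) on each side so that a VALID
    # convolution with the rotated filter yields the "full" correlation,
    # i.e. an output with the same spatial size as x
    return tf.pad(d_out, [[0, 0],
                          [w_size - 1, w_size - 1],
                          [w_size - 1, w_size - 1],
                          [0, 0]])

With multiple input/output channels, the filter's in/out channel axes would also need to be swapped; the full scripts are the authoritative reference.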
Computing d_w in 3 different ways:
# Method 1: TF's autodiff
d_w = tf.gradients(f, w)[0]

# Method 2: manually using conv2d
d_w_manual = tf_NHWC_to_HWIO(tf.nn.conv2d(input=x,
                                          filter=tf_NHWC_to_HWIO(d_out),
                                          strides=strides,
                                          padding='VALID'))

# Method 3: conv2d_backprop_filter
d_w_backprop_filter = tf.nn.conv2d_backprop_filter(input=x,
                                                   filter_sizes=w_shape,
                                                   out_backprop=d_out,
                                                   strides=strides,
                                                   padding='VALID')
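The remaining helper, tf_NHWC_to_HWIO, is just an axis permutation; a sketch (again an assumption, not the original code) is:

def tf_NHWC_to_HWIO(t):
    # Permute a [N, H, W, C] tensor into filter layout [H, W, I, O],
    # mapping N -> I and C -> O
    return tf.transpose(t, perm=[1, 2, 0, 3])

Applying it to d_out lets d_out play the role of the filter in Method 2's conv2d, and applying it again to the result restores the HWIO filter layout expected for d_w.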
Please see the full scripts for the actual implementations of tf_rot180, tf_pad_to_full_conv2d, and tf_NHWC_to_HWIO. In the scripts, we check that the final output values of the different methods are the same; a numpy implementation is also available.
- Section 14: Deconvolution and unpooling operations in TensorFlow, and the use of gradients
- http://www.cnblogs.com/pinard/p/6494810.html : Backpropagation in convolutional neural networks (CNN)
- http://blog.csdn.net/yunpiao123456/article/details/52437794