A Summary of Convolutional Neural Networks

Posted by mazinkaiser1991

1) ResNet

(1) Deeper Bottleneck Architectures

The point of the bottleneck design is to reduce the number of parameters and therefore the training time. Note that the bottleneck block is only used in ResNet-50, ResNet-101 and ResNet-152; ResNet-18 and ResNet-34 still use the basic block built from two stacked 3x3 convolutions.
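As a rough illustration of the saving (back-of-the-envelope numbers of my own, assuming 256 input/output channels, a 64-channel bottleneck, and counting conv weights only, ignoring biases and BN parameters):

# Rough weight count of one residual block on 256-channel feature maps:
basic      = 2 * (3 * 3 * 256 * 256)            # two stacked 3x3 convs -> 1,179,648 weights
bottleneck = 256*64 + 3*3*64*64 + 64*256        # 1x1 -> 3x3 -> 1x1     -> 69,632 weights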

(2) When the channel count changes from one block to the next, the shortcut uses a convolution with stride 2; otherwise the input is added to the output directly.

Take the Keras ResNet-50 implementation as an example:

# backend and layers are injected at runtime in keras_applications;
# with tf.keras they can be imported directly:
from tensorflow.keras import backend, layers


def identity_block(input_tensor, kernel_size, filters, stage, block):
    """The identity block is the block that has no conv layer at shortcut.

    # Arguments
        input_tensor: input tensor
        kernel_size: default 3, the kernel size of
            middle conv layer at main path
        filters: list of integers, the filters of 3 conv layer at main path
        stage: integer, current stage label, used for generating layer names
        block: 'a','b'..., current block label, used for generating layer names

    # Returns
        Output tensor for the block.
    """
    filters1, filters2, filters3 = filters
    if backend.image_data_format() == 'channels_last':
        bn_axis = 3
    else:
        bn_axis = 1
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    x = layers.Conv2D(filters1, (1, 1),
                      name=conv_name_base + '2a')(input_tensor)
    x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2a')(x)
    x = layers.Activation('relu')(x)

    x = layers.Conv2D(filters2, kernel_size,
                      padding='same', name=conv_name_base + '2b')(x)
    x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2b')(x)
    x = layers.Activation('relu')(x)

    x = layers.Conv2D(filters3, (1, 1), name=conv_name_base + '2c')(x)
    x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2c')(x)

    x = layers.add([x, input_tensor])
    x = layers.Activation('relu')(x)
    return x

identity_block is the residual block whose input and output dimensions match, so at the end the main-path output is added directly to input_tensor.

def conv_block(input_tensor,
               kernel_size,
               filters,
               stage,
               block,
               strides=(2, 2)):
    """A block that has a conv layer at shortcut.

    # Arguments
        input_tensor: input tensor
        kernel_size: default 3, the kernel size of
            middle conv layer at main path
        filters: list of integers, the filters of 3 conv layer at main path
        stage: integer, current stage label, used for generating layer names
        block: 'a','b'..., current block label, used for generating layer names
        strides: Strides for the first conv layer in the block.

    # Returns
        Output tensor for the block.

    Note that from stage 3,
    the first conv layer at main path is with strides=(2, 2)
    And the shortcut should have strides=(2, 2) as well
    """
    filters1, filters2, filters3 = filters
    if backend.image_data_format() == 'channels_last':
        bn_axis = 3
    else:
        bn_axis = 1
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    x = layers.Conv2D(filters1, (1, 1), strides=strides,
                      name=conv_name_base + '2a')(input_tensor)
    x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2a')(x)
    x = layers.Activation('relu')(x)

    x = layers.Conv2D(filters2, kernel_size, padding='same',
                      name=conv_name_base + '2b')(x)
    x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2b')(x)
    x = layers.Activation('relu')(x)

    x = layers.Conv2D(filters3, (1, 1), name=conv_name_base + '2c')(x)
    x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2c')(x)

    shortcut = layers.Conv2D(filters3, (1, 1), strides=strides,
                             name=conv_name_base + '1')(input_tensor)
    shortcut = layers.BatchNormalization(
        axis=bn_axis, name=bn_name_base + '1')(shortcut)

    x = layers.add([x, shortcut])
    x = layers.Activation('relu')(x)
    return x

As the code shows, the shortcut in conv_block is a 1x1 convolution with a stride of 2 (by default) in both the h and w directions, and the block likewise ends with an addition.
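To see how the two blocks fit together: in the Keras ResNet-50, each stage starts with one conv_block and is followed by several identity_blocks. The stage-2/stage-3 portion looks like this (x being the output of the 7x7 stem convolution and max pooling):

# Stage 2: strides=(1, 1), so the spatial size is unchanged.
x = conv_block(x, 3, [64, 64, 256], stage=2, block='a', strides=(1, 1))
x = identity_block(x, 3, [64, 64, 256], stage=2, block='b')
x = identity_block(x, 3, [64, 64, 256], stage=2, block='c')

# Stage 3: the leading conv_block keeps its default strides=(2, 2) and halves h and w.
x = conv_block(x, 3, [128, 128, 512], stage=3, block='a')
x = identity_block(x, 3, [128, 128, 512], stage=3, block='b')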

2) MobileNet

(1) V1

MobileNet V1 factorizes a standard convolution into a depthwise convolution followed by a pointwise (1x1) convolution.
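The saving is easy to quantify: for a 3x3 kernel with M input channels and N output channels, a standard convolution costs 3*3*M*N multiply-adds per output position, while the depthwise + pointwise pair costs 3*3*M + M*N, i.e. roughly 1/N + 1/9 of the original. A quick check with illustrative numbers of my own:

M, N = 128, 128                    # input / output channels (example values)
standard  = 3 * 3 * M * N          # standard 3x3 convolution
separable = 3 * 3 * M + M * N      # depthwise 3x3 followed by pointwise 1x1
print(separable / standard)        # ~0.12, close to 1/N + 1/9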

Again, taking the Keras MobileNet implementation as an example:

def relu6(x):
    # ReLU capped at 6; defined like this in the original keras_applications MobileNet.
    return backend.relu(x, max_value=6)


def _depthwise_conv_block(inputs, pointwise_conv_filters, alpha,
                          depth_multiplier=1, strides=(1, 1), block_id=1):
    channel_axis = 1 if backend.image_data_format() == 'channels_first' else -1
    pointwise_conv_filters = int(pointwise_conv_filters * alpha)

    x = layers.ZeroPadding2D((1, 1), name='conv_pad_%d' % block_id)(inputs)
    x = layers.DepthwiseConv2D((3, 3),
                               padding='valid',
                               depth_multiplier=depth_multiplier,
                               strides=strides,
                               use_bias=False,
                               name='conv_dw_%d' % block_id)(x)
    x = layers.BatchNormalization(
        axis=channel_axis, name='conv_dw_%d_bn' % block_id)(x)
    x = layers.Activation(relu6, name='conv_dw_%d_relu' % block_id)(x)

    x = layers.Conv2D(pointwise_conv_filters, (1, 1),
                      padding='same',
                      use_bias=False,
                      strides=(1, 1),
                      name='conv_pw_%d' % block_id)(x)
    x = layers.BatchNormalization(axis=channel_axis,
                                  name='conv_pw_%d_bn' % block_id)(x)
    return layers.Activation(relu6, name='conv_pw_%d_relu' % block_id)(x)

The flow is straightforward: a depthwise convolution followed by a 1x1 (pointwise) convolution, each with batch normalization and ReLU6.

(2) V2

a) Introduces residual (shortcut) connections.

b) Uses a 1x1 convolution to expand the channel dimension before the depthwise convolution, giving the inverted residual block.

c) After the final pointwise (projection) convolution, the ReLU activation is replaced with a linear activation, so that ReLU does not destroy information in the low-dimensional features (the full block is sketched below).
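Putting a)-c) together, here is a minimal sketch of an inverted residual block (illustrative code of my own, assuming channels_last data and the same backend/layers imports as above; the official keras_applications implementation differs in details such as padding and naming):

def _inverted_res_block(inputs, filters, stride, expansion):
    in_channels = backend.int_shape(inputs)[-1]

    # b) Expand: 1x1 conv raises the channel count by the expansion factor.
    x = layers.Conv2D(expansion * in_channels, (1, 1),
                      padding='same', use_bias=False)(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU(6.)(x)

    # Depthwise 3x3 conv in the expanded (high-dimensional) space.
    x = layers.DepthwiseConv2D((3, 3), strides=stride,
                               padding='same', use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU(6.)(x)

    # c) Project: 1x1 conv back down to 'filters' channels with a linear activation.
    x = layers.Conv2D(filters, (1, 1), padding='same', use_bias=False)(x)
    x = layers.BatchNormalization()(x)

    # a) Residual connection, only when input and output shapes match.
    if stride == 1 and in_channels == filters:
        x = layers.add([x, inputs])
    return x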

3) Inception

Inception V1: builds a module with parallel 1x1, 3x3 and 5x5 convolution branches plus a 3x3 pooling branch, and uses MLPConv (1x1 convolutions) together with global average pooling; widening the network this way improves its adaptability to different scales (see the sketch after this list).

Inception V2: introduces Batch Normalization as a replacement for Dropout and LRN; its regularizing effect makes large convolutional networks train many times faster and also clearly improves the converged classification accuracy. Following VGG, it replaces each 5x5 convolution with two stacked 3x3 convolutions, cutting the parameter count while increasing the network's capacity.

Inception V3: introduces factorization, splitting a larger 2-D convolution into two smaller 1-D convolutions, e.g. a 3x3 convolution into a 1x3 followed by a 3x1. This saves a large number of parameters, speeds up computation and reduces overfitting, while the extra layer of non-linearity increases the model's expressive power. Besides the branches inside each Inception module, branches are also used within branches (Network In Network In Network).

Inception V4: studies combining the Inception module with residual connections; the ResNet-style shortcuts greatly speed up training and improve performance. Alongside the Inception-ResNet networks, a deeper and more refined pure Inception v4 model was designed that reaches comparable performance.
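As a minimal sketch of the V1-style module described above (illustrative function name and filter arguments of my own, assuming the same layers import and channels_last data; not the official GoogLeNet code):

def inception_module(x, f1, f3_reduce, f3, f5_reduce, f5, f_pool):
    # Four parallel branches over the same input, concatenated along the channel axis.
    branch1 = layers.Conv2D(f1, (1, 1), padding='same', activation='relu')(x)

    branch3 = layers.Conv2D(f3_reduce, (1, 1), padding='same', activation='relu')(x)
    branch3 = layers.Conv2D(f3, (3, 3), padding='same', activation='relu')(branch3)

    branch5 = layers.Conv2D(f5_reduce, (1, 1), padding='same', activation='relu')(x)
    branch5 = layers.Conv2D(f5, (5, 5), padding='same', activation='relu')(branch5)

    pool = layers.MaxPooling2D((3, 3), strides=(1, 1), padding='same')(x)
    pool = layers.Conv2D(f_pool, (1, 1), padding='same', activation='relu')(pool)

    return layers.concatenate([branch1, branch3, branch5, pool], axis=-1)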
