ND Convolution Backpropagation


Posted: 2020-05-27 16:08:34

Question:

For my own education, I am trying to implement an N-dimensional convolutional layer for a convolutional neural network.

I would like to implement a backpropagation function, but I am not sure of the most efficient way to do it.

Currently, I am using signal.fftconvolve to:

in the forward step, convolve the filter and the kernel forward over all filters;

in the backpropagation step, convolve the derivatives (reversed in all dimensions using the FlipAllAxes function) with the array (https://jefkine.com/general/2016/09/05/backpropagation-in-convolutional-neural-networks/) over all filters and sum them. I take the output to be the sum of each image convolved with each derivative for each filter.

I am especially confused about how to convolve the derivatives. Backpropagating with the class below causes an explosive growth in the magnitude of the weights.

What is the correct way to program the convolution of the derivatives with the output and the filters?

Edit:

According to this paper (Fast Training of Convolutional Networks through FFTs), which tries to do what I want:

The derivative of the previous layer is given by the convolution of the derivative of the current layer with the weights:

dL/dy_f = dL/dx * w_f^T

The derivative of the weights is the piecewise sum of the convolutions of the derivative with the original input:

dL/dy = dL/dx * x

As far as I can tell, I have implemented this below. However, it does not seem to give the expected result: a network trained with this layer exhibits wild fluctuations during training.
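Before diving into the class, the two identities can be sanity-checked in 1D with plain NumPy/SciPy (this sketch is mine, not from the paper): for a "valid" forward convolution, dL/dx is the "full" convolution of the upstream gradient with the flipped kernel, and dL/dw is the flipped "valid" cross-correlation of the input with the upstream gradient. A finite-difference check confirms the input gradient.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
x = rng.standard_normal(8)          # input
w = rng.standard_normal(3)          # kernel
g = rng.standard_normal(8 - 3 + 1)  # upstream gradient dL/dy

# Forward: y = conv(x, w) in "valid" mode
y = signal.fftconvolve(x, w, mode="valid")

# dL/dx: full convolution of the upstream gradient with the flipped kernel
dx = signal.fftconvolve(g, w[::-1], mode="full")
# dL/dw: valid cross-correlation of input with upstream gradient, then flip
dw = signal.correlate(x, g, mode="valid")[::-1]

# Finite-difference check of dL/dx for the scalar loss L = g . y
eps = 1e-6
dx_num = np.zeros_like(x)
for i in range(x.size):
    xp, xm = x.copy(), x.copy()
    xp[i] += eps
    xm[i] -= eps
    dx_num[i] = (g @ signal.fftconvolve(xp, w, "valid")
                 - g @ signal.fftconvolve(xm, w, "valid")) / (2 * eps)
print(np.allclose(dx, dx_num, atol=1e-5))  # True
```

The flips appear because fftconvolve is a true convolution (it already reverses its second argument), while the weight gradient is naturally a correlation.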

    import numbers

    import numpy as np
    from scipy import signal
    import cupy as cp
    from cupy.lib.stride_tricks import as_strided

    class ConvNDLayer:
        def __init__(self,channels, kernel_size, dim):

            self.channels = channels
            self.kernel_size = kernel_size
            self.dim = dim

            self.last_input = None

            self.filt_dims = np.ones(dim+1).astype(int)
            self.filt_dims[1:] =  self.filt_dims[1:]*kernel_size
            self.filt_dims[0]= self.filt_dims[0]*channels 
            self.filters = np.random.randn(*self.filt_dims)/(kernel_size)**dim


        def FlipAllAxes(self, array):

            sl = slice(None,None,-1)
            return array[tuple([sl]*array.ndim)] 

        def ViewAsWindows(self, array, window_shape, step=1):
            # -- basic checks on arguments
            if not isinstance(array, cp.ndarray):
                raise TypeError("`array` must be a Cupy ndarray")
            ndim = array.ndim
            if isinstance(window_shape, numbers.Number):
                window_shape = (window_shape,) * ndim
            if len(window_shape) != ndim:
                raise ValueError("`window_shape` is incompatible with `arr_in.shape`")

            if isinstance(step, numbers.Number):
                if step < 1:
                    raise ValueError("`step` must be >= 1")
                step = (step,) * ndim
            if len(step) != ndim:
                raise ValueError("`step` is incompatible with `arr_in.shape`")

            arr_shape = np.asarray(array.shape)
            window_shape = np.asarray(window_shape, dtype=arr_shape.dtype)

            if ((arr_shape - window_shape) < 0).any():
                raise ValueError("`window_shape` is too large")

            if ((window_shape - 1) < 0).any():
                raise ValueError("`window_shape` is too small")

            # -- build rolling window view
            slices = tuple(slice(None, None, st) for st in step)
            window_strides = array.strides
            indexing_strides = array[slices].strides
            win_indices_shape = ((arr_shape - window_shape)
                                 // np.asarray(step)) + 1

            new_shape = tuple(list(win_indices_shape) + list(window_shape))
            strides = tuple(list(indexing_strides) + list(window_strides))

            arr_out = as_strided(array, shape=new_shape, strides=strides)

            return arr_out

        def UnrollAxis(self, array, axis):
            # This is so it works with a single dimension or a sequence of them
            axis = np.atleast_1d(axis)
            axis2 = np.arange(len(axis))

            # Put the unrolled axes at the beginning
            array = cp.moveaxis(array, axis, axis2)
            # Unroll
            return array.reshape((-1,) + array.shape[len(axis):])

        def Forward(self, array):

            output_shape = np.zeros(array.ndim + 1)
            output_shape[1:] = np.asarray(array.shape)
            output_shape[0] = self.channels
            output_shape = output_shape.astype(int)
            output = cp.zeros(tuple(output_shape))

            self.last_input = array

            for i, kernel in enumerate(self.filters):
                # self.Convolve is the forward convolution helper (not shown here)
                conv = self.Convolve(array, kernel)
                output[i] = conv

            return output


        def Backprop(self, d_L_d_out, learn_rate):

            d_A = cp.zeros_like(self.last_input)
            d_W = cp.zeros_like(self.filters)

            for i, (kernel, d_L_d_out_f) in enumerate(zip(self.filters, d_L_d_out)):

                d_A += signal.fftconvolve(d_L_d_out_f, kernel.T, "same")
                conv = signal.fftconvolve(d_L_d_out_f, self.last_input, "same")
                conv = self.ViewAsWindows(conv, kernel.shape)
                axes = np.arange(kernel.ndim)
                conv = self.UnrollAxis(conv, axes)
                d_W[i] = conv.sum(axis=0)

            output = d_A * learn_rate
            self.filters = self.filters - d_W * learn_rate
            return output
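
For comparison, here is a self-contained NumPy/SciPy sketch (my own, not the questioner's code) of a backward pass that applies the two identities directly: the input gradient is a full convolution with the axis-flipped kernel, and each weight gradient is a flipped valid cross-correlation of the input with that filter's upstream gradient, so no windowed summation is needed.

```python
import numpy as np
from scipy import signal

def flip_all_axes(a):
    # Reverse the array along every axis (same idea as FlipAllAxes above)
    return a[(slice(None, None, -1),) * a.ndim]

def backprop_nd(x, kernels, d_out, learn_rate):
    """x: ND input; kernels: (channels, *k); d_out: (channels, *valid_shape).
    Assumes the forward pass was y[i] = fftconvolve(x, kernels[i], 'valid')."""
    d_x = np.zeros_like(x)
    d_w = np.zeros_like(kernels)
    for i, (w, g) in enumerate(zip(kernels, d_out)):
        # input gradient: full convolution with the flipped kernel
        d_x += signal.fftconvolve(g, flip_all_axes(w), mode="full")
        # weight gradient: valid cross-correlation of input with upstream grad
        d_w[i] = flip_all_axes(signal.correlate(x, g, mode="valid"))
    new_kernels = kernels - learn_rate * d_w
    return d_x, new_kernels
```

In the class above, this replaces the "same"-mode convolutions and the ViewAsWindows/UnrollAxis summation in Backprop; the same calls work on CuPy arrays via cupyx.scipy.signal if GPU execution is wanted.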


Answer 1:

Multiplying the gradient by learn_rate is usually not enough.

For better performance and to reduce the wild fluctuations, the gradient is scaled with an optimizer, using methods such as dividing by a running average of the past few gradients (RMSprop).

The update also depends on the error. If you pass the error for each sample individually, this is usually noisy, so averaging over several samples (a mini-batch) is considered better.
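As a concrete illustration (class name and hyper-parameters are my own, not from any particular library), an RMSprop-style update divides each gradient by a running root-mean-square of recent gradients, which damps exactly the kind of oscillation described above:

```python
import numpy as np

class RMSprop:
    """Minimal RMSprop: scale each step by a running RMS of past gradients."""
    def __init__(self, lr=1e-3, decay=0.9, eps=1e-8):
        self.lr, self.decay, self.eps = lr, decay, eps
        self.cache = None  # running average of squared gradients

    def step(self, params, grad):
        if self.cache is None:
            self.cache = np.zeros_like(grad)
        # Exponential moving average of squared gradients
        self.cache = self.decay * self.cache + (1 - self.decay) * grad ** 2
        # Divide by the RMS so large recent gradients take smaller steps
        return params - self.lr * grad / (np.sqrt(self.cache) + self.eps)

# Example: minimize f(p) = p^2, whose gradient is 2p
opt = RMSprop(lr=0.1)
p = np.array([5.0])
for _ in range(200):
    p = opt.step(p, 2.0 * p)
```

In the Backprop method above, `self.filters = self.filters - d_W*learn_rate` would then become `self.filters = opt.step(self.filters, d_W)`, with d_W averaged over a mini-batch.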

