Simple vanilla RNN doesn't pass gradient check

Posted: 2019-09-18 22:33:19

【Question】:

I recently tried to implement a vanilla RNN from scratch. I implemented everything and even ran an example that seemed to train fine! But then I noticed that the gradient check does not succeed: only some parts (specifically the output weight and bias) pass the gradient check, while the other weights (Whh, Whx) do not.

I followed the karpathy/Coursera implementation and made sure everything was in place. Yet the karpathy/Coursera code passes the gradient check while mine does not, and at this point I have no idea what is causing it!

Here are the snippets responsible for the backward pass in the original code:

def rnn_step_backward(dy, gradients, parameters, x, a, a_prev):
    
    gradients['dWya'] += np.dot(dy, a.T)
    gradients['dby'] += dy
    da = np.dot(parameters['Wya'].T, dy) + gradients['da_next'] # backprop into h
    daraw = (1 - a * a) * da # backprop through tanh nonlinearity
    gradients['db'] += daraw
    gradients['dWax'] += np.dot(daraw, x.T)
    gradients['dWaa'] += np.dot(daraw, a_prev.T)
    gradients['da_next'] = np.dot(parameters['Waa'].T, daraw)
    return gradients
    
def rnn_backward(X, Y, parameters, cache):
    # Initialize gradients as an empty dictionary
    gradients = {}
    
    # Retrieve from cache and parameters
    (y_hat, a, x) = cache
    Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']
    
    # each one should be initialized to zeros of the same dimension as its corresponding parameter
    gradients['dWax'], gradients['dWaa'], gradients['dWya'] = np.zeros_like(Wax), np.zeros_like(Waa), np.zeros_like(Wya)
    gradients['db'], gradients['dby'] = np.zeros_like(b), np.zeros_like(by)
    gradients['da_next'] = np.zeros_like(a[0])
    
    ### START CODE HERE ###
    # Backpropagate through time
    for t in reversed(range(len(X))):
        dy = np.copy(y_hat[t])
        # this means: subtract the correct answer from the predicted value (i.e. subtract 1 at the index specified by Y[t])
        dy[Y[t]] -= 1
        gradients = rnn_step_backward(dy, gradients, parameters, x[t], a[t], a[t-1])
    ### END CODE HERE ###
    
    return gradients, a
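
As an aside on the dy[Y[t]] -= 1 line above: for a softmax output trained with cross-entropy, the gradient with respect to the pre-softmax scores is simply y_hat - y_true, and since y_true is one-hot this amounts to subtracting 1 at the index of the correct class. A tiny stand-alone check of that identity, with made-up numbers that are not part of either codebase:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([1.0, 2.0, 0.5])      # pre-softmax scores for one timestep
y_hat = softmax(z)
target = 1                         # index of the correct class

# analytic gradient: y_hat with 1 subtracted at the target index
dy = y_hat.copy()
dy[target] -= 1

# numerical gradient of the loss -log(softmax(z)[target]) via centered differences
eps = 1e-5
num = np.zeros_like(z)
for i in range(len(z)):
    zp, zm = z.copy(), z.copy()
    zp[i] += eps
    zm[i] -= eps
    num[i] = (-np.log(softmax(zp)[target]) + np.log(softmax(zm)[target])) / (2 * eps)

print(np.allclose(dy, num, atol=1e-6))   # expected: True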

And here is my implementation:

def rnn_cell_backward(self, xt, h, h_prev, output, true_label, dh_next):
    """
        Runs a single backward pass once.
        Inputs:
        - xt: The input data of shape (Batch_size, input_dim_size)
        - h:  The new hidden state at timestep t (which comes from the forward pass)
        - h_prev: The previous hidden state at timestep t-1
        - output : The output at the current timestep
        - true_label: The label for the current timestep, used for calculating loss
        - dh_next: The gradient of the hidden state h (dh), which in the beginning
            is zero and is updated as we go backward through backpropagation.
            The dh for the next round comes from 'dh_prev', as we will see shortly!
            Just remember the backward pass is essentially a loop: we start at the end
            and traverse back to the beginning!

        Returns : 
        - dW1 : The gradient for W1
        - dW2 : The gradient for W2
        - dW3 : The gradient for W3
        - dbh : The gradient for bh
        - dbo : The gradient for bo
        - dh_prev : The gradient for the previous hidden state at timestep t-1. This will be used
        as the dh_next for the next round of backpropagation.
        - per_ts_loss  : The loss for current timestep.
    """
    e = np.copy(output)
    # correct idx for each row(sample)!
    idxs = np.argmax(true_label, axis=1)
    # number of rows(samples) in our batch
    rows = np.arange(e.shape[0])
    # This is the vectorized version of error_t = output_t - label_t or simply e = output[t] - 1
    # where t refers to the index in which label is 1. 
    e[rows, idxs] -= 1
    # This is used for our loss to see how well we are doing during training.
    per_ts_loss = output[rows, idxs].sum()

    # must have shape of W3 which is (vocabsize_or_output_dim_size, hidden_state_size)
    dW3 = np.dot(e.T, h)
    # dbo = e.1, since we have batch we use np.sum
    # e is a vector, when it is subtracted from label, the result will be added to dbo
    dbo = np.sum(e, axis=0)
    # when calculating the dh, we also add the dh from the next timestep as well
    # when we are in the last timestep, the dh_next is initially zero.
    dh = np.dot(e,  self.W3) + dh_next  # from later cell
    # the input part
    dtanh = (1 - h * h) * dh
    # dbh = dtanh.1, we use sum, since we have a batch
    dbh = np.sum(dtanh, axis=0)

    # compute the gradient of the loss with respect to W1
    # this is actually not needed! we only care about tune-able
    # parameters, so we are only after, W1,W2,W3, db and do
    # dxt = np.dot(dtanh, W1.T)

    # must have the shape of (vocab_size, hidden_state_size)
    dW1 = np.dot(xt.T, dtanh)

    # compute the gradient with respect to W2
    dh_prev = np.dot(dtanh, self.W2)
    # shape must be (HiddenSize, HiddenSize)
    dW2 = np.dot(h_prev.T, dtanh)

    return dW1, dW2, dW3, dbh, dbo, dh_prev, per_ts_loss

def rnn_layer_backward(self, Xt, labels, H, O):
    """
        Runs a full backward pass on the given data. and returns the gradients.
        Inputs: 
        - Xt: The input data of shape (Batch_size, timesteps, input_dim_size)
        - labels: The labels for the input data
    - H: The hidden states for the current layer produced in the forward pass
          of shape (Batch_size, timesteps, HiddenStateSize)
        - O: The output for the current layer of shape (Batch_size, timesteps, outputsize)

        Returns :
        - dW1: The gradient for W1
        - dW2: The gradient for W2
        - dW3: The gradient for W3
        - dbh: The gradient for bh
        - dbo: The gradient for bo
        - dh: The gradient for the hidden state at timestep t
        - loss: The current loss 

    """

    dW1 = np.zeros_like(self.W1)
    dW2 = np.zeros_like(self.W2)
    dW3 = np.zeros_like(self.W3)
    dbh = np.zeros_like(self.bh)
    dbo = np.zeros_like(self.bo)
    dh_next = np.zeros_like(H[:, 0, :])
    hprev = None

    _, T_x, _ = Xt.shape
    loss = 0
    for t in reversed(range(T_x)):

        # this if-else block could be removed: for hprev we could simply
        # use H[:, t - 1, :] instead, but I added it in case it makes a
        # difference! So far I have not seen any difference though!
        if t > 0:
            hprev = H[:, t - 1, :]
        else:
            hprev = np.zeros_like(H[:, 0, :])

        dw_1, dw_2, dw_3, db_h, db_o, dh_prev, e = self.rnn_cell_backward(Xt[:, t, :],
                                                                          H[:, t, :],
                                                                          hprev,
                                                                          O[:, t, :],
                                                                          labels[:, t, :],
                                                                          dh_next)
        dh_next = dh_prev
        dW1 += dw_1
        dW2 += dw_2
        dW3 += dw_3
        dbh += db_h
        dbo += db_o

        # Update the loss by subtracting the cross-entropy term of this time-step from it.
        loss -= np.log(e)

    return dW1, dW2, dW3, dbh, dbo, dh_next, loss

I have commented everything and provided a minimal example here to demonstrate the problem:

My code (does not pass the gradient check)

And here is the implementation I used as a guide. It is from karpathy/Coursera and passes all the gradient checks: original code

At this point I have no idea why this doesn't work. I'm a beginner in Python, so this may be why I can't find the problem.
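
For reference, the gradient check in question compares each analytic gradient against centered finite differences of the loss. A minimal sketch of that idea, assuming a loss_fn() closure that runs a full forward pass over the same fixed batch and returns the scalar loss (the names here are illustrative and not taken from either repository):

import numpy as np

def gradient_check(loss_fn, param, analytic_grad, eps=1e-5):
    """Compare an analytic gradient against a centered finite-difference estimate."""
    num_grad = np.zeros_like(param)
    for idx in np.ndindex(param.shape):
        old = param[idx]
        param[idx] = old + eps          # perturb one entry up
        loss_plus = loss_fn()
        param[idx] = old - eps          # perturb it down
        loss_minus = loss_fn()
        param[idx] = old                # restore the original value
        num_grad[idx] = (loss_plus - loss_minus) / (2 * eps)
    # relative error between the numerical and analytic gradients
    denom = np.linalg.norm(num_grad) + np.linalg.norm(analytic_grad)
    return np.linalg.norm(num_grad - analytic_grad) / max(denom, 1e-12)

With float64, a relative error around 1e-7 or smaller is usually considered a pass; running a check like this per parameter (W1, W2, W3, bh, bo) is what shows that only the output weight and bias were correct here.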


【Answer 1】:

Two months later, I think I found the culprit! I had to change the following line:

# compute the gradient with respect to W2
dh_prev = np.dot(dtanh, self.W2)

to this:

# compute the gradient with respect to W2
# note the transpose here!
dh_prev = np.dot(dtanh, self.W2.T)

When I originally wrote the backward pass, I only paid attention to the dimensions, and that is what led me to make this mistake. This is actually an example of the kind of mess that can happen when you reshape or transpose (or fail to!) blindly, without thinking about what the axes mean. To understand what went wrong here, let me give an example. Suppose we have a feature matrix for a group of people, and we assign each row to one person, so our matrix looks like this:

Features    |  Age  | height (cm) | weight (kg) |
matrix =    |   20  |     185     |      75     |
            |   85  |     155     |      95     |
            |   40  |     205     |     120     |

Now, if we turn this into a numpy array, we get the following:

m = np.array([[20, 185, 75],
              [85, 155, 95],
              [40, 205, 120]])

A simple 3x3 array, right? But the way we interpret this matrix matters a lot: every row and every column has a specific meaning. Each person is described by one row, and each column is one specific feature. So you can see there is a "structure" in the matrix we use to represent our data. In other words, each data item is represented as a row and each column specifies one feature. When multiplying by another matrix, this semantics has to be respected: both matrices must agree on what the rows and columns mean. Let's take an example to make this clearer. Suppose we have two matrices:

m1 = np.array([[20, 185, 75],
               [85, 155, 95],
               [40, 205, 120]])

m2 = np.array([[0.9, 0.8, 0.85],
               [0.1, 0.5, 0.4],
               [0.6, 0.9, 0.8]])

Both of these matrices hold their data arranged in rows, so multiplying them gives the correct answer; but changing the order of the data with a transpose breaks that semantics, and we end up multiplying unrelated entries! In my case, I needed to transpose the second matrix to put things in the right order for the operation at hand, and that hopefully fixes the gradient check!
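
To see concretely why a shapes-only check could not catch this: when the hidden state is square (hidden_size x hidden_size), both np.dot(dtanh, W2) and np.dot(dtanh, W2.T) have exactly the same shape, but only the transposed version is what the chain rule gives for the forward step np.dot(h_prev, W2). A small self-contained sketch (the tiny sizes and names below are made up for the demo, not taken from the code above):

import numpy as np

np.random.seed(0)
H = 4                                    # square hidden size, so both candidates have the same shape
h_prev = np.random.randn(1, H)
W2 = np.random.randn(H, H)
g = np.random.randn(1, H)                # stand-in for dtanh, the upstream gradient

def loss(hp):
    # scalar "loss" built from the part of the pre-activation that W2 contributes
    return np.sum(np.dot(hp, W2) * g)

wrong = np.dot(g, W2)                    # dimensions work out when H is square...
right = np.dot(g, W2.T)                  # ...but this is what the chain rule actually gives

# numerical gradient of the loss with respect to h_prev
eps = 1e-6
num = np.zeros_like(h_prev)
for i in range(H):
    hp, hm = h_prev.copy(), h_prev.copy()
    hp[0, i] += eps
    hm[0, i] -= eps
    num[0, i] = (loss(hp) - loss(hm)) / (2 * eps)

print(np.allclose(num, right))           # True
print(np.allclose(num, wrong))           # False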
