Forward and Backward Propagation for a Neural Network in Python

Editor's note: this article, compiled by the cha138.com editors, walks through forward and backward propagation for a two-layer neural network in Python; hopefully it offers some reference value.
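
The snippet below assumes NumPy plus sigmoid, sigmoid_grad, softmax, and gradcheck_naive helpers, which in the original assignment live in separate files. Here is a minimal, self-contained sketch of the first three (the gradient checker is sketched further down); these short versions are stand-ins, not the assignment's own implementations:

import numpy as np
import random


def sigmoid(x):
    # Elementwise logistic function.
    return 1.0 / (1.0 + np.exp(-x))


def sigmoid_grad(s):
    # Derivative of the sigmoid, written in terms of its output s = sigmoid(x);
    # this matches how the network code calls it on the hidden activations.
    return s * (1.0 - s)


def softmax(x):
    # Row-wise softmax, subtracting the row max for numerical stability.
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / np.sum(e, axis=-1, keepdims=True)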

def forward_backward_prop(data, labels, params, dimensions):
    """
    Forward and backward propagation for a two-layer sigmoidal network

    Compute the forward propagation and for the cross entropy cost,
    and backward propagation for the gradients for all parameters.

    Arguments:
    data -- M x Dx matrix, where each row is a training example.
    labels -- M x Dy matrix, where each row is a one-hot vector.
    params -- Model parameters, these are unpacked for you.
    dimensions -- A tuple of input dimension, number of hidden units
                  and output dimension
    """

    ### Unpack network parameters (do not modify)
    ofs = 0
    Dx, H, Dy = dimensions
    num_examples = data.shape[0]

    W1 = np.reshape(params[ofs:ofs + Dx * H], (Dx, H))
    ofs += Dx * H
    b1 = np.reshape(params[ofs:ofs + H], (1, H))
    ofs += H
    W2 = np.reshape(params[ofs:ofs + H * Dy], (H, Dy))
    ofs += H * Dy
    b2 = np.reshape(params[ofs:ofs + Dy], (1, Dy))
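    # For example, with dimensions = [10, 5, 10] as in sanity_check below,
    # params holds (10 + 1) * 5 + (5 + 1) * 10 = 115 values in total.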

    ### YOUR CODE HERE: forward propagation
    hidden = sigmoid(np.dot(data, W1) + b1)        # hidden activations, M x H
    prediction = softmax(np.dot(hidden, W2) + b2)  # class probabilities, M x Dy
    # Cross-entropy cost summed over the batch; labels is one-hot, so the
    # elementwise product keeps only the log-probability of each true class.
    cost = -np.sum(np.log(prediction) * labels)
    ### END YOUR CODE

    ### YOUR CODE HERE: backward propagation
    # Gradient of the cost w.r.t. the softmax input (the usual
    # softmax-plus-cross-entropy simplification).
    dscores = prediction - labels
    gradW2 = np.dot(hidden.T, dscores)
    gradb2 = np.sum(dscores, axis=0, keepdims=True)
    # Propagate back through W2 and the sigmoid nonlinearity.
    dhidden = np.dot(dscores, W2.T) * sigmoid_grad(hidden)
    gradW1 = np.dot(data.T, dhidden)
    gradb1 = np.sum(dhidden, axis=0, keepdims=True)
    ### END YOUR CODE


    ### Stack gradients (do not modify)
    grad = np.concatenate((gradW1.flatten(), gradb1.flatten(),
        gradW2.flatten(), gradb2.flatten()))

    return cost, grad
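
For reference, the code above implements the following chain-rule derivation (one training example per row, $\sigma$ the sigmoid):

\[ h = \sigma(x W_1 + b_1), \qquad \hat{y} = \operatorname{softmax}(h W_2 + b_2), \qquad \mathrm{CE}(y, \hat{y}) = -\textstyle\sum_i y_i \log \hat{y}_i \]

\[ \delta_2 = \hat{y} - y, \qquad \nabla_{W_2} \mathrm{CE} = h^\top \delta_2, \qquad \nabla_{b_2} \mathrm{CE} = \delta_2 \]

\[ \delta_1 = (\delta_2 W_2^\top) \circ \sigma'(h), \qquad \nabla_{W_1} \mathrm{CE} = x^\top \delta_1, \qquad \nabla_{b_1} \mathrm{CE} = \delta_1 \]

Here $\sigma'(h) = h \circ (1 - h)$, which is exactly what sigmoid_grad computes from the stored hidden activations, and summing the deltas over the batch rows (the np.sum(..., axis=0) calls) yields the bias gradients.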


def sanity_check():
    """
    Set up fake data and parameters for the neural network, and test using
    gradcheck.
    """
    print "Running sanity check..."

    N = 20
    dimensions = [10, 5, 10]
    data = np.random.randn(N, dimensions[0])   # each row will be a datum
    labels = np.zeros((N, dimensions[2]))
    for i in range(N):
        labels[i, random.randint(0, dimensions[2] - 1)] = 1

    params = np.random.randn((dimensions[0] + 1) * dimensions[1] + (
        dimensions[1] + 1) * dimensions[2], )

    gradcheck_naive(lambda params:
        forward_backward_prop(data, labels, params, dimensions), params)
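
gradcheck_naive is also provided by the assignment rather than defined here; a minimal sketch, assuming the standard central-difference check (the eps and tol values are illustrative):

def gradcheck_naive(f, x, eps=1e-4, tol=1e-5):
    # f maps a parameter vector to (cost, analytic gradient). Perturb each
    # coordinate of x in turn and compare the analytic gradient against the
    # central-difference estimate (f(x + eps) - f(x - eps)) / (2 * eps).
    _, grad = f(x)
    for ix in np.ndindex(*x.shape):
        old = x[ix]
        x[ix] = old + eps
        fxp, _ = f(x)
        x[ix] = old - eps
        fxm, _ = f(x)
        x[ix] = old  # restore the original value
        numeric = (fxp - fxm) / (2 * eps)
        rel_error = abs(numeric - grad[ix]) / max(1.0, abs(numeric), abs(grad[ix]))
        if rel_error > tol:
            print("Gradient check failed at %s: analytic %f, numeric %f"
                  % (str(ix), grad[ix], numeric))
            return
    print("Gradient check passed!")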


def your_sanity_checks():
    """
    Use this space to add any additional sanity checks by running:
        python q2_neural.py
    This function will not be called by the autograder, nor will
    your additional tests be graded.
    """
    print "Running your sanity checks..."
    ### YOUR CODE HERE
    ### END YOUR CODE


if __name__ == "__main__":
    sanity_check()
    your_sanity_checks()
