DataWhale Dive into Deep Learning (PyTorch Edition), task3+4+5: Text Preprocessing; Language Models; Fundamentals of Recurrent Neural Networks

Posted by haiyanli


The course is hosted on the Boyu platform: https://www.boyuai.com/elites/course/cZu18YmweLv10OeV

Official website of Dive into Deep Learning: http://zh.gluon.ai/, an interactive deep learning textbook for Chinese readers with runnable code and a discussion community.

Second check-in:

Task03: Overfitting and underfitting and how to address them; vanishing and exploding gradients; advanced recurrent neural networks

Task04: Machine translation and related techniques; attention mechanisms and the Seq2seq model; Transformer

Task05: Fundamentals of convolutional neural networks; LeNet; advanced convolutional neural networks

I had already studied some of this material, so I focused on the RNN code and the concise CNN implementations. My notes are as follows:

Implementation of VGG-11:

import torch
from torch import nn
import d2lzh_pytorch as d2l  # the book's helper package (provides FlattenLayer, load_data_fashion_mnist, train_ch5)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def vgg_block(num_convs, in_channels, out_channels): # number of conv layers, input channels, output channels
    blk = []
    for i in range(num_convs):
        if i == 0:
            blk.append(nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1))
        else:
            blk.append(nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1))
        blk.append(nn.ReLU())
    blk.append(nn.MaxPool2d(kernel_size=2, stride=2)) # halves the height and width
    return nn.Sequential(*blk)
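
As a quick sanity check (my own illustrative snippet, not part of the course code), a single block maps the channel count to out_channels and halves the spatial size, since the 3x3 convolutions with padding 1 preserve width and height while the final max pooling halves them:

blk = vgg_block(2, 3, 16)
Y = blk(torch.rand(1, 3, 64, 64))
print(Y.shape)  # torch.Size([1, 16, 32, 32]): channels -> 16, 64x64 -> 32x32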
 
conv_arch = ((1, 1, 64), (1, 64, 128), (2, 128, 256), (2, 256, 512), (2, 512, 512))
# After the 5 vgg_blocks, width and height have been halved 5 times: 224 / 32 = 7
fc_features = 512 * 7 * 7 # c * w * h
fc_hidden_units = 4096 # arbitrary
 
def vgg(conv_arch, fc_features, fc_hidden_units=4096):
    net = nn.Sequential()
    # convolutional part
    for i, (num_convs, in_channels, out_channels) in enumerate(conv_arch):
        # each vgg_block halves the width and height
        net.add_module("vgg_block_" + str(i+1), vgg_block(num_convs, in_channels, out_channels))
    # fully connected part
    net.add_module("fc", nn.Sequential(d2l.FlattenLayer(),
                                 nn.Linear(fc_features, fc_hidden_units),
                                 nn.ReLU(),
                                 nn.Dropout(0.5),
                                 nn.Linear(fc_hidden_units, fc_hidden_units),
                                 nn.ReLU(),
                                 nn.Dropout(0.5),
                                 nn.Linear(fc_hidden_units, 10)
                                ))
    return net
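
d2l.FlattenLayer comes from the book's d2lzh_pytorch helper package; if you don't have it installed, a minimal equivalent sketch (matching how the book defines it) is:

class FlattenLayer(nn.Module):
    def forward(self, x):
        # collapse all dimensions after the batch dimension into one
        return x.view(x.shape[0], -1)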
 
net = vgg(conv_arch, fc_features, fc_hidden_units)
X = torch.rand(1, 1, 224, 224)

# named_children returns the first-level submodules and their names
# (named_modules would return all submodules, including nested ones)
for name, blk in net.named_children(): 
    X = blk(X)
    print(name, 'output shape: ', X.shape)
 
vgg_block_1 output shape:  torch.Size([1, 64, 112, 112])
vgg_block_2 output shape:  torch.Size([1, 128, 56, 56])
vgg_block_3 output shape:  torch.Size([1, 256, 28, 28])
vgg_block_4 output shape:  torch.Size([1, 512, 14, 14])
vgg_block_5 output shape:  torch.Size([1, 512, 7, 7])
fc output shape:  torch.Size([1, 10])
 
ratio = 8
small_conv_arch = [(1, 1, 64//ratio), (1, 64//ratio, 128//ratio), (2, 128//ratio, 256//ratio), 
                   (2, 256//ratio, 512//ratio), (2, 512//ratio, 512//ratio)]
net = vgg(small_conv_arch, fc_features // ratio, fc_hidden_units // ratio)
print(net)
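
To see how much ratio = 8 shrinks the network, you can count its parameters (a quick check I added; num_params is just an illustrative name):

num_params = sum(p.numel() for p in net.parameters())
print('total parameters: %.2fM' % (num_params / 1e6))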
batch_size = 16
# batch_size = 64
# If you get an "out of memory" error, reduce batch_size or the resize value
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=224)

lr, num_epochs = 0.001, 5
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
d2l.train_ch5(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs)
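
For reference, d2l.train_ch5 runs a standard training loop on the given device; the sketch below is my own approximation of its core logic (it omits the per-epoch test-set evaluation that the real helper also reports):

def train_sketch(net, train_iter, optimizer, device, num_epochs):
    net = net.to(device)
    loss = nn.CrossEntropyLoss()
    for epoch in range(num_epochs):
        train_loss, train_acc, n = 0.0, 0.0, 0
        for X, y in train_iter:
            X, y = X.to(device), y.to(device)
            y_hat = net(X)
            l = loss(y_hat, y)
            optimizer.zero_grad()
            l.backward()
            optimizer.step()
            train_loss += l.item() * y.shape[0]  # l is the mean loss over the batch
            train_acc += (y_hat.argmax(dim=1) == y).sum().item()
            n += y.shape[0]
        print('epoch %d, loss %.4f, train acc %.3f' % (epoch + 1, train_loss / n, train_acc / n))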
 
