Deep Learning: From Getting Started to Giving Up (PyTorch Basics)
Posted by 佩瑞
Deep Learning: From Getting Started to Giving Up (1) PyTorch
Tensor
Analogous to NumPy's array and pandas' DataFrame, the core data structure in PyTorch is the tensor.
Basic tensor operations
1.Flatten and reshape
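The post shows only the output below; a minimal sketch that reproduces it (assuming z is built with torch.arange, which matches the values shown) would be:

import torch

z = torch.arange(12).reshape(6, 2)
print("Original z:")
print(z)
print("Flattened z:")
print(z.flatten())
print("Reshaped (3x4) z:")
print(z.reshape(3, 4))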
###
Original z:
tensor([[ 0,  1],
        [ 2,  3],
        [ 4,  5],
        [ 6,  7],
        [ 8,  9],
        [10, 11]])
Flattened z:
tensor([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])
Reshaped (3x4) z:
tensor([[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]])
###
2.Squeezing tensors
When a tensor has singleton dimensions, e.g. x.shape = [1, 10] or [256, 1, 3], plain indexing such as x[0] may not return the element you expect (for a [1, 10] tensor, x[0] is the entire row of 10 values, not a single number). torch.squeeze() removes a dimension of size 1 so that indexing behaves as intended.
x = torch.randn(1, 10)
x = x.squeeze(0)  # remove the leading dimension of size 1, leaving shape [10]
print(x.shape)
print(f"x[0]: {x[0]}")
###
torch.Size([10])
x[0]: -0.7390837073326111
###
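The same idea applies to the [256, 1, 3] case mentioned above; a short sketch:

import torch

y = torch.randn(256, 1, 3)
print(y.squeeze(1).shape)  # torch.Size([256, 3]) - middle singleton dim removed
print(y.squeeze().shape)   # torch.Size([256, 3]) - no argument removes all size-1 dims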
3.permute
torch.permute() rearranges the order of a tensor's dimensions.
x = torch.rand(3, 48, 64)
x = x.permute(1, 2, 0)  # move dimension 0 to the end
print(x.shape)
###
torch.Size([48, 64, 3])
###
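A common use, sketched below on the same (3, 48, 64) shape, is converting a channels-first image tensor (C x H x W, the PyTorch convention) to channels-last (H x W x C) for display; the imshow call here is illustrative and not part of the original post:

import torch
import matplotlib.pyplot as plt

img = torch.rand(3, 48, 64)        # C x H x W (PyTorch convention)
plt.imshow(img.permute(1, 2, 0))   # H x W x C, as matplotlib expects
plt.show()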
4.Concatenation
torch.cat() concatenates tensors along a chosen dimension.
x = torch.arange(12, dtype=torch.float32).reshape((3, 4))
y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
# Concatenate along rows (dim=0)
cat_rows = torch.cat((x, y), dim=0)
# Concatenate along columns (dim=1)
cat_cols = torch.cat((x, y), dim=1)
###
Concatenated by rows: shape [6, 4]
tensor([[ 0.,  1.,  2.,  3.],
        [ 4.,  5.,  6.,  7.],
        [ 8.,  9., 10., 11.],
        [ 2.,  1.,  4.,  3.],
        [ 1.,  2.,  3.,  4.],
        [ 4.,  3.,  2.,  1.]])
Concatenated by columns: shape [3, 8]
tensor([[ 0.,  1.,  2.,  3.,  2.,  1.,  4.,  3.],
        [ 4.,  5.,  6.,  7.,  1.,  2.,  3.,  4.],
        [ 8.,  9., 10., 11.,  4.,  3.,  2.,  1.]])
###
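Note that torch.cat() requires the shapes to match on every dimension except the one being concatenated. torch.stack() (not covered in this post) instead joins tensors along a new dimension; a quick sketch of the difference:

import torch

a = torch.ones(3, 4)
b = torch.zeros(3, 4)

print(torch.cat((a, b), dim=0).shape)    # torch.Size([6, 4]) - existing dim grows
print(torch.stack((a, b), dim=0).shape)  # torch.Size([2, 3, 4]) - new dim added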
GPU vs CPU
Deep learning routinely involves data at a scale and speed that a CPU struggles to handle, so we want to be able to choose flexibly between CPU and GPU.
def set_device():
    device = "cuda" if torch.cuda.is_available() else "cpu"
    if device != "cuda":
        print("GPU is not enabled in this notebook. \n"
              "If you want to enable it, in the menu under `Runtime` -> \n"
              "`Hardware accelerator` select `GPU` from the dropdown menu")
    else:
        print("GPU is enabled in this notebook. \n"
              "If you want to disable it, in the menu under `Runtime` -> \n"
              "`Hardware accelerator` select `None` from the dropdown menu")
    return device

DEVICE = set_device()
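Once DEVICE is set, tensors and models are moved onto it with .to(); the tensor below is a hypothetical example, but the same pattern is used with the model later in this post:

import torch

x = torch.randn(100, 2).to(DEVICE)  # move data to the selected device
# model = NaiveNet().to(DEVICE)     # models are moved the same way (see below)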
A simple neural network
PyTorch provides the nn.Module class specifically for building deep learning networks; we inherit from nn.Module and implement a few key methods:
- __init__
  Defines the structure of the network: which layers it consists of, which activation functions will be used, and so on.
- forward
  Every neural network module must implement this method. It specifies the computation the network performs as data passes through it.
- predict
  Not a mandatory method, but useful for quickly obtaining the most likely label from the network.
- train
  Also not mandatory, but can be used to fit the parameters of the network.
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

# Inherit from nn.Module - the base class for neural network modules provided by PyTorch
class NaiveNet(nn.Module):
    # Define the structure of your network
    def __init__(self):
        super(NaiveNet, self).__init__()
        # The network is defined as a sequence of operations
        self.layers = nn.Sequential(
            nn.Linear(2, 16),  # Transformation from the input to the hidden layer
            nn.ReLU(),  # ReLU is a widely used non-linearity: it returns 0 for any
                        # negative input and returns any positive value x unchanged
            nn.Linear(16, 2),  # Transformation from the hidden to the output layer
        )

    # Specify the computations performed on the data
    def forward(self, x):
        # Pass the data through the layers
        return self.layers(x)

    # Choose the most likely label predicted by the network
    def predict(self, x):
        # Pass the data through the network
        output = self.forward(x)
        # Choose the label with the highest score
        return torch.argmax(output, 1)

# Implement the train function given a training dataset X and corresponding labels y
def train(model, X, y):
    # The Cross Entropy Loss is suitable for classification problems
    loss_function = nn.CrossEntropyLoss()
    # Create an optimizer (Stochastic Gradient Descent) that will be used to train the network
    learning_rate = 1e-2
    optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
    # Number of epochs
    epochs = 15000
    # List of losses for visualization
    losses = []
    for i in range(epochs):
        # Pass the data through the network and compute the loss.
        # We'll use the whole dataset during training instead of batches
        # in order to keep the code simple for now.
        y_logits = model.forward(X)
        loss = loss_function(y_logits, y)
        # Clear the previous gradients and compute the new ones
        optimizer.zero_grad()
        loss.backward()
        # Adapt the weights of the network
        optimizer.step()
        # Store the loss
        losses.append(loss.item())
        # Print the results at every 1000th epoch
        if i % 1000 == 0:
            print(f"Epoch {i} loss is {loss.item()}")
            plot_decision_boundary(model, X, y, DEVICE)  # helper defined elsewhere in the notebook
            plt.savefig('frames/{:05d}.png'.format(i))
    return losses

# Create a new network instance and train it (X and y are defined elsewhere in the notebook)
model = NaiveNet().to(DEVICE)
losses = train(model, X, y)
The above is a simple neural network applied to a classification task. Its structure, as displayed by print(model), is: an input layer of size 2, a hidden layer of size 16 with ReLU activation, and an output layer of size 2:
NaiveNet(
  (layers): Sequential(
    (0): Linear(in_features=2, out_features=16, bias=True)
    (1): ReLU()
    (2): Linear(in_features=16, out_features=2, bias=True)
  )
)
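After training, the predict method defined above can be used to get hard labels; a hypothetical check on the training data (assuming X is on the same device as the model):

with torch.no_grad():           # no gradients needed for inference
    predictions = model.predict(X)
print(predictions[:10])         # predicted labels for the first 10 samples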
For today, it is enough to have a basic picture of the structure of a neural network; tomorrow we will systematically study the detailed structure of simple linear neural networks. You are also welcome to follow the public account 奇趣多多 to discuss!
Deep Learning: From Getting Started to Giving Up (2) Simple Linear Neural Networks