Basic Tensor Operations
```python
import torch as t

# Allocate a tensor without initializing it (its values are whatever
# happened to be in memory)
x = t.Tensor(5, 3)
x
"""
-2.4365e-20 -1.4335e-03 -2.4290e+25
-1.0283e-13 -2.8296e-07 -2.0769e+22
-1.3816e-33 -6.4672e-32  1.4497e-32
 1.6020e-19  6.2625e+22  4.7428e+30
 4.0095e-08  1.1943e-32 -3.5308e+35
[torch.FloatTensor of size 5x3]
"""

# Create a tensor initialized from the uniform distribution on [0, 1]
x = t.rand(5, 3)
x
"""
0.9618  0.0669  0.1458
0.3154  0.0680  0.1883
0.1795  0.4173  0.0395
0.7673  0.4906  0.6148
0.0949  0.2366  0.7571
[torch.FloatTensor of size 5x3]
"""

# Inspect the shape: both calls return torch.Size, a subclass of tuple,
# so the result can be indexed directly
print(x.shape)
print(x.size())
"""
torch.Size([5, 3])
torch.Size([5, 3])
"""
```
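Because `torch.Size` subclasses tuple, the shape can be indexed and unpacked like any ordinary tuple. A minimal sketch (variable names are illustrative):

```python
import torch as t

x = t.rand(5, 3)
size = x.size()          # torch.Size([5, 3])
print(size[0], size[1])  # direct indexing: 5 3
rows, cols = size        # tuple-style unpacking also works
print(rows * cols)       # 15
```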
Tensor Addition
- The `+` operator
- `torch.add()`, which can also take an `out=` tensor to receive the result
- `Tensor.add()`: without a trailing underscore, the method does not modify the tensor itself; it only returns a new tensor
- `Tensor.add_()`: with a trailing underscore, the method modifies the tensor in place and also returns the result
```python
# Addition with the + operator and t.add()
y = t.rand(5, 3)
print(x + y)
print(t.add(x, y))

# t.add() can also write the result into a preallocated tensor
result = t.Tensor(5, 3)
t.add(x, y, out=result)
print(result)
"""
1.6288  0.4566  0.9290
0.5943  0.4722  0.7359
0.4316  1.0932  0.7476
1.6499  1.3201  1.5611
0.3274  0.4651  1.5257
[torch.FloatTensor of size 5x3]
(the same matrix is printed three times; all three forms agree)
"""

# Addition with the tensor's own methods
print(y)
print("y.add():\n", y.add(x))    # add() does not change y itself
print(y)
print("y.add_():\n", y.add_(x))  # add_() modifies y in place
print(y)
"""
0.6670  0.3897  0.7832
0.2788  0.4042  0.5476
0.2521  0.6759  0.7081
0.8825  0.8295  0.9462
0.2325  0.2286  0.7686
[torch.FloatTensor of size 5x3]

y.add():
1.6288  0.4566  0.9290
0.5943  0.4722  0.7359
0.4316  1.0932  0.7476
1.6499  1.3201  1.5611
0.3274  0.4651  1.5257
[torch.FloatTensor of size 5x3]

0.6670  0.3897  0.7832      (y is unchanged after add())
0.2788  0.4042  0.5476
0.2521  0.6759  0.7081
0.8825  0.8295  0.9462
0.2325  0.2286  0.7686
[torch.FloatTensor of size 5x3]

y.add_():
1.6288  0.4566  0.9290
0.5943  0.4722  0.7359
0.4316  1.0932  0.7476
1.6499  1.3201  1.5611
0.3274  0.4651  1.5257
[torch.FloatTensor of size 5x3]

1.6288  0.4566  0.9290      (y has changed after add_())
0.5943  0.4722  0.7359
0.4316  1.0932  0.7476
1.6499  1.3201  1.5611
0.3274  0.4651  1.5257
[torch.FloatTensor of size 5x3]
"""
```
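The trailing-underscore convention is not specific to `add`: in-place variants such as `mul_` and `zero_` behave the same way. A minimal sketch (the operations are chosen purely for illustration):

```python
import torch as t

z = t.ones(2, 2)
z.mul(3)       # out-of-place: returns a new tensor, z is untouched
print(z)       # still all ones
z.mul_(3)      # in-place: z itself is scaled
print(z)       # now all threes
z.zero_()      # in-place: fills z with zeros
print(z)
```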
Tensor Indexing and Conversion to/from NumPy Arrays
Tensor objects and NumPy `array` objects are highly similar. Not only can they be converted to each other, but also:

- The two share the same underlying memory after conversion, so converting between them is fast and consumes almost no resources; it also means that modifying one changes the other.
- To some extent they can substitute for each other in function calls (presumably because their built-in methods are so similar).
Tensor Indexing

```python
# Tensor indexing works much like NumPy array indexing
x[:, 1]
"""
0.0669
0.0680
0.4173
0.4906
0.2366
[torch.FloatTensor of size 5]
"""
```

Tensor/NumPy Conversion

```python
a = t.ones_like(x)
b = a.numpy()  # Tensor -> array
b
"""
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.]], dtype=float32)
"""

import numpy as np

# The interoperability is strong enough that, to some extent,
# Tensor and array can stand in for each other
print(x)
a = np.ones_like(x)
print(a)
"""
0.9618  0.0669  0.1458
0.3154  0.0680  0.1883
0.1795  0.4173  0.0395
0.7673  0.4906  0.6148
0.0949  0.2366  0.7571
[torch.FloatTensor of size 5x3]

[[ 1.  1.  1.]
 [ 1.  1.  1.]
 [ 1.  1.  1.]
 [ 1.  1.  1.]
 [ 1.  1.  1.]]
"""

b = t.from_numpy(a)  # array -> Tensor
print(a)
print(b)
b.add_(1)            # the two share memory, so both change
print(a)
print(b)
"""
[[ 1.  1.  1.]
 [ 1.  1.  1.]
 [ 1.  1.  1.]
 [ 1.  1.  1.]
 [ 1.  1.  1.]]

1  1  1
1  1  1
1  1  1
1  1  1
1  1  1
[torch.FloatTensor of size 5x3]

[[ 2.  2.  2.]
 [ 2.  2.  2.]
 [ 2.  2.  2.]
 [ 2.  2.  2.]
 [ 2.  2.  2.]]

2  2  2
2  2  2
2  2  2
2  2  2
2  2  2
[torch.FloatTensor of size 5x3]
"""
```
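The memory sharing works in both directions: an in-place modification of the NumPy array is also visible through the tensor. A minimal sketch:

```python
import numpy as np
import torch as t

a = np.ones((2, 3), dtype=np.float32)
b = t.from_numpy(a)   # b shares a's memory buffer

np.add(a, 1, out=a)   # modify the array in place
print(b)              # the tensor now shows 2s as well
```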
Finally, let's try out CUDA (GPU) acceleration. Of course, since my laptop has no CUDA-capable GPU, the condition below is not satisfied.
```python
# Move the tensors to the GPU (only when CUDA is available) and add them there
if t.cuda.is_available():
    x = x.cuda()
    y = y.cuda()
    x + y
```
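In PyTorch 0.4 and later, the more common, device-agnostic idiom uses `torch.device`; a minimal sketch of the equivalent code, assuming a recent PyTorch:

```python
import torch as t

device = t.device("cuda" if t.cuda.is_available() else "cpu")
x = t.rand(5, 3, device=device)  # create directly on the target device
y = t.rand(5, 3).to(device)      # or move an existing tensor
print(x + y)                     # runs on the GPU when one is available
```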