Understanding contiguous tensors and the contiguous() method, and the difference between view and reshape

Posted by lishikai



Memory sharing:
In the example below, x is stored in memory in its natural order (row by row, starting at offset 0), while y, a transposed view of the same storage, visits those same bytes in a different order.

 
For example: when you call transpose(), PyTorch doesn't generate a new tensor with a new layout; it just modifies the meta information in the Tensor object so that the offset and stride match the new shape. The transposed tensor and the original tensor are indeed sharing the memory!

> x = torch.randn(3, 2)
> y = torch.transpose(x, 0, 1)
> x[0, 0] = 42
> print(y[0, 0])
> # prints 42
This is where the concept of contiguous comes in. Above, x is contiguous but y is not, because y's memory layout differs from that of a tensor of the same shape made from scratch. Note that the word "contiguous" is a bit misleading: it's not that the content of the tensor is spread out around disconnected blocks of memory. The bytes are still allocated in one block of memory; it is the order of the elements that is different!
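To make this concrete, here is a minimal sketch (assuming a recent PyTorch) that inspects is_contiguous() and stride(); the strides show the order in which each tensor walks the shared memory:

> import torch
> x = torch.randn(3, 2)
> y = torch.transpose(x, 0, 1)  # same storage, different strides
> print(x.is_contiguous())      # True
> print(y.is_contiguous())      # False
> print(x.stride())             # (2, 1): move 2 elements per row step, 1 per column step
> print(y.stride())             # (1, 2): y steps through memory "column-first"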

When you call contiguous(), it actually makes a copy of the tensor, so that the order of the elements in memory becomes the same as if a tensor of the same shape had been created from scratch.
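A small sketch of that behavior (one caveat: contiguous() only copies when the tensor is not already contiguous; on a contiguous tensor it just returns the tensor itself):

> import torch
> x = torch.randn(3, 2)
> y = x.t()                 # non-contiguous view sharing x's memory
> z = y.contiguous()        # copies the data into row-major order
> print(z.is_contiguous())  # True
> print(z.stride())         # (3, 1), same as a fresh torch.randn(2, 3)
> x[0, 0] = 99
> print(y[0, 0])            # tensor(99.): y still shares memory with x
> print(z[0, 0])            # unchanged: z is an independent copy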

Normally you don't need to worry about this. If PyTorch expects a contiguous tensor and yours is not, you will get RuntimeError: input is not contiguous, and then you just add a call to contiguous().

The difference between view and reshape

Another difference is that reshape() can operate on both contiguous and non-contiguous tensors, while view() can only operate on contiguous tensors. See the section above for the meaning of contiguous.
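A minimal demonstration of this difference, using a transposed (and therefore non-contiguous) tensor:

> import torch
> y = torch.randn(3, 2).t()  # non-contiguous
> print(y.reshape(6))        # works: reshape() makes a copy when it has to
> y.view(6)                  # raises RuntimeError: view() never copies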

Solution when you hit a non-contiguous tensor error

As explained above, there are two ways out: call contiguous() before view(), or use reshape(), which handles non-contiguous tensors by itself; see the sketch below.

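Both fixes in one minimal sketch; torch.equal confirms they produce the same result:

> import torch
> y = torch.randn(3, 2).t()   # non-contiguous
> a = y.contiguous().view(6)  # fix 1: copy into contiguous memory, then view()
> b = y.reshape(6)            # fix 2: reshape() copies only if it must
> print(torch.equal(a, b))    # True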
