How to build a multidimensional autoencoder with pytorch

Posted: 2019-10-18 15:17:19

【Question】:

I followed this excellent answer for a sequence autoencoder,

LSTM autoencoder always returns the average of the input sequence.

but I ran into some problems when trying to modify the code:

    Question 1: Your explanation is very professional, but my problem is a bit different from yours; I have attached some code that I changed from your example. My input features are 2-dimensional, and my output is the same as my input. For example:
input_x = torch.Tensor([[0.0,0.0], [0.1,0.1], [0.2,0.2], [0.3,0.3], [0.4,0.4]])
output_y = torch.Tensor([[0.0,0.0], [0.1,0.1], [0.2,0.2], [0.3,0.3], [0.4,0.4]])
input_x and output_y are the same: 5 timesteps, each with a 2-dimensional feature.

        import torch
        import torch.nn as nn
        import torch.optim as optim

        class LSTM(nn.Module):
            def __init__(self, input_dim, latent_dim, num_layers):
                super(LSTM, self).__init__()
                self.input_dim = input_dim
                self.latent_dim = latent_dim
                self.num_layers = num_layers
                self.encoder = nn.LSTM(self.input_dim, self.latent_dim, self.num_layers)

                # I changed the decoder input to 40 dimensions here; I think there is some problem
                # self.decoder = nn.LSTM(self.latent_dim, self.input_dim, self.num_layers)
                self.decoder = nn.LSTM(40, self.input_dim, self.num_layers)

            def forward(self, input):
                # Encode
                _, (last_hidden, _) = self.encoder(input)
                # It is way more general that way
                encoded = last_hidden.repeat(input.shape)
                # Decode
                y, _ = self.decoder(encoded)
                return torch.squeeze(y)

        model = LSTM(input_dim=2, latent_dim=20, num_layers=1)
        loss_function = nn.MSELoss()
        optimizer = optim.Adam(model.parameters())
        y = torch.Tensor([[0.0,0.0], [0.1,0.1], [0.2,0.2], [0.3,0.3], [0.4,0.4]])
        x = y.view(len(y), -1, 2)   # I changed here 

        while True:
            y_pred = model(x)
            optimizer.zero_grad()
            loss = loss_function(y_pred, y)
            loss.backward()
            optimizer.step()
            print(y_pred)

The code above learns well. Could you help review the code and give some explanation?

The model fails when I feed 2 examples as input:

For example, changing the code from:

y = torch.Tensor([[0.0,0.0], [0.1,0.1], [0.2,0.2], [0.3,0.3], [0.4,0.4]])

to:

y = torch.Tensor([[[0.0,0.0],[0.5,0.5]], [[0.1,0.1], [0.6,0.6]], [[0.2,0.2],[0.7,0.7]], [[0.3,0.3],[0.8,0.8]], [[0.4,0.4],[0.9,0.9]]])

it complains with an error when I compute the loss function. Can anyone help me take a look?

    Question 2: My training samples have different lengths, for example:
x1 = [[0.0,0.0], [0.1,0.1], [0.2,0.2], [0.3,0.3], [0.4,0.4]]   #with 5 timesteps
x2 = [[0.5,0.5], [0.6,0.6], [0.7,0.7]] #with only 3 timesteps

How can I feed these two training samples into the model at the same time for batch training?

【Comments】:

What errors do you get for Question 1?
As described above, I have Question 1 and Question 2; how do I implement Question 2? Thank you for your reply.

【Answer 1】:

Recurrent N-dimensional autoencoder

First of all, an LSTM works on 1D samples per timestep; yours are 2D, as LSTMs are usually used for words, each encoded with a single vector.

No worries though, you can flatten the 2D samples into 1D, for example:

import torch

var = torch.randn(10, 32, 100, 100)
var.reshape((10, 32, -1))  # shape: [10, 32, 100 * 100]

Note that this is not really general: what if you had 3D input? The snippet below generalizes the idea to any dimensionality of your samples, provided the leading dimensions are batch_size and seq_len:

import torch

input_size = 3  # number of trailing sample dimensions to flatten: (100, 100, 35)

var = torch.randn(10, 32, 100, 100, 35)
var.reshape(var.shape[:-input_size] + (-1,)) # shape: [10, 32, 100 * 100 * 35]

Finally, you can use it inside your neural network as shown below; pay particular attention to the forward method and the constructor arguments:

import torch
import torch.nn as nn


class LSTM(nn.Module):
    # input_dim has to be size after flattening
    # For 20x20 single input it would be 400
    def __init__(
        self,
        input_dimensionality: int,
        input_dim: int,
        latent_dim: int,
        num_layers: int,
    ):
        super(LSTM, self).__init__()
        self.input_dimensionality: int = input_dimensionality
        self.input_dim: int = input_dim  # It is 1d, remember
        self.latent_dim: int = latent_dim
        self.num_layers: int = num_layers
        self.encoder = torch.nn.LSTM(self.input_dim, self.latent_dim, self.num_layers)
        # You can have any latent dim you want, just output has to be exact same size as input
        # In this case, only encoder and decoder, it has to be input_dim though
        self.decoder = torch.nn.LSTM(self.latent_dim, self.input_dim, self.num_layers)

    def forward(self, input):
        # Save original size first:
        original_shape = input.shape
        # Flatten 2d (or 3d or however many you specified in constructor)
        input = input.reshape(input.shape[: -self.input_dimensionality] + (-1,))

        # Rest goes as in my previous answer
        _, (last_hidden, _) = self.encoder(input)
        # Repeat the last hidden state along the time dimension only,
        # so the decoder sees (seq_len, batch, latent_dim)
        encoded = last_hidden.repeat(input.shape[0], 1, 1)
        y, _ = self.decoder(encoded)

        # You have to reshape output to what the original was
        reshaped_y = y.reshape(original_shape)
        return torch.squeeze(reshaped_y)

Remember that you have to reshape your output in this case; it should work for inputs of any dimensionality.
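As a rough usage sketch (relying on the class and imports above; the 20x20 sample size, batch of 1, and the single training step are illustrative assumptions, not part of the original code):

import torch.optim as optim

model = LSTM(input_dimensionality=2, input_dim=400, latent_dim=20, num_layers=1)
loss_function = nn.MSELoss()
optimizer = optim.Adam(model.parameters())

x = torch.randn(5, 1, 20, 20)   # (seq_len, batch, 20, 20): each timestep is a 2D sample
y_pred = model(x)               # reconstruction, shape (5, 20, 20) after the final squeeze

optimizer.zero_grad()
loss = loss_function(y_pred, x.squeeze(1))  # compare against the (squeezed) input
loss.backward()
optimizer.step()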

Batching

When it comes to batching and sequences of different lengths, things get a bit more complicated.

You have to pad each sequence in the batch before pushing it through the network. Usually the value you pad with is zero, though you can configure that.
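For instance, a minimal sketch of padding the two samples from your Question 2 with torch.nn.utils.rnn.pad_sequence (variable names are just illustrative):

import torch
from torch.nn.utils.rnn import pad_sequence

x1 = torch.tensor([[0.0, 0.0], [0.1, 0.1], [0.2, 0.2], [0.3, 0.3], [0.4, 0.4]])  # 5 timesteps
x2 = torch.tensor([[0.5, 0.5], [0.6, 0.6], [0.7, 0.7]])                          # 3 timesteps

# pad_sequence stacks the sequences into (max_seq_len, batch, features),
# filling the shorter one with zeros
padded = pad_sequence([x1, x2])  # shape: (5, 2, 2)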

You can check this link for an example. You will have to use functions like torch.nn.utils.rnn.pack_padded_sequence to make it work; you can check this answer as well.

Oh, and since PyTorch 1.1 you don't have to sort your sequences by length in order to pack them. But when it comes to this topic, grab some tutorials; that should make things clearer.
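A rough sketch of how packing could look, assuming the padded tensor from the previous snippet and a small 2-feature encoder (the sizes and names are illustrative):

import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

x1 = torch.tensor([[0.0, 0.0], [0.1, 0.1], [0.2, 0.2], [0.3, 0.3], [0.4, 0.4]])
x2 = torch.tensor([[0.5, 0.5], [0.6, 0.6], [0.7, 0.7]])

padded = pad_sequence([x1, x2])   # (5, 2, 2)
lengths = torch.tensor([5, 3])    # original sequence lengths

# Pack so the LSTM ignores the padded timesteps;
# enforce_sorted=False (PyTorch 1.1+) avoids having to sort by length
packed = pack_padded_sequence(padded, lengths, enforce_sorted=False)

encoder = nn.LSTM(input_size=2, hidden_size=20, num_layers=1)
_, (last_hidden, _) = encoder(packed)  # last_hidden: (1, 2, 20), one encoding per sequence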

Lastly: please separate your questions. If you get the autoencoding working with a single example, move on to batching, and if you run into a problem there, please post a new question on ***. Thanks.

【Discussion】:
