WGAN Source Code Walkthrough

Posted by 三年一梦


WassersteinGAN source code

  The author's code consists of two parts. The models package contains dcgan.py and mlp.py, which implement two different network architectures: in dcgan.py both the discriminator and the generator are convolutional networks, while in mlp.py both are plain fully connected networks. In addition, main.py is the entry point; it imports the generator and discriminator from models and runs the training loop.

Argument descriptions (in main.py):

parser = argparse.ArgumentParser()
parser.add_argument('--dataset', required=True, help='cifar10 | lsun | imagenet | folder | lfw ')
parser.add_argument('--dataroot', required=True, help='path to dataset')
parser.add_argument('--workers', type=int, help='number of data loading workers', default=2)
parser.add_argument('--batchSize', type=int, default=64, help='input batch size')
parser.add_argument('--imageSize', type=int, default=64, help='the height / width of the input image to network')
parser.add_argument('--nc', type=int, default=3, help='input image channels')
parser.add_argument('--nz', type=int, default=100, help='size of the latent z vector')
parser.add_argument('--ngf', type=int, default=64)
parser.add_argument('--ndf', type=int, default=64)
parser.add_argument('--niter', type=int, default=25, help='number of epochs to train for')
parser.add_argument('--lrD', type=float, default=0.00005, help='learning rate for Critic, default=0.00005')
parser.add_argument('--lrG', type=float, default=0.00005, help='learning rate for Generator, default=0.00005')
parser.add_argument('--beta1', type=float, default=0.5, help='beta1 for adam. default=0.5')
parser.add_argument('--cuda'  , action='store_true', help='enables cuda')
parser.add_argument('--ngpu'  , type=int, default=1, help='number of GPUs to use')
parser.add_argument('--netG', default='', help="path to netG (to continue training)")
parser.add_argument('--netD', default='', help="path to netD (to continue training)")
parser.add_argument('--clamp_lower', type=float, default=-0.01)
parser.add_argument('--clamp_upper', type=float, default=0.01)
parser.add_argument('--Diters', type=int, default=5, help='number of D iters per each G iter')
parser.add_argument('--noBN', action='store_true', help='use batchnorm or not (only for DCGAN)')
parser.add_argument('--mlp_G', action='store_true', help='use MLP for G')
parser.add_argument('--mlp_D', action='store_true', help='use MLP for D')
parser.add_argument('--n_extra_layers', type=int, default=0, help='Number of extra layers on gen and disc')
parser.add_argument('--experiment', default=None, help='Where to store samples and models')
parser.add_argument('--adam', action='store_true', help='Whether to use adam (default is rmsprop)')
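
For reference, an LSUN training run as given in the repository's README looks roughly like this (the dataset path is a placeholder):

python main.py --dataset lsun --dataroot [lsun-train-folder] --cuda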

 

1. mlp.py in the models package:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import torch
import torch.nn as nn

class MLP_G(nn.Module):
    def __init__(self, isize, nz, nc, ngf, ngpu):
        super(MLP_G, self).__init__()
        self.ngpu = ngpu

        main = nn.Sequential(
            # Z goes into a linear of size: ngf
            nn.Linear(nz, ngf),
            nn.ReLU(True),
            nn.Linear(ngf, ngf),
            nn.ReLU(True),
            nn.Linear(ngf, ngf),
            nn.ReLU(True),
            nn.Linear(ngf, nc * isize * isize),
        )
        self.main = main
        self.nc = nc
        self.isize = isize
        self.nz = nz

    def forward(self, input):
        input = input.view(input.size(0), input.size(1))
        if isinstance(input.data, torch.cuda.FloatTensor) and self.ngpu > 1:
            output = nn.parallel.data_parallel(self.main, input, range(self.ngpu))
        else:
            output = self.main(input)
        return output.view(output.size(0), self.nc, self.isize, self.isize)


class MLP_D(nn.Module):
    def __init__(self, isize, nz, nc, ndf, ngpu):
        super(MLP_D, self).__init__()
        self.ngpu = ngpu

        main = nn.Sequential(
            # the flattened nc x isize x isize image goes into a linear of size: ndf
            nn.Linear(nc * isize * isize, ndf),
            nn.ReLU(True),
            nn.Linear(ndf, ndf),
            nn.ReLU(True),
            nn.Linear(ndf, ndf),
            nn.ReLU(True),
            nn.Linear(ndf, 1),
        )
        self.main = main
        self.nc = nc
        self.isize = isize
        self.nz = nz

    def forward(self, input):
        input = input.view(input.size(0),
                           input.size(1) * input.size(2) * input.size(3))
        if isinstance(input.data, torch.cuda.FloatTensor) and self.ngpu > 1:
            output = nn.parallel.data_parallel(self.main, input, range(self.ngpu))
        else:
            output = self.main(input)
        output = output.mean(0)
        return output.view(1)

  In the fully connected implementation, the generator consists of four linear layers with a ReLU activation after each of the first three. The noise is the generator's input and has dimension nz = 100, so the generator takes a tensor of shape (batch_size, nz) and reshapes its output to the image shape (batch_size, nc, isize, isize). Note that torch.nn only supports mini-batches; to feed a single sample, use input.unsqueeze(0) to add a batch dimension of size 1. The WGAN critic differs from a standard GAN discriminator in that the final sigmoid is dropped; it is likewise four linear layers. The discriminator takes an image-shaped tensor (the generator's output, or a real image) and produces one value per sample, i.e. a vector of length batch_size; taking the mean yields a single number.
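
As a quick shape check, a minimal sketch (the batch size of 16 is arbitrary, and it assumes the legacy Variable-based PyTorch API that this repository targets):

import torch
from torch.autograd import Variable
from models.mlp import MLP_G, MLP_D

netG = MLP_G(isize=64, nz=100, nc=3, ngf=64, ngpu=1)
netD = MLP_D(isize=64, nz=100, nc=3, ndf=64, ngpu=1)

noise = Variable(torch.randn(16, 100))   # (batch_size, nz)
fake = netG(noise)                       # reshaped to (batch_size, nc, isize, isize)
score = netD(fake)                       # per-sample scores averaged into a 1-element tensor
print(fake.size(), score.size())         # torch.Size([16, 3, 64, 64]) torch.Size([1])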

        In addition, for the case ngpu > 1 the code uses multi-GPU layers: class torch.nn.DataParallel(module, device_ids=None, output_device=None, dim=0). This container parallelizes the given module by splitting the mini-batch across the listed devices. During the forward pass the module is replicated on every device and each replica processes a slice of the input; during the backward pass the gradients from the replicas are accumulated back into the original module. (Here the models call the functional form, nn.parallel.data_parallel, directly inside forward, which performs the same replicate/scatter/gather for a single call.)

The batch size should be larger than the number of GPUs used, and ideally an integer multiple of it, so that each chunk contains the same number of samples.
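
A minimal sketch of the same functional call the models use in forward (it assumes at least two GPUs are available; the module and sizes are illustrative only):

import torch
import torch.nn as nn

module = nn.Linear(100, 64).cuda()
x = torch.randn(64, 100).cuda()            # a batch of 64 splits evenly across 2 GPUs
# replicate the module on the listed devices, scatter x along dim 0, gather outputs on GPU 0
y = nn.parallel.data_parallel(module, x, [0, 1])
print(y.size())                            # torch.Size([64, 64])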

 

2. dcgan.py in the models package

import torch
import torch.nn as nn
import torch.nn.parallel

class DCGAN_D(nn.Module):
    def __init__(self, isize, nz, nc, ndf, ngpu, n_extra_layers=0):
        super(DCGAN_D, self).__init__()
        self.ngpu = ngpu
        assert isize % 16 == 0, "isize has to be a multiple of 16"

        main = nn.Sequential()
        # input is nc x isize x isize
        main.add_module('initial.conv.{0}-{1}'.format(nc, ndf),
                        nn.Conv2d(nc, ndf, 4, 2, 1, bias=False))
        main.add_module('initial.relu.{0}'.format(ndf),
                        nn.LeakyReLU(0.2, inplace=True))
        csize, cndf = isize / 2, ndf

        # Extra layers
        for t in range(n_extra_layers):
            main.add_module('extra-layers-{0}.{1}.conv'.format(t, cndf),
                            nn.Conv2d(cndf, cndf, 3, 1, 1, bias=False))
            main.add_module('extra-layers-{0}.{1}.batchnorm'.format(t, cndf),
                            nn.BatchNorm2d(cndf))
            main.add_module('extra-layers-{0}.{1}.relu'.format(t, cndf),
                            nn.LeakyReLU(0.2, inplace=True))

        while csize > 4:
            in_feat = cndf
            out_feat = cndf * 2
            main.add_module('pyramid.{0}-{1}.conv'.format(in_feat, out_feat),
                            nn.Conv2d(in_feat, out_feat, 4, 2, 1, bias=False))
            main.add_module('pyramid.{0}.batchnorm'.format(out_feat),
                            nn.BatchNorm2d(out_feat))
            main.add_module('pyramid.{0}.relu'.format(out_feat),
                            nn.LeakyReLU(0.2, inplace=True))
            cndf = cndf * 2
            csize = csize / 2

        # state size. K x 4 x 4
        main.add_module('final.{0}-{1}.conv'.format(cndf, 1),
                        nn.Conv2d(cndf, 1, 4, 1, 0, bias=False))
        self.main = main


    def forward(self, input):
        if isinstance(input.data, torch.cuda.FloatTensor) and self.ngpu > 1:
            output = nn.parallel.data_parallel(self.main, input, range(self.ngpu))
        else: 
            output = self.main(input)
            
        output = output.mean(0)
        return output.view(1)

class DCGAN_G(nn.Module):
    def __init__(self, isize, nz, nc, ngf, ngpu, n_extra_layers=0):
        super(DCGAN_G, self).__init__()
        self.ngpu = ngpu
        assert isize % 16 == 0, "isize has to be a multiple of 16"

        cngf, tisize = ngf//2, 4
        while tisize != isize:
            cngf = cngf * 2
            tisize = tisize * 2

        main = nn.Sequential()
        # input is Z, going into a convolution
        main.add_module('initial.{0}-{1}.convt'.format(nz, cngf),
                        nn.ConvTranspose2d(nz, cngf, 4, 1, 0, bias=False))
        main.add_module('initial.{0}.batchnorm'.format(cngf),
                        nn.BatchNorm2d(cngf))
        main.add_module('initial.{0}.relu'.format(cngf),
                        nn.ReLU(True))

        csize, cndf = 4, cngf
        while csize < isize//2:
            main.add_module('pyramid.{0}-{1}.convt'.format(cngf, cngf//2),
                            nn.ConvTranspose2d(cngf, cngf//2, 4, 2, 1, bias=False))
            main.add_module('pyramid.{0}.batchnorm'.format(cngf//2),
                            nn.BatchNorm2d(cngf//2))
            main.add_module('pyramid.{0}.relu'.format(cngf//2),
                            nn.ReLU(True))
            cngf = cngf // 2
            csize = csize * 2

        # Extra layers
        for t in range(n_extra_layers):
            main.add_module('extra-layers-{0}.{1}.conv'.format(t, cngf),
                            nn.Conv2d(cngf, cngf, 3, 1, 1, bias=False))
            main.add_module('extra-layers-{0}.{1}.batchnorm'.format(t, cngf),
                            nn.BatchNorm2d(cngf))
            main.add_module('extra-layers-{0}.{1}.relu'.format(t, cngf),
                            nn.ReLU(True))

        main.add_module('final.{0}-{1}.convt'.format(cngf, nc),
                        nn.ConvTranspose2d(cngf, nc, 4, 2, 1, bias=False))
        main.add_module('final.{0}.tanh'.format(nc),
                        nn.Tanh())
        self.main = main

    def forward(self, input):
        if isinstance(input.data, torch.cuda.FloatTensor) and self.ngpu > 1:
            output = nn.parallel.data_parallel(self.main, input, range(self.ngpu))
        else: 
            output = self.main(input)
        return output 
###############################################################################
class DCGAN_D_nobn(nn.Module):
    def __init__(self, isize, nz, nc, ndf, ngpu, n_extra_layers=0):
        super(DCGAN_D_nobn, self).__init__()
        self.ngpu = ngpu
        assert isize % 16 == 0, "isize has to be a multiple of 16"

        main = nn.Sequential()
        # input is nc x isize x isize
        main.add_module('initial.conv.{0}-{1}'.format(nc, ndf),
                        nn.Conv2d(nc, ndf, 4, 2, 1, bias=False))
        main.add_module('initial.relu.{0}'.format(ndf),
                        nn.LeakyReLU(0.2, inplace=True))
        csize, cndf = isize / 2, ndf

        # Extra layers
        for t in range(n_extra_layers):
            main.add_module('extra-layers-{0}.{1}.conv'.format(t, cndf),
                            nn.Conv2d(cndf, cndf, 3, 1, 1, bias=False))
            main.add_module('extra-layers-{0}.{1}.relu'.format(t, cndf),
                            nn.LeakyReLU(0.2, inplace=True))

        while csize > 4:
            in_feat = cndf
            out_feat = cndf * 2
            main.add_module('pyramid.{0}-{1}.conv'.format(in_feat, out_feat),
                            nn.Conv2d(in_feat, out_feat, 4, 2, 1, bias=False))
            main.add_module('pyramid.{0}.relu'.format(out_feat),
                            nn.LeakyReLU(0.2, inplace=True))
            cndf = cndf * 2
            csize = csize / 2

        # state size. K x 4 x 4
        main.add_module('final.{0}-{1}.conv'.format(cndf, 1),
                        nn.Conv2d(cndf, 1, 4, 1, 0, bias=False))
        self.main = main


    def forward(self, input):
        if isinstance(input.data, torch.cuda.FloatTensor) and self.ngpu > 1:
            output = nn.parallel.data_parallel(self.main, input, range(self.ngpu))
        else: 
            output = self.main(input)
            
        output = output.mean(0)
        return output.view(1)

class DCGAN_G_nobn(nn.Module):
    def __init__(self, isize, nz, nc, ngf, ngpu, n_extra_layers=0):
        super(DCGAN_G_nobn, self).__init__()
        self.ngpu = ngpu
        assert isize % 16 == 0, "isize has to be a multiple of 16"

        cngf, tisize = ngf//2, 4
        while tisize != isize:
            cngf = cngf * 2
            tisize = tisize * 2

        main = nn.Sequential()
        main.add_module('initial.{0}-{1}.convt'.format(nz, cngf),
                        nn.ConvTranspose2d(nz, cngf, 4, 1, 0, bias=False))
        main.add_module('initial.{0}.relu'.format(cngf),
                        nn.ReLU(True))

        csize, cndf = 4, cngf
        while csize < isize//2:
            main.add_module('pyramid.{0}-{1}.convt'.format(cngf, cngf//2),
                            nn.ConvTranspose2d(cngf, cngf//2, 4, 2, 1, bias=False))
            main.add_module('pyramid.{0}.relu'.format(cngf//2),
                            nn.ReLU(True))
            cngf = cngf // 2
            csize = csize * 2

        # Extra layers
        for t in range(n_extra_layers):
            main.add_module('extra-layers-{0}.{1}.conv'.format(t, cngf),
                            nn.Conv2d(cngf, cngf, 3, 1, 1, bias=False))
            main.add_module('extra-layers-{0}.{1}.relu'.format(t, cngf),
                            nn.ReLU(True))

        main.add_module('final.{0}-{1}.convt'.format(cngf, nc),
                        nn.ConvTranspose2d(cngf, nc, 4, 2, 1, bias=False))
        main.add_module('final.{0}.tanh'.format(nc),
                        nn.Tanh())
        self.main = main

    def forward(self, input):
        if isinstance(input.data, torch.cuda.FloatTensor) and self.ngpu > 1:
            output = nn.parallel.data_parallel(self.main, input,  range(self.ngpu))
        else: 
            output = self.main(input)
        return output

   This file contains four classes in two groups. The first group, DCGAN_D and DCGAN_G, uses batch normalization; the second group, DCGAN_D_nobn and DCGAN_G_nobn, does not. Looking at the discriminator first: the image size is required to be a multiple of 16. The input passes through one convolution and a LeakyReLU, then reaches the extra layers: when the n_extra_layers parameter is n, a Conv-BN-LeakyReLU block (with unchanged channel count and spatial size) is repeated n times. After that, as long as the feature map is larger than 4x4, a strided Conv-BN-LeakyReLU halves the spatial size and doubles the channels, until the feature map reaches 4x4. A final convolution with kernel size 4 then produces a single value per sample, and averaging over the batch gives one number.
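
For the default isize=64, ndf=64 and nc=3, the critic's pyramid can be verified with a small sketch (the batch size and the Variable wrapper are illustrative):

import torch
from torch.autograd import Variable
from models.dcgan import DCGAN_D

netD = DCGAN_D(isize=64, nz=100, nc=3, ndf=64, ngpu=1, n_extra_layers=0)
x = Variable(torch.randn(16, 3, 64, 64))
# 3x64x64 -> 64x32x32 -> 128x16x16 -> 256x8x8 -> 512x4x4 -> 1x1x1, then the mean over the batch
out = netD(x)
print(out.size())    # torch.Size([1])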

   Now the generator. The generator uses transposed convolutions, since its input is a 100-dimensional noise vector and its output is an image. First, a ConvTranspose2d-BN-ReLU block turns the 100-dimensional noise into a 512-channel 4x4 feature map. Then a series of (three, for the default settings) ConvTranspose2d-BN-ReLU blocks upsamples the feature map while reducing it to 64 channels. Next come the extra layers: when n_extra_layers is n, a channel-preserving Conv-BN-ReLU block (a plain 3x3 convolution, not a transposed one) is repeated n times, so after these n operations the feature map still has 64 channels. Finally, a ConvTranspose2d followed by Tanh reduces the channel count to 3 and squashes the values into the range -1 to 1.
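
And the mirror-image check for the generator (again a sketch with an arbitrary batch size):

import torch
from torch.autograd import Variable
from models.dcgan import DCGAN_G

netG = DCGAN_G(isize=64, nz=100, nc=3, ngf=64, ngpu=1, n_extra_layers=0)
z = Variable(torch.randn(16, 100, 1, 1))
# 100x1x1 -> 512x4x4 -> 256x8x8 -> 128x16x16 -> 64x32x32 -> 3x64x64, values in [-1, 1]
fake = netG(z)
print(fake.size())   # torch.Size([16, 3, 64, 64])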

   The only difference between the two groups of classes is whether batch normalization is used, so the _nobn variants need no further discussion.

 

3. main.py

from __future__ import print_function
import argparse
import random
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
from torch.autograd import Variable
import os

import models.dcgan as dcgan
import models.mlp as mlp

parser = argparse.ArgumentParser()
parser.add_argument('--dataset', required=True, help='cifar10 | lsun | imagenet | folder | lfw ')
parser.add_argument('--dataroot', required=True, help='path to dataset')
parser.add_argument('--workers', type=int, help='number of data loading workers', default=2)
parser.add_argument('--batchSize', type=int, default=64, help='input batch size')
parser.add_argument('--imageSize', type=int, default=64, help='the height / width of the input image to network')
parser.add_argument('--nc', type=int, default=3, help='input image channels')
parser.add_argument('--nz', type=int, default=100, help='size of the latent z vector')
parser.add_argument('--ngf', type=int, default=64)
parser.add_argument('--ndf', type=int, default=64)
parser.add_argument('--niter', type=int, default=25, help='number of epochs to train for')
parser.add_argument('--lrD', type=float, default=0.00005, help='learning rate for Critic, default=0.00005')
parser.add_argument('--lrG', type=float, default=0.00005, help='learning rate for Generator, default=0.00005')
parser.add_argument('--beta1', type=float, default=0.5, help='beta1 for adam. default=0.5')
parser.add_argument('--cuda'  , action='store_true', help='enables cuda')
parser.add_argument('--ngpu'  , type=int, default=1, help='number of GPUs to use')
parser.add_argument('--netG', default='', help="path to netG (to continue training)")
parser.add_argument('--netD', default='', help="path to netD (to continue training)")
parser.add_argument('--clamp_lower', type=float, default=-0.01)
parser.add_argument('--clamp_upper', type=float, default=0.01)
parser.add_argument('--Diters', type=int, default=5, help='number of D iters per each G iter')
parser.add_argument('--noBN', action='store_true', help='use batchnorm or not (only for DCGAN)')
parser.add_argument('--mlp_G', action='store_true', help='use MLP for G')
parser.add_argument('--mlp_D', action='store_true', help='use MLP for D')
parser.add_argument('--n_extra_layers', type=int, default=0, help='Number of extra layers on gen and disc')
parser.add_argument('--experiment', default=None, help='Where to store samples and models')
parser.add_argument('--adam', action='store_true', help='Whether to use adam (default is rmsprop)')
opt = parser.parse_args()
print(opt)

if opt.experiment is None:
    opt.experiment = 'samples'
os.system('mkdir {0}'.format(opt.experiment))

opt.manualSeed = random.randint(1, 10000) # fix seed
print("Random Seed: ", opt.manualSeed)
random.seed(opt.manualSeed)
torch.manual_seed(opt.manualSeed)

cudnn.benchmark = True

if torch.cuda.is_available() and not opt.cuda:
    print("WARNING: You have a CUDA device, so you should probably run with --cuda")

if opt.dataset in ['imagenet', 'folder', 'lfw']:
    # folder dataset
    dataset = dset.ImageFolder(root=opt.dataroot,
                               transform=transforms.Compose([
                                   transforms.Scale(opt.imageSize),
                                   transforms.CenterCrop(opt.imageSize),
                                   transforms.ToTensor(),
                                   transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                               ]))
elif opt.dataset == 'lsun':
    dataset = dset.LSUN(db_path=opt.dataroot, classes=['bedroom_train'],
                        transform=transforms.Compose([
                            transforms.Scale(opt.imageSize),
                            transforms.CenterCrop(opt.imageSize),
                            transforms.ToTensor(),
                            transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                        ]))
elif opt.dataset == 'cifar10':
    dataset = dset.CIFAR10(root=opt.dataroot, download=True,
                           transform=transforms.Compose([
                               transforms.Scale(opt.imageSize),
                               transforms.ToTensor(),
                               transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                           ])
    )
assert dataset
dataloader = torch.utils.data.DataLoader(dataset, batch_size=opt.batchSize,
                                         shuffle=True, num_workers=int(opt.workers))

ngpu = int(opt.ngpu)
nz = int(opt.nz)
ngf = int(opt.ngf)
ndf = int(opt.ndf)
nc = int(opt.nc)
n_extra_layers = int(opt.n_extra_layers)

# custom weights initialization called on netG and netD
def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        m.weight.data.normal_(0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        m.weight.data.normal_(1.0, 0.02)
        m.bias.data.fill_(0)

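The listing breaks off here; the remainder of main.py builds the networks, applies weights_init, and runs the WGAN training loop, in which the critic is updated Diters times with its weights clamped to [clamp_lower, clamp_upper] before each generator update. A minimal sketch of one such cycle (not the author's verbatim code; netG, netD, optimizerD, optimizerG and dataloader are assumed to be set up as above, with RMSprop by default and Adam when --adam is given):

data_iter = iter(dataloader)

# critic: Diters updates with weight clamping (enforces the Lipschitz constraint)
for _ in range(opt.Diters):
    for p in netD.parameters():
        p.data.clamp_(opt.clamp_lower, opt.clamp_upper)

    netD.zero_grad()
    real, _ = next(data_iter)
    real = Variable(real)
    noise = Variable(torch.randn(opt.batchSize, nz, 1, 1))
    fake = Variable(netG(noise).data)          # detach so no gradients flow into G
    # the critic maximizes D(real) - D(fake); the optimizer minimizes, hence the sign
    errD = netD(fake) - netD(real)
    errD.backward()
    optimizerD.step()

# generator: one update, pushing the critic's score on fakes up
netG.zero_grad()
noise = Variable(torch.randn(opt.batchSize, nz, 1, 1))
errG = -netD(netG(noise))
errG.backward()
optimizerG.step()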