How would you evaluate the paper "On Unifying Deep Generative Models"?


[1706.00550] On Unifying Deep Generative Models

Supplementary materials:

Academia | New CMU research attempts to unify deep generative models: building a bridge between GANs and VAEs

Abstract:

Deep generative models have achieved impressive success in recent years. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), as powerful frameworks for deep generative model learning, have largely been considered as two distinct paradigms and have received extensive independent study. This paper establishes formal connections between deep generative modeling approaches through a new formulation of GANs and VAEs. We show that GANs and VAEs are essentially minimizing KL divergences in opposite directions and with reversed latent/visible treatments, extending the two learning phases of the classic wake-sleep algorithm, respectively. The unified view provides a powerful tool for analyzing a diverse set of existing model variants and enables the exchange of ideas across research lines in a principled way. For example, we transfer the importance weighting method from the VAE literature for improved GAN learning, and enhance VAEs with an adversarial mechanism. Quantitative experiments show the generality and effectiveness of the imported extensions.

This is quite interesting work. The paper attempts to explain the recently popular family of deep generative models (GANs and VAEs in particular) and their many variants within a single unified framework.

Take GANs as an example. The original GAN paper constructed the model using game theory, and recently quite a few papers have tried to understand or interpret GANs from other angles (e.g., https://arxiv.org/abs/1606.00709 and https://arxiv.org/abs/1610.03483). This paper starts from yet another angle: it treats X (e.g., the image) as a latent variable and uses variational inference, a classic method in Bayesian inference, to explain the generation process. Quite clever.
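Schematically, the role swap looks as follows (my shorthand, consistent with the min_P KL(P||Q) formulas later in this answer; see the paper for the precise conditional formulation):

```latex
\underbrace{\min_{q}\ \mathrm{KL}\big(q(z \mid x)\,\|\,p(z \mid x)\big)}_{\text{standard variational inference over latents } z}
\qquad\longleftrightarrow\qquad
\underbrace{\min_{P}\ \mathrm{KL}\big(P \,\|\, Q\big)}_{\text{GANs, with the sample } x \text{ treated as the latent}}
```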

The biggest benefit of this formulation is that it readily connects to familiar earlier models such as VAEs and wake-sleep, which are themselves derived from variational inference. With this understanding in hand, it becomes easy to combine models solved via VI, or the many off-the-shelf VI methods, with the GAN family, which may well lead to new results on various benchmarks.

Conflict of interest... I watched the first author write this paper right next to me...

 
 

Replying to a question from the comments, on how our work relates to recent attempts to "unify VAEs and GANs", such as "Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks":

 

Our paper (which recently received a major update) discusses these works (see Section 6). Generally, those papers propose new deep generative models (DGMs) that combine VAEs and GANs, so "unifying" there means combining the two models into a single joint model. We, by contrast, aim to establish a unified view of these DGM models and algorithms and to reveal the relations among them, rather than to design a new model instance.

 

Specifically, Adversarial Variational Bayes mainly uses an implicit distribution as the VAE's inference model (whereas a standard VAE assumes an explicit inference distribution, e.g., a Gaussian). To learn this implicit distribution, the paper resorts to an adversarial loss (an implicit distribution does not support likelihood evaluation, so the conventional reconstruction loss does not apply). There has been a lot of similar recent work, e.g., [26, 36, 49, 54] already cited in our paper, all with the same basic idea of using an implicit distribution as the inference model. These works can be seen as special cases of the unified view we propose. More specifically:

 

Briefly, these works are instances of the general idea proposed in our paper, i.e., symmetric modeling of generation and inference (Section 6): we can apply implicit distributions and an adversarial loss to *generation* (i.e., GANs). Symmetrically, we can apply implicit distributions and an adversarial loss to *inference* with exactly the same formulation, which is essentially what these works do. For example, if we let z be the observed data and x the latent code, InfoGAN is exactly a VAE with an implicit distribution as its inference model. (A small code sketch of the implicit-inference idea follows.)
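To make the implicit-inference idea concrete, here is a minimal PyTorch sketch in the spirit of AVB (my own illustration, not code from either paper; the network sizes and names are made up). The encoder maps (x, noise) to z, so q(z|x) has no tractable density, and a discriminator over (x, z) pairs supplies the training signal that a likelihood-based loss cannot:

```python
# Sketch of an AVB-style implicit inference model (illustrative only).
import torch
import torch.nn as nn

x_dim, z_dim, eps_dim, h = 784, 8, 8, 256  # hypothetical sizes

# Implicit q(z|x): z = f(x, eps) with random noise eps, so q(z|x)
# has no closed-form density we could evaluate.
encoder = nn.Sequential(
    nn.Linear(x_dim + eps_dim, h), nn.ReLU(), nn.Linear(h, z_dim))

# Discriminator T(x, z), trained to tell encoder samples from prior samples.
discriminator = nn.Sequential(
    nn.Linear(x_dim + z_dim, h), nn.ReLU(), nn.Linear(h, 1))

def sample_z(x):
    eps = torch.randn(x.size(0), eps_dim)
    return encoder(torch.cat([x, eps], dim=1))

x = torch.randn(32, x_dim)        # stand-in for a data batch
z_q = sample_z(x)                 # samples from implicit q(z|x)
z_p = torch.randn(32, z_dim)      # samples from the prior p(z)

bce = nn.BCEWithLogitsLoss()
d_loss = (bce(discriminator(torch.cat([x, z_q.detach()], dim=1)), torch.ones(32, 1))
          + bce(discriminator(torch.cat([x, z_p], dim=1)), torch.zeros(32, 1)))
# At the optimum, T(x, z) approximates log q(z|x) - log p(z), so T(x, z_q)
# can stand in for the intractable KL(q(z|x) || p(z)) when training the encoder.
```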

 

This symmetric view of generation and inference is one of the key insights of our work. It helps reveal the connections between GANs and ADA (adversarial domain adaptation), as well as the resemblance of GANs to variational inference.

 

=============== Original answer below ===============

 

Thanks for the interest in our work. We will keep improving the draft; corrections and discussion of its shortcomings are very welcome.

Our goal in this work is not to propose new models, but to reformulate several basic families of deep generative models (DGMs), reveal the relations among them, and establish a unified interpretation. The unified framework has two main benefits:

(1) It provides a better, or new, understanding of existing models and their numerous variants, and clarifies the thread along which the algorithms have evolved;

(2) It encourages previously independent lines of DGM research to cross-fertilize in follow-up work. We hope the analysis framework proposed in the paper will stimulate more new DGM algorithms and models.

For (1), the paper's main conclusion is that GANs and VAEs are, by and large, minimizing KL divergences in opposite directions. *Roughly speaking*, for learning the generator P, GANs do min_{P} KL(P||Q) while VAEs do min_{P} KL(Q||P). This brings several insights (a toy numerical illustration of point 3 follows the list):

1) This form of GANs is analogous to variational inference in Bayesian inference: view P as the inference model and Q as the posterior, so we are using *inference* to explain *generation*. The discussion section at the end of the paper treats this point in more detail.

2) Optimizing the KL in the two directions corresponds exactly to the two phases of the classic wake-sleep algorithm: GANs can be viewed as an extension of the sleep phase, and VAEs as an extension of the wake phase.

3) By the asymmetry of KL, the KL(P||Q) optimized by GANs makes GANs tend to miss modes, while VAEs tend to cover modes. This has also been touched on in earlier papers, e.g., [1][29].
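The mode-missing vs. mode-covering behavior can be checked numerically. Below is a toy Python sketch (my own illustration, not from the paper): the data distribution Q is a two-mode Gaussian mixture, the model P is a single Gaussian, and a grid search over P's parameters shows that minimizing KL(P||Q) locks onto one mode while minimizing KL(Q||P) spreads over both.

```python
# Toy demonstration of the KL asymmetry on a 1-D grid.
import numpy as np
from scipy.stats import norm

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
# Q: bimodal "data" distribution with modes at -3 and +3.
q = 0.5 * norm.pdf(x, -3, 1) + 0.5 * norm.pdf(x, 3, 1)

def kl(a, b):
    # Discrete approximation of KL(a||b) on the grid.
    eps = 1e-12
    return np.sum(a * (np.log(a + eps) - np.log(b + eps))) * dx

best = {"KL(P||Q)": (np.inf, None), "KL(Q||P)": (np.inf, None)}
for mu in np.linspace(-4, 4, 81):
    for sigma in np.linspace(0.5, 5, 46):
        p = norm.pdf(x, mu, sigma)  # P: single-Gaussian "model"
        for name, val in (("KL(P||Q)", kl(p, q)), ("KL(Q||P)", kl(q, p))):
            if val < best[name][0]:
                best[name] = (val, (mu, sigma))

for name, (val, (mu, sigma)) in best.items():
    print(f"argmin {name}: mu={mu:+.2f}, sigma={sigma:.2f}")
# Typical output: KL(P||Q) is minimized near mu = +/-3 with small sigma
# (mode-seeking / mode-missing); KL(Q||P) is minimized near mu = 0 with
# large sigma (mode-covering).
```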

 

For (2), we give two examples showing that techniques for strengthening VAEs can be applied directly to improve GANs, and, conversely, techniques previously used to improve GANs can improve VAEs. For the former, starting from importance weighted VAEs we can straightforwardly derive importance weighted GANs (a sketch of the idea follows); for the latter, we transplant the GAN adversarial mechanism directly onto VAEs. We did essentially no hyperparameter tuning in the experiments, yet both extensions improve on their base models.
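For intuition on the importance-weighting transfer, here is a hedged Python sketch (my own reading of the idea, not the authors' code or their exact objective): near its optimum a discriminator satisfies D/(1-D) ≈ p_data/p_gen, so the discriminator logit gives a log density-ratio estimate that can reweight generated samples in the generator update, analogous to IWAE's multi-sample weights.

```python
# Hypothetical sketch of an importance-weighted GAN generator update.
import torch
import torch.nn.functional as F

def iw_generator_loss(d_logits):
    """d_logits: discriminator logits on generated samples, shape [batch, 1].

    For a near-optimal discriminator, D/(1-D) = exp(logit) estimates the
    density ratio p_data/p_gen, so a softmax over the batch of logits
    yields self-normalized importance weights.
    """
    with torch.no_grad():
        w = torch.softmax(d_logits.squeeze(-1), dim=0)
    # Non-saturating generator loss -log D(x) = softplus(-logit),
    # reweighted per sample by the importance weights.
    per_sample = F.softplus(-d_logits.squeeze(-1))
    return (w * per_sample).sum()

loss = iw_generator_loss(torch.randn(32, 1))  # dummy logits as a smoke test
```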

 

Disclosure: I am one of the authors.
