ICLR 2021: An Overview of Graph Representation Learning / Graph Neural Network Papers (1/3)
Posted in Deep Learning and Graph Networks
Homophily, Heterophily
1. Combining Label Propagation and Simple Models out-performs Graph Neural Networks
Summary: Cuts training time and parameter count by roughly two orders of magnitude by using labels directly for prediction, yet surprisingly outperforms GNNs.
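The label-propagation half of the recipe can be sketched in a few lines of NumPy. This is a generic propagation scheme on a toy graph (the graph, α, and iteration count are illustrative), not the paper's full pipeline, which also combines a simple base predictor with residual correction:

```python
import numpy as np

# Toy undirected path graph: 4 nodes, edges (0-1, 1-2, 2-3).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Symmetrically normalized adjacency S = D^{-1/2} A D^{-1/2}.
d = A.sum(axis=1)
S = A / np.sqrt(np.outer(d, d))

# One-hot labels for the two labeled nodes (0 -> class 0, 3 -> class 1);
# unlabeled nodes start at zero.
Y0 = np.zeros((4, 2))
Y0[0, 0] = 1.0
Y0[3, 1] = 1.0

# Iterative label propagation: Y <- alpha * S @ Y + (1 - alpha) * Y0.
alpha = 0.9
Y = Y0.copy()
for _ in range(50):
    Y = alpha * S @ Y + (1 - alpha) * Y0

# Each unlabeled node is assigned the class whose mass reached it.
pred = Y.argmax(axis=1)
print(pred)
```

Note that no parameters are trained at all in this step, which is where the hundred-fold savings come from.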
2. How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision
Summary: Derives edge-existence probabilities from the attention scores (e.g., via dot products) and uses those probabilities in an additional self-supervised task. In other words, the attention mechanism in GAT is strengthened by adding attention-related self-supervised learning.
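The self-supervised signal can be sketched as follows: treat the dot-product attention score for a node pair as an edge logit and train it with binary cross-entropy against observed edges and sampled negatives. Everything below (embeddings, edge lists) is toy data, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy node embeddings (8 nodes, 4 dims) -- stand-ins for a GAT layer's
# hidden states; in the paper these come from the attention layer itself.
H = rng.normal(size=(8, 4))

def edge_prob(h_i, h_j):
    """Dot-product attention score squashed into an edge probability."""
    return 1.0 / (1.0 + np.exp(-(h_i @ h_j)))

# Positive (observed) edges and uniformly sampled negative pairs.
pos = [(0, 1), (1, 2), (2, 3)]
neg = [(0, 7), (4, 6), (3, 5)]

# Auxiliary self-supervised loss: binary cross-entropy pushing the
# attention score to predict whether an edge exists.
eps = 1e-9
loss = -np.mean(
    [np.log(edge_prob(H[i], H[j]) + eps) for i, j in pos]
    + [np.log(1.0 - edge_prob(H[i], H[j]) + eps) for i, j in neg]
)
print(round(loss, 4))
```

In training, this loss would be added (with a weighting coefficient) to the usual node-classification loss.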
3. Adaptive Universal Generalized PageRank Graph Neural Network
Summary: Targets two problems with GNNs: (1) they only suit homophilic graphs, and (2) over-smoothing. The proposed Generalized PageRank (GPR) GNN architecture adaptively learns the GPR weights so as to jointly optimize node feature and topological information extraction.
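The propagation step can be sketched as a weighted sum of powers of the normalized adjacency applied to the features. In GPR-GNN the weights γ_k are learned (which is what lets it adapt to heterophilic graphs); here they are fixed to PPR-style decaying values purely for illustration:

```python
import numpy as np

# Toy graph and one-hot features.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)

# Symmetrically normalized adjacency with self-loops (as in GCN/APPNP).
A_hat = A + np.eye(4)
d = A_hat.sum(axis=1)
S = A_hat / np.sqrt(np.outer(d, d))

# Generalized PageRank propagation: Z = sum_k gamma_k * S^k @ X.
# GPR-GNN trains the gamma_k jointly with the feature encoder; fixed
# personalized-PageRank weights are used below only as an example.
alpha, K = 0.5, 4
gammas = [alpha * (1 - alpha) ** k for k in range(K)] + [(1 - alpha) ** K]

Z = np.zeros_like(X)
P = X.copy()
for gamma in gammas:
    Z += gamma * P
    P = S @ P  # next power of the propagation matrix
print(Z.shape)
```

Because the γ_k may be learned to be negative or non-monotone, the same architecture can emphasize either low- or high-frequency graph signals, which is the key to handling heterophily.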
Oversmoothing, Oversquashing
4. On the Bottleneck of Graph Neural Networks and its Practical Implications
Summary: Identifies the over-squashing phenomenon in GNNs and connects it to a bottleneck. Over-squashing means that information from an exponentially growing receptive field is compressed into fixed-length node vectors; put simply, messages from distant nodes cannot be passed effectively. The authors conclude that message passing can be inefficient when the number of neighbors grows exponentially and long-range dependencies matter. In other words, the paper asks: how can messages be passed over long distances?
5. Simple Spectral Graph Convolution
Summary: Proposes a GCN based on a modified Markov diffusion kernel. Building on Simple Graph Convolution (SGC) and APPNP, the method addresses the performance degradation of GCNs (over-smoothing) as depth increases.
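A rough sketch of the idea, assuming the propagation averages the first K powers of the normalized adjacency (the paper's exact kernel may add a residual term, which is omitted here):

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Normalized adjacency with self-loops.
A_hat = A + np.eye(3)
d = A_hat.sum(axis=1)
S = A_hat / np.sqrt(np.outer(d, d))

# SSGC-style propagation: average the first K powers of S applied to X,
# instead of using only S^K as in SGC. Averaging keeps contributions from
# small neighborhoods, which is why depth hurts less.
K = 4
P, Z = X.copy(), np.zeros_like(X)
for _ in range(K):
    P = S @ P
    Z += P
Z /= K
print(Z.shape)
```

A linear classifier on Z would complete the model, mirroring SGC's "propagate once, then fit a linear model" recipe.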
Boosting
6. AdaGCN: Adaboosting Graph Convolutional Networks into Deep Models
Summary: Proposes a novel RNN-like deep graph neural network architecture by integrating AdaBoost into the network's computation. The resulting model, AdaGCN (AdaBoosting Graph Convolutional Network), can efficiently extract knowledge from the high-order neighbors of the current node.
7. Boost then Convolve: Gradient Boosting Meets Graph Neural Networks
Summary: Proposes a GNN model that incorporates gradient boosting. In the proposed BGNN, a gradient-boosted model first learns from the input node features on the graph; its boosted outputs are then handed to the GNN as new features.
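The two-stage pipeline can be sketched end to end: fit a small gradient-boosting ensemble on the raw node features, then append its predictions as extra features for a downstream GNN. Depth-1 regression stumps stand in here for the GBDT used in the paper, and the GNN itself is omitted; the data is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tabular node features and a regression target.
X = rng.normal(size=(100, 3))
y = X[:, 0] * 2.0 + np.sin(X[:, 1]) + 0.1 * rng.normal(size=100)

def fit_stump(X, r):
    """Best depth-1 regression stump (feature, threshold, left/right means)."""
    best = None
    for f in range(X.shape[1]):
        for t in np.percentile(X[:, f], [25, 50, 75]):
            mask = X[:, f] <= t
            if mask.all() or (~mask).all():
                continue
            lm, rm = r[mask].mean(), r[~mask].mean()
            err = ((r - np.where(mask, lm, rm)) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, f, t, lm, rm)
    return best[1:]

def predict_stump(stump, X):
    f, t, lm, rm = stump
    return np.where(X[:, f] <= t, lm, rm)

# Plain gradient boosting on squared loss: each stump fits the residual.
lr, pred = 0.3, np.zeros_like(y)
for _ in range(20):
    stump = fit_stump(X, y - pred)
    pred += lr * predict_stump(stump, X)

# BGNN's idea: concatenate the boosted prediction to the raw features and
# hand the result to a GNN (shown here only as the augmented matrix).
X_aug = np.hstack([X, pred[:, None]])
print(X_aug.shape)
```

In the actual method the boosting stage and the GNN are trained in alternation, so gradients from the GNN loss steer which residuals the next trees fit.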
Spectral Methods
8. Learning Parametrised Graph Shift Operators
Summary: The authors observe that the family of Laplacian operators used in graph neural networks can be embedded in a parametric family, so the exact form of the operator can be determined during learning. In short, the paper proposes a more general framework that unifies several previous models.
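One way such a parametric family can look (the exact parametrisation in the paper may differ) is S = m1·D^e1 + m2·D^e2·A·D^e3 + m3·I, where specific settings of the scalar parameters recover familiar operators:

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
d = A.sum(axis=1)

def pgso(A, m1, m2, m3, e1, e2, e3):
    """An assumed parametrised family of graph shift operators:
    S = m1 * D^e1 + m2 * D^e2 @ A @ D^e3 + m3 * I."""
    deg = A.sum(axis=1)
    D = lambda e: np.diag(deg ** e)
    return m1 * D(e1) + m2 * D(e2) @ A @ D(e3) + m3 * np.eye(len(A))

# Specific parameter settings recover standard operators:
L = pgso(A, 1, -1, 0, 1, 0, 0)            # unnormalized Laplacian D - A
A_norm = pgso(A, 0, 1, 0, 0, -0.5, -0.5)  # D^{-1/2} A D^{-1/2}
print(A_norm.round(3))
```

Making m1, m2, m3 and the exponents learnable lets gradient descent pick the operator best suited to the task, rather than fixing one by hand.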
9. Graph Coarsening with Neural Networks
Summary: Studies graph coarsening strategies and proposes a method for assigning weights to the coarsened graph. By focusing on properties of the Laplacian, suitable projection/lifting operators are derived.
10. Analyzing the Expressive Power of Graph Neural Networks in a Spectral Perspective
Summary: Analyzes graph networks from a spectral perspective, unifying many previous methods under a single framework.
Expressivity
11. On Graph Neural Networks versus Graph-Augmented MLPs
Summary: Studies a variant of graph neural networks, the GA-MLP (graph-augmented MLP). A GA-MLP first expands the node features with multi-hop operators on the graph and then applies an MLP. Compared with an ordinary GNN, the main difference is this up-front step: augmented embeddings are obtained by applying the linear operators A, A^2, …, A^k to the input representations, thereby capturing larger neighborhoods.
GNNs, especially deep ones, are hard to train, whereas GA-MLPs have a simple structure, are easy to train, and show competitive performance on many tasks. The paper digs into several problems (graph isomorphism, node classification, and community detection).
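The augmentation step can be sketched directly: stack X, AX, …, A^K X column-wise and feed the result to a row-wise MLP (omitted here). The graph, features, and the choice of row normalization below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph (4 nodes) and 2-dimensional node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = rng.normal(size=(4, 2))

# Row-normalized adjacency (one common choice of multi-hop operator).
A_norm = A / A.sum(axis=1, keepdims=True)

# Stack X, AX, ..., A^K X as the augmented node features.
K = 3
feats, P = [X], X.copy()
for _ in range(K):
    P = A_norm @ P
    feats.append(P)
X_aug = np.hstack(feats)  # shape (n, (K+1) * d)

# A GA-MLP now applies an ordinary MLP to each row of X_aug;
# no message passing is needed during training.
print(X_aug.shape)
```

Since the powers of A are precomputed once, training reduces to fitting an MLP on fixed features, which is why GA-MLPs are so much easier to train than deep GNNs.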
12. Expressive Power of Invariant and Equivariant Graph Neural Networks
Summary: Proves approximation guarantees for invariant and equivariant GNNs.
13. Graph Convolution with Low-rank Learnable Local Filters
Summary: Proposes L3Net, a graph convolution with low-rank learnable local filters. L3Net applies to both spatial and spectral graph convolutions. Experiments cover grid data, face recognition, and action recognition, and its robustness to graph noise is also tested.
Generalisability
14. How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks
Summary: This paper received the highest average review score, with individual scores of [9, 8, 9, 9].
To explain why different neural networks extrapolate differently, the authors study in detail how networks trained with gradient descent extrapolate. Intuitively, a network's behavior outside the training distribution is arbitrary and unpredictable [7], but in fact, if the network is trained with gradient descent, its extrapolation behavior follows regular patterns. Before evaluating a network's extrapolation ability, a metric for it is needed, so the authors define the notion of extrapolation error: the smaller a model's extrapolation error, the stronger its extrapolation ability. On this basis they discuss the conditions under which MLPs and GNNs are able to extrapolate.
15. A PAC-Bayesian Approach to Generalization Bounds for Graph Neural Networks
Summary: Using a PAC-Bayesian approach, the authors derive generalization bounds for the two main classes of graph neural networks: graph convolutional networks (GCNs) and message-passing GNNs.
16. INT: An Inequality Benchmark for Evaluating Generalization in Theorem Proving
Summary: An inequality theorem-proving benchmark designed to test agents' generalization ability.
Efficient Training
17. Graph Traversal with Tensor Functionals: A Meta-Algorithm for Scalable Learning
Summary: The authors propose approximating operations on graphs; in short, the graph is approximated by sampling random trees. The method is presented as a meta-algorithm and can be applied to a range of graph representation learning problems.
18. Degree-Quant: Quantization-Aware Training for Graph Neural Networks
Summary: Efficient training of GNNs via quantization-aware training.
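The basic building block of quantization-aware training is a quantize-dequantize ("fake quantization") pass inserted into the forward computation, so the network learns weights that survive low-precision arithmetic. Degree-Quant adds GNN-specific machinery on top; the generic sketch below (min/max affine scheme) covers only the building block:

```python
import numpy as np

def fake_quant(x, num_bits=8):
    """Quantize-dequantize with an affine min/max scheme: map x onto the
    integer grid [0, 2^b - 1], round, then map back to floats."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    if scale == 0:
        return x.copy()  # constant tensor: nothing to quantize
    zero_point = qmin - x.min() / scale
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale  # back to float, with rounding error

x = np.linspace(-1.0, 1.0, 5)
xq = fake_quant(x)
print(np.abs(x - xq).max())  # error bounded by about scale / 2
```

During QAT this pass is applied to weights and activations in the forward pass while gradients flow through it as if it were the identity (the straight-through estimator).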
To be continued.
Graph-Level Representations
Wasserstein Embedding for Graph Learning
Accurate Learning of Graph Representations with Graph Multiset Pooling
Code Representations
Retrieval-Augmented Generation for Code Summarization via Hybrid GNN
GraphCodeBERT: Pre-training Code Representations with Data Flow
Language-Agnostic Representation Learning of Source Code from Structure and Context
Time Series
Discrete Graph Structure Learning for Forecasting Multiple Time Series
Graph Edit Networks
Temporal Structures
Spatio-Temporal Graph Scattering Transform
Learning continuous-time PDEs from sparse data with graph neural networks
Inductive Representation Learning in Temporal Networks via Causal Anonymous Walks
Explainable Subgraph Reasoning for Forecasting on Temporal Knowledge Graphs
Semantic Graphs
Generative Scene Graph Networks
Learning Reasoning Paths over Semantic Graphs for Video-grounded Dialogues
Natural Language Processing
DialoGraph: Incorporating Interpretable Strategy-Graph Networks into Negotiation Dialogues
Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking
Molecules
Conformation-Guided Molecular Representation with Hamiltonian Neural Networks
MARS: Markov Molecular Sampling for Multi-objective Drug Discovery
Proteins
Learning from Protein Structure with Geometric Vector Perceptrons
Intrinsic-Extrinsic Convolution and Pooling for Learning on 3D Protein Structures
Neural representation and generation for RNA secondary structures
Physics
Grounding Physical Object and Event Concepts Through Dynamic Visual Reasoning
Combining Physics and Machine Learning for Network Flow Estimation
Isometric Transformation Invariant and Equivariant Graph Convolutional Networks
Meshes
Learning Mesh-Based Simulation with Graph Networks
Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric graphs
Reinforcement Learning
Learning to Represent Action Values as a Hypergraph on the Action Vertices
Winning the L2RPN Challenge: Power Grid Management via Semi-Markov Afterstate Actor-Critic
My Body is a Cage: the Role of Morphology in Graph-Based Incompatible Control
Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning
Graphical Models
CopulaGNN: Towards Integrating Representational and Correlational Roles of Graphs in Graph Neural Networks
Lossless Compression of Structured Convolutional Models via Lifting
Directed Acyclic Graph Neural Networks
Miscellaneous
Collective Robustness Certificates
Rapid Neural Architecture Search by Learning to Generate Graphs from Datasets
On Dyadic Fairness: Exploring and Mitigating Bias in Graph Connections
Learning Hyperbolic Representations of Topological Features
Graph Information Bottleneck for Subgraph Recognition