Graph Learning Academic Digest [2021/10/8]
Graph-related (graph learning | graph neural networks | graph optimization, etc.) (4 papers)
[ 1 ] Joint inference of multiple graphs with hidden variables from stationary graph signals
Link: https://arxiv.org/abs/2110.03666
Authors: Samuel Rey, Andrei Buciulea, Madeline Navarro, Santiago Segarra, Antonio G. Marques
Affiliations: Dept. of Signal Theory and Communications, King Juan Carlos University, Madrid, Spain; Dept. of Electrical and Computer Engineering, Rice University, Houston, USA
Note: Paper submitted to ICASSP 2022
Abstract: Learning graphs from sets of nodal observations represents a prominent problem formally known as graph topology inference. However, current approaches are typically limited to inferring a single network, and they assume that observations from all nodes are available. First, many contemporary setups involve multiple related networks; second, it is often the case that only a subset of nodes is observed while the rest remain hidden. Motivated by these facts, we introduce a joint graph topology inference method that models the influence of the hidden variables. Under the assumptions that the observed signals are stationary on the sought graphs and that the graphs are closely related, the joint estimation of multiple networks allows us to exploit such relationships to improve the quality of the learned graphs. Moreover, we confront the challenging problem of modeling the influence of the hidden nodes so as to minimize their detrimental effect. To obtain an amenable approach, we take advantage of the particular structure of the setup at hand and leverage the similarity between the different graphs, which affects both the observed and the hidden nodes. To test the proposed method, numerical simulations over synthetic and real-world graphs are provided.
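To make the stationarity assumption concrete: a signal is stationary on a graph when its covariance C commutes with the graph shift operator S (CS = SC), so joint topology inference can be posed as finding sparse, mutually similar shift operators that nearly commute with the sample covariances. The sketch below is a hypothetical NumPy illustration of that idea only; it omits the paper's explicit modeling of hidden variables, and the function name, parameters, and crude normalization step are assumptions rather than the authors' formulation.

```python
import numpy as np

def joint_topology_inference(signal_sets, alpha=0.05, beta=0.1, lr=1e-3, iters=2000, seed=0):
    """Hypothetical sketch: jointly estimate K graph shift operators S_k
    from graph-stationary signals; all names here are illustrative.

    signal_sets: list of K arrays of shape (n_nodes, n_samples).
    alpha: sparsity weight; beta: graph-similarity weight.
    """
    covs = [X @ X.T / X.shape[1] for X in signal_sets]   # sample covariances
    n = covs[0].shape[0]
    rng = np.random.default_rng(seed)
    S = [np.abs(rng.standard_normal((n, n))) for _ in covs]
    for k in range(len(S)):
        S[k] = 0.5 * (S[k] + S[k].T)                     # symmetric start
        np.fill_diagonal(S[k], 0.0)
    for _ in range(iters):
        for k, C in enumerate(covs):
            comm = C @ S[k] - S[k] @ C                   # stationarity <=> C and S commute
            grad = 2.0 * (C @ comm - comm @ C)           # gradient of ||CS - SC||_F^2 (C symmetric)
            for j in range(len(S)):
                if j != k:                               # pull closely related graphs together
                    grad += 2.0 * beta * (S[k] - S[j])
            S[k] -= lr * grad
            # soft-thresholding: proximal step for the l1 sparsity penalty
            S[k] = np.sign(S[k]) * np.maximum(np.abs(S[k]) - lr * alpha, 0.0)
            # project onto symmetric, hollow, nonnegative adjacencies, then
            # rescale to avoid the trivial all-zero solution (the paper uses
            # proper constraints instead of this crude normalization)
            S[k] = np.maximum(0.5 * (S[k] + S[k].T), 0.0)
            np.fill_diagonal(S[k], 0.0)
            S[k] *= n / max(S[k].sum(), 1e-12)
    return S
```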
[ 2 ] Training Stable Graph Neural Networks Through Constrained Learning
Link: https://arxiv.org/abs/2110.03576
Authors: Juan Cervino, Luana Ruiz, Alejandro Ribeiro
Affiliations: Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, USA
Abstract: Graph Neural Networks (GNNs) rely on graph convolutions to learn features from network data. GNNs are stable to different types of perturbations of the underlying graph, a property that they inherit from graph filters. In this paper we leverage the stability property of GNNs as a starting point to seek representations that are stable within a distribution. We propose a novel constrained learning approach that imposes a constraint on the stability condition of the GNN within a perturbation of choice. We showcase our framework on real-world data, corroborating that we are able to obtain more stable representations without compromising the overall accuracy of the predictor.
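A constrained learning problem of this kind is commonly tackled with a primal-dual scheme: descend on a Lagrangian combining the nominal loss with the stability constraint, then ascend on the multiplier. The PyTorch sketch below is a hypothetical illustration under that reading; the model, function, and parameter names (`eps` for the constraint level, `sigma` for the perturbation size) are assumptions, not the paper's API.

```python
import torch
import torch.nn as nn

class TwoLayerGNN(nn.Module):
    """Minimal graph convolutional model: each layer computes S @ X @ W."""
    def __init__(self, fin, fhid, fout):
        super().__init__()
        self.w1, self.w2 = nn.Linear(fin, fhid), nn.Linear(fhid, fout)

    def forward(self, S, X):
        return self.w2(S @ torch.relu(self.w1(S @ X)))

def primal_dual_step(model, opt, loss_fn, S, X, y, lam, eps=0.05, sigma=0.01, dual_lr=0.01):
    """One hypothetical primal-dual update enforcing the stability constraint
    loss(perturbed graph) - loss(nominal graph) <= eps."""
    S_pert = S + sigma * torch.randn_like(S)       # perturbation of choice
    nominal = loss_fn(model(S, X), y)
    slack = loss_fn(model(S_pert, X), y) - nominal - eps
    opt.zero_grad()
    (nominal + lam * slack).backward()             # primal descent on the Lagrangian
    opt.step()
    return max(0.0, lam + dual_lr * slack.item())  # dual ascent, multiplier >= 0
```

A training loop would call `primal_dual_step` repeatedly, carrying the multiplier `lam` across iterations.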
[ 3 ] Distributed Optimization of Graph Convolutional Network using Subgraph Variance
Link: https://arxiv.org/abs/2110.02987
Authors: Taige Zhao, Xiangyu Song, Jianxin Li, Wei Luo, Imran Razzak
Affiliations: School of Information Technology, Deakin University
Abstract: In recent years, Graph Convolutional Networks (GCNs) have achieved great success in learning from graph-structured data. As graphs grow in nodes and edges, single-processor GCN training can no longer meet the demands on time and memory, which has led to a boom in research on distributed GCN training frameworks. However, existing distributed GCN training frameworks incur enormous communication costs between processors, since large amounts of dependent node and edge information must be collected and transmitted from other processors for GCN training. To address this issue, we propose a Graph Augmentation based Distributed GCN framework (GAD). In particular, GAD has two main components: GAD-Partition and GAD-Optimizer. We first propose a graph augmentation-based partition (GAD-Partition) that divides the original graph into augmented subgraphs to reduce communication, by selecting and storing as few significant nodes from other processors as possible while guaranteeing the accuracy of training. In addition, we design a subgraph variance-based importance calculation formula and propose a novel weighted global consensus method, collectively referred to as GAD-Optimizer. This optimizer adaptively reduces the importance of subgraphs with large variances in order to reduce the effect of the extra variance introduced by GAD-Partition on distributed GCN training. Extensive experiments on four large-scale real-world datasets demonstrate that, compared to state-of-the-art methods, our framework significantly reduces the communication overhead (by 50%), improves the convergence speed (2x) of distributed GCN training, and achieves a slight gain in accuracy (0.45%) based on minimal redundancy.
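As a hypothetical illustration of the weighted global consensus, the PyTorch sketch below aggregates locally trained GCN parameters with weights that shrink as subgraph variance grows; inverse-variance weighting is an assumed stand-in for the paper's importance formula, and all names are illustrative.

```python
import torch

def variance_weighted_consensus(local_states, subgraph_vars, eps=1e-8):
    """Hypothetical sketch: aggregate locally trained GCN parameters with
    weights inversely proportional to each subgraph's variance, so that
    high-variance subgraphs contribute less to the global model.

    local_states: list of model state_dicts, one per processor.
    subgraph_vars: list of per-subgraph variance estimates (floats).
    """
    w = torch.tensor([1.0 / (v + eps) for v in subgraph_vars])
    w = w / w.sum()                                  # normalized importance weights
    return {
        name: sum(wk * state[name] for wk, state in zip(w, local_states))
        for name in local_states[0]
    }
```

A server would call it as `global_state = variance_weighted_consensus([m.state_dict() for m in local_models], variances)` and broadcast the result back to the processors.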
[ 4 ] A Few-shot Learning Graph Multi-Trajectory Evolution Network for Forecasting Multimodal Baby Connectivity Development from a Baseline Timepoint
Link: https://arxiv.org/abs/2110.03535
Authors: Alaa Bessadok, Ahmed Nebli, Mohamed Ali Mahjoub, Gang Li, Weili Lin, Dinggang Shen, Islem Rekik
Affiliations: BASIRA Lab, Istanbul Technical University, Istanbul, Turkey; Higher Institute of Informatics and Communication Technologies, University of Sousse; National Engineering School of Sousse, University of Sousse; LATIS Laboratory
Abstract: Charting the baby connectome evolution trajectory during the first year after birth plays a vital role in understanding the dynamic connectivity development of baby brains. Such analysis requires the acquisition of longitudinal connectomic datasets. However, both neonatal and postnatal scans are rarely acquired due to various difficulties. A small body of work has focused on predicting baby brain evolution trajectories from a neonatal brain connectome derived from a single modality. Although promising, large training datasets are essential to boost model learning and to generalize to multi-trajectory prediction from different modalities (i.e., functional and morphological connectomes). Here, we unprecedentedly explore the question: can we design a few-shot learning-based framework for predicting brain graph trajectories across different modalities? To this end, we propose a Graph Multi-Trajectory Evolution Network (GmTE-Net), which adopts a teacher-student paradigm where the teacher network learns on pure neonatal brain graphs and the student network learns on simulated brain graphs given a set of different timepoints. To the best of our knowledge, this is the first teacher-student architecture tailored for brain graph multi-trajectory growth prediction that is based on few-shot learning and generalized to graph neural networks (GNNs). To boost the performance of the student network, we introduce a local topology-aware distillation loss that forces the predicted graph topology of the student network to be consistent with the teacher network. Experimental results demonstrate substantial performance gains over benchmark methods. Hence, our GmTE-Net can be leveraged to predict atypical brain connectivity trajectory evolution across various modalities. Our code is available at https://github.com/basiralab/GmTE-Net.
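As a hypothetical sketch of a local topology-aware distillation term, the code below matches node strengths (weighted degrees) between the student's and teacher's predicted graphs; node strength is an assumed stand-in for the paper's exact topology measure, and the names are illustrative.

```python
import torch
import torch.nn.functional as F

def local_topology_distillation_loss(pred_student, pred_teacher):
    """Hypothetical sketch: penalize disagreement between the node strengths
    (weighted degrees) of the student's and teacher's predicted graphs.

    pred_*: tensors of shape (batch, n_rois, n_rois) holding predicted
    adjacency matrices for each brain graph in the batch.
    """
    strength_s = pred_student.sum(dim=-1)   # per-node strength of each graph
    strength_t = pred_teacher.sum(dim=-1)
    return F.l1_loss(strength_s, strength_t)
```

In training, such a term would be added to the student's reconstruction loss with a weighting coefficient.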
Strive in the causes; the results will follow. Author: CBlair. Please credit the original post when reposting: https://www.cnblogs.com/BlairGrowing/p/15381282.html