Graph Representation Learning Study Notes - Chapter 2
Posted by Dodo·D·Caster
Chapter 2 Background and Traditional Approaches
2.1 Graph Statistics and Kernel Methods
Node-level
degree : considers the number of neighbors
- $d_u$ : node u's degree
- takes into account how many neighbors a node has
centrality : considers both the number and the importance of neighbors
- $e_u$ : eigenvector centrality
- proportional to the average centrality of its neighbors
- take into account how important a node’s neighbors are
- it ranks the likelihood that a node is visited on a random walk of infinite length on the graph
- $x_i = c \sum_{j=1}^{n} a_{ij} x_j$
- $x$ denotes the node features; initialized with node degrees
- $a_{ij}$ is the corresponding entry of the adjacency matrix
- $c$ is a constant (see the power-iteration sketch below)
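To make the recurrence concrete, here is a minimal power-iteration sketch; the 4-node graph and the fixed iteration count are made-up illustration choices, not from the book:

```python
import numpy as np

# Toy undirected graph as an adjacency matrix (made up for illustration)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

x = A.sum(axis=1)              # initialize x with node degrees
for _ in range(100):
    x = A @ x                  # x <- A x
    x = x / np.linalg.norm(x)  # normalization plays the role of the constant c

print(x)  # entries proportional to eigenvector centrality; node 2 scores highest
```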
betweenness centrality
- measures how often a node lies on the shortest path between two other nodes
closeness centrality
- measures the average shortest path length between a node and all other nodes
clustering coefficient : considers how tightly connected a node's neighbors are to each other
- $c_u$ : clustering coefficient (ranges from 0 to 1)
- measures the proportion of closed triangles in a node’s local neighborhood
- measures how tightly clustered a node’s neighborhood is
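All of the node-level statistics above have ready-made implementations in networkx, which makes a quick sanity check easy; the toy edge list here is an assumption for illustration:

```python
import networkx as nx

G = nx.Graph([(0, 1), (0, 2), (1, 2), (2, 3)])  # toy graph: a triangle plus a pendant node

print(dict(G.degree()))              # degree
print(nx.eigenvector_centrality(G))  # eigenvector centrality
print(nx.betweenness_centrality(G))  # betweenness centrality
print(nx.closeness_centrality(G))    # closeness centrality
print(nx.clustering(G))              # per-node clustering coefficient
```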
Graph-level
Bag of nodes
- just aggregate node-level statistics (e.g., degree, centrality, and clustering coefficient) and use the aggregated information as a graph-level representation
- Drawbacks :
- entirely based upon local node-level information
- can miss important global properties in the graph
iterative neighborhood aggregation
- extract node-level features that contain more information than just their local ego graph, and then aggregate these richer features into a graph-level representation
- example : the Weisfeiler-Lehman (WL) algorithm and kernel
- initialize each node's label, typically with its degree
- iterate: at each round, hash the aggregated labels of each node's neighbors to obtain its new label
the WL kernel is computed by measuring the difference between the resultant label sets for two graphs.
One way to approximately test graph isomorphism: run the WL algorithm for K rounds on both graphs and check whether they end up with the same label sets.
The WL algorithm
My understanding: because WL aggregates neighbor features via hashing rather than, say, averaging or max-pooling, the labels obtained after K rounds of aggregation do a good job of keeping structurally different nodes distinct. The resulting label sets can therefore be compared across two graphs to check whether they are isomorphic, as in the sketch below.
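A minimal sketch of this idea, assuming the graph is given as a plain adjacency dict (the helper name `wl_labels` and the toy graphs are my own):

```python
# One-dimensional WL label refinement: labels start from degrees and are
# re-hashed each round from the node's own label plus its neighbors' labels.
def wl_labels(adj, k):
    labels = {u: len(nbrs) for u, nbrs in adj.items()}  # init: degree
    for _ in range(k):
        labels = {
            u: hash((labels[u], tuple(sorted(labels[v] for v in adj[u]))))
            for u in adj
        }
    return labels

# Two graphs can be isomorphic only if their label multisets match.
g1 = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
g2 = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2]}
print(sorted(wl_labels(g1, 3).values()) == sorted(wl_labels(g2, 3).values()))
```

Both toy graphs are a triangle with one pendant node, so the label multisets match and the comparison prints True.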
graphlets
- simply count the occurrence of different small subgraph structures
- graphlet kernel involves
- enumerating all possible graph structures of a particular size
- counting how many times they occur in the full graph.
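A toy sketch of the counting step for size-3 graphlets only (closed triangles vs. open paths); a real graphlet kernel enumerates more and larger structures, and the edge set here is made up:

```python
from itertools import combinations

edges = {(0, 1), (0, 2), (1, 2), (2, 3)}
nodes = {u for e in edges for u in e}
has_edge = lambda u, v: (u, v) in edges or (v, u) in edges

# Count each connected 3-node subgraph type over all node triples
counts = {"triangle": 0, "path": 0}
for a, b, c in combinations(sorted(nodes), 3):
    k = sum([has_edge(a, b), has_edge(a, c), has_edge(b, c)])
    if k == 3:
        counts["triangle"] += 1
    elif k == 2:
        counts["path"] += 1
print(counts)  # the graphlet-count vector used to compare graphs
```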
path-based methods
- instead of enumerating all possible graphlets, examine the different kinds of paths that occur in the graph
- random walk kernel
- run random walks
- count the occurrence of different degree sequences
- shortest-path kernel
- only uses the shortest paths between nodes
- advantages : extracts rich structural information & avoids many of the combinatorial pitfalls of graph data
2.2 Neighborhood Overlap Detection
The statistics discussed previously do not quantify the relationships between nodes; that is to say, they are not very useful for the task of relation prediction.
My understanding: neighborhood overlap is really about similarity between nodes. The more two nodes' neighborhoods overlap (e.g., the more common neighbors they share), the more similar the nodes are, and the more likely an edge exists between them.
Local overlap statistics
- functions of the number of common neighbors two nodes share
- count the number of common neighbors
- quantifies the overlap between node neighborhoods while reducing the bias caused by node degree
- for example : Sorensen index, Salton index, Jaccard overlap
- count the number of common neighbors + weight the common neighbors by importance
- for example
- Resource Allocation (RA) index : sums the inverse degrees of the common neighbors
- Adamic-Adar (AA) index : uses the inverse logarithm of the degrees
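A minimal sketch of these local overlap statistics, assuming the graph is stored as an adjacency dict of neighbor sets (the function names and toy graph are my own):

```python
import math

adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}

def jaccard(u, v):  # common neighbors normalized by the neighborhood union
    return len(adj[u] & adj[v]) / len(adj[u] | adj[v])

def ra(u, v):  # Resource Allocation: sum of inverse degrees of common neighbors
    return sum(1 / len(adj[w]) for w in adj[u] & adj[v])

def aa(u, v):  # Adamic-Adar: sum of inverse log-degrees of common neighbors
    return sum(1 / math.log(len(adj[w])) for w in adj[u] & adj[v])

print(jaccard(0, 3), ra(0, 3), aa(0, 3))
```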
Global overlap statistics
- builds on the local statistics by also handling the case where two nodes share no overlapping neighborhood yet still belong to the same community in the graph
Katz index
- counts the number of paths of all lengths between a pair of nodes, with paths of different lengths carrying different weights
- $S_{Katz}[u,v] = \sum_{i=1}^{\infty} \beta^i A^i[u,v]$
- $\beta$ : a user-defined parameter that controls the weights of paths of different lengths; typically, the shorter the path, the larger its weight (see the sketch below)
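A minimal numpy sketch: when $\beta$ is below the reciprocal of A's largest eigenvalue, the infinite sum above has the closed form $(I - \beta A)^{-1} - I$; the toy matrix and the choice $\beta = 0.1$ are illustration values:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

beta = 0.1  # must be < 1 / spectral radius of A for the series to converge
S_katz = np.linalg.inv(np.eye(len(A)) - beta * A) - np.eye(len(A))
print(S_katz[0, 3])  # Katz similarity between nodes 0 and 3
```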
Leicht, Holme, and Newman (LHN) similarity
- the Katz index is strongly biased by node degree: a high-degree node lies on many paths, so its summed score is large regardless of actual similarity. The LHN index addresses this problem by normalizing the path counts by their expected value under a random graph model
I don't fully understand the derivation of LHN.
Random walk methods
- consider random walks rather than exact counts of paths over the graph
- Personalized PageRank algorithm
- the similarity between two nodes is proportional to the probability of reaching one node from the other via a random walk, as in the sketch below
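A rough power-iteration sketch of Personalized PageRank with restart probability alpha; the graph, alpha, and iteration count are all made-up illustration choices:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)  # row-stochastic transition matrix

def ppr(source, alpha=0.15, iters=100):
    e = np.zeros(len(A)); e[source] = 1.0    # restart distribution
    p = e.copy()
    for _ in range(iters):
        p = alpha * e + (1 - alpha) * p @ P  # restart, or take one walk step
    return p

print(ppr(0))  # higher probability => more similar to node 0
```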
2.3 Graph Laplacians and Spectral Methods
Graph Laplacians
- unnormalized laplacian
- Definition : L = D - A
- L : Laplacian matrix
- D : degree matrix
- A : adjacency matrix
- Properties of L
- symmetric and positive semi-definite
- $x^T L x = \sum_{(u,v) \in \mathcal{E}} (x[u] - x[v])^2$
- has $|V|$ non-negative eigenvalues
- Theorem : the multiplicity of the zero eigenvalue of the Laplacian equals the number of connected components in the graph
- normalized laplacian
- symmetric normalized laplacian
- $L_{sym} = D^{-\frac{1}{2}} L D^{-\frac{1}{2}}$
- random walk laplacian
- $L_{RW} = D^{-1} L$
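A short numpy sketch building all three Laplacians and checking the zero-eigenvalue theorem; the graph (a triangle plus a separate edge, i.e. two connected components) is made up for illustration:

```python
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                                    # unnormalized Laplacian
D_inv_sqrt = np.diag(1 / np.sqrt(np.diag(D)))
L_sym = D_inv_sqrt @ L @ D_inv_sqrt          # symmetric normalized Laplacian
L_rw = np.diag(1 / np.diag(D)) @ L           # random walk Laplacian

eigvals = np.linalg.eigvalsh(L)              # eigenvalues of symmetric L
print(np.sum(np.isclose(eigvals, 0)))        # prints 2: two connected components
```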
Graph Cuts and Clustering
Graph cuts
- partition a graph into K non-overlapping subsets $A_1, ..., A_K$
Method 1 (minimizing the cut value)
- tends to simply produce clusters that consist of a single node
Method 2 (Ratio Cut) - addresses the isolated-node problem
- enforces that the partitions are all reasonably large
Method 3 (Normalized Cut)
- enforces that all clusters have a similar number of edges incident to their nodes
Generalized spectral clustering
- steps
- find the K smallest eigenvectors of L (excluding the smallest): $e_{|V|-1}, e_{|V|-2}, ..., e_{|V|-K}$
- form the matrix $U \in \mathbb{R}^{|V| \times (K-1)}$ with the eigenvectors from step 1 as columns
- represent each node by its corresponding row in the matrix U, i.e. $z_u = U[u], \forall u \in V$
- run K-means clustering on the embeddings $z_u, \forall u \in V$ (sketched below)
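A rough end-to-end sketch of these steps, assuming numpy and scikit-learn are available; the 6-node graph (two triangles joined by one edge) is made up, and the eigenvectors are taken in eigh's ascending-eigenvalue order:

```python
import numpy as np
from sklearn.cluster import KMeans

A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A       # unnormalized Laplacian

K = 2
vals, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
U = vecs[:, 1:K]                     # K-1 smallest eigenvectors, excluding the first
Z = U                                # row u of U is node u's embedding z_u

labels = KMeans(n_clusters=K, n_init=10).fit_predict(Z)
print(labels)  # expected to separate the two triangles {0,1,2} and {3,4,5}
```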