Table of Contents

  • Preface
  • Terminology
  • Notes
  • GRAPH AUTOENCODERS
  • Network Embedding
  • Deep Neural Network for Graph Representations (DNGR)
  • Structural Deep Network Embedding (SDNE)
  • Variational Graph Autoencoder (VGAE)
  • Adversarially Regularized Variational Graph Autoencoder (ARVGA)
  • GraphSage
  • DGI
  • Closing Remarks

[Survey] A Comprehensive Survey on Graph Neural Networks (3)

Preface

Hello, friends!
Thank you very much for reading Haihong's article. If there is anything wrong in it, please feel free to point it out~

About me ଘ(੭ˊᵕˋ)੭
Nickname: Haihong
Tags: programmer | C++ player | student
Bio: I got into programming through the C language and then transferred into a computer science major. I have received a national scholarship, won some national and provincial awards in competitions, and have been recommended for postgraduate admission.
Learning experience: solid fundamentals + take plenty of notes + write plenty of code + think a lot + learn English well!

Nothing but hard work 💪

This article serves only as my own study notes, used for building a knowledge system and for review.
Know what it is, and know why it is so!

Terminology

  • negative cross entropy

Notes

GRAPH AUTOENCODERS

Graph autoencoders (GAEs) are deep neural architectures which map nodes into a latent feature space and decode graph information from the latent representations. GAEs can be used to learn network embeddings or to generate new graphs.

Network Embedding

A network embedding is a low-dimensional vector representation of a node which preserves the node's topological information. GAEs learn network embeddings using an encoder to extract the embeddings and a decoder to enforce that the embeddings preserve graph topological information such as the PPMI matrix and the adjacency matrix.

Deep Neural Network for Graph Representations (DNGR)

Deep Neural Network for Graph Representations (DNGR) [59] uses a stacked denoising autoencoder [108] to encode and decode the PPMI matrix via multi-layer perceptrons.
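As an illustration, here is a minimal sketch of a DNGR-style stacked denoising autoencoder, assuming a PyTorch implementation; the class name, layer sizes, and noise rate are illustrative, not taken from [59]:

```python
# Minimal sketch: stacked denoising autoencoder over rows of a PPMI matrix.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_nodes, hidden_dims=(512, 128), noise_rate=0.2):
        super().__init__()
        self.noise_rate = noise_rate
        # Encoder: stacked MLP layers mapping a PPMI row to a low-dim code.
        self.encoder = nn.Sequential(
            nn.Linear(n_nodes, hidden_dims[0]), nn.ReLU(),
            nn.Linear(hidden_dims[0], hidden_dims[1]), nn.ReLU(),
        )
        # Decoder mirrors the encoder to reconstruct the clean PPMI row.
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dims[1], hidden_dims[0]), nn.ReLU(),
            nn.Linear(hidden_dims[0], n_nodes),
        )

    def forward(self, ppmi_rows):
        # Denoising: randomly zero out input entries, but reconstruct
        # the original (clean) input.
        mask = (torch.rand_like(ppmi_rows) > self.noise_rate).float()
        code = self.encoder(ppmi_rows * mask)  # the network embedding
        recon = self.decoder(code)
        return code, recon
```

Training minimizes a reconstruction loss (e.g. MSE) between `recon` and the clean PPMI rows; the encoder output `code` serves as the node's network embedding.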

Structural Deep Network Embedding (SDNE)

Structural Deep Network Embedding (SDNE) [60] uses a stacked autoencoder to preserve node first-order proximity and second-order proximity jointly.


SDNE proposes two loss functions, one on the outputs of the encoder and one on the outputs of the decoder.


The first loss function enables the learned network embeddings to preserve node first-order proximity by minimizing the distance between a node's network embedding and its neighbors' network embeddings. The first loss function L_{1st} is defined as

L_{1st} = \sum_{(v,u) \in E} A_{v,u} \, \lVert enc(x_v) - enc(x_u) \rVert^2


The second loss function enables the learned network embeddings to preserve node second-order proximity by minimizing the distance between a node's inputs and its reconstructed inputs. Concretely, the second loss function L_{2nd} is defined as

L_{2nd} = \sum_{v \in V} \lVert (dec(enc(x_v)) - x_v) \odot \mathbf{b}_v \rVert^2

where b_{v,u} = 1 if A_{v,u} = 0, and b_{v,u} = \beta > 1 if A_{v,u} = 1.
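To make the two losses concrete, here is a minimal sketch, assuming a PyTorch implementation where `encode`/`decode` stand for the two halves of the stacked autoencoder (in SDNE, x_v is typically the node's adjacency row, so X can be A itself):

```python
# Minimal sketch of SDNE's first- and second-order proximity losses.
import torch

def sdne_losses(X, A, encode, decode, beta=5.0):
    Z = encode(X)      # network embeddings, one row per node
    X_hat = decode(Z)  # reconstructed inputs

    # L_1st: connected nodes should have nearby embeddings.
    # Pairwise squared distances, weighted by the adjacency matrix.
    dist = torch.cdist(Z, Z) ** 2
    l_first = (A * dist).sum()

    # L_2nd: reconstruct each node's input, penalizing errors on
    # observed (non-zero) entries more heavily via the weights b_v.
    B = torch.ones_like(A)
    B[A > 0] = beta
    l_second = (((X_hat - X) * B) ** 2).sum()
    return l_first, l_second
```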


DNGR [59] and SDNE [60] only consider node structural information, i.e., the connectivity between pairs of nodes. They ignore that nodes may contain feature information describing the attributes of the nodes themselves.

Graph Autoencoder (GAE*) [61] leverages GCN [22] to encode node structural information and node feature information at the same time.

The encoder of GAE* consists of two graph convolutional layers, which takes the form

Z = enc(X, A) = Gconv(f(Gconv(A, X; \Theta_1)); \Theta_2)

where Z denotes the network embedding matrix of the graph, f(\cdot) is a ReLU activation function, and Gconv(\cdot) is a graph convolutional layer.
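A minimal sketch of this two-layer GCN encoder, together with the inner-product decoder that reconstructs the adjacency matrix (assuming PyTorch; `A_norm` denotes a symmetrically normalized adjacency matrix with self-loops, and the class name is made up):

```python
# Minimal sketch of a GAE*-style encoder/decoder.
import torch
import torch.nn as nn

class GAEStar(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.theta1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.theta2 = nn.Linear(hid_dim, out_dim, bias=False)

    def encode(self, X, A_norm):
        # Two graph convolutional layers:
        # Z = A_norm * f(A_norm * X * Theta1) * Theta2
        H = torch.relu(A_norm @ self.theta1(X))
        return A_norm @ self.theta2(H)

    def decode(self, Z):
        # Inner-product decoder reconstructs the adjacency matrix:
        # A_hat[v, u] = sigmoid(z_v^T z_u)
        return torch.sigmoid(Z @ Z.T)
```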

Simply reconstructing the graph adjacency matrix may lead to overfitting due to the capacity of the autoencoders.

Variational Graph Autoencoder (VGAE)

Variational Graph Autoencoder (VGAE) [61] is a variational variant of GAE* that learns the distribution of the data by optimizing a variational lower bound.

Adversarially Regularized Variational Graph Autoencoder (ARVGA)

Adversarially Regularized Variational Graph Autoencoder (ARVGA) [62], [109] employs the training scheme of a generative adversarial network (GAN).
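A minimal sketch of the adversarial regularization idea, assuming PyTorch (the discriminator architecture and latent dimension are illustrative, not taken from [62]): a discriminator learns to distinguish samples drawn from a Gaussian prior from the graph encoder's latent codes, while the encoder is trained to fool it.

```python
# Minimal sketch of ARVGA-style adversarial regularization.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim = 16  # illustrative latent size
# Discriminator: prior samples are "real", encoder codes are "fake".
disc = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1))

def adversarial_losses(Z):
    """Z: (num_nodes, latent_dim) latent codes from the graph encoder."""
    prior = torch.randn_like(Z)      # samples from a Gaussian prior
    real_logits = disc(prior)
    fake_logits = disc(Z.detach())   # detach: this loss updates disc only
    d_loss = (
        F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
        + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    )
    # Encoder ("generator") loss: make the codes look like prior samples.
    enc_logits = disc(Z)
    g_loss = F.binary_cross_entropy_with_logits(enc_logits, torch.ones_like(enc_logits))
    return d_loss, g_loss
```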



GraphSage

Similar to GAE*, GraphSage [42] encodes node features with two graph convolutional layers. Instead of optimizing the reconstruction error, GraphSage shows that the relational information between two nodes can be preserved by negative sampling with the following loss.
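The negative-sampling loss, as given in the original GraphSage paper [42], is

L(z_v) = -\log\big(\sigma(z_v^\top z_u)\big) - Q \cdot \mathbb{E}_{v_n \sim P_n(v)} \log\big(\sigma(-z_v^\top z_{v_n})\big)

where u is a node that co-occurs with v on a fixed-length random walk, \sigma is the sigmoid function, P_n is a negative sampling distribution, and Q is the number of negative samples.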

DGI

DGI [56] alternatively drives local network embeddings to capture global structural information by maximizing local mutual information. It shows a distinct improvement over GraphSage [42] experimentally.
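A minimal sketch of DGI's objective, assuming PyTorch; `H_pos`/`H_neg` are node embeddings from the real graph and a corrupted graph, and `W` is a learned (d, d) bilinear weight, following the discriminator form in the DGI paper:

```python
# Minimal sketch of DGI's local-global mutual information objective.
import torch
import torch.nn.functional as F

def dgi_loss(H_pos, H_neg, W):
    # Graph-level summary vector: sigmoid of the mean of real patch embeddings.
    s = torch.sigmoid(H_pos.mean(dim=0))
    # Bilinear discriminator logits: D(h, s) = sigmoid(h^T W s).
    pos_logits = H_pos @ W @ s  # real (patch, summary) pairs: score high
    neg_logits = H_neg @ W @ s  # corrupted pairs: score low
    logits = torch.cat([pos_logits, neg_logits])
    labels = torch.cat([torch.ones_like(pos_logits),
                        torch.zeros_like(neg_logits)])
    # This binary cross-entropy acts as a Jensen-Shannon estimator of the
    # local-global mutual information that DGI maximizes.
    return F.binary_cross_entropy_with_logits(logits, labels)
```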

The aforementioned methods essentially learn network embeddings by solving a link prediction problem.

To alleviate the data sparsity problem in learning network embeddings, another line of work converts a graph into sequences by random permutations or random walks. In this way, deep learning approaches that are applicable to sequences can be directly used to process graphs.

Closing Remarks

Notes:

  • Reference: A Comprehensive Survey on Graph Neural Networks

This article serves only as a personal study note, recording a process from 0 to 1.

I hope it helps you a little. If there are any mistakes, corrections are welcome!
