Node Representation Learning with Graph Neural Networks
- Introduction
- MLP, GCN, and GAT
- Dataset
- MLP
- GCN
- GAT
- Comparing GCN and GAT
- Visualization
- Test Results
- Analysis
- Assignment
- ChebConv
- Complete Code
- References
Introduction
For node prediction or edge prediction tasks, we first need to generate node representations; good node representations are a prerequisite for this kind of work. The following examples are meant to deepen our understanding of node representation learning on graphs.
Each example analyzes node representation learning in four steps:
- Analyze the dataset.
- Build the model.
- Train and test the model.
- Visualize and analyze the results.
MLP, GCN, and GAT
We compare these three models to illustrate the advantage of graph neural networks for node representation learning.
Dataset
All experiments use the Cora dataset from the Planetoid collection that ships with PyG.
from torch_geometric.datasets import Planetoid
from torch_geometric.transforms import NormalizeFeatures
dataset = Planetoid(root='data/Planetoid', name='Cora', transform=NormalizeFeatures())
data = dataset[0]
# Data(edge_index=[2, 10556], test_mask=[2708], train_mask=[2708], val_mask=[2708], x=[2708, 1433], y=[2708])
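Before building any model, it can help to inspect a few basic statistics of the loaded dataset. A small optional sketch (the commented values match the Data summary above):
# A quick look at the Cora dataset (sketch).
print(f'Number of graphs: {len(dataset)}')            # 1
print(f'Number of features: {dataset.num_features}')  # 1433
print(f'Number of classes: {dataset.num_classes}')    # 7
print(f'Number of nodes: {data.num_nodes}')           # 2708
print(f'Number of edges: {data.num_edges}')           # 10556
print(f'Number of training nodes: {int(data.train_mask.sum())}')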
MLP
The MLP (Multi-Layer Perceptron) is a representative classical neural network: a feed-forward artificial neural network (ANN) that maps a set of input vectors to a set of output vectors. An MLP can be viewed as a directed graph consisting of several layers of nodes, with each layer fully connected to the next; apart from the input nodes, every node is a neuron with a nonlinear activation function. MLPs are trained with supervised learning via the backpropagation (BP) algorithm. The MLP generalizes the perceptron and overcomes the perceptron's inability to recognize linearly inseparable data.

Build the MLP
# MLP node classification network
import torch
from torch.nn import Linear
import torch.nn.functional as F

class MLP(torch.nn.Module):
    def __init__(self, hidden_channels):
        super(MLP, self).__init__()
        torch.manual_seed(12345)
        self.lin1 = Linear(dataset.num_features, hidden_channels)
        self.lin2 = Linear(hidden_channels, dataset.num_classes)

    def forward(self, x):
        x = self.lin1(x)
        x = x.relu()
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.lin2(x)
        return x

model = MLP(hidden_channels=16)
print(model)
Train and test the MLP:
model = MLP(hidden_channels=16)
criterion = torch.nn.CrossEntropyLoss()  # Define loss criterion.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)  # Define optimizer.

def train():
    model.train()
    optimizer.zero_grad()  # Clear gradients.
    out = model(data.x)  # Perform a single forward pass.
    loss = criterion(out[data.train_mask], data.y[data.train_mask])  # Compute the loss solely based on the training nodes.
    loss.backward()  # Derive gradients.
    optimizer.step()  # Update parameters based on gradients.
    return loss

for epoch in range(1, 201):
    loss = train()
    print(f'Epoch: {epoch:03d}, Loss: {loss:.4f}')

def test():
    model.eval()
    out = model(data.x)
    pred = out.argmax(dim=1)  # Use the class with highest probability.
    test_correct = pred[data.test_mask] == data.y[data.test_mask]  # Check against ground-truth labels.
    test_acc = int(test_correct.sum()) / int(data.test_mask.sum())  # Derive ratio of correct predictions.
    return test_acc

test_acc = test()
print(f'Test Accuracy: {test_acc:.4f}')
The test accuracy is 0.5900.
GCN
GCN was introduced in the paper "Semi-Supervised Classification with Graph Convolutional Networks".

A GCN is a neural network layer. Its layer-wise propagation rule is as follows:
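$$H^{(l+1)} = \sigma\!\left(\tilde{D}^{-1/2}\,\tilde{A}\,\tilde{D}^{-1/2}\,H^{(l)}\,W^{(l)}\right)$$
where $\tilde{A} = A + I$ is the adjacency matrix with added self-loops, $\tilde{D}$ is its degree matrix, $H^{(l)}$ is the matrix of node representations at layer $l$, $W^{(l)}$ is the layer's learnable weight matrix, and $\sigma$ is a nonlinearity such as ReLU.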
Build the GCN network
# GCN node classification network
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    def __init__(self, hidden_channels):
        super(GCN, self).__init__()
        torch.manual_seed(12345)
        self.conv1 = GCNConv(dataset.num_features, hidden_channels)
        self.conv2 = GCNConv(hidden_channels, dataset.num_classes)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index)
        x = x.relu()
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.conv2(x, edge_index)
        return x

model = GCN(hidden_channels=16)
print(model)
GAT
GAT comes from the paper "Graph Attention Networks". Its attention mechanism is defined as follows:
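$$\alpha_{ij} = \frac{\exp\!\left(\mathrm{LeakyReLU}\!\left(\mathbf{a}^{\top}\left[W\mathbf{h}_i \,\Vert\, W\mathbf{h}_j\right]\right)\right)}{\sum_{k \in \mathcal{N}(i)} \exp\!\left(\mathrm{LeakyReLU}\!\left(\mathbf{a}^{\top}\left[W\mathbf{h}_i \,\Vert\, W\mathbf{h}_k\right]\right)\right)}, \qquad \mathbf{h}'_i = \sigma\!\left(\sum_{j \in \mathcal{N}(i)} \alpha_{ij}\, W\mathbf{h}_j\right)$$
where $\alpha_{ij}$ is the attention coefficient assigned to neighbor $j$ when updating node $i$, $W$ and $\mathbf{a}$ are learnable parameters, $\Vert$ denotes concatenation, and $\mathcal{N}(i)$ is the neighborhood of node $i$.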
Build the GAT network
# Build the GAT node classification network
import torch
from torch.nn import Linear
import torch.nn.functional as F
from torch_geometric.nn import GATConv

class GAT(torch.nn.Module):
    def __init__(self, hidden_channels):
        super(GAT, self).__init__()
        torch.manual_seed(12345)
        self.conv1 = GATConv(dataset.num_features, hidden_channels)
        self.conv2 = GATConv(hidden_channels, dataset.num_classes)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index)
        x = x.relu()
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.conv2(x, edge_index)
        return x

Comparing GCN and GAT
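Both models are trained and evaluated the same way as the MLP above; the only difference is that the forward pass also takes edge_index. A minimal sketch (the train_and_test helper below is a convenience wrapper introduced here, not a PyG API):
# Sketch: train a graph model on Cora and return its test accuracy.
def train_and_test(model, data, epochs=200):
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
    criterion = torch.nn.CrossEntropyLoss()
    for epoch in range(1, epochs + 1):
        model.train()
        optimizer.zero_grad()                            # Clear gradients.
        out = model(data.x, data.edge_index)             # Forward pass uses the graph structure.
        loss = criterion(out[data.train_mask], data.y[data.train_mask])
        loss.backward()                                  # Derive gradients.
        optimizer.step()                                 # Update parameters.
    model.eval()
    pred = model(data.x, data.edge_index).argmax(dim=1)  # Predicted class per node.
    correct = pred[data.test_mask] == data.y[data.test_mask]
    return int(correct.sum()) / int(data.test_mask.sum())

gcn_model = GCN(hidden_channels=16)
gat_model = GAT(hidden_channels=16)
print(f'GCN test accuracy: {train_and_test(gcn_model, data):.4f}')
print(f'GAT test accuracy: {train_and_test(gat_model, data):.4f}')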
Visualization
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def visualize(h, color):
    z = TSNE(n_components=2).fit_transform(h.detach().cpu().numpy())
    plt.figure(figsize=(10, 10))
    plt.xticks([])
    plt.yticks([])
    plt.scatter(z[:, 0], z[:, 1], s=70, c=color.cpu(), cmap="Set2")
    plt.show()
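To produce the plots described below, pass the output of a trained model (for example, the gcn_model from the sketch above) to visualize:
# Sketch: plot the learned node representations of a trained model.
gcn_model.eval()
out = gcn_model(data.x, data.edge_index)
visualize(out, color=data.y)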
Figure: node representations produced by the trained GCN model (t-SNE).

Figure: node representations produced by the trained GAT model (t-SNE).
Test Results
| Model | GCN | GAT | MLP |
| --- | --- | --- | --- |
| Test accuracy | 0.8140 | 0.7380 | 0.5900 |
Analysis
The MLP only considers each node's own features and ignores the connections between nodes, whereas the other two models also exploit the relationships between nodes. A node's neighborhood information plays a key role in achieving higher accuracy.
Assignment
Use different graph convolution modules from PyG to perform node classification or regression on different PyG datasets. Here I use ChebConv for node classification on the Cora dataset.
ChebConv
The core idea of ChebConv is to approximate the spectral convolution kernel with Chebyshev polynomials. Its formula is as follows:
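$$g_\theta \star x \approx \sum_{k=0}^{K-1} \theta_k\, T_k(\tilde{L})\, x, \qquad \tilde{L} = \frac{2}{\lambda_{\max}} L - I_N$$
where $T_k$ denotes the Chebyshev polynomial of order $k$, defined by the recurrence $T_k(x) = 2x\,T_{k-1}(x) - T_{k-2}(x)$ with $T_0(x) = 1$ and $T_1(x) = x$; $L$ is the normalized graph Laplacian, $\lambda_{\max}$ its largest eigenvalue, and $\theta \in \mathbb{R}^{K}$ the learnable filter coefficients (corresponding to the K parameter of ChebConv).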

Build the ChebConv network
# Build the ChebConv node classification network
import torch
from torch.nn import Linear
import torch.nn.functional as F
from torch_geometric.nn import ChebConv

class Cheb(torch.nn.Module):
    def __init__(self, hidden_channels):
        super(Cheb, self).__init__()
        torch.manual_seed(12345)
        self.conv1 = ChebConv(dataset.num_features, hidden_channels, K=5)
        self.conv2 = ChebConv(hidden_channels, dataset.num_classes, K=5)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index)
        x = x.relu()
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.conv2(x, edge_index)
        return x

Results
Figure: node representations produced by the trained ChebConv model (t-SNE).
The test accuracy is 0.7940.
Complete Code
# Load the dataset
from torch_geometric.datasets import Planetoid
from torch_geometric.transforms import NormalizeFeatures

dataset = Planetoid(root='data/Planetoid', name='Cora', transform=NormalizeFeatures())
data = dataset[0]
# Helper to visualize the distribution of node representations
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def visualize(h, color):
    z = TSNE(n_components=2).fit_transform(h.detach().cpu().numpy())
    plt.figure(figsize=(10, 10))
    plt.xticks([])
    plt.yticks([])
    plt.scatter(z[:, 0], z[:, 1], s=70, c=color.cpu(), cmap="Set2")
    plt.show()
# Build the ChebConv node classification network
import torch
from torch.nn import Linear
import torch.nn.functional as F
from torch_geometric.nn import ChebConv

class Cheb(torch.nn.Module):
    def __init__(self, hidden_channels):
        super(Cheb, self).__init__()
        torch.manual_seed(12345)
        self.conv1 = ChebConv(dataset.num_features, hidden_channels, K=5)
        self.conv2 = ChebConv(hidden_channels, dataset.num_classes, K=5)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index)
        x = x.relu()
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.conv2(x, edge_index)
        return x
# Train the ChebConv node classifier
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')  # Fall back to CPU if no GPU is available.
model = Cheb(hidden_channels=16).to(device)
data = data.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
criterion = torch.nn.CrossEntropyLoss()

def train():
    model.train()
    optimizer.zero_grad()  # Clear gradients.
    out = model(data.x, data.edge_index)  # Perform a single forward pass.
    loss = criterion(out[data.train_mask], data.y[data.train_mask])  # Compute the loss solely based on the training nodes.
    loss.backward()  # Derive gradients.
    optimizer.step()  # Update parameters based on gradients.
    return loss

for epoch in range(1, 2001):
    loss = train()
    print(f'Epoch: {epoch:03d}, Loss: {loss:.4f}')
# Test the ChebConv node classifier
def test():
    model.eval()
    out = model(data.x, data.edge_index)
    pred = out.argmax(dim=1)  # Use the class with highest probability.
    test_correct = pred[data.test_mask] == data.y[data.test_mask]  # Check against ground-truth labels.
    test_acc = int(test_correct.sum()) / int(data.test_mask.sum())  # Derive ratio of correct predictions.
    return test_acc

test_acc = test()
print(f'Test Accuracy: {test_acc:.4f}')
# Visualize the node representations produced by the trained ChebConv model
model.eval()
out = model(data.x, data.edge_index)
visualize(out, color=data.y)