5.10 Batch Normalization

In this section, we introduce the batch normalization layer, which makes it easier to train deeper neural networks [1]. In Section 3.16 (Hands-on Kaggle Competition: Predicting House Prices), we standardized the input data: after preprocessing, each feature has a mean of 0 and a standard deviation of 1 over all examples in the dataset. Standardizing the input data makes the distributions of the features similar, which usually makes it easier to train an effective model.
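As a quick illustration of this preprocessing step (a minimal sketch, not part of the original text; the tensor X below is a hypothetical stand-in for the training features), each feature column is shifted and scaled to zero mean and unit standard deviation:

import torch

X = torch.randn(100, 5) * 3.0 + 2.0         # hypothetical raw features
X_std = (X - X.mean(dim=0)) / X.std(dim=0)  # zero mean, unit std per feature
print(X_std.mean(dim=0), X_std.std(dim=0))  # roughly 0 and 1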

Generally speaking, standardizing the input data is effective enough for shallow models: as training proceeds and the parameters of each layer are updated, the outputs near the output layer are unlikely to change drastically. For deep neural networks, however, even if the input has been standardized, parameter updates during training can still easily cause drastic changes in the outputs close to the output layer. This numerical instability usually makes it hard to train an effective deep model.

Batch normalization was proposed precisely to address this challenge of training deep models. During training, batch normalization uses the mean and standard deviation of the mini-batch to continuously adjust the intermediate outputs of the network, which makes the values of the intermediate outputs at every layer more stable. Batch normalization and the residual networks introduced in the next section provide two important approaches to training and designing deep models.

5.10.1 Batch Normalization Layers

The methods for applying batch normalization to fully-connected layers and to convolutional layers are slightly different. Below we describe each case in turn.

5.10.1.1 Batch Normalization for Fully-Connected Layers

We first consider how to apply batch normalization to a fully-connected layer. Usually, we place the batch normalization layer between the affine transformation and the activation function in the fully-connected layer. Denote the input of the fully-connected layer by $\boldsymbol{u}$, the weight and bias parameters by $\boldsymbol{W}$ and $\boldsymbol{b}$, and the activation function by $\phi$. Denote the batch normalization operator by $\text{BN}$. The output of a fully-connected layer with batch normalization is then

$$\phi(\text{BN}(\boldsymbol{x})),$$

where the batch normalization input $\boldsymbol{x}$ is obtained from the affine transformation

$$\boldsymbol{x} = \boldsymbol{W}\boldsymbol{u} + \boldsymbol{b}.$$

Consider a mini-batch of $m$ examples. The affine transformation produces a new mini-batch $\mathcal{B} = \{\boldsymbol{x}^{(1)}, \ldots, \boldsymbol{x}^{(m)}\}$, which is exactly the input to the batch normalization layer. For any example $\boldsymbol{x}^{(i)} \in \mathbb{R}^d$, $1 \leq i \leq m$, in the mini-batch $\mathcal{B}$, the output of the batch normalization layer is also a $d$-dimensional vector

$$\boldsymbol{y}^{(i)} = \text{BN}(\boldsymbol{x}^{(i)}),$$

computed in the following steps. First, compute the mean and variance over the mini-batch $\mathcal{B}$:

$$\boldsymbol{\mu}_\mathcal{B} \leftarrow \frac{1}{m}\sum_{i=1}^{m} \boldsymbol{x}^{(i)}, \qquad \boldsymbol{\sigma}_\mathcal{B}^2 \leftarrow \frac{1}{m}\sum_{i=1}^{m} \left(\boldsymbol{x}^{(i)} - \boldsymbol{\mu}_\mathcal{B}\right)^2,$$

where the squaring is elementwise. Next, standardize $\boldsymbol{x}^{(i)}$ using elementwise square root and division:

$$\hat{\boldsymbol{x}}^{(i)} \leftarrow \frac{\boldsymbol{x}^{(i)} - \boldsymbol{\mu}_\mathcal{B}}{\sqrt{\boldsymbol{\sigma}_\mathcal{B}^2 + \epsilon}},$$

where $\epsilon > 0$ is a small constant that keeps the denominator positive. On top of this standardization, the batch normalization layer introduces two learnable model parameters: a scale parameter $\boldsymbol{\gamma}$ and a shift parameter $\boldsymbol{\beta}$. Both have the same shape as $\boldsymbol{x}^{(i)}$ and are $d$-dimensional vectors. They are combined with $\hat{\boldsymbol{x}}^{(i)}$ by elementwise multiplication (denoted by $\odot$) and addition:

$$\boldsymbol{y}^{(i)} \leftarrow \boldsymbol{\gamma} \odot \hat{\boldsymbol{x}}^{(i)} + \boldsymbol{\beta}.$$

This gives the batch normalization output $\boldsymbol{y}^{(i)}$ of $\boldsymbol{x}^{(i)}$. Note that the learnable scale and shift parameters preserve the option of not normalizing $\boldsymbol{x}^{(i)}$ at all: the model only needs to learn $\boldsymbol{\gamma} = \sqrt{\boldsymbol{\sigma}_\mathcal{B}^2 + \epsilon}$ and $\boldsymbol{\beta} = \boldsymbol{\mu}_\mathcal{B}$. In other words, if batch normalization is not beneficial, the learned model can in principle avoid using it.

5.10.1.2 Batch Normalization for Convolutional Layers

For convolutional layers, batch normalization is applied after the convolution and before the activation function. If the convolution has multiple output channels, we apply batch normalization to each channel separately, and each channel has its own scalar scale and shift parameters. Suppose the mini-batch contains $m$ examples and, on a single channel, the convolution output has height $p$ and width $q$. We then apply batch normalization jointly to the $m \times p \times q$ elements of that channel, using the same mean and variance for all of them, namely the mean and variance of those $m \times p \times q$ elements.
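To make the per-channel statistics concrete, here is a minimal sketch (not part of the original text) that standardizes a toy mini-batch of convolution outputs, computing the mean and variance of the $m \times p \times q$ elements of each channel over the batch, height, and width dimensions:

import torch

m, c, p, q = 4, 2, 5, 5
X = torch.randn(m, c, p, q)                  # toy mini-batch of conv outputs
mean = X.mean(dim=(0, 2, 3), keepdim=True)   # shape (1, c, 1, 1)
var = ((X - mean) ** 2).mean(dim=(0, 2, 3), keepdim=True)
X_hat = (X - mean) / torch.sqrt(var + 1e-5)  # standardized per channel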

5.10.1.3 Batch Normalization at Prediction Time

When training with batch normalization, we can set the batch size somewhat larger so that the mean and variance computed within each batch are reasonably accurate. When using the trained model for prediction, we want the model to produce a deterministic output for any given input; the output for a single example should therefore not depend on the mean and variance of the random mini-batch that batch normalization would require. A common solution is to estimate the mean and variance over the entire training dataset via moving averages and use these estimates at prediction time to obtain deterministic outputs. Like the dropout layer, the batch normalization layer therefore computes differently in training mode and in prediction mode.
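The following sketch illustrates this idea under the stated assumptions (it is not the book's code): a moving average of the batch statistics is updated during training and then reused to normalize a single example deterministically at prediction time.

import torch

momentum = 0.9
moving_mean, moving_var = torch.zeros(3), torch.zeros(3)
for _ in range(100):                          # simulated training iterations
    batch = torch.randn(16, 3) * 2.0 + 5.0    # hypothetical mini-batch activations
    moving_mean = momentum * moving_mean + (1 - momentum) * batch.mean(dim=0)
    moving_var = momentum * moving_var + (1 - momentum) * batch.var(dim=0, unbiased=False)

# At prediction time, a single example is normalized with the fixed estimates
x = torch.randn(1, 3) * 2.0 + 5.0
x_hat = (x - moving_mean) / torch.sqrt(moving_var + 1e-5)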

5.10.2 Implementation from Scratch

Below we implement the batch normalization layer from scratch.

import time
import torch
from torch import nn, optim
import torch.nn.functional as F
import sys
sys.path.append("..")
import d2lzh_pytorch as d2l
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def batch_norm(is_training, X, gamma, beta, moving_mean, moving_var, eps, momentum):
    # Determine whether we are in training mode or prediction mode
    if not is_training:
        # In prediction mode, directly use the mean and variance obtained
        # from the moving averages that were passed in
        X_hat = (X - moving_mean) / torch.sqrt(moving_var + eps)
    else:
        assert len(X.shape) in (2, 4)
        if len(X.shape) == 2:
            # Fully-connected layer: compute the mean and variance along the
            # feature dimension
            mean = X.mean(dim=0)
            var = ((X - mean) ** 2).mean(dim=0)
        else:
            # 2D convolutional layer: compute the mean and variance over the
            # channel dimension (axis=1). Keep the shape of X so that
            # broadcasting works later
            mean = X.mean(dim=0, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
            var = ((X - mean) ** 2).mean(dim=0, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
        # In training mode, standardize X with the current mean and variance
        X_hat = (X - mean) / torch.sqrt(var + eps)
        # Update the moving averages of the mean and variance
        moving_mean = momentum * moving_mean + (1.0 - momentum) * mean
        moving_var = momentum * moving_var + (1.0 - momentum) * var
    Y = gamma * X_hat + beta  # scale and shift
    return Y, moving_mean, moving_var
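As a quick sanity check (a hypothetical snippet, not from the original text), we can call batch_norm on random data in training mode: with gamma initialized to 1 and beta to 0, each output feature should have roughly zero mean and unit variance.

X = torch.randn(8, 4)
gamma, beta = torch.ones(4), torch.zeros(4)
Y, mm, mv = batch_norm(True, X, gamma, beta,
                       torch.zeros(4), torch.zeros(4), eps=1e-5, momentum=0.9)
print(Y.mean(dim=0), Y.var(dim=0, unbiased=False))  # roughly 0 and 1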

Next, we define a custom BatchNorm layer. It keeps the scale parameter gamma and the shift parameter beta, which participate in gradient computation and iteration, and it also maintains the moving-average mean and variance so that they can be used at prediction time. The num_features argument of a BatchNorm instance is the number of outputs for a fully-connected layer and the number of output channels for a convolutional layer. The num_dims argument is 2 for a fully-connected layer and 4 for a convolutional layer.

class BatchNorm(nn.Module):
    def __init__(self, num_features, num_dims):
        super(BatchNorm, self).__init__()
        if num_dims == 2:
            shape = (1, num_features)
        else:
            shape = (1, num_features, 1, 1)
        # Scale and shift parameters involved in gradient computation and
        # iteration, initialized to 1 and 0 respectively
        self.gamma = nn.Parameter(torch.ones(shape))
        self.beta = nn.Parameter(torch.zeros(shape))
        # Variables not involved in gradient computation and iteration,
        # initialized to 0 on main memory (CPU)
        self.moving_mean = torch.zeros(shape)
        self.moving_var = torch.zeros(shape)

    def forward(self, X):
        # If X is not on main memory, copy moving_mean and moving_var to the
        # GPU memory where X lives
        if self.moving_mean.device != X.device:
            self.moving_mean = self.moving_mean.to(X.device)
            self.moving_var = self.moving_var.to(X.device)
        # Save the updated moving_mean and moving_var. The training attribute
        # of a Module instance defaults to True; calling .eval() sets it to False
        Y, self.moving_mean, self.moving_var = batch_norm(self.training,
            X, self.gamma, self.beta, self.moving_mean,
            self.moving_var, eps=1e-5, momentum=0.9)
        return Y

5.10.2.1 LeNet with Batch Normalization Layers

Below we modify the LeNet model introduced in Section 5.5 (Convolutional Neural Networks (LeNet)) so as to apply batch normalization. We insert a batch normalization layer after every convolutional or fully-connected layer and before every activation layer.

net = nn.Sequential(
        nn.Conv2d(1, 6, 5), # in_channels, out_channels, kernel_size
        BatchNorm(6, num_dims=4),
        nn.Sigmoid(),
        nn.MaxPool2d(2, 2), # kernel_size, stride
        nn.Conv2d(6, 16, 5),
        BatchNorm(16, num_dims=4),
        nn.Sigmoid(),
        nn.MaxPool2d(2, 2),
        d2l.FlattenLayer(),
        nn.Linear(16*4*4, 120),
        BatchNorm(120, num_dims=2),
        nn.Sigmoid(),
        nn.Linear(120, 84),
        BatchNorm(84, num_dims=2),
        nn.Sigmoid(),
        nn.Linear(84, 10)
    )

Now we train the modified model.

batch_size = 256

train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size=batch_size)

lr, num_epochs = 0.001, 5
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
d2l.train_ch5(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs)

Output:

training on  cuda
epoch 1, loss 0.0039, train acc 0.790, test acc 0.835, time 2.9 sec
epoch 2, loss 0.0018, train acc 0.866, test acc 0.821, time 3.2 sec
epoch 3, loss 0.0014, train acc 0.879, test acc 0.857, time 2.6 sec
epoch 4, loss 0.0013, train acc 0.886, test acc 0.820, time 2.7 sec
epoch 5, loss 0.0012, train acc 0.891, test acc 0.859, time 2.8 sec
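Since our BatchNorm layer behaves differently in prediction mode, we can also switch the trained network to evaluation mode so that predictions use the moving-average statistics (an illustrative snippet, not from the original text):

net.eval()                    # use moving_mean and moving_var for normalization
X, y = next(iter(test_iter))  # one batch of test images
with torch.no_grad():
    y_hat = net(X.to(device)).argmax(dim=1)
net.train()                   # switch back to training mode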

Finally, we inspect the scale parameter gamma and the shift parameter beta learned by the first batch normalization layer.

net[1].gamma.view((-1,)), net[1].beta.view((-1,))

Output:

(tensor([ 1.2537,  1.2284,  1.0100,  1.0171,  0.9809,  1.1870], device='cuda:0'),
 tensor([ 0.0962,  0.3299, -0.5506,  0.1522, -0.1556,  0.2240], device='cuda:0'))

5.10.3 Concise Implementation

Compared with the BatchNorm class we just defined ourselves, the BatchNorm1d and BatchNorm2d classes in PyTorch's nn module are easier to use. They are intended for fully-connected layers and convolutional layers respectively, and both require specifying the num_features argument of the input. Below we use PyTorch to implement LeNet with batch normalization.

net = nn.Sequential(
        nn.Conv2d(1, 6, 5), # in_channels, out_channels, kernel_size
        nn.BatchNorm2d(6),
        nn.Sigmoid(),
        nn.MaxPool2d(2, 2), # kernel_size, stride
        nn.Conv2d(6, 16, 5),
        nn.BatchNorm2d(16),
        nn.Sigmoid(),
        nn.MaxPool2d(2, 2),
        d2l.FlattenLayer(),
        nn.Linear(16*4*4, 120),
        nn.BatchNorm1d(120),
        nn.Sigmoid(),
        nn.Linear(120, 84),
        nn.BatchNorm1d(84),
        nn.Sigmoid(),
        nn.Linear(84, 10)
    )
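As an optional sanity check (a hypothetical snippet, not in the original), passing a dummy Fashion-MNIST-sized batch through the untrained network confirms the expected output shape; note that BatchNorm1d requires more than one example per batch in training mode.

X = torch.rand(2, 1, 28, 28)
print(net(X).shape)  # torch.Size([2, 10])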

We train with the same hyperparameters.

batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size=batch_size)

lr, num_epochs = 0.001, 5
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
d2l.train_ch5(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs)

Output:

training on  cuda
epoch 1, loss 0.0054, train acc 0.767, test acc 0.795, time 2.0 sec
epoch 2, loss 0.0024, train acc 0.851, test acc 0.748, time 2.0 sec
epoch 3, loss 0.0017, train acc 0.872, test acc 0.814, time 2.2 sec
epoch 4, loss 0.0014, train acc 0.883, test acc 0.818, time 2.1 sec
epoch 5, loss 0.0013, train acc 0.889, test acc 0.734, time 1.8 sec

Summary

  • During training, batch normalization uses the mean and standard deviation of the mini-batch to continuously adjust the intermediate outputs of the neural network, making the values of the intermediate outputs at every layer more stable.
  • Batch normalization is applied slightly differently to fully-connected layers and to convolutional layers.
  • Like the dropout layer, the batch normalization layer computes differently in training mode and in prediction mode.
  • PyTorch provides the BatchNorm1d and BatchNorm2d classes for convenient use.

References

[1] Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.