Table of Contents
- Highlights
- Residual structure
- Computational cost
- Dashed-line residual structure
- Code walkthrough
- The residual block of resnet18/34
- The residual block of resnet50/101/152: Bottleneck
- The structure of one layer (the _make_layer() function)
- The main ResNet network
- Code repository
Highlights
- Introduces the residual structure
- Uses Batch Normalization to speed up training (dropout is discarded)
Together, these two techniques mitigate problems such as vanishing and exploding gradients, making it practical to build very deep networks.
Residual structure
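The key idea is the shortcut connection: instead of learning a target mapping directly, a block learns a residual F(x) and outputs F(x) + x. A minimal, self-contained sketch of that addition (the channel count 64, the spatial size 56, and the two-conv branch are illustrative choices, not values taken from the figure):

import torch
import torch.nn as nn

# main branch F(x): two 3x3 conv + BN units, as in the BasicBlock shown later
branch = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False), nn.BatchNorm2d(64), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False), nn.BatchNorm2d(64))

x = torch.randn(1, 64, 56, 56)
y = torch.relu(branch(x) + x)   # shortcut: add the input to the branch output
print(y.shape)                  # torch.Size([1, 64, 56, 56])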
Computational cost
On the left is the residual block of ResNet18/34; on the right, the one of ResNet50/101/152.
- Weight count on the left: 3x3x256x256 + 3x3x256x256 = 1179648
- Weight count on the right: 1x1x256x64 + 3x3x64x64 + 1x1x64x256 = 69632
(These numbers count convolution weights only; for a fixed spatial size, the computation is proportional to them.) Clearly one residual block on the right is far cheaper. The reason is that the right-hand block uses 1x1 convolutions to reduce and then restore the channel dimension, so the expensive 3x3 convolution operates on 64 channels instead of 256.
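The two sums are easy to verify (a quick check, nothing here beyond the arithmetic above):

# left: two 3x3 convolutions over 256 channels
left = 3*3*256*256 + 3*3*256*256
# right: 1x1 squeeze 256->64, 3x3 over 64 channels, 1x1 expand 64->256
right = 1*1*256*64 + 3*3*64*64 + 1*1*64*256
print(left, right)   # 1179648 69632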
Dashed-line residual structure
The figure above shows two residual blocks. They differ in two ways:
- In the left block, the input (Input) is added directly to the output (Output); in the right block, the input (Input2) must first pass through a 1x1 convolution before it can be added to the output (Output2).
- In the dashed-line residual block, the first convolution uses stride=2; in the solid-line residual block, stride=1.
In the figure below, the 64 in box 1 is the number of kernels, which equals the output depth; box 2 marks the dashed-line residual blocks.
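A minimal sketch of that dashed shortcut, matching what _make_layer() builds later (the 64->128 channel change and the 56x56 input are illustrative, as in layer2 of resnet18/34):

import torch
import torch.nn as nn

# dashed shortcut: a 1x1 convolution with stride=2 changes both the depth
# (64 -> 128) and the spatial size (56 -> 28), so the shortcut can be added
# to the main-branch output
shortcut = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=1, stride=2, bias=False),
    nn.BatchNorm2d(128))

x = torch.randn(1, 64, 56, 56)
print(shortcut(x).shape)   # torch.Size([1, 128, 28, 28])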
Code walkthrough
The residual block of resnet18/34
BasicBlock is shown below; it implements both the solid-line and the dashed-line residual structure.
import torch.nn as nn

# residual block for resnet18 and resnet34
class BasicBlock(nn.Module):
    # factor by which the number of kernels changes; here it stays the same, so expansion = 1
    expansion = 1

    # ----------------------------residual block--------------------------------------
    # in_channel   depth of the input feature map
    # out_channel  depth of the output feature map, i.e. the number of kernels
    # stride       stride; with stride=1 the width/height stay the same,
    #              with stride=2 they are halved
    # downsample   downsampling, None by default; only set for the dashed-line block
    # bias=False   with BN, the convolutions need no bias
    # ---------------------------------------------------------------------------------
    def __init__(self, in_channel, out_channel, stride=1, downsample=None, **kwargs):
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=out_channel,
                               kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channel)
        self.relu = nn.ReLU()
        self.conv2 = nn.Conv2d(in_channels=out_channel, out_channels=out_channel,
                               kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channel)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        # if self.downsample is not None, this is a dashed-line residual block:
        # the shortcut must be downsampled before the addition
        if self.downsample is not None:
            identity = self.downsample(x)

        # forward pass of the main branch
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)

        # add the shortcut to the main branch
        out += identity
        out = self.relu(out)
        return out
The following snippet corresponds to one convolution unit (conv + BN + ReLU):

self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=out_channel,
                       kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(out_channel)
self.relu = nn.ReLU()

Repeating this unit twice gives:
self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=out_channel,
                       kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(out_channel)
self.relu = nn.ReLU()
self.conv2 = nn.Conv2d(in_channels=out_channel, out_channels=out_channel,
                       kernel_size=3, stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(out_channel)
self.downsample = downsample
This stores the downsample operation, i.e. the 1x1 convolution that implements the dashed-line residual structure. If self.downsample is None, the block runs as a solid-line residual block; if it is not None, the shortcut is passed through the dashed-line downsampling first.
out += identity
This adds the shortcut branch to the output of the main branch.
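As a quick sanity check, here is one solid-line and one dashed-line BasicBlock (the shapes are the ones used in resnet18/34; the dashed shortcut is built by hand here, whereas _make_layer() normally builds it):

import torch

solid = BasicBlock(64, 64)                       # stride=1, no downsample
dashed = BasicBlock(64, 128, stride=2,
                    downsample=nn.Sequential(
                        nn.Conv2d(64, 128, kernel_size=1, stride=2, bias=False),
                        nn.BatchNorm2d(128)))

x = torch.randn(1, 64, 56, 56)
print(solid(x).shape)    # torch.Size([1, 64, 56, 56])
print(dashed(x).shape)   # torch.Size([1, 128, 28, 28])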
The residual block of resnet50/101/152: Bottleneck
class Bottleneck(nn.Module):
    # factor by which the number of kernels changes: conv3 outputs
    # out_channel * 4 channels, so expansion = 4
    expansion = 4

    def __init__(self, in_channel, out_channel, stride=1, downsample=None,
                 groups=1, width_per_group=64):
        super(Bottleneck, self).__init__()
        width = int(out_channel * (width_per_group / 64.)) * groups

        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=width,
                               kernel_size=1, stride=1, bias=False)  # squeeze channels
        self.bn1 = nn.BatchNorm2d(width)
        # -----------------------------------------
        self.conv2 = nn.Conv2d(in_channels=width, out_channels=width, groups=groups,
                               kernel_size=3, stride=stride, bias=False, padding=1)
        self.bn2 = nn.BatchNorm2d(width)
        # -----------------------------------------
        self.conv3 = nn.Conv2d(in_channels=width, out_channels=out_channel*self.expansion,
                               kernel_size=1, stride=1, bias=False)  # unsqueeze channels
        self.bn3 = nn.BatchNorm2d(out_channel*self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample

    def forward(self, x):
        # same pattern as BasicBlock, with three convolutions on the main branch
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)

        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))

        out += identity
        out = self.relu(out)
        return out
Bottleneck is similar to BasicBlock; it differs only in a few places:
- expansion = 4: the output depths of conv1 and conv3 are 64 and 256 respectively, a factor of 4 apart. That factor is expansion, and it is used in self.conv3 = nn.Conv2d(in_channels=width, out_channels=out_channel*self.expansion, kernel_size=1, stride=1, bias=False).
- self.conv2 = nn.Conv2d(in_channels=width, out_channels=width, groups=groups, kernel_size=3, stride=stride, bias=False, padding=1): note stride=stride here; in the dashed-line residual block conv2 uses stride=2, in the solid-line block stride=1.
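The same kind of check for Bottleneck (illustrative values; note that with out_channel=64 the block outputs 64*4=256 channels, which is why even the stride-1 first block of layer1 in resnet50 needs a dashed shortcut, built by hand here):

import torch

first = Bottleneck(64, 64, stride=1,
                   downsample=nn.Sequential(
                       nn.Conv2d(64, 256, kernel_size=1, stride=1, bias=False),
                       nn.BatchNorm2d(256)))

x = torch.randn(1, 64, 56, 56)
print(first(x).shape)   # torch.Size([1, 256, 56, 56])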
The structure of one layer (the _make_layer() function)
# ----------------------------structure of one layer--------------------------------
# block      the residual block type: BasicBlock or Bottleneck
# channel    the base number of kernels for the residual blocks in this layer
#            (64/128/256/512 for layer1 to layer4)
# block_num  how many residual blocks this layer contains, i.e. how many times
#            the block is repeated; the values come from the blocks_num list,
#            e.g. [2, 2, 2, 2] for resnet18
# -----------------------------------------------------------------------------------
def _make_layer(self, block, channel, block_num, stride=1):
    downsample = None
    # a dashed-line block is needed whenever the shortcut's spatial size or depth
    # must change (always true for resnet50/101/152, and for layer2-4 of resnet18/34)
    if stride != 1 or self.in_channel != channel * block.expansion:
        downsample = nn.Sequential(
            nn.Conv2d(self.in_channel, channel * block.expansion, kernel_size=1, stride=stride, bias=False),
            nn.BatchNorm2d(channel * block.expansion))

    layers = []
    # first block: dashed-line if downsample is not None, otherwise solid-line
    layers.append(block(self.in_channel,
                        channel,
                        downsample=downsample,
                        stride=stride,
                        groups=self.groups,
                        width_per_group=self.width_per_group))
    self.in_channel = channel * block.expansion

    # the remaining blocks are all solid-line residual blocks
    for _ in range(1, block_num):
        layers.append(block(self.in_channel,
                            channel,
                            groups=self.groups,
                            width_per_group=self.width_per_group))

    return nn.Sequential(*layers)
block: the residual block type, BasicBlock or Bottleneck.
channel: the base number of kernels of the residual blocks in this layer (64/128/256/512 for layer1 to layer4).
block_num: how many residual blocks this layer contains, i.e. how many times the block is repeated; the values come from the blocks_num list, e.g. [2, 2, 2, 2] for resnet18.
- downsample = None: by default there is no downsampling, i.e. the dashed-line structure with its 1x1 convolution is not used.
- if stride != 1 or self.in_channel != channel * block.expansion: for layer1, in_channel=64, channel=64 and stride=1. For resnet18/34, expansion=1, so self.in_channel == channel * block.expansion and the if-branch is skipped. For resnet50/101, expansion=4, so the two sides differ and the if-branch is entered. The body of the if-statement is simply a 1x1 convolution followed by BN (see the quick check below).
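A tiny check of that condition for layer1 (using only the values above: in_channel=64, channel=64, stride=1):

# resnet18/34 (expansion=1): condition is False, layer1 has no dashed block
# resnet50/101 (expansion=4): condition is True, layer1 starts with a dashed block
for name, expansion in [("resnet18/34", 1), ("resnet50/101", 4)]:
    in_channel, channel, stride = 64, 64, 1
    need_downsample = stride != 1 or in_channel != channel * expansion
    print(name, need_downsample)   # False, then True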
In the figure below, the [💔 red] box 1 corresponds to code part 1, and the [💚 green] box 2 corresponds to code part 2.
The main ResNet network
class ResNet(nn.Module):
    def __init__(self,
                 block,
                 blocks_num,
                 num_classes=1000,
                 include_top=True,
                 groups=1,
                 width_per_group=64):
        super(ResNet, self).__init__()
        self.include_top = include_top
        self.in_channel = 64

        self.groups = groups
        self.width_per_group = width_per_group

        # stem: 7x7 convolution with stride 2, then 3x3 max-pooling with stride 2
        self.conv1 = nn.Conv2d(3, self.in_channel, kernel_size=7, stride=2,
                               padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(self.in_channel)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)

        # the four residual stages; layer2-4 halve the spatial size with stride=2
        self.layer1 = self._make_layer(block, 64, blocks_num[0])
        self.layer2 = self._make_layer(block, 128, blocks_num[1], stride=2)
        self.layer3 = self._make_layer(block, 256, blocks_num[2], stride=2)
        self.layer4 = self._make_layer(block, 512, blocks_num[3], stride=2)

        if self.include_top:
            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))  # output size = (1, 1)
            self.fc = nn.Linear(512 * block.expansion, num_classes)

        # initialize all convolution weights
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
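The snippet above shows only __init__; the full source (see the code repository section) also defines the usual forward(), chaining conv1 -> bn1 -> relu -> maxpool -> layer1-4 -> avgpool -> fc. With this class, the standard factory functions differ only in the block type and the blocks_num list (a sketch; the original repository may name or arrange them differently):

def resnet18(num_classes=1000, include_top=True):
    return ResNet(BasicBlock, [2, 2, 2, 2], num_classes=num_classes, include_top=include_top)

def resnet34(num_classes=1000, include_top=True):
    return ResNet(BasicBlock, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)

def resnet50(num_classes=1000, include_top=True):
    return ResNet(Bottleneck, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)

def resnet101(num_classes=1000, include_top=True):
    return ResNet(Bottleneck, [3, 4, 23, 3], num_classes=num_classes, include_top=include_top)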