1. Introduction

Res2Net was proposed in 2019 by Ming-Ming Cheng's group at Nankai University. Its main contribution is an improved version of the block module in ResNet that strengthens feature extraction without increasing the computational load.

2. Network Structure

First, recall the ResNet architecture:

The left figure shows the block module in ResNet; the right figure shows the Res2Net module proposed in the paper. In short, Res2Net splits the input of the 3×3 convolution layer into four parts and connects them internally in a residual-like style.

[Figure: ResNet bottleneck block (left) vs. Res2Net module (right)]


The computation is as follows:

y1 = x1
y2 = K2(x2)
y3 = K3(x3 + y2)
y4 = K4(x4 + y3)

Here Ki denotes the i-th 3×3 convolution: the first split x1 is passed through unchanged, x2 is convolved directly, and every later split is added to the previous output before being convolved.
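To make the hierarchy concrete, here is a minimal PyTorch sketch of the split-convolve-concatenate step (the class name Res2Split is illustrative and BN/ReLU are omitted for brevity; this is not the authors' reference code):

import torch
from torch import nn

class Res2Split(nn.Module):
    # Splits the input into `scales` channel groups; group 1 passes through
    # unchanged, group 2 goes through its own 3x3 conv, and every later group
    # is added to the previous output before its 3x3 conv; finally all groups
    # are concatenated back together.
    def __init__(self, channels, scales=4):
        super().__init__()
        assert channels % scales == 0, 'channels must be divisible by scales'
        self.scales = scales
        width = channels // scales
        self.convs = nn.ModuleList([
            nn.Conv2d(width, width, kernel_size=3, padding=1) for _ in range(scales - 1)
        ])

    def forward(self, x):
        xs = torch.chunk(x, self.scales, dim=1)        # split along channels
        ys = [xs[0]]                                   # y1 = x1
        for i in range(1, self.scales):
            inp = xs[i] if i == 1 else xs[i] + ys[-1]  # y2 = K2(x2), yi = Ki(xi + y(i-1))
            ys.append(self.convs[i - 1](inp))
        return torch.cat(ys, dim=1)

out = Res2Split(64)(torch.randn(1, 64, 56, 56))
print(out.shape)  # torch.Size([1, 64, 56, 56])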

3. Design Rationale

1. First, let's go over the design idea behind the ResNet residual network. Early network architectures were very simple: features were extracted from an image by linearly stacking convolution, pooling, and fully connected layers. We can think of a neural network as one large mathematical function: the input is a matrix X, the output is Y, and the operations applied to X, mainly linear and nonlinear transformations, can be lumped together as F, giving Y = F(X).

To help the trained network recognize better, the input X is usually preprocessed and normalized beforehand, scaling pixel values into the range 0 to 1 (or -1 to 1) so that all inputs are on the same scale.
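In PyTorch this preprocessing step is typically expressed with torchvision transforms; the snippet below is an illustrative sketch (the image size and the mean/std of 0.5 are placeholder values, not ones taken from this post):

from torchvision import transforms

# ToTensor scales uint8 pixels from [0, 255] to floats in [0, 1];
# Normalize then shifts them to roughly [-1, 1] with placeholder mean/std.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])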

Similarly, we can build a comparable mechanism into the network itself. This is the residual connection proposed in ResNet, Y = F(X) + X: during training the network only has to learn an offset relative to X (F(X) = Y - X), which greatly improves trainability.
This idea is used very widely in deep learning and usually works well. For example, yolo v3 predicts only offsets of the bounding box rather than the box coordinates directly, and BN layers are commonly placed after convolution layers.
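In code, the residual connection is just one extra addition on the forward path. The block below is my own minimal illustration of Y = F(X) + X (not the ResNet reference implementation):

from torch import nn

class ResidualBlock(nn.Module):
    # F(X) is a small conv stack; the skip connection adds X back,
    # so the stack only needs to learn the residual F(X) = Y - X.
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)  # Y = F(X) + X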

2. Contributions of Res2Net

Res2Net introduces a new concept: scale.

Besides the existing dimensions of a CNN such as depth, width, and cardinality, scale is another essential factor. The input X is split into four parts, each part goes through its own convolution, and the results are fused back together; the output gains a larger receptive field while the extra computational cost is nearly negligible (in practice, running speed is roughly 20% slower).

The Res2Net module also integrates well with existing architectures.

For example, the number of 3×3 convolution layers can be adjusted freely, and an SE block can be appended after the final 1×1 convolution (the SE block is implemented in the code below).

4. Code Implementation

import torch
from torch import nn

# number of target classes
classes=5

# SE (Squeeze-and-Excitation) module
class SEModule(nn.Module):
    def __init__(self, channels, reduction=16):
        super(SEModule, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc1 = nn.Conv2d(channels, channels // reduction, kernel_size=1, padding=0)
        self.relu = nn.ReLU(inplace=True)
        self.fc2 = nn.Conv2d(channels // reduction, channels, kernel_size=1, padding=0)
        self.sigmoid = nn.Sigmoid()

    def forward(self, input):
        x = self.avg_pool(input)
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        x = self.sigmoid(x)
        return input * x

class Res2NetBottleneck(nn.Module):
    expansion = 4  # output channels of the residual block = planes * expansion
    def __init__(self, inplanes, planes, downsample=None, stride=1, scales=4, groups=1, se=True,  norm_layer=True):
        # scales: number of hierarchical feature groups in the block; groups: group count of the 3x3 convolutions; se / norm_layer: whether to use the SE module and BN layers
        super(Res2NetBottleneck, self).__init__()

        if planes % scales != 0: # planes must be divisible by scales
            raise ValueError('Planes must be divisible by scales')
        if norm_layer:  # use BatchNorm layers
            norm_layer = nn.BatchNorm2d

        bottleneck_planes = groups * planes
        self.scales = scales
        self.stride = stride
        self.downsample = downsample
        # 1x1 convolution; when stride > 1 (stages 2-4) it also shrinks the feature map
        self.conv1 = nn.Conv2d(inplanes, bottleneck_planes, kernel_size=1, stride=stride)
        self.bn1 = norm_layer(bottleneck_planes)
        # 3x3 convolutions: scales-1 conv layers and scales-1 BN layers (3 of each when scales=4)
        self.conv2 = nn.ModuleList([nn.Conv2d(bottleneck_planes // scales, bottleneck_planes // scales,
                                              kernel_size=3, stride=1, padding=1, groups=groups) for _ in range(scales-1)])
        self.bn2 = nn.ModuleList([norm_layer(bottleneck_planes // scales) for _ in range(scales-1)])
        # 1x1 convolution; restores the channel count to planes * expansion
        self.conv3 = nn.Conv2d(bottleneck_planes, planes * self.expansion, kernel_size=1, stride=1)
        self.bn3 = norm_layer(planes * self.expansion)
        self.relu = nn.ReLU(inplace=True)
        # optional SE module
        self.se = SEModule(planes * self.expansion) if se else None

    def forward(self, x):
        identity = x

        # 1x1 convolution
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        # hierarchical residual-style connections across the scales groups
        xs = torch.chunk(out, self.scales, 1) # split the feature map into scales chunks along the channel axis
        ys = []
        for s in range(self.scales):
            if s == 0:
                ys.append(xs[s])
            elif s == 1:
                ys.append(self.relu(self.bn2[s-1](self.conv2[s-1](xs[s]))))
            else:
                ys.append(self.relu(self.bn2[s-1](self.conv2[s-1](xs[s] + ys[-1]))))
        out = torch.cat(ys, 1)

        # 1x1 convolution
        out = self.conv3(out)
        out = self.bn3(out)

        # apply the SE module if enabled
        if self.se is not None:
            out = self.se(out)
        # downsample the identity branch if needed
        if self.downsample:
            identity = self.downsample(identity)

        out += identity
        out = self.relu(out)

        return out

class Res2Net(nn.Module):
    def __init__(self, layers, num_classes, width=16, scales=4, groups=1,
                 zero_init_residual=True, se=True, norm_layer=True):
        super(Res2Net, self).__init__()
        if norm_layer:  # use BatchNorm layers
            norm_layer = nn.BatchNorm2d
        # stage widths: 64, 128, 256, 512 (with width=16 and scales=4)
        planes = [int(width * scales * 2 ** i) for i in range(4)]
        self.inplanes = planes[0]

        # 7x7 convolution followed by 3x3 max pooling
        self.conv1 = nn.Conv2d(3, planes[0], kernel_size=7, stride=2, padding=3,
                               bias=False)
        self.bn1 = norm_layer(planes[0])
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        # four residual stages
        self.layer1 = self._make_layer(Res2NetBottleneck, planes[0], layers[0], stride=1, scales=scales, groups=groups, se=se, norm_layer=norm_layer)
        self.layer2 = self._make_layer(Res2NetBottleneck, planes[1], layers[1], stride=2, scales=scales, groups=groups, se=se, norm_layer=norm_layer)
        self.layer3 = self._make_layer(Res2NetBottleneck, planes[2], layers[2], stride=2, scales=scales, groups=groups, se=se, norm_layer=norm_layer)
        self.layer4 = self._make_layer(Res2NetBottleneck, planes[3], layers[3], stride=2, scales=scales, groups=groups, se=se, norm_layer=norm_layer)
        # adaptive average pooling and fully connected classifier
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(planes[3] * Res2NetBottleneck.expansion, num_classes)

        # weight initialization
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)
        # zero-initialize the last BN in each residual branch so the branch starts from zero and each block initially behaves like an identity mapping
        if zero_init_residual:
            for m in self.modules():
                if isinstance(m, Res2NetBottleneck):
                    nn.init.constant_(m.bn3.weight, 0)

    def _make_layer(self, block, planes, blocks, stride=1, scales=4, groups=1, se=True, norm_layer=True):
        if norm_layer:
            norm_layer = nn.BatchNorm2d

        downsample = None  # downsampling path, used to match spatial size and channel count
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.inplanes, planes * block.expansion, kernel_size=1, stride=stride),
                norm_layer(planes * block.expansion),
            )

        layers = []
        layers.append(block(self.inplanes, planes, downsample, stride=stride, scales=scales, groups=groups, se=se, norm_layer=norm_layer))
        self.inplanes = planes * block.expansion
        for _ in range(1, blocks):
            layers.append(block(self.inplanes, planes, scales=scales, groups=groups, se=se, norm_layer=norm_layer))

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        logits = self.fc(x)
        probas = nn.functional.softmax(logits, dim=1)

        return probas
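
Finally, a quick usage sketch (an illustrative example, not part of the original code): the layer configuration [3, 4, 6, 3] mirrors ResNet-50, and the forward pass below only checks output shapes.

if __name__ == '__main__':
    # Build a Res2Net-50-style model for the 5 classes defined above
    # and run a dummy batch through it.
    model = Res2Net(layers=[3, 4, 6, 3], num_classes=classes)
    dummy = torch.randn(2, 3, 224, 224)
    probas = model(dummy)
    print(probas.shape)       # torch.Size([2, 5])
    print(probas.sum(dim=1))  # each row sums to 1 because forward ends with softmax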