Paper: https://arxiv.org/pdf/1709.01507.pdf
Code: https://github.com/hujie-frank/SENet


Contents

  • Visual Attention Mechanisms: SENet
  • I. SENet
  • II. Code Analysis
  • 1. SE-ResNet
  • 2. SE-Inception



Visual Attention Mechanisms: SENet

Soft attention mechanisms are usually divided into four domains: spatial, channel, mixed, and convolutional.

(1) Spatial domain: apply a spatial transformation to the spatial information in the image to obtain a corresponding weight map, so that the key information can be picked out. Representative work: Spatial Attention Module (a minimal sketch follows this list).
(2) Channel domain: simply put, each channel's signal is given a weight that represents how relevant that channel is to the key information; in general, the larger the weight, the stronger the relevance. Representative work: SELayer, Channel Attention Module.
(3) Mixed domain: processing happens on both the spatial and channel dimensions, first obtaining a weight distribution over space and then one over channels. Representative work: Spatial Attention Module + Channel Attention Module.
(4) Convolutional (kernel) domain: the weighting is applied to the convolution kernels themselves, a more advanced approach. Representative work: SKUnit.
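To make the spatial-domain idea concrete, here is a minimal, self-contained sketch of a CBAM-style spatial attention module (an illustrative example, not code from the SENet paper; the 7x7 kernel and the channel-wise mean/max pooling follow the common CBAM design):

import torch
from torch import nn

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: compute a (1, H, W) weight map and multiply it onto the input."""
    def __init__(self, kernel_size=7):
        super().__init__()
        # mean- and max-pooled maps over the channel dimension are concatenated and convolved into one map
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg_map = torch.mean(x, dim=1, keepdim=True)    # (b, 1, h, w)
        max_map, _ = torch.max(x, dim=1, keepdim=True)  # (b, 1, h, w)
        attn = self.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                                 # reweight every spatial position

if __name__ == '__main__':
    x = torch.rand(2, 64, 32, 32)
    print(SpatialAttention()(x).shape)  # torch.Size([2, 64, 32, 32])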


I. SENet

Paper: https://arxiv.org/abs/1709.01507  Code: https://github.com/hujie-frank/SENet


SENet was published at CVPR 2017 by Momenta, and it won the classification task of the final ImageNet challenge (ILSVRC 2017). The SE module proposed in the paper is conceptually simple, easy to implement, and can easily be plugged into existing network architectures. SENet models the interdependencies between channels and adaptively recalibrates the channel-wise feature responses; in other words, it learns the correlations among channels and produces channel-wise attention, with little extra computation and clearly better results. The core contribution of the paper is SELayer, a plug-and-play module that can be combined with many kinds of networks.

Let's walk through how SELayer is implemented in the paper:

[Figure: the basic SE (Squeeze-and-Excitation) block from the paper]


Explanation: the figure above shows the basic SENet unit. Ftr is an ordinary convolution; X and U are its input tensor (shape C'×H'×W') and output tensor (shape C×H×W). The SELayer works as follows: first apply Global Average Pooling to U (Fsq(.) in the figure, called the Squeeze step in the paper); the resulting 1×1×C vector then passes through two fully connected layers (Fex(.) in the figure, called the Excitation step); finally a sigmoid (the self-gating mechanism in the paper) restricts the values to the range [0, 1]. These values are multiplied onto the C channels of U by Fscale(.,.), and the result is fed to the next stage.

Principle: by controlling the scale values, important features are enhanced and unimportant ones are suppressed, making the useful features more discriminative. Concretely, the convolved feature map is reduced to a one-dimensional vector with as many entries as channels, each entry acting as a score for its channel, and each score is then applied back to the corresponding channel.
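In formula form (following the paper's notation), the three steps are:

$$z_c = F_{sq}(u_c) = \frac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W} u_c(i, j)$$

$$s = F_{ex}(z, W) = \sigma\big(W_2\,\delta(W_1 z)\big)$$

$$\tilde{x}_c = F_{scale}(u_c, s_c) = s_c \cdot u_c$$

where δ is the ReLU, σ is the sigmoid, W1 has shape (C/r)×C, W2 has shape C×(C/r), and r is the reduction ratio (16 in the code below).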

Let's look at it from the code side:

import torch
from torch import nn

class SELayer(nn.Module):
    """
    Squeeze:1.对输入的特征图做自适应全局平均池化
            2.然后在打平,选择最简单的全局平均池化,使其具有全局的感受野,使网络底层也能利用全局信息。
    Excitation:1.使用全连接层,对Squeeze的结果做非线性转化,它是类似于神经网络中门的机制,
                 通过参数来为每个特整层生成相应的权重,其中参数学习用来显示地建立通道间的,
                 相关性
    特征重标定:使用Excitation得到的结果为权重,乘以输入的特征图
    """
    def __init__(self, channel, reduction=16):

        super(SELayer, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc1 = nn.Sequential(
            nn.Linear(channel, channel // reduction, bias=False),
            nn.ReLU(inplace=True))
        self.fc2 = nn.Sequential(
            nn.Linear(channel // reduction, channel, bias=False),
            nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.size()
        # (b, c, h, w) --> (b, c, 1, 1)
        y = self.avg_pool(x)
        # (b, c, 1, 1) --> (b, c*1*1)
        y = y.view(b, c)
        # squeeze: reduce the channel dimension by the reduction ratio
        y = self.fc1(y)
        # excitation: restore the channel dimension and apply the sigmoid gate
        y = self.fc2(y)
        y = y.view(b, c, 1, 1)
        # broadcast back to (b, c, h, w) and reweight each channel
        return x * y.expand_as(x)

if __name__ == '__main__':
    x = torch.rand(2, 64, 512, 512)
    conv = SELayer(64)
    out = conv(x)
    criterion = nn.L1Loss()
    loss = criterion(out, x)
    loss.backward()
    print('out shape : {}'.format(out.shape))
    print('loss value : {}'.format(loss))

II. Code Analysis

1. SE-ResNet

In the original paper the authors insert the SELayer module into ResNet and analyze the result:

[Figure: the SE-ResNet module, with the SELayer applied to the residual branch]

Let's look at this part of the code:

import torch.nn as nn
from torch.hub import load_state_dict_from_url
from torchvision.models import ResNet
# from senet.se_module import SELayer

#############################################################################
#                        3x3 convolution helper                             #
#############################################################################
def conv3x3(in_planes, out_planes, stride=1):
    return nn.Conv2d(in_planes, out_planes, kernel_size=3,
                     stride=stride, padding=1, bias=False)



#############################################################################
#                          SELayer                                          #
#############################################################################
class SELayer(nn.Module):
    def __init__(self, channel, reduction=16):
        super(SELayer, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1) # adaptive average pooling to 1x1
        self.fc = nn.Sequential(
            nn.Linear(channel, channel // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channel // reduction, channel, bias=False),
            nn.Sigmoid()
        )

    def forward(self, x):
        b, c, _, _ = x.size()
        y = self.avg_pool(x)
        y = y.view(b, c) # flatten to (b, c): same data, different shape
        y = self.fc(y)
        y = y.view(b, c, 1, 1)
        y = x * y.expand_as(x) # expand the weights to the input's shape and reweight each channel
        return  y



##############################################################################
#                           SE-BasicBlock                                    #
##############################################################################
class SEBasicBlock(nn.Module):
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
                 base_width=64, dilation=1, norm_layer=None,
                 *, reduction=16):
        super(SEBasicBlock, self).__init__()
        self.conv1 = conv3x3(inplanes, planes, stride)
        self.bn1 = nn.BatchNorm2d(planes)
        self.relu = nn.ReLU(inplace=True)

        self.conv2 = conv3x3(planes, planes, 1)
        self.bn2 = nn.BatchNorm2d(planes)
        self.se = SELayer(planes, reduction)

        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        # (3x3 Conv + BN + ReLU) + (3x3 Conv + BN) + SE
        residual = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.se(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out



##########################################################################
#                        SE-Bottleneck                                    #
##########################################################################
class SEBottleneck(nn.Module):
    expansion = 4

    def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
                 base_width=64, dilation=1, norm_layer=None,
                 *, reduction=16):
        super(SEBottleneck, self).__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * 4)
        self.relu = nn.ReLU(inplace=True)
        self.se = SELayer(planes * 4, reduction)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x
        # (1x1 Conv + BN + ReLU) + (3x3 Conv + BN + ReLU) + (1x1 Conv + BN) + SE
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)
        out = self.se(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out


def se_resnet18(num_classes=1_000):
    """Constructs a ResNet-18 model.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(SEBasicBlock, [2, 2, 2, 2], num_classes=num_classes)
    model.avgpool = nn.AdaptiveAvgPool2d(1) # global average pooling before the classifier
    return model


def se_resnet34(num_classes=1_000):
    """Constructs a ResNet-34 model.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(SEBasicBlock, [3, 4, 6, 3], num_classes=num_classes)
    model.avgpool = nn.AdaptiveAvgPool2d(1)
    return model


def se_resnet50(num_classes=1_000, pretrained=False):
    """Constructs a ResNet-50 model.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(SEBottleneck, [3, 4, 6, 3], num_classes=num_classes)
    model.avgpool = nn.AdaptiveAvgPool2d(1)
    if pretrained:
        model.load_state_dict(load_state_dict_from_url(
            "https://github.com/moskomule/senet.pytorch/releases/download/archive/seresnet50-60a8950a85b2b.pkl"))
    return model


def se_resnet101(num_classes=1_000):
    """Constructs a ResNet-101 model.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(SEBottleneck, [3, 4, 23, 3], num_classes=num_classes)
    model.avgpool = nn.AdaptiveAvgPool2d(1)
    return model


def se_resnet152(num_classes=1_000):
    """Constructs a ResNet-152 model.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(SEBottleneck, [3, 8, 36, 3], num_classes=num_classes)
    model.avgpool = nn.AdaptiveAvgPool2d(1)
    return model


class CifarSEBasicBlock(nn.Module):
    def __init__(self, inplanes, planes, stride=1, reduction=16):
        super(CifarSEBasicBlock, self).__init__()
        self.conv1 = conv3x3(inplanes, planes, stride)
        self.bn1 = nn.BatchNorm2d(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(planes, planes)
        self.bn2 = nn.BatchNorm2d(planes)
        self.se = SELayer(planes, reduction)
        if inplanes != planes:
            self.downsample = nn.Sequential(nn.Conv2d(inplanes, planes, kernel_size=1, stride=stride, bias=False),
                                            nn.BatchNorm2d(planes))
        else:
            self.downsample = lambda x: x
        self.stride = stride

    def forward(self, x):
        residual = self.downsample(x)
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.se(out)

        out += residual
        out = self.relu(out)

        return out


class CifarSEResNet(nn.Module):
    def __init__(self, block, n_size, num_classes=10, reduction=16):
        super(CifarSEResNet, self).__init__()
        self.inplane = 16
        self.conv1 = nn.Conv2d(
            3, self.inplane, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(self.inplane)
        self.relu = nn.ReLU(inplace=True)
        self.layer1 = self._make_layer(
            block, 16, blocks=n_size, stride=1, reduction=reduction)
        self.layer2 = self._make_layer(
            block, 32, blocks=n_size, stride=2, reduction=reduction)
        self.layer3 = self._make_layer(
            block, 64, blocks=n_size, stride=2, reduction=reduction)
        self.avgpool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(64, num_classes)
        self.initialize()

    def initialize(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def _make_layer(self, block, planes, blocks, stride, reduction):
        strides = [stride] + [1] * (blocks - 1)
        layers = []
        for stride in strides:
            layers.append(block(self.inplane, planes, stride, reduction))
            self.inplane = planes

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)

        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)

        return x


class CifarSEPreActResNet(CifarSEResNet):
    def __init__(self, block, n_size, num_classes=10, reduction=16):
        super(CifarSEPreActResNet, self).__init__(
            block, n_size, num_classes, reduction)
        self.bn1 = nn.BatchNorm2d(self.inplane)
        self.initialize()

    def forward(self, x):
        x = self.conv1(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)

        x = self.bn1(x)
        x = self.relu(x)

        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)

        return x


def se_resnet20(**kwargs):
    """Constructs a ResNet-18 model.

    """
    model = CifarSEResNet(CifarSEBasicBlock, 3, **kwargs)
    return model


def se_resnet32(**kwargs):
    """Constructs a ResNet-34 model.

    """
    model = CifarSEResNet(CifarSEBasicBlock, 5, **kwargs)
    return model


def se_resnet56(**kwargs):
    """Constructs a ResNet-34 model.

    """
    model = CifarSEResNet(CifarSEBasicBlock, 9, **kwargs)
    return model


def se_preactresnet20(**kwargs):
    """Constructs a ResNet-18 model.

    """
    model = CifarSEPreActResNet(CifarSEBasicBlock, 3, **kwargs)
    return model


def se_preactresnet32(**kwargs):
    """Constructs a ResNet-34 model.

    """
    model = CifarSEPreActResNet(CifarSEBasicBlock, 5, **kwargs)
    return model


def se_preactresnet56(**kwargs):
    """Constructs a ResNet-34 model.

    """
    model = CifarSEPreActResNet(CifarSEBasicBlock, 9, **kwargs)
    return model
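
As a quick sanity check of the constructors above (a minimal sketch; it assumes the definitions in this file plus torchvision's ResNet), the ImageNet-style and CIFAR-style variants can be exercised like this:

if __name__ == '__main__':
    import torch

    model = se_resnet50(num_classes=1000)      # SEBottleneck plugged into torchvision's ResNet
    x = torch.rand(1, 3, 224, 224)
    print(model(x).shape)                      # torch.Size([1, 1000])

    cifar_model = se_resnet20(num_classes=10)  # CifarSEResNet, designed for 32x32 inputs
    x = torch.rand(1, 3, 32, 32)
    print(cifar_model(x).shape)                # torch.Size([1, 10])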

2. SE-Inception

In the original paper the authors also insert the SELayer module into Inception and analyze the result:

[Figure: the SE-Inception module, with the SELayer applied after the Inception branch]

Let's look at the code:

from senet.se_module import SELayer
from torch import nn
from torchvision.models.inception import Inception3


class SEInception3(nn.Module):
    def __init__(self, num_classes, aux_logits=True, transform_input=False):
        super(SEInception3, self).__init__()
        model = Inception3(num_classes=num_classes, aux_logits=aux_logits,
                           transform_input=transform_input)
        model.Mixed_5b.add_module("SELayer", SELayer(192))
        model.Mixed_5c.add_module("SELayer", SELayer(256))
        model.Mixed_5d.add_module("SELayer", SELayer(288))
        model.Mixed_6a.add_module("SELayer", SELayer(288))
        model.Mixed_6b.add_module("SELayer", SELayer(768))
        model.Mixed_6c.add_module("SELayer", SELayer(768))
        model.Mixed_6d.add_module("SELayer", SELayer(768))
        model.Mixed_6e.add_module("SELayer", SELayer(768))
        if aux_logits:
            model.AuxLogits.add_module("SELayer", SELayer(768))
        model.Mixed_7a.add_module("SELayer", SELayer(768))
        model.Mixed_7b.add_module("SELayer", SELayer(1280))
        model.Mixed_7c.add_module("SELayer", SELayer(2048))

        self.model = model

    def forward(self, x):
        _, _, h, w = x.size()
        if (h, w) != (299, 299):
            raise ValueError("input size must be (299, 299)")

        return self.model(x)


def se_inception_v3(**kwargs):
    return SEInception3(**kwargs)
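
A minimal usage sketch (assuming SELayer is importable from senet.se_module as above; Inception-v3 expects 299x299 inputs, and in eval mode only the main logits are returned):

if __name__ == '__main__':
    import torch

    model = se_inception_v3(num_classes=1000, aux_logits=True)
    model.eval()                   # in training mode Inception3 also returns the auxiliary logits
    with torch.no_grad():
        x = torch.rand(1, 3, 299, 299)
        print(model(x).shape)      # torch.Size([1, 1000])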
