Table of Contents

  • 1. Introduction to ResNeXt
  • 2. Implementing ResNeXt
  • 1) Reference code for a custom model structure
  • 2) Reference code for a transfer-learning model structure
  • 3. Complete ResNet and ResNeXt implementation


1. Introduction to ResNeXt

ResNeXt is an improved version of ResNet. The changes are modest: the original residual structure is swapped for a new block design, and the concept of group convolution is introduced.



[Figure: performance and parameter metrics]


As the figure shows, ResNeXt does bring a measurable performance improvement.

[Figure: error-rate comparison]


At the same computational cost, ResNeXt achieves a lower error rate.

Ordinary convolution

[Figure: ordinary convolution]


Suppose the input has Cin channels, the output has n channels, and the kernel size is k. An ordinary convolution then requires Cin * n * k * k parameters.

Group convolution

[Figure: group convolution]


Group convolution splits the input channels evenly into g groups, so each group holds Cin/g channels. If the output should again have n channels, each group is convolved with kernels that produce n/g output channels, and the g outputs of n/g channels each are concatenated into the final n-channel output. The parameter count is therefore ((Cin/g) * (n/g) * k * k) * g = Cin * n * k * k / g.

Comparing the parameter counts, group convolution needs far fewer parameters than ordinary convolution: only 1/g of the original.

Note: when the number of groups equals the number of input channels, and the number of output channels also equals the number of input channels, this becomes the depthwise (DW) convolution used in MobileNet.
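A quick sanity check of these parameter counts with nn.Conv2d (my own illustration, not from the original post; bias is disabled so that only the kernel weights are counted):

import torch.nn as nn

def n_params(module):
    # total number of weight parameters in the module
    return sum(p.numel() for p in module.parameters())

cin, n, k, g = 128, 128, 3, 32

ordinary  = nn.Conv2d(cin, n, kernel_size=k, padding=1, bias=False)
grouped   = nn.Conv2d(cin, n, kernel_size=k, padding=1, bias=False, groups=g)
depthwise = nn.Conv2d(cin, cin, kernel_size=k, padding=1, bias=False, groups=cin)  # groups == in == out channels

print(n_params(ordinary))   # Cin * n * k * k           = 147456
print(n_params(grouped))    # Cin * n * k * k / g       = 4608
print(n_params(depthwise))  # Cin * k * k (DW, g = Cin) = 1152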

[Figure: three equivalent forms of the ResNeXt block]

The three diagrams above are mathematically equivalent; the last form is simply the most concise to draw.
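The equivalence can be verified directly in code: a single grouped convolution produces exactly the same result as running each group as its own small convolution and concatenating the outputs. A minimal sketch with arbitrary small sizes (my addition):

import torch
import torch.nn as nn
import torch.nn.functional as F

g, cin, n, k = 4, 8, 8, 3
x = torch.randn(1, cin, 16, 16)

# concise form: one grouped convolution
grouped = nn.Conv2d(cin, n, kernel_size=k, padding=1, groups=g, bias=False)

# split form: run each group as its own small convolution, then concatenate
outs = []
for i in range(g):
    xi = x[:, i * cin // g:(i + 1) * cin // g]        # this group's input channels
    wi = grouped.weight[i * n // g:(i + 1) * n // g]  # this group's kernels
    outs.append(F.conv2d(xi, wi, padding=1))

print(torch.allclose(torch.cat(outs, dim=1), grouped(x), atol=1e-6))  # True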

[Figure: ResNeXt residual block]


The main steps of the residual block:

  1. Apply a 1x1 convolution to the input to reduce the dimensionality, taking the channels from 256 down to 128.
  2. Apply a 3x3 group convolution with 32 groups; the channel count stays at 128.
  3. Apply another 1x1 convolution to restore the dimensionality, taking the channels from 128 back up to 256.
  4. Add this output to the block's input, forming the residual connection (see the sketch below).
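A minimal sketch of just this block, wiring the four steps together (my own condensed version; the full implementation follows in section 2):

import torch
import torch.nn as nn

class SimpleResNeXtBlock(nn.Module):
    def __init__(self, channels=256, width=128, groups=32):
        super().__init__()
        self.path = nn.Sequential(
            nn.Conv2d(channels, width, kernel_size=1, bias=False),  # step 1: 256 -> 128
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, kernel_size=3, padding=1,
                      groups=groups, bias=False),                   # step 2: 3x3 group conv, 32 groups
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, kernel_size=1, bias=False),  # step 3: 128 -> 256
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.path(x) + x)                          # step 4: residual add

x = torch.randn(1, 256, 56, 56)
print(SimpleResNeXtBlock()(x).shape)  # torch.Size([1, 256, 56, 56])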

The main difference between ResNeXt and ResNet is this structural change:

[Figure: ResNet block vs. ResNeXt block]


The per-stage parameters of ResNeXt-50 (32x4d) are shown below:

[Figure: ResNeXt-50 (32x4d) architecture table]


In the name ResNeXt-50 (32x4d), 32 is the number of groups (the cardinality) and 4 is the number of channels in each group. The group count is set to 32 because that setting performed best in the paper's experiments. For shallow networks such as the 18- and 34-layer variants, the ResNeXt block shows no clear benefit, so it is not used there.
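As a cross-check against torchvision's reference implementation (assuming torchvision is installed), the grouped 3x3 convolution in the first stage of resnext50_32x4d should show 32 groups over 128 channels, i.e. 4 channels per group:

from torchvision.models import resnext50_32x4d

model = resnext50_32x4d()      # random weights are fine for inspecting the structure
conv2 = model.layer1[0].conv2  # the grouped 3x3 conv of the first bottleneck
print(conv2.in_channels, conv2.out_channels, conv2.groups)  # 128 128 32 -> 32 groups x 4 channels each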

2. Implementing ResNeXt

1) Reference code for a custom model structure
import torch
import torch.nn as nn

# number of classes
num_class = 5
# number of blocks per stage
resnext50_32x4d_params = [3, 4, 6, 3]
resnext101_32x8d_params = [3, 4, 23, 3]

# Conv1 stem: 7x7 conv + BN + ReLU + 3x3 max pool
def Conv1(in_planes, places, stride=2):
    return nn.Sequential(
        nn.Conv2d(in_channels=in_planes,out_channels=places,kernel_size=7,stride=stride,padding=3, bias=False),
        nn.BatchNorm2d(places),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
    )

class ResNeXtBlock(nn.Module):
    def __init__(self,in_places,places, stride=1,downsampling=False, expansion = 2, cardinality=32):
        super(ResNeXtBlock,self).__init__()
        self.expansion = expansion
        self.downsampling = downsampling

        # stage output shapes (224x224 input):
        # torch.Size([1, 256, 56, 56])
        # torch.Size([1, 512, 28, 28])
        # torch.Size([1, 1024, 14, 14])
        # torch.Size([1, 2048, 7, 7])
        self.bottleneck = nn.Sequential(
            nn.Conv2d(in_channels=in_places, out_channels=places, kernel_size=1, stride=1, bias=False),
            nn.BatchNorm2d(places),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=places, out_channels=places, kernel_size=3, stride=stride, padding=1, bias=False, groups=cardinality),  # group convolution
            nn.BatchNorm2d(places),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=places, out_channels=places * self.expansion, kernel_size=1, stride=1, bias=False),
            nn.BatchNorm2d(places * self.expansion),
        )

        if self.downsampling:
            self.downsample = nn.Sequential(
                nn.Conv2d(in_channels=in_places, out_channels=places * self.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(places * self.expansion)
            )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        residual = x
        out = self.bottleneck(x)

        if self.downsampling:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)
        return out


class ResNet(nn.Module):
    def __init__(self,blocks, blockkinds, num_classes=num_class):
        super(ResNet,self).__init__()

        self.blockkinds = blockkinds
        self.conv1 = Conv1(in_planes = 3, places= 64)

        if self.blockkinds == ResNeXtBlock:
            self.expansion = 2
            # in 64, bottleneck width 128 -> out 256
            self.layer1 = self.make_layer(in_places=64, places=128, block=blocks[0], stride=1)
            # in 256, width 256 -> out 512
            self.layer2 = self.make_layer(in_places=256, places=256, block=blocks[1], stride=2)
            # in 512, width 512 -> out 1024
            self.layer3 = self.make_layer(in_places=512, places=512, block=blocks[2], stride=2)
            # in 1024, width 1024 -> out 2048
            self.layer4 = self.make_layer(in_places=1024, places=1024, block=blocks[3], stride=2)

            self.fc = nn.Linear(2048, num_classes)

        self.avgpool = nn.AvgPool2d(7, stride=1)

        # initialize the network weights
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                # Kaiming He initialization
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def make_layer(self, in_places, places, block, stride):

        layers = []
        # the first block of each stage goes through the dashed-line (projection) branch
        layers.append(self.blockkinds(in_places, places, stride, downsampling=True))
        for i in range(1, block):
            layers.append(self.blockkinds(places*self.expansion, places))

        return nn.Sequential(*layers)


    def forward(self, x):

        # conv1 stage
        x = self.conv1(x)   # torch.Size([1, 64, 56, 56])

        # conv2_x stage
        x = self.layer1(x)  # torch.Size([1, 256, 56, 56])
        # conv3_x stage
        x = self.layer2(x)  # torch.Size([1, 512, 28, 28])
        # conv4_x stage
        x = self.layer3(x)  # torch.Size([1, 1024, 14, 14])
        # conv5_x stage
        x = self.layer4(x)  # torch.Size([1, 2048, 7, 7])

        x = self.avgpool(x)         # torch.Size([1, 2048, 1, 1])
        x = x.view(x.size(0), -1)   # torch.Size([1, 2048])
        x = self.fc(x)              # torch.Size([1, 5])

        return x


def ResNeXt50_32x4d():
    return ResNet(resnext50_32x4d_params, ResNeXtBlock)

def ResNeXt101_32x8d():
    # Note: this implementation keeps the stage-1 bottleneck width at 128 (32 groups x 4 channels),
    # so the network built here actually uses 32x4d widths; a true 32x8d would start at 256 (32 x 8).
    return ResNet(resnext101_32x8d_params, ResNeXtBlock)


if __name__ =='__main__':
    # model = ResNeXtBlock(in_places=256, places=128)
    # print(model)
    # model = ResNeXt50_32x4d()
    model = ResNeXt101_32x8d()

    input = torch.randn(1, 3, 224, 224)
    out = model(input)
    print(out.shape)
2) Reference code for a transfer-learning model structure
import torch
import torch.nn as nn
from torchvision.models import resnext101_32x8d
from utils import Flatten  # a simple custom flatten layer; nn.Flatten() works just as well

if __name__ == '__main__':
    # transfer learning: reuse the pretrained backbone and replace the classifier head
    trained_model = resnext101_32x8d(pretrained=True)
    model = nn.Sequential(*list(trained_model.children())[:-1],  # torch.Size([1, 2048, 1, 1])
                          Flatten(),                             # torch.Size([1, 2048])
                          nn.Linear(2048, 5)                     # torch.Size([1, 5])
                          )

    input = torch.randn(1, 3, 224, 224)
    out = model(input)
    print(out.shape)

# output:
# Downloading: "https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth" to C:\Users\Acer/.cache\torch\hub\checkpoints\resnext101_32x8d-8ba56ff5.pth
# 100.0%
# torch.Size([1, 5])
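When fine-tuning, it is common to freeze the pretrained backbone so that only the new classification head trains at first. A minimal sketch building on the model and trained_model defined above (this step is my addition, not part of the original snippet):

# freeze every pretrained parameter; only the new nn.Linear head stays trainable
for param in trained_model.parameters():
    param.requires_grad = False

# optimize only the parameters that still require gradients
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)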

3. Complete ResNet and ResNeXt implementation

import torch
import torch.nn as nn

# number of classes
num_class = 5
# number of blocks per stage for each variant
resnet18_params = [2, 2, 2, 2]
resnet34_params = [3, 4, 6, 3]
resnet50_params = [3, 4, 6, 3]
resnet101_params = [3, 4, 23, 3]
resnet152_params = [3, 8, 36, 3]
resnext50_32x4d_params = [3, 4, 6, 3]
resnext101_32x8d_params = [3, 4, 23, 3]

# Conv1 stem: 7x7 conv + BN + ReLU + 3x3 max pool
def Conv1(in_planes, places, stride=2):
    return nn.Sequential(
        nn.Conv2d(in_channels=in_planes,out_channels=places,kernel_size=7,stride=stride,padding=3, bias=False),
        nn.BatchNorm2d(places),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
    )


# Residual block for shallow networks (ResNet-18/34)
class BasicBlock(nn.Module):
    def __init__(self,in_places,places, stride=1,downsampling=False, expansion = 1):
        super(BasicBlock,self).__init__()
        self.expansion = expansion
        self.downsampling = downsampling

        # torch.Size([1, 64, 56, 56]), stride = 1
        # torch.Size([1, 128, 28, 28]), stride = 2
        # torch.Size([1, 256, 14, 14]), stride = 2
        # torch.Size([1, 512, 7, 7]), stride = 2
        self.basicblock = nn.Sequential(
            nn.Conv2d(in_channels=in_places, out_channels=places, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(places),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=places, out_channels=places, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(places * self.expansion),
        )

        # torch.Size([1, 64, 56, 56])
        # torch.Size([1, 128, 28, 28])
        # torch.Size([1, 256, 14, 14])
        # torch.Size([1, 512, 7, 7])
        # the first residual block of each stage changes the stride
        if self.downsampling:
            self.downsample = nn.Sequential(
                nn.Conv2d(in_channels=in_places, out_channels=places*self.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(places*self.expansion)
            )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # solid-line (identity) branch
        residual = x
        out = self.basicblock(x)

        # dashed-line (projection) branch
        if self.downsampling:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)
        return out


# Bottleneck residual block for deep networks (ResNet-50/101/152)
class Bottleneck(nn.Module):

    # note: downsampling defaults to False
    def __init__(self,in_places,places, stride=1,downsampling=False, expansion = 4):
        super(Bottleneck,self).__init__()
        self.expansion = expansion
        self.downsampling = downsampling

        self.bottleneck = nn.Sequential(
            # torch.Size([1, 64, 56, 56]),stride=1
            # torch.Size([1, 128, 56, 56]),stride=1
            # torch.Size([1, 256, 28, 28]), stride=1
            # torch.Size([1, 512, 14, 14]), stride=1
            nn.Conv2d(in_channels=in_places,out_channels=places,kernel_size=1,stride=1, bias=False),
            nn.BatchNorm2d(places),
            nn.ReLU(inplace=True),
            # torch.Size([1, 64, 56, 56]),stride=1
            # torch.Size([1, 128, 28, 28]), stride=2
            # torch.Size([1, 256, 14, 14]), stride=2
            # torch.Size([1, 512, 7, 7]), stride=2
            nn.Conv2d(in_channels=places, out_channels=places, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(places),
            nn.ReLU(inplace=True),
            # torch.Size([1, 256, 56, 56]),stride=1
            # torch.Size([1, 512, 28, 28]), stride=1
            # torch.Size([1, 1024, 14, 14]), stride=1
            # torch.Size([1, 2048, 7, 7]), stride=1
            nn.Conv2d(in_channels=places, out_channels=places * self.expansion, kernel_size=1, stride=1, bias=False),
            nn.BatchNorm2d(places * self.expansion),
        )

        # torch.Size([1, 256, 56, 56])
        # torch.Size([1, 512, 28, 28])
        # torch.Size([1, 1024, 14, 14])
        # torch.Size([1, 2048, 7, 7])
        if self.downsampling:
            self.downsample = nn.Sequential(
                nn.Conv2d(in_channels=in_places, out_channels=places*self.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(places*self.expansion)
            )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # solid-line (identity) branch
        residual = x
        out = self.bottleneck(x)

        # dashed-line (projection) branch
        if self.downsampling:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out


# ResNeXt residual block
class ResNeXtBlock(nn.Module):
    def __init__(self,in_places,places, stride=1,downsampling=False, expansion = 2, cardinality=32):
        super(ResNeXtBlock,self).__init__()
        self.expansion = expansion
        self.downsampling = downsampling

        # stage output shapes (224x224 input):
        # torch.Size([1, 256, 56, 56])
        # torch.Size([1, 512, 28, 28])
        # torch.Size([1, 1024, 14, 14])
        # torch.Size([1, 2048, 7, 7])
        self.bottleneck = nn.Sequential(
            nn.Conv2d(in_channels=in_places, out_channels=places, kernel_size=1, stride=1, bias=False),
            nn.BatchNorm2d(places),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=places, out_channels=places, kernel_size=3, stride=stride, padding=1, bias=False, groups=cardinality),  # group convolution
            nn.BatchNorm2d(places),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=places, out_channels=places * self.expansion, kernel_size=1, stride=1, bias=False),
            nn.BatchNorm2d(places * self.expansion),
        )

        if self.downsampling:
            self.downsample = nn.Sequential(
                nn.Conv2d(in_channels=in_places, out_channels=places * self.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(places * self.expansion)
            )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # solid-line (identity) branch
        residual = x
        out = self.bottleneck(x)

        # dashed-line (projection) branch
        if self.downsampling:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)
        return out


class ResNet(nn.Module):
    def __init__(self,blocks, blockkinds, num_classes=num_class):
        super(ResNet,self).__init__()

        self.blockkinds = blockkinds
        self.conv1 = Conv1(in_planes = 3, places= 64)

        # shallow networks (BasicBlock)
        if self.blockkinds == BasicBlock:
            self.expansion = 1
            # 64 -> 64
            self.layer1 = self.make_layer(in_places=64, places=64, block=blocks[0], stride=1)
            # 64 -> 128
            self.layer2 = self.make_layer(in_places=64, places=128, block=blocks[1], stride=2)
            # 128 -> 256
            self.layer3 = self.make_layer(in_places=128, places=256, block=blocks[2], stride=2)
            # 256 -> 512
            self.layer4 = self.make_layer(in_places=256, places=512, block=blocks[3], stride=2)

            self.fc = nn.Linear(512, num_classes)
            print("blockkinds == BasicBlock")

        # deep networks (Bottleneck)
        if self.blockkinds == Bottleneck:
            self.expansion = 4
            # in 64, bottleneck width 64 -> out 256
            self.layer1 = self.make_layer(in_places=64, places=64, block=blocks[0], stride=1)
            # in 256, width 128 -> out 512
            self.layer2 = self.make_layer(in_places=256, places=128, block=blocks[1], stride=2)
            # in 512, width 256 -> out 1024
            self.layer3 = self.make_layer(in_places=512, places=256, block=blocks[2], stride=2)
            # in 1024, width 512 -> out 2048
            self.layer4 = self.make_layer(in_places=1024, places=512, block=blocks[3], stride=2)

            self.fc = nn.Linear(2048, num_classes)
            print("blockkinds == Bottleneck")

        # ResNeXt (ResNeXtBlock)
        if self.blockkinds == ResNeXtBlock:
            self.expansion = 2
            # in 64, bottleneck width 128 -> out 256
            self.layer1 = self.make_layer(in_places=64, places=128, block=blocks[0], stride=1)
            # in 256, width 256 -> out 512
            self.layer2 = self.make_layer(in_places=256, places=256, block=blocks[1], stride=2)
            # in 512, width 512 -> out 1024
            self.layer3 = self.make_layer(in_places=512, places=512, block=blocks[2], stride=2)
            # in 1024, width 1024 -> out 2048
            self.layer4 = self.make_layer(in_places=1024, places=1024, block=blocks[3], stride=2)

            self.fc = nn.Linear(2048, num_classes)
            print("blockkinds == ResNeXtBlock")

        self.avgpool = nn.AvgPool2d(7, stride=1)

        # initialize the network weights
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                # Kaiming He initialization
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def make_layer(self, in_places, places, block, stride):

        layers = []

        # torch.Size([1, 64, 56, 56])   -> torch.Size([1, 256, 56, 56]),  stride=1, so w,h unchanged
        # torch.Size([1, 256, 56, 56])  -> torch.Size([1, 512, 28, 28]),  stride=2, so w,h halved
        # torch.Size([1, 512, 28, 28])  -> torch.Size([1, 1024, 14, 14]), stride=2, so w,h halved
        # torch.Size([1, 1024, 14, 14]) -> torch.Size([1, 2048, 7, 7]),   stride=2, so w,h halved
        # this first block goes through the dashed-line branch, so downsampling=True
        layers.append(self.blockkinds(in_places, places, stride, downsampling=True))

        # torch.Size([1, 256, 56, 56]) -> torch.Size([1, 256, 56, 56])
        # torch.Size([1, 512, 28, 28]) -> torch.Size([1, 512, 28, 28])
        # torch.Size([1, 1024, 14, 14]) -> torch.Size([1, 1024, 14, 14])
        # torch.Size([1, 2048, 7, 7]) -> torch.Size([1, 2048, 7, 7])
        # print("places*self.expansion:", places*self.expansion)
        # print("block:", block)
        # the remaining blocks go through the solid-line branch (downsampling=False);
        # only the first block of each stage changes the stride
        for i in range(1, block):
            layers.append(self.blockkinds(places*self.expansion, places))

        return nn.Sequential(*layers)


    def forward(self, x):

        # conv1 stage
        x = self.conv1(x)   # torch.Size([1, 64, 56, 56])

        # conv2_x stage
        x = self.layer1(x)  # torch.Size([1, 256, 56, 56])
        # conv3_x stage
        x = self.layer2(x)  # torch.Size([1, 512, 28, 28])
        # conv4_x stage
        x = self.layer3(x)  # torch.Size([1, 1024, 14, 14])
        # conv5_x stage
        x = self.layer4(x)  # torch.Size([1, 2048, 7, 7])

        x = self.avgpool(x)         # torch.Size([1, 2048, 1, 1]) (or [1, 512, 1, 1] with BasicBlock)
        x = x.view(x.size(0), -1)   # torch.Size([1, 2048]) (or [1, 512])
        x = self.fc(x)              # torch.Size([1, 5])

        return x

def ResNet18():
    return ResNet(resnet18_params, BasicBlock)

def ResNet34():
    return ResNet(resnet34_params, BasicBlock)

def ResNet50():
    return ResNet(resnet50_params, Bottleneck)

def ResNet101():
    return ResNet(resnet101_params, Bottleneck)

def ResNet152():
    return ResNet(resnet152_params, Bottleneck)

def ResNeXt50_32x4d():
    return ResNet(resnext50_32x4d_params, ResNeXtBlock)

def ResNeXt101_32x8d():
    # Note: as above, the stage-1 width is fixed at 128 (32 groups x 4 channels),
    # so this constructor actually builds 32x4d widths rather than true 32x8d (32 x 8 = 256).
    return ResNet(resnext101_32x8d_params, ResNeXtBlock)

if __name__=='__main__':
    # model = torchvision.models.resnet50()

    # model test
    # model = ResNet18()
    # model = ResNet34()
    # model = ResNet50()
    # model = ResNet101()
    # model = ResNet152()
    # model = ResNeXt50_32x4d()
    model = ResNeXt101_32x8d()
    # print(model)

    input = torch.randn(1, 3, 224, 224)
    out = model(input)
    print(out.shape)
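
As a rough check of the "same computational budget" claim from section 1, the parameter counts of the two 50-layer variants defined above can be compared directly (my addition; totals are approximate and assume num_class = 5):

def count_params(model):
    return sum(p.numel() for p in model.parameters())

print(count_params(ResNet50()))         # roughly 23.5M
print(count_params(ResNeXt50_32x4d()))  # roughly 23.0M: a similar budget, but (per the paper) better accuracy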