Function prototype

nn.Conv2d(in_channels,          # number of input channels (int)
          out_channels,         # number of output channels, i.e. the number of convolution kernels (int)
          kernel_size,          # kernel size (int or tuple)
          stride=1,             # stride (int or tuple, optional)
          padding=0,            # zero padding added to each side (int or tuple, optional)
          dilation=1,           # spacing between kernel elements for dilated convolution (int or tuple, optional)
          groups=1,             # number of groups for grouped convolution (int, optional)
          bias=True,            # whether to add a learnable bias (bool, optional)
          padding_mode='zeros') # padding mode (string, optional)
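
To make the parameter mapping concrete, here is a minimal sketch (the channel counts and sizes below are arbitrary examples, not taken from the original post) that builds a layer with explicit stride and padding and inspects its learnable parameters:

import torch.nn as nn

# 16 input channels -> 32 output channels, 3x3 kernel, stride 2, padding 1
conv = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3,
                 stride=2, padding=1, bias=True)

print(conv.weight.shape)  # torch.Size([32, 16, 3, 3]): (out_channels, in_channels, kH, kW)
print(conv.bias.shape)    # torch.Size([32]): one bias per output channel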

Convolution calculation

Output shape:
batch: unchanged
in_channels -> out_channels (determined by the number of kernels you define)
h, w: floor((d_bef + 2*padding - kernel_size) / stride) + 1   (assuming dilation=1)
# d_bef: original size of that dimension;
# kernel_size: kernel size along that dimension;
# padding: padding added to each side along that dimension;
# stride: stride along that dimension.
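
The formula can be checked with a small helper; conv2d_out_size below is a hypothetical function written for illustration, not part of PyTorch:

# Hypothetical helper that applies the formula above (dilation included for completeness)
def conv2d_out_size(d_bef, kernel_size, stride=1, padding=0, dilation=1):
    effective_kernel = dilation * (kernel_size - 1) + 1
    return (d_bef + 2 * padding - effective_kernel) // stride + 1

# 5x5 input, 3x3 kernel, stride 1, no padding -> 3x3 output
print(conv2d_out_size(5, 3))             # 3
# The same input with padding=1 keeps the spatial size
print(conv2d_out_size(5, 3, padding=1))  # 5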

Code example

import torch
import torch.nn as nn

# Define a tensor filled with ones
# Input shape: (batch, in_channels, h, w)
x = torch.ones(2, 3, 5, 5)
print(x.shape)  # torch.Size([2, 3, 5, 5])

# Define a convolution: in_channels=3, out_channels=5, kernel_size=3
conv = nn.Conv2d(3, 5, 3)
print(conv)

# Apply the convolution
y = conv(x)
print(y.shape)  # torch.Size([2, 5, 3, 3])

Result

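Running the script above should print output along the following lines (the exact Conv2d repr may vary slightly between PyTorch versions):

torch.Size([2, 3, 5, 5])
Conv2d(3, 5, kernel_size=(3, 3), stride=(1, 1))
torch.Size([2, 5, 3, 3])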