optimizer.param_groups: a list with one dict per parameter group (so it has length 2 once a second group has been added);
optimizer.param_groups[0]: for Adam, a dict with the 6 keys ['amsgrad', 'params', 'lr', 'betas', 'weight_decay', 'eps'];
optimizer.param_groups[1]: simply the second parameter group, with the same structure as the first; it is not the optimizer's state, which lives in optimizer.state (see the sketch right after this list).
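A minimal sketch (the names w and opt are just for illustration) showing that the per-parameter optimizer state lives in optimizer.state / state_dict()['state'], separate from param_groups:

import torch
import torch.optim as optim

w = torch.randn(2, 2, requires_grad=True)
opt = optim.Adam([w])
w.sum().backward()
opt.step()  # the first step populates the per-parameter state
print(list(opt.state_dict().keys()))  # ['state', 'param_groups']
print(list(opt.state[w].keys()))      # e.g. ['step', 'exp_avg', 'exp_avg_sq']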
import torch
import torch.optim as optim

w1 = torch.randn(3, 3, requires_grad=True)
w2 = torch.randn(3, 3, requires_grad=True)

o = optim.Adam([w1])  # only w1 is registered for now; w2 is added further down
print(o.param_groups)
[{'amsgrad': False,
'betas': (0.9, 0.999),
'eps': 1e-08,
'lr': 0.001,
'params': [tensor([[ 2.9064, -0.2141, -0.4037],
[-0.5718, 1.0375, -0.6862],
[-0.8372, 0.4380, -0.1572]])],
'weight_decay': 0}]
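This is also why param_groups is a list: each group can carry its own hyperparameters. A minimal sketch (o2 is just an illustrative name, reusing w1 and w2 from above) that builds both groups in the constructor using the list-of-dicts form from the PyTorch docs:

o2 = optim.Adam([
    {'params': [w1]},              # uses the optimizer-wide default lr (0.001)
    {'params': [w2], 'lr': 1e-4},  # this group overrides lr
])
print([g['lr'] for g in o2.param_groups])  # [0.001, 0.0001]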
Per the docs, the add_param_group method accepts a param_group argument that is a dict. Continuing with the optimizer o created above, w2 can be added as a second parameter group:
o.add_param_group({'params': w2})
print(o.param_groups)
[{'amsgrad': False,
'betas': (0.9, 0.999),
'eps': 1e-08,
'lr': 0.001,
'params': [tensor([[ 2.9064, -0.2141, -0.4037],
[-0.5718, 1.0375, -0.6862],
[-0.8372, 0.4380, -0.1572]])],
'weight_decay': 0},
{'amsgrad': False,
'betas': (0.9, 0.999),
'eps': 1e-08,
'lr': 0.001,
'params': [tensor([[-0.0560, 0.4585, -0.7589],
[-0.1994, 0.4557, 0.5648],
[-0.1280, -0.0333, -1.1886]])],
'weight_decay': 0}]
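Only 'params' was passed, yet the new group shows lr 0.001 and the other Adam defaults: add_param_group fills in every option you leave out from the optimizer's defaults. A small sketch of overriding options for the added group (w3 is a new hypothetical tensor, since a parameter may not appear in more than one group):

w3 = torch.randn(3, 3, requires_grad=True)
o.add_param_group({'params': [w3], 'lr': 1e-4, 'weight_decay': 1e-5})
print(o.param_groups[-1]['lr'])  # 0.0001; unspecified options keep the defaults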
# Dynamically modify the learning rate (optimizer and lr are assumed to be defined)
for param_group in optimizer.param_groups:
    param_group["lr"] = lr

# Read the current learning rate: optimizer.param_groups[0]["lr"]

# Inspect the keys of optimizer.param_groups[0]:
print(list(optimizer.param_groups[0].keys()))
['amsgrad', 'params', 'lr', 'betas', 'weight_decay', 'eps']
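As a fuller illustration, here is a minimal self-contained sketch of decaying the learning rate after every epoch (the decay factor and the three-epoch loop are made up for the example; a real training step would go where the comment is):

import torch
import torch.optim as optim

w = torch.randn(3, 3, requires_grad=True)
optimizer = optim.Adam([w], lr=0.01)

decay = 0.5  # hypothetical per-epoch decay factor
for epoch in range(3):
    # ... forward, backward and optimizer.step() would go here ...
    for param_group in optimizer.param_groups:
        param_group["lr"] *= decay
    print(epoch, optimizer.param_groups[0]["lr"])

In practice the schedulers in torch.optim.lr_scheduler (StepLR, ExponentialLR, etc.) perform exactly this kind of bookkeeping over param_groups for you.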