PyTorch – How to get learning rate during training?
Question:
While training, I’d like to know the value of learning_rate.
What should I do?
Here is my code:
my_optimizer = torch.optim.SGD(my_model.parameters(),
                               lr=0.001,
                               momentum=0.99,
                               weight_decay=2e-3)
Thank you.
Answers:
If there is only one parameter group, as in the example you’ve given, you can use this function and call it during training to get the current learning rate:
def get_lr(optimizer):
    for param_group in optimizer.param_groups:
        return param_group['lr']
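For example, you could print the learning rate once per epoch inside the training loop (a minimal sketch; train_one_epoch and num_epochs are placeholders, not part of the original code):
for epoch in range(num_epochs):
    train_one_epoch(my_model, my_optimizer)  # placeholder for your training step
    print(f"epoch {epoch}: lr = {get_lr(my_optimizer)}")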
Alternatively, you can use an lr_scheduler together with your optimizer and simply call the built-in lr_scheduler.get_lr() method.
Here is an example:
my_optimizer = torch.optim.Adam(my_model.parameters(),
                                lr=0.001,
                                weight_decay=0.002)
my_lr_scheduler = torch.optim.lr_scheduler.StepLR(my_optimizer,
                                                  step_size=50,
                                                  gamma=0.1)
# train
...
my_optimizer.step()
my_lr_scheduler.step()
# get learning rate (get_lr() returns a list, one entry per param group)
my_lr = my_lr_scheduler.get_lr()
# or
my_lr = my_lr_scheduler.optimizer.param_groups[0]['lr']
The added benefit of using an lr_scheduler is more control over how the lr changes over time (step decay, exponential decay, etc.). For the lr_scheduler arguments, refer to the PyTorch docs.
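For instance, swapping StepLR for ExponentialLR decays the lr by a constant factor after every scheduler step (a minimal sketch; gamma=0.95 is an arbitrary choice, not from the original answer):
my_lr_scheduler = torch.optim.lr_scheduler.ExponentialLR(my_optimizer, gamma=0.95)
for epoch in range(100):
    # ... train one epoch ...
    my_lr_scheduler.step()  # lr becomes 0.001 * 0.95 ** (epoch + 1)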
Use optimizer.param_groups[-1]['lr'] to read the lr directly from the optimizer, without a scheduler.
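A minimal sketch (assuming the single-group SGD optimizer from the question, where param_groups[-1] and param_groups[0] refer to the same group):
current_lr = my_optimizer.param_groups[-1]['lr']  # lr of the last (here: only) param group
print(current_lr)  # 0.001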
As of PyTorch 1.13.0, you can access the list of learning rates via the method scheduler.get_last_lr(), or directly via scheduler.get_last_lr()[0] if you only use a single learning rate.
This method is defined in the schedulers’ base class LRScheduler (see their code). It returns the attribute scheduler._last_lr of the base class, as Zahra has mentioned, but calling the method should be preferred.
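A minimal, self-contained sketch of this (the Linear model and the StepLR settings are placeholders chosen for illustration):
import torch

my_model = torch.nn.Linear(10, 1)  # placeholder model
my_optimizer = torch.optim.SGD(my_model.parameters(), lr=0.001)
my_lr_scheduler = torch.optim.lr_scheduler.StepLR(my_optimizer, step_size=50, gamma=0.1)

my_optimizer.step()
my_lr_scheduler.step()

print(my_lr_scheduler.get_last_lr())     # list with one lr per param group, e.g. [0.001]
print(my_lr_scheduler.get_last_lr()[0])  # 0.001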
Edit: Thanks @igorkf for the reply