avalanche.training.plugins.LRSchedulerPlugin
- class avalanche.training.plugins.LRSchedulerPlugin(scheduler, reset_scheduler=True, reset_lr=True, metric=None, step_granularity: Literal['epoch', 'iteration'] = 'epoch', first_epoch_only=False, first_exp_only=False)[source]
Learning Rate Scheduler Plugin.
This plugin manages learning rate scheduling inside a strategy using the PyTorch scheduler passed to the constructor. The scheduler's step() method is called after each training epoch or iteration, depending on step_granularity.
Metric-based schedulers (like ReduceLROnPlateau) are supported as well.
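A minimal usage sketch follows. The SplitMNIST benchmark, SimpleMLP model, and Naive strategy are illustrative choices from Avalanche, not part of this class; import paths follow recent Avalanche releases and may differ in older versions.

import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import StepLR
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.plugins import LRSchedulerPlugin
from avalanche.training.supervised import Naive

benchmark = SplitMNIST(n_experiences=5)
model = SimpleMLP(num_classes=10)
optimizer = SGD(model.parameters(), lr=0.1)

# StepLR decays the LR by 10x every 2 epochs. With the default
# step_granularity='epoch', the plugin calls scheduler.step() after each
# training epoch; reset_scheduler and reset_lr (both True by default)
# restore the scheduler state and the initial LR at the end of each
# experience.
lr_plugin = LRSchedulerPlugin(StepLR(optimizer, step_size=2, gamma=0.1))

strategy = Naive(
    model=model,
    optimizer=optimizer,
    criterion=torch.nn.CrossEntropyLoss(),
    train_mb_size=128,
    train_epochs=4,
    plugins=[lr_plugin],
)

for experience in benchmark.train_stream:
    strategy.train(experience)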
- __init__(scheduler, reset_scheduler=True, reset_lr=True, metric=None, step_granularity: Literal['epoch', 'iteration'] = 'epoch', first_epoch_only=False, first_exp_only=False)[source]
Creates a LRSchedulerPlugin instance.
- Parameters:
scheduler – a learning rate scheduler that can be updated through a step() method and can be reset by setting last_epoch=0.
reset_scheduler – If True, the scheduler is reset at the end of the experience. Defaults to True.
reset_lr – If True, the optimizer learning rate is reset to its original value. Defaults to True.
metric – the metric to use. Must be set when using metric-based scheduling (like ReduceLROnPlateau). Only “train_loss” and “val_loss” are supported at the moment. Beware that, when using “val_loss”, the periodic evaluation flow must be enabled in the strategy: the eval_every parameter of the base strategy defaults to -1, which means the validation set is never evaluated, so set it to 1 to obtain correct results. Also remember to pass a proper validation stream to the strategy’s train method, otherwise the periodic evaluation stream will compute the validation loss on the training set. See the sketch after this parameter list.
step_granularity – defines how often the scheduler’s step() method will be called. Defaults to ‘epoch’. Valid values are ‘epoch’ and ‘iteration’.
first_epoch_only – if True, the scheduler will only be stepped in the first epoch of each training experience. This is not mutually exclusive with first_exp_only: by setting both values to True, the scheduler will be stepped only in the very first epoch of the whole training stream.
first_exp_only – if True, the scheduler will only be considered in the first training experience.
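Below is a sketch of metric-based scheduling with ReduceLROnPlateau and metric=”val_loss”, reusing the model, optimizer, and benchmark names from the snippet above. The test stream stands in for a validation stream here purely for brevity; in practice you would pass held-out validation data.

from torch.optim.lr_scheduler import ReduceLROnPlateau

plateau_plugin = LRSchedulerPlugin(
    ReduceLROnPlateau(optimizer, mode='min', factor=0.5, patience=1),
    metric='val_loss',
)

strategy = Naive(
    model=model,
    optimizer=optimizer,
    criterion=torch.nn.CrossEntropyLoss(),
    train_epochs=4,
    plugins=[plateau_plugin],
    eval_every=1,  # enable the periodic evaluation flow; the default -1 disables it
)

for experience in benchmark.train_stream:
    # The periodic evaluation stream supplies the "val_loss" values;
    # omitting eval_streams would make the strategy fall back to the
    # training set for the validation loss.
    strategy.train(experience, eval_streams=[benchmark.test_stream])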
Methods
- __init__(scheduler[, reset_scheduler, ...]): Creates a LRSchedulerPlugin instance.
- after_backward(strategy, *args, **kwargs): Called after criterion.backward() by the BaseTemplate.
- after_eval(strategy, **kwargs): Called after eval by the BaseTemplate.
- after_eval_dataset_adaptation(strategy, ...): Called after eval_dataset_adaptation by the BaseTemplate.
- after_eval_exp(strategy, *args, **kwargs): Called after eval_exp by the BaseTemplate.
- after_eval_forward(strategy, *args, **kwargs): Called after model.forward() by the BaseTemplate.
- after_eval_iteration(strategy, **kwargs): Called after the end of an iteration by the BaseTemplate.
- after_forward(strategy, *args, **kwargs): Called after model.forward() by the BaseTemplate.
- after_train_dataset_adaptation(strategy, ...): Called after train_dataset_adaptation by the BaseTemplate.
- after_training(strategy, **kwargs): Called after train by the BaseTemplate.
- after_training_epoch(strategy, **kwargs): Called after train_epoch by the BaseTemplate.
- after_training_exp(strategy, **kwargs): Called after train_exp by the BaseTemplate.
- after_training_iteration(strategy, **kwargs): Called after the end of a training iteration by the BaseTemplate.
- after_update(strategy, *args, **kwargs): Called after optimizer.update() by the BaseTemplate.
- before_backward(strategy, *args, **kwargs): Called before criterion.backward() by the BaseTemplate.
- before_eval(strategy, *args, **kwargs): Called before eval by the BaseTemplate.
- before_eval_dataset_adaptation(strategy, ...): Called before eval_dataset_adaptation by the BaseTemplate.
- before_eval_exp(strategy, *args, **kwargs): Called before eval_exp by the BaseTemplate.
- before_eval_forward(strategy, *args, **kwargs): Called before model.forward() by the BaseTemplate.
- before_eval_iteration(strategy, *args, **kwargs): Called before the start of an eval iteration by the BaseTemplate.
- before_forward(strategy, *args, **kwargs): Called before model.forward() by the BaseTemplate.
- before_train_dataset_adaptation(strategy, ...): Called before train_dataset_adaptation by the BaseTemplate.
- before_training(strategy, **kwargs): Called before train by the BaseTemplate.
- before_training_epoch(strategy, *args, **kwargs): Called before train_epoch by the BaseTemplate.
- before_training_exp(strategy, *args, **kwargs): Called before train_exp by the BaseTemplate.
- before_training_iteration(strategy, **kwargs): Called before the start of a training iteration by the BaseTemplate.
- before_update(strategy, *args, **kwargs): Called before optimizer.update() by the BaseTemplate.
Attributes
- supports_distributed: A flag describing whether this plugin supports distributed training.