class avalanche.training.plugins.EarlyStoppingPlugin(patience: int, val_stream_name: str, metric_name: str = 'Top1_Acc_Stream', mode: str = 'max', peval_mode: str = 'epoch', margin: float = 0.0, verbose=False)[source]

Early stopping and model checkpoint plugin.

The plugin monitors a metric and stops the training loop when that metric has not improved for patience epochs. After training, the checkpoint of the best model is loaded.


The plugin reads the metric value that the strategy updates during evaluation. This means you must ensure that evaluation runs frequently enough during the training loop.

For example, if you set patience=1, you must also set eval_every=1 in the BaseTemplate; otherwise the metric won’t be updated after every epoch/iteration. Similarly, peval_mode must have the same value in the plugin and the strategy.
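The interaction between patience and the evaluation frequency can be illustrated with a minimal simulation. This is plain Python, not the Avalanche API: `stopped_epoch` and `eval_every` are hypothetical names used only to sketch why a stale metric delays the stopping decision.

```python
def stopped_epoch(metric_per_epoch, patience, eval_every):
    """Return the epoch at which training would stop, or None.

    Sketch of the early-stopping logic: the check can only react to
    metric values that evaluation actually produces, so skipping
    evaluation (eval_every > 1) delays the stop.
    """
    best = float("-inf")
    best_epoch = 0
    for epoch, value in enumerate(metric_per_epoch):
        if epoch % eval_every != 0:
            continue  # evaluation skipped: the metric is never observed
        if value > best:
            best, best_epoch = value, epoch
        elif epoch - best_epoch >= patience:
            return epoch  # no improvement for `patience` epochs
    return None


# The validation metric plateaus after epoch 1.
metric = [0.5, 0.6, 0.6, 0.6, 0.6]

# With eval_every=1 the plateau is detected at epoch 2;
# with eval_every=2 the same plateau is only detected at epoch 4.
print(stopped_epoch(metric, patience=1, eval_every=1))  # 2
print(stopped_epoch(metric, patience=1, eval_every=2))  # 4
```

The simulation shows why the docs insist that eval_every match patience: with sparse evaluation, the plugin compares against a stale best value and stops later than intended.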

__init__(patience: int, val_stream_name: str, metric_name: str = 'Top1_Acc_Stream', mode: str = 'max', peval_mode: str = 'epoch', margin: float = 0.0, verbose=False)[source]


  • patience – Number of epochs (or iterations, depending on peval_mode) to wait without improvement before stopping the training.

  • val_stream_name – Name of the validation stream to search in the metrics. The corresponding stream is used to track the evolution of the model’s performance.

  • metric_name – The name of the metric to watch as it will be reported in the evaluator.

  • mode – Must be “max” or “min”. max (resp. min) means that the given metric should be maximized (resp. minimized).

  • peval_mode – one of {‘epoch’, ‘iteration’}. Decides whether the early stopping should happen after patience epochs or iterations (Default=’epoch’).

  • margin – a float giving the minimal improvement required for a new value to be considered better than the previous best. The default value is 0.0, which means that any improvement counts.

  • verbose – If True, prints a message for each update (default: False).
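The mode and margin semantics described above can be sketched in a few lines. This is an illustrative helper, not the plugin’s actual implementation; `is_improvement` is a hypothetical name.

```python
def is_improvement(new, best, mode="max", margin=0.0):
    """True if `new` beats `best` by more than `margin`.

    Mimics the documented semantics: with mode="max" the metric must
    exceed the best value by margin; with mode="min" it must fall
    below it by margin. The first observed value always counts.
    """
    if best is None:
        return True  # no previous best yet
    if mode == "max":
        return new > best + margin
    return new < best - margin


# With the default margin=0.0, any improvement is accepted.
print(is_improvement(0.81, 0.80, mode="max"))              # True
# With margin=0.02, a 0.01 gain is not enough to update the best.
print(is_improvement(0.81, 0.80, mode="max", margin=0.02)) # False
# For a loss-like metric, use mode="min".
print(is_improvement(0.40, 0.50, mode="min"))              # True
```

A nonzero margin is useful for noisy validation metrics, where tiny fluctuations would otherwise reset the patience counter on every evaluation.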


__init__(patience, val_stream_name[, ...])


after_backward(strategy, *args, **kwargs)

Called after criterion.backward() by the BaseTemplate.

after_eval(strategy, *args, **kwargs)

Called after eval by the BaseTemplate.

after_eval_dataset_adaptation(strategy, ...)

Called after eval_dataset_adaptation by the BaseTemplate.

after_eval_exp(strategy, *args, **kwargs)

Called after eval_exp by the BaseTemplate.

after_eval_forward(strategy, *args, **kwargs)

Called after model.forward() by the BaseTemplate.

after_eval_iteration(strategy, *args, **kwargs)

Called after the end of an iteration by the BaseTemplate.

after_forward(strategy, *args, **kwargs)

Called after model.forward() by the BaseTemplate.

after_train_dataset_adaptation(strategy, ...)

Called after train_dataset_adaptation by the BaseTemplate.

after_training(strategy, *args, **kwargs)

Called after train by the BaseTemplate.

after_training_epoch(strategy, *args, **kwargs)

Called after train_epoch by the BaseTemplate.

after_training_exp(strategy, *args, **kwargs)

Called after train_exp by the BaseTemplate.

after_training_iteration(strategy, *args, ...)

Called after the end of a training iteration by the BaseTemplate.

after_update(strategy, *args, **kwargs)

Called after optimizer.update() by the BaseTemplate.

before_backward(strategy, *args, **kwargs)

Called before criterion.backward() by the BaseTemplate.

before_eval(strategy, *args, **kwargs)

Called before eval by the BaseTemplate.

before_eval_dataset_adaptation(strategy, ...)

Called before eval_dataset_adaptation by the BaseTemplate.

before_eval_exp(strategy, *args, **kwargs)

Called before eval_exp by the BaseTemplate.

before_eval_forward(strategy, *args, **kwargs)

Called before model.forward() by the BaseTemplate.

before_eval_iteration(strategy, *args, **kwargs)

Called before the start of an eval iteration by the BaseTemplate.

before_forward(strategy, *args, **kwargs)

Called before model.forward() by the BaseTemplate.

before_train_dataset_adaptation(strategy, ...)

Called before train_dataset_adaptation by the BaseTemplate.

before_training(strategy, *args, **kwargs)

Called before train by the BaseTemplate.

before_training_epoch(strategy, **kwargs)

Called before train_epoch by the BaseTemplate.

before_training_exp(strategy, **kwargs)

Called before train_exp by the BaseTemplate.

before_training_iteration(strategy, **kwargs)

Called before the start of a training iteration by the BaseTemplate.

before_update(strategy, *args, **kwargs)

Called before optimizer.update() by the BaseTemplate.



A flag describing whether this plugin supports distributed training.