avalanche.training.plugins.FromScratchTrainingPlugin

class avalanche.training.plugins.FromScratchTrainingPlugin(reset_optimizer: bool = True)[source]

From Scratch Training Plugin.

This plugin resets the strategy’s model weights and optimizer state after each experience. It expects the strategy to have a single model and optimizer. It can be used with the Naive strategy to produce “from-scratch training” baselines.
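For context, a minimal usage sketch (not part of the reference itself; the benchmark, model, and hyperparameters below are illustrative assumptions for a recent Avalanche version):

    import torch
    from avalanche.benchmarks.classic import SplitMNIST
    from avalanche.models import SimpleMLP
    from avalanche.training.plugins import FromScratchTrainingPlugin
    from avalanche.training.supervised import Naive

    benchmark = SplitMNIST(n_experiences=5)
    model = SimpleMLP(num_classes=benchmark.n_classes)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    criterion = torch.nn.CrossEntropyLoss()

    # Naive + FromScratchTrainingPlugin gives a "from-scratch training"
    # baseline: model weights and optimizer state are reset after every
    # experience, so no knowledge is carried over.
    strategy = Naive(
        model,
        optimizer,
        criterion,
        train_mb_size=32,
        train_epochs=1,
        plugins=[FromScratchTrainingPlugin()],
    )

    for experience in benchmark.train_stream:
        strategy.train(experience)
        strategy.eval(benchmark.test_stream)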

__init__(reset_optimizer: bool = True)[source]

Creates a FromScratchTrainingPlugin instance.

Parameters:

reset_optimizer – if True, the strategy’s optimizer state is reset after each experience.
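Illustrative construction only; with reset_optimizer=False, presumably only the model weights are reset while the optimizer state is kept across experiences:

    from avalanche.training.plugins import FromScratchTrainingPlugin

    # Keep optimizer state (e.g. SGD momentum or Adam moment estimates)
    # across experiences; model weights are still reset.
    plugin = FromScratchTrainingPlugin(reset_optimizer=False)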

Methods

__init__([reset_optimizer])

Creates a FromScratchTrainingPlugin instance.

after_backward(strategy, *args, **kwargs)

Called after criterion.backward() by the BaseTemplate.

after_eval(strategy, *args, **kwargs)

Called after eval by the BaseTemplate.

after_eval_dataset_adaptation(strategy, ...)

Called after eval_dataset_adaptation by the BaseTemplate.

after_eval_exp(strategy, *args, **kwargs)

Called after eval_exp by the BaseTemplate.

after_eval_forward(strategy, *args, **kwargs)

Called after model.forward() by the BaseTemplate.

after_eval_iteration(strategy, *args, **kwargs)

Called after the end of an iteration by the BaseTemplate.

after_forward(strategy, *args, **kwargs)

Called after model.forward() by the BaseTemplate.

after_train_dataset_adaptation(strategy, ...)

Called after train_dataset_adaptation by the BaseTemplate.

after_training(strategy, *args, **kwargs)

Called after train by the BaseTemplate.

after_training_epoch(strategy, *args, **kwargs)

Called after train_epoch by the BaseTemplate.

after_training_exp(strategy, *args, **kwargs)

Called after train_exp by the BaseTemplate.

after_training_iteration(strategy, *args, ...)

Called after the end of a training iteration by the BaseTemplate.

after_update(strategy, *args, **kwargs)

Called after optimizer.update() by the BaseTemplate.

before_backward(strategy, *args, **kwargs)

Called before criterion.backward() by the BaseTemplate.

before_eval(strategy, *args, **kwargs)

Called before eval by the BaseTemplate.

before_eval_dataset_adaptation(strategy, ...)

Called before eval_dataset_adaptation by the BaseTemplate.

before_eval_exp(strategy, *args, **kwargs)

Called before eval_exp by the BaseTemplate.

before_eval_forward(strategy, *args, **kwargs)

Called before model.forward() by the BaseTemplate.

before_eval_iteration(strategy, *args, **kwargs)

Called before the start of an eval iteration by the BaseTemplate.

before_forward(strategy, *args, **kwargs)

Called before model.forward() by the BaseTemplate.

before_train_dataset_adaptation(strategy, ...)

Called before train_dataset_adaptation by the BaseTemplate.

before_training(strategy, *args, **kwargs)

Called before train by the BaseTemplate.

before_training_epoch(strategy, *args, **kwargs)

Called before train_epoch by the BaseTemplate.

before_training_exp(strategy, *args, **kwargs)

Called before train_exp by the BaseTemplate.

before_training_iteration(strategy, *args, ...)

Called before the start of a training iteration by the BaseTemplate.

before_update(strategy, *args, **kwargs)

Called before optimizer.update() by the BaseTemplate.
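The callback hooks listed above are the inherited plugin entry points. As a hedged sketch (assuming, per the description above, that the reset is performed in after_training_exp), a subclass can override a hook to add behavior around the reset:

    from avalanche.training.plugins import FromScratchTrainingPlugin


    class LoggingFromScratchPlugin(FromScratchTrainingPlugin):
        """Hypothetical subclass that logs each reset."""

        def after_training_exp(self, strategy, *args, **kwargs):
            # Let the parent plugin reset the model (and optimizer) first.
            super().after_training_exp(strategy, *args, **kwargs)
            print(
                "Reset after experience "
                f"{strategy.experience.current_experience}"
            )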

Attributes

supports_distributed

A flag describing whether this plugin supports distributed training.