avalanche.training.plugins.RWalkPlugin

class avalanche.training.plugins.RWalkPlugin(ewc_lambda: float = 0.1, ewc_alpha: float = 0.9, delta_t: int = 10)[source]

Riemannian Walk (RWalk) plugin. RWalk computes the importance of each weight at the end of every training iteration and updates each parameter’s importance online with a moving average. During training, the loss on each minibatch is augmented with a penalty that keeps each parameter close to the value it had on previous experiences, weighted by a score that is high when small changes to the parameter cause large improvements in the loss. This plugin does not use task identities.

Note

To reproduce the results of the paper in class-incremental scenarios, this plug-in should be used in conjunction with a replay strategy (e.g., ReplayPlugin).
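A minimal configuration sketch of the combination suggested above, assuming Avalanche is installed and that `model`, `optimizer`, and `criterion` are already defined (they are placeholders here, not part of this plugin’s API):

```python
from avalanche.training.supervised import Naive
from avalanche.training.plugins import ReplayPlugin, RWalkPlugin

# RWalk regularization plus a small replay buffer, as suggested for
# class-incremental scenarios. `model`, `optimizer`, and `criterion`
# stand in for your own objects; mem_size=200 is an illustrative value.
strategy = Naive(
    model,
    optimizer,
    criterion,
    plugins=[
        RWalkPlugin(ewc_lambda=0.1, ewc_alpha=0.9, delta_t=10),
        ReplayPlugin(mem_size=200),
    ],
)
```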

__init__(ewc_lambda: float = 0.1, ewc_alpha: float = 0.9, delta_t: int = 10)[source]
Parameters
  • ewc_lambda – Hyperparameter weighting the penalty in the total loss. The larger the lambda, the stronger the regularization. Defaults to 0.1.

  • ewc_alpha – Specify the moving average factor for the importance matrix, as defined in the RWalk paper (a.k.a. EWC++). Higher values give more weight to newly computed importances. Must be in [0, 1]. Defaults to 0.9.

  • delta_t – Specify the interval, in iterations, at which the parameter scores are updated. Defaults to 10.
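A plain-Python numeric sketch (illustrative only, not Avalanche’s actual internals; all names are hypothetical) of how these hyperparameters enter the computation: ewc_alpha drives an exponential moving average over squared gradients (the EWC++ update), and ewc_lambda scales the resulting quadratic penalty:

```python
# Illustrative sketch (NOT Avalanche's internals) of the RWalk quantities.
# `ewc_alpha` is the weight given to newly computed squared gradients in the
# importance moving average; `ewc_lambda` scales the quadratic penalty.

ewc_alpha = 0.9
ewc_lambda = 0.1

def update_importance(importance, grads, alpha=ewc_alpha):
    """Exponential moving average of squared gradients, one per parameter."""
    return [alpha * g * g + (1 - alpha) * imp
            for imp, g in zip(importance, grads)]

def rwalk_penalty(params, old_params, importance, scores, lam=ewc_lambda):
    """Quadratic penalty keeping parameters near their old values,
    weighted by the sum of importance and score."""
    return lam * sum((imp + s) * (p - p_old) ** 2
                     for p, p_old, imp, s
                     in zip(params, old_params, importance, scores))

# Two scalar parameters, one iteration:
importance = update_importance([0.0, 0.0], grads=[1.0, 2.0])  # ≈ [0.9, 3.6]
penalty = rwalk_penalty([1.0, 0.5], [0.8, 0.5], importance, [0.1, 0.2])
```

Only the first parameter moved (1.0 vs. 0.8), so only it contributes to the penalty: 0.1 × (0.9 + 0.1) × 0.2² ≈ 0.004.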

Methods

__init__([ewc_lambda, ewc_alpha, delta_t])

param ewc_lambda

hyperparameter to weigh the penalty inside the total loss

after_backward(strategy, *args, **kwargs)

Called after criterion.backward() by the BaseTemplate.

after_eval(strategy, *args, **kwargs)

Called after eval by the BaseTemplate.

after_eval_dataset_adaptation(strategy, ...)

Called after eval_dataset_adaptation by the BaseTemplate.

after_eval_exp(strategy, *args, **kwargs)

Called after eval_exp by the BaseTemplate.

after_eval_forward(strategy, *args, **kwargs)

Called after model.forward() by the BaseTemplate.

after_eval_iteration(strategy, *args, **kwargs)

Called after the end of an eval iteration by the BaseTemplate.

after_forward(strategy, *args, **kwargs)

Called after model.forward() by the BaseTemplate.

after_train_dataset_adaptation(strategy, ...)

Called after train_dataset_adaptation by the BaseTemplate.

after_training(strategy, *args, **kwargs)

Called after train by the BaseTemplate.

after_training_epoch(strategy, *args, **kwargs)

Called after train_epoch by the BaseTemplate.

after_training_exp(strategy, *args, **kwargs)

Called after train_exp by the BaseTemplate.

after_training_iteration(strategy, *args, ...)

Called after the end of a training iteration by the BaseTemplate.

after_update(strategy, *args, **kwargs)

Called after optimizer.update() by the BaseTemplate.

before_backward(strategy, *args, **kwargs)

Called before criterion.backward() by the BaseTemplate.

before_eval(strategy, *args, **kwargs)

Called before eval by the BaseTemplate.

before_eval_dataset_adaptation(strategy, ...)

Called before eval_dataset_adaptation by the BaseTemplate.

before_eval_exp(strategy, *args, **kwargs)

Called before eval_exp by the BaseTemplate.

before_eval_forward(strategy, *args, **kwargs)

Called before model.forward() by the BaseTemplate.

before_eval_iteration(strategy, *args, **kwargs)

Called before the start of an eval iteration by the BaseTemplate.

before_forward(strategy, *args, **kwargs)

Called before model.forward() by the BaseTemplate.

before_train_dataset_adaptation(strategy, ...)

Called before train_dataset_adaptation by the BaseTemplate.

before_training(strategy, *args, **kwargs)

Called before train by the BaseTemplate.

before_training_epoch(strategy, *args, **kwargs)

Called before train_epoch by the BaseTemplate.

before_training_exp(strategy, *args, **kwargs)

Called before train_exp by the BaseTemplate.

before_training_iteration(strategy, *args, ...)

Called before the start of a training iteration by the BaseTemplate.

before_update(strategy, *args, **kwargs)

Called before optimizer.update() by the BaseTemplate.