avalanche.training.plugins.LFLPlugin

class avalanche.training.plugins.LFLPlugin(lambda_e)[source]

Less-Forgetful Learning (LFL) Plugin.

LFL mitigates catastrophic forgetting by enforcing two properties: 1) the decision boundaries should remain unchanged, and 2) the feature space should not change much on the target (new) data. To this end, LFL uses the Euclidean loss between the features of the current model and those of the previous version of the model as a regularizer, which preserves the feature space and thus limits forgetting. See the paper https://arxiv.org/pdf/1607.00122.pdf for details. This plugin does not use task identities.
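The core regularizer described above can be sketched in a few lines of PyTorch. This is an illustrative version of the weighted Euclidean penalty, not the plugin's exact implementation; `lfl_penalty` and its argument names are assumptions:

```python
import torch

def lfl_penalty(curr_features, prev_features, lambda_e):
    # Weighted squared Euclidean distance between the current model's
    # features and the frozen previous model's features.
    return lambda_e * torch.sum((curr_features - prev_features) ** 2)

# Toy usage: gradients flow only into the current features.
curr = torch.randn(8, 128, requires_grad=True)
prev = torch.randn(8, 128).detach()  # stands in for the frozen model's features
loss = lfl_penalty(curr, prev, lambda_e=0.1)
loss.backward()
```

Because the previous model's features are detached, minimizing this term pulls only the current model's feature space toward the old one.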

__init__(lambda_e)[source]
Parameters

lambda_e – Euclidean loss hyperparameter (weight of the feature-distance penalty)

Methods

__init__(lambda_e)

param lambda_e

Euclidean loss hyperparameter (weight of the feature-distance penalty)

after_backward(strategy, *args, **kwargs)

Called after criterion.backward() by the BaseTemplate.

after_eval(strategy, *args, **kwargs)

Called after eval by the BaseTemplate.

after_eval_dataset_adaptation(strategy, ...)

Called after eval_dataset_adaptation by the BaseTemplate.

after_eval_exp(strategy, *args, **kwargs)

Called after eval_exp by the BaseTemplate.

after_eval_forward(strategy, *args, **kwargs)

Called after model.forward() by the BaseTemplate.

after_eval_iteration(strategy, *args, **kwargs)

Called after the end of an iteration by the BaseTemplate.

after_forward(strategy, *args, **kwargs)

Called after model.forward() by the BaseTemplate.

after_train_dataset_adaptation(strategy, ...)

Called after train_dataset_adaptation by the BaseTemplate.

after_training(strategy, *args, **kwargs)

Called after train by the BaseTemplate.

after_training_epoch(strategy, *args, **kwargs)

Called after train_epoch by the BaseTemplate.

after_training_exp(strategy, **kwargs)

Save a copy of the model after each experience, freeze that copy, and freeze the last layer of the current model.
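The "save and freeze" step above can be sketched as follows. This is assumed logic, not the plugin's exact code; `freeze_copy` is a hypothetical helper name:

```python
import copy
import torch.nn as nn

def freeze_copy(model: nn.Module) -> nn.Module:
    # Keep a frozen snapshot of the trained model; its features later
    # serve as the fixed reference for the Euclidean penalty.
    prev = copy.deepcopy(model)
    prev.eval()                  # fixed BatchNorm/Dropout behaviour
    for p in prev.parameters():
        p.requires_grad_(False)  # no gradient updates to the old model
    return prev
```

Freezing the snapshot ensures the regularization target stays constant throughout the next experience.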

after_training_iteration(strategy, *args, ...)

Called after the end of a training iteration by the BaseTemplate.

after_update(strategy, *args, **kwargs)

Called after optimizer.update() by the BaseTemplate.

before_backward(strategy, **kwargs)

Add the Euclidean loss between previous and current features to the loss as a penalty.

before_eval(strategy, *args, **kwargs)

Called before eval by the BaseTemplate.

before_eval_dataset_adaptation(strategy, ...)

Called before eval_dataset_adaptation by the BaseTemplate.

before_eval_exp(strategy, *args, **kwargs)

Called before eval_exp by the BaseTemplate.

before_eval_forward(strategy, *args, **kwargs)

Called before model.forward() by the BaseTemplate.

before_eval_iteration(strategy, *args, **kwargs)

Called before the start of an eval iteration by the BaseTemplate.

before_forward(strategy, *args, **kwargs)

Called before model.forward() by the BaseTemplate.

before_train_dataset_adaptation(strategy, ...)

Called before train_dataset_adaptation by the BaseTemplate.

before_training(strategy, **kwargs)

Check that the model is an instance of the expected base class, ensuring get_features() is implemented.

before_training_epoch(strategy, *args, **kwargs)

Called before train_epoch by the BaseTemplate.

before_training_exp(strategy, *args, **kwargs)

Called before train_exp by the BaseTemplate.

before_training_iteration(strategy, *args, ...)

Called before the start of a training iteration by the BaseTemplate.

before_update(strategy, *args, **kwargs)

Called before optimizer.update() by the BaseTemplate.

compute_features(model, x)

Compute features from the previous model and the current model.
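A minimal sketch of this feature-extraction step, assuming the model exposes a get_features() method (the requirement checked in before_training). `TinyNet` and this standalone `compute_features` are illustrative, not the plugin's actual classes:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    # Illustrative model exposing get_features(), as LFL expects.
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(4, 8)
        self.head = nn.Linear(8, 2)

    def get_features(self, x):
        return self.backbone(x)

    def forward(self, x):
        return self.head(self.get_features(x))

def compute_features(curr_model, prev_model, x):
    # Current features keep their gradient graph; the previous model
    # is a fixed reference, so its features carry no gradients.
    curr_feats = curr_model.get_features(x)
    with torch.no_grad():
        prev_feats = prev_model.get_features(x)
    return curr_feats, prev_feats
```

The two feature tensors returned here are exactly what the weighted Euclidean penalty compares.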

penalty(x, model, lambda_e)

Compute the weighted Euclidean loss.