avalanche.training.plugins.LFLPlugin

class avalanche.training.plugins.LFLPlugin(lambda_e)[source]

Less-Forgetful Learning (LFL) Plugin.

LFL enforces two properties to mitigate catastrophic forgetting: 1) the decision boundaries should remain unchanged, and 2) the feature space should not change much on the target (new) data. To preserve the feature space, LFL adds the Euclidean loss between the features of the current and the previous version of the model as a regularization term. Refer to the paper https://arxiv.org/pdf/1607.00122.pdf for more details. This plugin does not use task identities.
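The core of the regularization term can be sketched in plain Python. The helper below is hypothetical (it is not the plugin's actual implementation, which operates on PyTorch tensors): it computes the squared Euclidean distance between the current and previous model's feature vectors and scales it by lambda_e, matching the role described above.

```python
# Hypothetical sketch of the LFL regularization term: the penalty is the
# squared Euclidean distance between features of the current model and of
# the frozen previous model, weighted by lambda_e.

def lfl_penalty(features_cur, features_prev, lambda_e):
    """Weighted squared Euclidean distance between two feature vectors."""
    sq_dist = sum((c - p) ** 2 for c, p in zip(features_cur, features_prev))
    return lambda_e * sq_dist

# Features of the same input under the current and the previous model.
f_cur = [1.0, 2.0, 3.0]
f_prev = [1.0, 0.0, 3.0]

penalty = lfl_penalty(f_cur, f_prev, lambda_e=0.5)
# Only the middle component differs: (2.0 - 0.0)**2 = 4.0, scaled by 0.5
```

During training this penalty is added to the task loss before the backward pass, so gradients pull the new features back toward the old ones.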

__init__(lambda_e)[source]
Parameters

lambda_e – weight of the Euclidean regularization loss (hyperparameter)

Methods

__init__(lambda_e)

param lambda_e

Weight of the Euclidean regularization loss (hyperparameter)

after_backward(strategy, **kwargs)

Called after criterion.backward() by the BaseStrategy.

after_eval(strategy, **kwargs)

Called after eval by the BaseStrategy.

after_eval_dataset_adaptation(strategy, **kwargs)

Called after eval_dataset_adaptation by the BaseStrategy.

after_eval_exp(strategy, **kwargs)

Called after eval_exp by the BaseStrategy.

after_eval_forward(strategy, **kwargs)

Called after model.forward() by the BaseStrategy.

after_eval_iteration(strategy, **kwargs)

Called after the end of an eval iteration by the BaseStrategy.

after_forward(strategy, **kwargs)

Called after model.forward() by the BaseStrategy.

after_train_dataset_adaptation(strategy, ...)

Called after train_dataset_adaptation by the BaseStrategy.

after_training(strategy, **kwargs)

Called after train by the BaseStrategy.

after_training_epoch(strategy, **kwargs)

Called after train_epoch by the BaseStrategy.

after_training_exp(strategy, **kwargs)

Save a copy of the model after each experience, freeze that previous model, and freeze the last layer of the current model.
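The snapshot-and-freeze step can be illustrated with a toy model (this is a sketch, not Avalanche's code, which deep-copies a PyTorch module and sets requires_grad=False on its parameters): a deep copy taken after an experience is unaffected by any later updates to the live model.

```python
import copy

# Toy stand-in for a model; `frozen` plays the role of setting
# requires_grad=False on every parameter of the copied PyTorch module.
class ToyModel:
    def __init__(self, weights):
        self.weights = weights
        self.frozen = False

def snapshot_and_freeze(model):
    """Keep a frozen copy of the model as trained on the last experience."""
    prev = copy.deepcopy(model)
    prev.frozen = True  # the previous model is never updated again
    return prev

model = ToyModel([0.1, 0.2])
prev_model = snapshot_and_freeze(model)

model.weights[0] = 0.9  # further training leaves the snapshot untouched
```

The frozen copy is what supplies the "previous features" used by the Euclidean penalty on the next experience.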

after_training_iteration(strategy, **kwargs)

Called after the end of a training iteration by the BaseStrategy.

after_update(strategy, **kwargs)

Called after optimizer.update() by the BaseStrategy.

before_backward(strategy, **kwargs)

Add the Euclidean loss between previous and current features as a penalty term.

before_eval(strategy, **kwargs)

Called before eval by the BaseStrategy.

before_eval_dataset_adaptation(strategy, ...)

Called before eval_dataset_adaptation by the BaseStrategy.

before_eval_exp(strategy, **kwargs)

Called before eval_exp by the BaseStrategy.

before_eval_forward(strategy, **kwargs)

Called before model.forward() by the BaseStrategy.

before_eval_iteration(strategy, **kwargs)

Called before the start of an eval iteration by the BaseStrategy.

before_forward(strategy, **kwargs)

Called before model.forward() by the BaseStrategy.

before_train_dataset_adaptation(strategy, ...)

Called before train_dataset_adaptation by the BaseStrategy.

before_training(strategy, **kwargs)

Check that the model is an instance of the required base class, ensuring get_features() is implemented.
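The kind of check this hook performs can be sketched as follows (a hypothetical version: Avalanche checks against its own base model class, while this sketch simply tests for the method the plugin depends on). LFL needs access to the model's penultimate features, so the model must expose get_features().

```python
# Hypothetical sketch of the pre-training check: LFL regularizes the
# feature space, so the model must expose a feature-extraction method.

class FeatureExtractorModel:
    def get_features(self, x):
        # Return penultimate-layer activations for input x (toy version).
        return [xi * 2.0 for xi in x]

def check_model(model):
    """Fail early if the model cannot provide features for the penalty."""
    if not hasattr(model, "get_features"):
        raise ValueError("LFLPlugin requires a model exposing get_features()")
    return True
```

Failing here, before any training happens, is preferable to a late failure inside before_backward when the penalty is first computed.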

before_training_epoch(strategy, **kwargs)

Called before train_epoch by the BaseStrategy.

before_training_exp(strategy, **kwargs)

Called before train_exp by the BaseStrategy.

before_training_iteration(strategy, **kwargs)

Called before the start of a training iteration by the BaseStrategy.

before_update(strategy, **kwargs)

Called before optimizer.update() by the BaseStrategy.

compute_features(model, x)

Compute features from the previous model and the current model.

penalty(x, model, lambda_e)

Compute the weighted Euclidean loss.