avalanche.training.plugins.FeatureDistillationPlugin
- class avalanche.training.plugins.FeatureDistillationPlugin(alpha=1, mode='cosine')
- __init__(alpha=1, mode='cosine')
Adds a distillation loss term on the features of the model, encouraging the current features to stay close to those produced by the previous model (by default, by maximizing their cosine similarity).
- Parameters:
alpha – distillation hyperparameter. It can be either a single float or a list containing one alpha per experience.
mode – similarity measure used for the distillation loss; defaults to 'cosine'.
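A minimal usage sketch (not from the official docs): the plugin is attached to a strategy through its plugins argument. The snippet assumes a recent Avalanche version in which the model exposes its features to the plugin, here via the FeatureExtractorModel wrapper; the backbone, layer sizes, and the Naive strategy are arbitrary choices for illustration.

import torch
from torch import nn
from avalanche.models import FeatureExtractorModel
from avalanche.training.plugins import FeatureDistillationPlugin
from avalanche.training.supervised import Naive

# Backbone producing the features to distill, plus a classification head.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
model = FeatureExtractorModel(backbone, nn.Linear(128, 10))

strategy = Naive(
    model=model,
    optimizer=torch.optim.SGD(model.parameters(), lr=0.01),
    criterion=nn.CrossEntropyLoss(),
    train_mb_size=32,
    train_epochs=1,
    plugins=[FeatureDistillationPlugin(alpha=1, mode='cosine')],
)
# Calling strategy.train(experience) on each experience then adds the
# feature distillation term to the loss from the second experience onward.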
Methods
- __init__([alpha, mode]): Adds a distillation loss term on the features of the model (see above).
- after_backward(strategy, *args, **kwargs): Called after criterion.backward() by the BaseTemplate.
- after_eval(strategy, *args, **kwargs): Called after eval by the BaseTemplate.
- after_eval_dataset_adaptation(strategy, ...): Called after eval_dataset_adaptation by the BaseTemplate.
- after_eval_exp(strategy, *args, **kwargs): Called after eval_exp by the BaseTemplate.
- after_eval_forward(strategy, *args, **kwargs): Called after model.forward() by the BaseTemplate.
- after_eval_iteration(strategy, *args, **kwargs): Called after the end of an iteration by the BaseTemplate.
- after_forward(strategy, *args, **kwargs): Called after model.forward() by the BaseTemplate.
- after_train_dataset_adaptation(strategy, ...): Called after train_dataset_adaptation by the BaseTemplate.
- after_training(strategy, *args, **kwargs): Called after train by the BaseTemplate.
- after_training_epoch(strategy, *args, **kwargs): Called after train_epoch by the BaseTemplate.
- after_training_exp(strategy, **kwargs): Saves a copy of the model after each experience and updates self.prev_classes to include the newly learned classes (see the sketch after this list).
- after_training_iteration(strategy, *args, ...): Called after the end of a training iteration by the BaseTemplate.
- after_update(strategy, *args, **kwargs): Called after optimizer.update() by the BaseTemplate.
- before_backward(strategy, **kwargs): Adds the distillation loss (see the sketch after this list).
- before_eval(strategy, *args, **kwargs): Called before eval by the BaseTemplate.
- before_eval_dataset_adaptation(strategy, ...): Called before eval_dataset_adaptation by the BaseTemplate.
- before_eval_exp(strategy, *args, **kwargs): Called before eval_exp by the BaseTemplate.
- before_eval_forward(strategy, *args, **kwargs): Called before model.forward() by the BaseTemplate.
- before_eval_iteration(strategy, *args, **kwargs): Called before the start of an eval iteration by the BaseTemplate.
- before_forward(strategy, *args, **kwargs): Called before model.forward() by the BaseTemplate.
- before_train_dataset_adaptation(strategy, ...): Called before train_dataset_adaptation by the BaseTemplate.
- before_training(strategy, *args, **kwargs): Called before train by the BaseTemplate.
- before_training_epoch(strategy, *args, **kwargs): Called before train_epoch by the BaseTemplate.
- before_training_exp(strategy, *args, **kwargs): Called before train_exp by the BaseTemplate.
- before_training_iteration(strategy, *args, ...): Called before the start of a training iteration by the BaseTemplate.
- before_update(strategy, *args, **kwargs): Called before optimizer.update() by the BaseTemplate.
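The bookkeeping in after_training_exp amounts to a snapshot step. An illustrative sketch, not the library's exact code; the attribute names prev_model and prev_classes follow the summary above, and prev_classes is assumed to be a set:

import copy

def after_training_exp(self, strategy, **kwargs):
    # Freeze a copy of the just-trained model: its features are the
    # distillation targets while training on the next experience.
    self.prev_model = copy.deepcopy(strategy.model)
    # Remember which classes have been learned so far.
    self.prev_classes.update(strategy.experience.classes_in_this_experience)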
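For the default mode='cosine', the term that before_backward adds behaves roughly as follows. A conceptual sketch, not the library's exact implementation; cur_features and old_features stand for the mini-batch features of the current model and of the snapshot saved in after_training_exp:

import torch
import torch.nn.functional as F

def feature_distillation_loss(cur_features: torch.Tensor,
                              old_features: torch.Tensor,
                              alpha: float = 1.0) -> torch.Tensor:
    # 1 - cosine similarity is 0 when the current features are perfectly
    # aligned with the old ones and grows as they drift apart.
    return alpha * (1 - F.cosine_similarity(cur_features, old_features, dim=1)).mean()

# before_backward then adds this term to strategy.loss before the backward pass.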
Attributes
- supports_distributed: A flag describing whether this plugin supports distributed training.