avalanche.training.LearningWithoutForgetting
- class avalanche.training.LearningWithoutForgetting(alpha=1, temperature=2)
Learning Without Forgetting.
The method applies knowledge distillation to mitigate forgetting. The teacher is the model checkpoint saved after training on the previous experience.
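For intuition, the temperature-scaled distillation penalty this kind of method relies on can be sketched as follows. This is a minimal illustration of standard knowledge distillation, not Avalanche's internal implementation; the helper name `distillation_loss` is hypothetical.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both output distributions with the temperature.
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    # Penalize divergence of the student from the frozen teacher.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")

student_logits = torch.randn(8, 10)   # batch of 8 samples, 10 classes
teacher_logits = torch.randn(8, 10)
loss = distillation_loss(student_logits, teacher_logits)
```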
- __init__(alpha=1, temperature=2)
- Parameters:
alpha – distillation hyperparameter. It can be either a single float or a list containing one alpha value per experience.
temperature – softmax temperature for distillation.
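Both forms of alpha can be passed at construction time; the values below are illustrative only:

```python
from avalanche.training import LearningWithoutForgetting

# One shared alpha for every experience.
lwf = LearningWithoutForgetting(alpha=1.0, temperature=2.0)

# Per-experience alpha (assuming a 3-experience stream).
lwf_per_exp = LearningWithoutForgetting(alpha=[0.5, 1.0, 1.5], temperature=2.0)
```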
Methods
- __init__([alpha, temperature])
- post_adapt(agent, exp) – Save a copy of the model after each experience and update self.prev_classes to include the newly learned classes.
- pre_adapt(agent, exp)
- update(experience, model) – Save a copy of the model after each experience and update self.prev_classes to include the newly learned classes.
Attributes
- prev_classes_by_task – In Avalanche, targets of different experiences are not ordered.
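Putting the pieces together, a rough usage sketch: after training on each experience, update snapshots the model so it can serve as the teacher for later experiences. Here `benchmark`, `model`, and `train_on` are hypothetical stand-ins for the user's own benchmark object, network, and training loop, not Avalanche API.

```python
from avalanche.training import LearningWithoutForgetting

lwf = LearningWithoutForgetting(alpha=1.0, temperature=2.0)

for experience in benchmark.train_stream:   # hypothetical benchmark object
    train_on(model, experience)             # hypothetical training helper
    # Snapshot the current model and record the newly learned classes;
    # the snapshot becomes the distillation teacher from now on.
    lwf.update(experience, model)
```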