- class avalanche.training.LearningWithoutForgetting(alpha=1, temperature=2)[source]
Learning Without Forgetting.
The method applies knowledge distillation to mitigate forgetting. The teacher is the model checkpoint after the last experience.
- __init__(alpha=1, temperature=2)[source]
alpha – distillation hyperparameter. It can be either a single float or a list with one alpha per experience.
temperature – softmax temperature for distillation
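For illustration, below is a minimal sketch of the distillation term that LwF adds to the training loss. The helper name `lwf_penalty` is hypothetical and not part of the Avalanche API; it only shows how `alpha` and `temperature` typically enter the objective.

```python
import torch.nn.functional as F

def lwf_penalty(student_logits, teacher_logits, temperature=2.0):
    # Hypothetical helper: soft-target divergence between the current
    # model (student) and the frozen previous checkpoint (teacher),
    # both softened by the distillation temperature.
    log_p = F.log_softmax(student_logits / temperature, dim=1)
    q = F.softmax(teacher_logits / temperature, dim=1)
    # Temperature^2 scaling keeps gradient magnitudes comparable
    # across temperatures, as in standard knowledge distillation.
    return F.kl_div(log_p, q, reduction="batchmean") * (temperature ** 2)

# Combined objective (sketch): task loss plus alpha-weighted distillation.
# loss = F.cross_entropy(student_logits, targets) \
#        + alpha * lwf_penalty(student_logits, teacher_logits, temperature)
```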
After each experience, a copy of the model is saved to serve as the teacher, and self.prev_classes is updated to include the newly learned classes. Note that in Avalanche, targets of different experiences are not ordered (class IDs need not be contiguous or sorted across experiences), so the previously seen classes are tracked explicitly.
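A minimal sketch of this update step, assuming a simple `state` dict in place of the plugin's internal attributes; the function name `update_teacher` is hypothetical:

```python
import copy

def update_teacher(model, experience_classes, state):
    # Hypothetical sketch: freeze a deep copy of the current model to
    # act as the teacher for the next experience.
    state["teacher"] = copy.deepcopy(model).eval()
    for p in state["teacher"].parameters():
        p.requires_grad_(False)
    # Track seen classes as a set, since class IDs across experiences
    # are not guaranteed to be ordered or contiguous.
    state["prev_classes"] = state.get("prev_classes", set()) | set(experience_classes)
```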