avalanche.training.plugins.CWRStarPlugin
- class avalanche.training.plugins.CWRStarPlugin(model, cwr_layer_name=None, freeze_remaining_model=True)[source]
CWR* Strategy.
This plugin does not use task identities.
- __init__(model, cwr_layer_name=None, freeze_remaining_model=True)[source]
- Parameters:
model – the model.
cwr_layer_name – name of the last fully connected layer. Defaults to None, which means that the plugin will attempt an automatic detection.
freeze_remaining_model – If True, the plugin will freeze (set layers to eval mode and disable autograd for their parameters) the entire model except the CWR layer. Defaults to True.
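A minimal sketch, in plain PyTorch, of what the freeze_remaining_model=True behavior amounts to: every module except the CWR layer is put in eval mode and its parameters stop receiving gradients. The model and the choice of last layer here are illustrative, not part of the Avalanche API or its actual implementation.

```python
import torch.nn as nn

# Illustrative model: the last fully connected layer plays the CWR role.
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 4),  # hypothetical CWR layer
)
cwr_layer = model[2]

# Freeze everything, as freeze_remaining_model=True would:
# eval mode (affects dropout/batch norm) plus requires_grad=False.
model.eval()
for p in model.parameters():
    p.requires_grad = False

# Then re-enable training only for the CWR layer.
for p in cwr_layer.parameters():
    p.requires_grad = True
cwr_layer.train()
```

After this, an optimizer built over `model.parameters()` only updates the CWR layer, and the frozen feature extractor behaves deterministically at training time.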
Methods

- __init__(model[, cwr_layer_name, ...]): Initialize the plugin (see the parameter descriptions above).
- after_backward(strategy, *args, **kwargs): Called after criterion.backward() by the BaseTemplate.
- after_eval(strategy, *args, **kwargs): Called after eval by the BaseTemplate.
- after_eval_dataset_adaptation(strategy, ...): Called after eval_dataset_adaptation by the BaseTemplate.
- after_eval_exp(strategy, *args, **kwargs): Called after eval_exp by the BaseTemplate.
- after_eval_forward(strategy, *args, **kwargs): Called after model.forward() by the BaseTemplate.
- after_eval_iteration(strategy, *args, **kwargs): Called after the end of an iteration by the BaseTemplate.
- after_forward(strategy, *args, **kwargs): Called after model.forward() by the BaseTemplate.
- after_train_dataset_adaptation(strategy, ...): Called after train_dataset_adaptation by the BaseTemplate.
- after_training(strategy, *args, **kwargs): Called after train by the BaseTemplate.
- after_training_epoch(strategy, *args, **kwargs): Called after train_epoch by the BaseTemplate.
- after_training_exp(strategy, **kwargs): Called after train_exp by the BaseTemplate.
- after_training_iteration(strategy, *args, ...): Called after the end of a training iteration by the BaseTemplate.
- after_update(strategy, *args, **kwargs): Called after optimizer.update() by the BaseTemplate.
- before_backward(strategy, *args, **kwargs): Called before criterion.backward() by the BaseTemplate.
- before_eval(strategy, *args, **kwargs): Called before eval by the BaseTemplate.
- before_eval_dataset_adaptation(strategy, ...): Called before eval_dataset_adaptation by the BaseTemplate.
- before_eval_exp(strategy, *args, **kwargs): Called before eval_exp by the BaseTemplate.
- before_eval_forward(strategy, *args, **kwargs): Called before model.forward() by the BaseTemplate.
- before_eval_iteration(strategy, *args, **kwargs): Called before the start of an eval iteration by the BaseTemplate.
- before_forward(strategy, *args, **kwargs): Called before model.forward() by the BaseTemplate.
- before_train_dataset_adaptation(strategy, ...): Called before train_dataset_adaptation by the BaseTemplate.
- before_training(strategy, *args, **kwargs): Called before train by the BaseTemplate.
- before_training_epoch(strategy, *args, **kwargs): Called before train_epoch by the BaseTemplate.
- before_training_exp(strategy, **kwargs): Called before train_exp by the BaseTemplate.
- before_training_iteration(strategy, *args, ...): Called before the start of a training iteration by the BaseTemplate.
- before_update(strategy, *args, **kwargs): Called before optimizer.update() by the BaseTemplate.
- consolidate_weights(): Mean-shift consolidation of the target (CWR) layer weights.
- freeze_other_layers(): Freeze all layers of the model except the CWR layer.
- get_cwr_layer(): Return the model's CWR layer (the last fully connected layer).
- reset_weights(cur_clas): Reset the weights for the classes in cur_clas.
- set_consolidate_weights(): Set the consolidated (trained) weights into the model.
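The mean-shift consolidation performed by consolidate_weights can be sketched in plain Python. This is a simplified illustration of the CWR* idea, not the Avalanche implementation: all names are hypothetical, each class weight is a single float rather than a weight vector, and the past-count weighting scheme is an assumption.

```python
# Simplified sketch of CWR*-style weight consolidation (mean-shift).
# `weights` maps class id -> its output-layer weight (a float here for
# brevity; in practice a weight vector). All names are illustrative.

def consolidate(saved, past_count, weights, cur_classes):
    """Mean-shift the current classes' weights and merge into `saved`."""
    # Global average over the classes seen in the current experience.
    globavg = sum(weights[c] for c in cur_classes) / len(cur_classes)
    for c in cur_classes:
        shifted = weights[c] - globavg  # mean-shift: remove the shared bias
        if c in saved:
            # Weighted average with the previously consolidated weight,
            # giving more inertia to classes seen in more experiences.
            k = past_count[c]
            saved[c] = (saved[c] * k + shifted) / (k + 1)
        else:
            saved[c] = shifted
        past_count[c] = past_count.get(c, 0) + 1
    return saved

saved, past = {}, {}
consolidate(saved, past, {0: 1.0, 1: 3.0}, [0, 1])
# after mean-shift: saved[0] == -1.0, saved[1] == 1.0
```

Subtracting the global average keeps the consolidated head comparable across experiences, so classes trained in different experiences can be scored against each other at evaluation time.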
Attributes

- supports_distributed: A flag describing whether this plugin supports distributed training.