avalanche.training.plugins.SynapticIntelligencePlugin

class avalanche.training.plugins.SynapticIntelligencePlugin(si_lambda: Union[float, Sequence[float]], eps: float = 1e-07, excluded_parameters: Optional[Sequence[str]] = None, device: Any = 'as_strategy')[source]

Synaptic Intelligence plugin.

This plugin provides the PyTorch implementation of the Synaptic Intelligence algorithm as described in the paper “Continuous Learning in Single-Incremental-Task Scenarios” (https://arxiv.org/abs/1806.08568).

The algorithm was originally proposed in the paper “Continual Learning Through Synaptic Intelligence” (https://arxiv.org/abs/1703.04200).

This plugin can be attached to existing strategies to achieve a regularization effect.

This plugin requires the strategy loss field to be set before the before_backward callback is invoked; the loss Tensor is then updated in place to apply the S.I. regularization term.
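As a concrete illustration, the regularization term added to the loss in before_backward can be sketched as a quadratic penalty on parameter drift. This is a hypothetical standalone re-implementation of the surrogate loss from the paper, not the plugin's actual code; all names are illustrative.

```python
def si_penalty(params, params_star, importance, si_lambda):
    """Quadratic S.I. surrogate loss: penalize the drift of each
    parameter from its value at the end of the previous experience
    (params_star), weighted by its accumulated importance (omega).
    Scalar sketch; the actual plugin operates on whole tensors."""
    return si_lambda * sum(
        omega * (p - p_star) ** 2
        for p, p_star, omega in zip(params, params_star, importance)
    )

# In before_backward, the strategy loss conceptually becomes:
# strategy.loss = task_loss + si_penalty(...)
```

Parameters that never mattered (zero importance) or never moved contribute nothing to the penalty, so the model stays free to adapt them on new experiences.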

__init__(si_lambda: Union[float, Sequence[float]], eps: float = 1e-07, excluded_parameters: Optional[Sequence[str]] = None, device: Any = 'as_strategy')[source]

Creates an instance of the Synaptic Intelligence plugin.

Parameters
  • si_lambda – Synaptic Intelligence lambda term. If a list is passed, one lambda is used for each experience; if the list has fewer elements than the number of experiences, the last lambda is reused for the remaining experiences.

  • eps – Synaptic Intelligence damping parameter, added to the denominator of the importance update to avoid division by zero for parameters that barely moved.

  • excluded_parameters – Names of the parameters to exclude from the regularization procedure. Defaults to None.

  • device – The device to use to run the S.I. computations. Defaults to “as_strategy”, which means that the device field of the strategy will be used. Using a different device may lead to a performance drop due to the required data transfers.
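The per-experience lambda selection described above can be sketched as follows. This is a hypothetical helper mirroring the documented behavior; the plugin implements the same logic internally.

```python
from typing import Sequence, Union


def lambda_for_experience(
    si_lambda: Union[float, Sequence[float]], exp_counter: int
) -> float:
    """Return the lambda for experience `exp_counter` (0-based).
    A scalar applies to all experiences; a list supplies one lambda
    per experience, with the last value reused once the list runs out."""
    if isinstance(si_lambda, (int, float)):
        return float(si_lambda)
    return float(si_lambda[min(exp_counter, len(si_lambda) - 1)])
```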

Methods

__init__(si_lambda[, eps, ...])

Creates an instance of the Synaptic Intelligence plugin.

after_backward(strategy, *args, **kwargs)

Called after criterion.backward() by the BaseTemplate.

after_eval(strategy, *args, **kwargs)

Called after eval by the BaseTemplate.

after_eval_dataset_adaptation(strategy, ...)

Called after eval_dataset_adaptation by the BaseTemplate.

after_eval_exp(strategy, *args, **kwargs)

Called after eval_exp by the BaseTemplate.

after_eval_forward(strategy, *args, **kwargs)

Called after model.forward() by the BaseTemplate.

after_eval_iteration(strategy, *args, **kwargs)

Called after the end of an iteration by the BaseTemplate.

after_forward(strategy, *args, **kwargs)

Called after model.forward() by the BaseTemplate.

after_train_dataset_adaptation(strategy, ...)

Called after train_dataset_adaptation by the BaseTemplate.

after_training(strategy, *args, **kwargs)

Called after train by the BaseTemplate.

after_training_epoch(strategy, *args, **kwargs)

Called after train_epoch by the BaseTemplate.

after_training_exp(strategy, **kwargs)

Called after train_exp by the BaseTemplate.

after_training_iteration(strategy, **kwargs)

Called after the end of a training iteration by the BaseTemplate.

after_update(strategy, *args, **kwargs)

Called after optimizer.step() by the BaseTemplate.

allowed_parameters(model, excluded_parameters)

before_backward(strategy, **kwargs)

Called before criterion.backward() by the BaseTemplate.

before_eval(strategy, *args, **kwargs)

Called before eval by the BaseTemplate.

before_eval_dataset_adaptation(strategy, ...)

Called before eval_dataset_adaptation by the BaseTemplate.

before_eval_exp(strategy, *args, **kwargs)

Called before eval_exp by the BaseTemplate.

before_eval_forward(strategy, *args, **kwargs)

Called before model.forward() by the BaseTemplate.

before_eval_iteration(strategy, *args, **kwargs)

Called before the start of an eval iteration by the BaseTemplate.

before_forward(strategy, *args, **kwargs)

Called before model.forward() by the BaseTemplate.

before_train_dataset_adaptation(strategy, ...)

Called before train_dataset_adaptation by the BaseTemplate.

before_training(strategy, *args, **kwargs)

Called before train by the BaseTemplate.

before_training_epoch(strategy, *args, **kwargs)

Called before train_epoch by the BaseTemplate.

before_training_exp(strategy, **kwargs)

Called before train_exp by the BaseTemplate.

before_training_iteration(strategy, **kwargs)

Called before the start of a training iteration by the BaseTemplate.

before_update(strategy, *args, **kwargs)

Called before optimizer.step() by the BaseTemplate.

compute_ewc_loss(model, ewc_data, ...[, lambd])

create_syn_data(model, ewc_data, syn_data, ...)

device(strategy)

explode_excluded_parameters(excluded)

Explodes a list of excluded parameters by adding a generic final ".*" wildcard at its end.

extract_grad(model, target, excluded_parameters)

extract_weights(model, target, ...)

init_batch(model, ewc_data, syn_data, ...)

not_excluded_parameters(model, ...)

post_update(model, syn_data, excluded_parameters)

pre_update(model, syn_data, excluded_parameters)

update_ewc_data(net, ewc_data, syn_data, ...)

Attributes

ewc_data

The first dictionary contains the parameters at the loss minimum (the values recorded at the end of the previous experience), while the second one contains the parameter importance values.
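For intuition, the end-of-experience consolidation that fills the importance values can be sketched from the update rule in the original paper. This is a hypothetical scalar illustration with made-up names, not the plugin's internal tensor code.

```python
def consolidate_importance(path_integral, theta_now, theta_prev,
                           omega_prev=0.0, eps=1e-7):
    """Convert the running path integral (the sum of -grad * delta_theta
    accumulated during training on one experience) into per-parameter
    importance, damped by eps so parameters that barely moved do not
    produce a divide-by-zero or an exploding importance value."""
    return omega_prev + path_integral / ((theta_now - theta_prev) ** 2 + eps)
```

The eps term here is the damping parameter exposed in the plugin constructor; larger values shrink the importance assigned to parameters with small displacements.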