avalanche.training.plugins.SynapticIntelligencePlugin

class avalanche.training.plugins.SynapticIntelligencePlugin(si_lambda: Union[float, Sequence[float]], eps: float = 1e-07, excluded_parameters: Optional[Sequence[str]] = None, device: Any = 'as_strategy')[source]

The Synaptic Intelligence plugin.

This is the Synaptic Intelligence PyTorch implementation of the algorithm described in the paper “Continuous Learning in Single-Incremental-Task Scenarios” (https://arxiv.org/abs/1806.08568)

The original implementation has been proposed in the paper “Continual Learning Through Synaptic Intelligence” (https://arxiv.org/abs/1703.04200).

This plugin can be attached to existing strategies to achieve a regularization effect.

This plugin requires the strategy loss field to be set before the before_backward callback is invoked. The loss Tensor is then updated to add the S.I. regularization term.
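Conceptually, the term added to the loss is the quadratic surrogate from the S.I. paper: lambda * sum_i importance_i * (theta_i - theta*_i)^2, where theta* are the parameter values saved at the end of the previous experience and the importances are accumulated along the training trajectory. A plain-Python sketch of that penalty (names are illustrative, not the plugin's internals):

```python
def si_penalty(params, saved_params, importance, si_lambda):
    """Quadratic S.I. surrogate: si_lambda * sum_i importance_i * (p_i - p*_i)^2."""
    return si_lambda * sum(
        omega * (p - p_star) ** 2
        for p, p_star, omega in zip(params, saved_params, importance)
    )

# Only drift on important parameters is penalized: the second parameter
# has not moved, so it contributes nothing despite its high importance.
penalty = si_penalty([1.0, 2.0], [0.0, 2.0], [0.5, 3.0], si_lambda=2.0)
```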

__init__(si_lambda: Union[float, Sequence[float]], eps: float = 1e-07, excluded_parameters: Optional[Sequence[str]] = None, device: Any = 'as_strategy')[source]

Creates an instance of the Synaptic Intelligence plugin.

Parameters
  • si_lambda – Synaptic Intelligence lambda term. If a list is passed, one lambda is used for each experience; if the list has fewer elements than the number of experiences, the last lambda is used for the remaining ones.

  • eps – Synaptic Intelligence damping parameter.

  • excluded_parameters – Names of the parameters to exclude from the S.I. regularization procedure. Defaults to None, which means that no parameter is excluded.

  • device – The device to use to run the S.I. computations. Defaults to “as_strategy”, which means that the device field of the strategy will be used. Using a different device may lead to a performance drop due to the required data transfers.
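The per-experience lambda selection described for si_lambda can be sketched like this (a hypothetical helper, not part of the plugin's API):

```python
def lambda_for_experience(si_lambda, exp_idx):
    """Pick the lambda for experience exp_idx: a scalar applies to all
    experiences, while a list is indexed per experience, reusing its
    last element once exhausted."""
    if isinstance(si_lambda, (list, tuple)):
        return si_lambda[min(exp_idx, len(si_lambda) - 1)]
    return si_lambda
```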

Methods

__init__(si_lambda[, eps, ...])

Creates an instance of the Synaptic Intelligence plugin.

after_backward(strategy, **kwargs)

Called after criterion.backward() by the BaseStrategy.

after_eval(strategy, **kwargs)

Called after eval by the BaseStrategy.

after_eval_dataset_adaptation(strategy, **kwargs)

Called after eval_dataset_adaptation by the BaseStrategy.

after_eval_exp(strategy, **kwargs)

Called after eval_exp by the BaseStrategy.

after_eval_forward(strategy, **kwargs)

Called after model.forward() by the BaseStrategy.

after_eval_iteration(strategy, **kwargs)

Called after the end of an iteration by the BaseStrategy.

after_forward(strategy, **kwargs)

Called after model.forward() by the BaseStrategy.

after_train_dataset_adaptation(strategy, ...)

Called after train_dataset_adaptation by the BaseStrategy.

after_training(strategy, **kwargs)

Called after train by the BaseStrategy.

after_training_epoch(strategy, **kwargs)

Called after train_epoch by the BaseStrategy.

after_training_exp(strategy, **kwargs)

Called after train_exp by the BaseStrategy.

after_training_iteration(strategy, **kwargs)

Called after the end of a training iteration by the BaseStrategy.

after_update(strategy, **kwargs)

Called after optimizer.update() by the BaseStrategy.

allowed_parameters(model, excluded_parameters)

before_backward(strategy, **kwargs)

Called before criterion.backward() by the BaseStrategy.

before_eval(strategy, **kwargs)

Called before eval by the BaseStrategy.

before_eval_dataset_adaptation(strategy, ...)

Called before eval_dataset_adaptation by the BaseStrategy.

before_eval_exp(strategy, **kwargs)

Called before eval_exp by the BaseStrategy.

before_eval_forward(strategy, **kwargs)

Called before model.forward() by the BaseStrategy.

before_eval_iteration(strategy, **kwargs)

Called before the start of an eval iteration by the BaseStrategy.

before_forward(strategy, **kwargs)

Called before model.forward() by the BaseStrategy.

before_train_dataset_adaptation(strategy, ...)

Called before train_dataset_adaptation by the BaseStrategy.

before_training(strategy, **kwargs)

Called before train by the BaseStrategy.

before_training_epoch(strategy, **kwargs)

Called before train_epoch by the BaseStrategy.

before_training_exp(strategy, **kwargs)

Called before train_exp by the BaseStrategy.

before_training_iteration(strategy, **kwargs)

Called before the start of a training iteration by the BaseStrategy.

before_update(strategy, **kwargs)

Called before optimizer.update() by the BaseStrategy.

compute_ewc_loss(model, ewc_data, ...[, lambd])

create_syn_data(model, ewc_data, syn_data, ...)

device(strategy)

explode_excluded_parameters(excluded)

Explodes a list of excluded parameters by adding a generic final ".*" wildcard at its end.
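The wildcard expansion can be approximated as follows (an illustrative sketch, not the exact implementation): for each excluded name, a "name.*" pattern is also generated so that all parameters nested under that module are excluded as well.

```python
def explode_excluded(excluded):
    """For every excluded parameter name, also exclude its children
    by appending a ".*" wildcard pattern."""
    result = set()
    for name in excluded:
        result.add(name)
        if not name.endswith("*"):
            result.add(name + ".*")
    return result
```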

extract_grad(model, target, excluded_parameters)

extract_weights(model, target, ...)

init_batch(model, ewc_data, syn_data, ...)

not_excluded_parameters(model, ...)

post_update(model, syn_data, excluded_parameters)

pre_update(model, syn_data, excluded_parameters)

update_ewc_data(net, ewc_data, syn_data, ...)

Attributes

ewc_data

The first dictionary contains the params at loss minimum while the second one contains the parameter importance.
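The two-dictionary structure described above can be pictured as a pair keyed by parameter name (values are illustrative; the actual storage layout is internal to the plugin):

```python
# ewc_data[0]: parameter values saved at the loss minimum (end of previous experience)
# ewc_data[1]: accumulated per-parameter importance
ewc_data = (
    {"fc1.weight": [0.2, -0.1], "fc1.bias": [0.0]},
    {"fc1.weight": [1.5, 0.3], "fc1.bias": [0.7]},
)

saved, importance = ewc_data
# Both dictionaries cover the same set of (non-excluded) parameters:
same_keys = set(saved) == set(importance)
```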