avalanche.training.plugins.AGEMPlugin

class avalanche.training.plugins.AGEMPlugin(patterns_per_experience: int, sample_size: int)[source]

Average Gradient Episodic Memory Plugin.

A-GEM constrains the gradient computed on the current minibatch by using an external episodic memory of patterns from previous experiences. If the dot product between the current gradient and the reference gradient (the average gradient over a randomly sampled set of memory examples) is negative, the current gradient is projected so that its conflicting component is removed. This plugin does not use task identities.
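
The projection rule described above can be sketched as follows. This is an illustrative re-implementation, not the plugin's actual code; `agem_project` and its flattened-gradient inputs are hypothetical names:

```python
import torch

def agem_project(g: torch.Tensor, g_ref: torch.Tensor) -> torch.Tensor:
    # g: flattened gradient on the current minibatch.
    # g_ref: flattened average gradient on the memory sample.
    dot = torch.dot(g, g_ref)
    if dot < 0:
        # Conflicting directions: remove the component of g
        # that opposes the reference gradient.
        return g - (dot / torch.dot(g_ref, g_ref)) * g_ref
    # Gradients agree: leave g unchanged.
    return g
```

After projection the dot product with `g_ref` is non-negative, so to first order the update cannot increase the loss on the memory sample.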

__init__(patterns_per_experience: int, sample_size: int)[source]
Parameters
  • patterns_per_experience – number of patterns per experience in the memory.

  • sample_size – number of patterns in memory sample when computing reference gradient.

Methods

__init__(patterns_per_experience, sample_size)

param patterns_per_experience

number of patterns per experience in the memory.

after_backward(strategy, **kwargs)

Project the gradient based on the reference gradients.

after_eval(strategy, **kwargs)

Called after eval by the BaseStrategy.

after_eval_dataset_adaptation(strategy, **kwargs)

Called after eval_dataset_adaptation by the BaseStrategy.

after_eval_exp(strategy, **kwargs)

Called after eval_exp by the BaseStrategy.

after_eval_forward(strategy, **kwargs)

Called after model.forward() by the BaseStrategy.

after_eval_iteration(strategy, **kwargs)

Called after the end of an iteration by the BaseStrategy.

after_forward(strategy, **kwargs)

Called after model.forward() by the BaseStrategy.

after_train_dataset_adaptation(strategy, ...)

Called after train_dataset_adaptation by the BaseStrategy.

after_training(strategy, **kwargs)

Called after train by the BaseStrategy.

after_training_epoch(strategy, **kwargs)

Called after train_epoch by the BaseStrategy.

after_training_exp(strategy, **kwargs)

Update replay memory with patterns from current experience.

after_training_iteration(strategy, **kwargs)

Called after the end of a training iteration by the BaseStrategy.

after_update(strategy, **kwargs)

Called after optimizer.update() by the BaseStrategy.

before_backward(strategy, **kwargs)

Called before criterion.backward() by the BaseStrategy.

before_eval(strategy, **kwargs)

Called before eval by the BaseStrategy.

before_eval_dataset_adaptation(strategy, ...)

Called before eval_dataset_adaptation by the BaseStrategy.

before_eval_exp(strategy, **kwargs)

Called before eval_exp by the BaseStrategy.

before_eval_forward(strategy, **kwargs)

Called before model.forward() by the BaseStrategy.

before_eval_iteration(strategy, **kwargs)

Called before the start of an eval iteration by the BaseStrategy.

before_forward(strategy, **kwargs)

Called before model.forward() by the BaseStrategy.

before_train_dataset_adaptation(strategy, ...)

Called before train_dataset_adaptation by the BaseStrategy.

before_training(strategy, **kwargs)

Called before train by the BaseStrategy.

before_training_epoch(strategy, **kwargs)

Called before train_epoch by the BaseStrategy.

before_training_exp(strategy, **kwargs)

Called before train_exp by the BaseStrategy.

before_training_iteration(strategy, **kwargs)

Compute reference gradient on memory sample.
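
A minimal sketch of computing such a reference gradient on a memory sample, assuming a PyTorch model; `reference_gradient` is a hypothetical helper, and the plugin itself stores the result internally between hooks:

```python
import torch
from torch import nn

def reference_gradient(model: nn.Module, loss_fn, xs, ys) -> torch.Tensor:
    # Flattened gradient of the loss on a memory minibatch (xs, ys).
    model.zero_grad()
    loss = loss_fn(model(xs), ys)
    loss.backward()
    return torch.cat([p.grad.detach().flatten() for p in model.parameters()])
```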

before_update(strategy, **kwargs)

Called before optimizer.update() by the BaseStrategy.

sample_from_memory()

Sample a minibatch from memory.

update_memory(dataset)

Update replay memory with patterns from current experience.
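
A simplified stand-in for this memory update, keeping at most `patterns_per_experience` randomly sampled patterns from each experience; the real plugin operates on Avalanche datasets rather than plain lists:

```python
import random

def update_memory(memory: list, experience_data: list,
                  patterns_per_experience: int) -> None:
    # Keep a bounded random subset of the new experience's patterns.
    k = min(patterns_per_experience, len(experience_data))
    memory.extend(random.sample(experience_data, k))
```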