class avalanche.training.plugins.AGEMPlugin(patterns_per_experience: int, sample_size: int)[source]

Average Gradient Episodic Memory Plugin.

AGEM projects the gradient computed on the current minibatch using an external episodic memory of patterns from previous experiences. If the dot product between the current gradient and the (average) gradient of a randomly sampled set of memory examples is negative, the current gradient is projected so that its dot product with the reference gradient becomes zero. This plugin does not use task identities.
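The projection rule described above can be sketched in plain Python (a minimal illustration of the A-GEM update, not Avalanche's actual implementation, which operates on flattened PyTorch parameter gradients):

```python
def agem_project(grad, ref_grad):
    """Project `grad` so it does not conflict with `ref_grad` (A-GEM rule).

    If the dot product g . g_ref is negative, remove the conflicting
    component: g' = g - (g . g_ref / g_ref . g_ref) * g_ref.
    Otherwise the gradient is returned unchanged.
    """
    dot = sum(g * r for g, r in zip(grad, ref_grad))
    if dot >= 0:
        # No conflict with the memory gradient: keep the gradient as-is.
        return list(grad)
    # Remove the component of `grad` that points against `ref_grad`.
    ref_sq = sum(r * r for r in ref_grad)
    return [g - (dot / ref_sq) * r for g, r in zip(grad, ref_grad)]
```

After projection the dot product with the reference gradient is zero, so a small optimizer step no longer increases the loss on the memory sample (to first order).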

__init__(patterns_per_experience: int, sample_size: int)[source]
  • patterns_per_experience – number of patterns per experience in the memory.

  • sample_size – number of patterns in memory sample when computing reference gradient.


__init__(patterns_per_experience, sample_size)

param patterns_per_experience:

number of patterns per experience in the memory.

after_backward(strategy, **kwargs)

Project gradient based on reference gradients

after_eval(strategy, *args, **kwargs)

Called after eval by the BaseTemplate.

after_eval_dataset_adaptation(strategy, ...)

Called after eval_dataset_adaptation by the BaseTemplate.

after_eval_exp(strategy, *args, **kwargs)

Called after eval_exp by the BaseTemplate.

after_eval_forward(strategy, *args, **kwargs)

Called after model.forward() by the BaseTemplate.

after_eval_iteration(strategy, *args, **kwargs)

Called after the end of an eval iteration by the BaseTemplate.

after_forward(strategy, *args, **kwargs)

Called after model.forward() by the BaseTemplate.

after_train_dataset_adaptation(strategy, ...)

Called after train_dataset_adaptation by the BaseTemplate.

after_training(strategy, *args, **kwargs)

Called after train by the BaseTemplate.

after_training_epoch(strategy, *args, **kwargs)

Called after train_epoch by the BaseTemplate.

after_training_exp(strategy, **kwargs)

Update replay memory with patterns from current experience.

after_training_iteration(strategy, *args, ...)

Called after the end of a training iteration by the BaseTemplate.

after_update(strategy, *args, **kwargs)

Called after optimizer.update() by the BaseTemplate.

before_backward(strategy, *args, **kwargs)

Called before criterion.backward() by the BaseTemplate.

before_eval(strategy, *args, **kwargs)

Called before eval by the BaseTemplate.

before_eval_dataset_adaptation(strategy, ...)

Called before eval_dataset_adaptation by the BaseTemplate.

before_eval_exp(strategy, *args, **kwargs)

Called before eval_exp by the BaseTemplate.

before_eval_forward(strategy, *args, **kwargs)

Called before model.forward() by the BaseTemplate.

before_eval_iteration(strategy, *args, **kwargs)

Called before the start of an eval iteration by the BaseTemplate.

before_forward(strategy, *args, **kwargs)

Called before model.forward() by the BaseTemplate.

before_train_dataset_adaptation(strategy, ...)

Called before train_dataset_adaptation by the BaseTemplate.

before_training(strategy, *args, **kwargs)

Called before train by the BaseTemplate.

before_training_epoch(strategy, *args, **kwargs)

Called before train_epoch by the BaseTemplate.

before_training_exp(strategy, *args, **kwargs)

Called before train_exp by the BaseTemplate.

before_training_iteration(strategy, **kwargs)

Compute reference gradient on memory sample.
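The reference gradient mentioned above is the average of the gradients of the examples sampled from memory. A hypothetical helper sketching that averaging step (Avalanche computes this with a single backward pass over the memory minibatch):

```python
def reference_gradient(per_example_grads):
    """Average per-example gradient vectors into one reference gradient.

    `per_example_grads` is a non-empty list of equal-length gradient
    vectors, one per example in the memory sample.
    """
    n = len(per_example_grads)
    dim = len(per_example_grads[0])
    # Component-wise mean over the sampled memory examples.
    return [sum(g[i] for g in per_example_grads) / n for i in range(dim)]
```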

before_update(strategy, *args, **kwargs)

Called before optimizer.update() by the BaseTemplate.


sample_from_memory()

Sample a minibatch from memory.

update_memory(dataset[, num_workers])

Update replay memory with patterns from current experience.
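The memory update keeps at most `patterns_per_experience` patterns from each finished experience. A hypothetical sketch of that behavior (the real method iterates over a dataset with a DataLoader; here the buffer and patterns are plain Python lists):

```python
import random

def update_memory(memory, experience_patterns, patterns_per_experience,
                  rng=None):
    """Append a random fixed-size subset of the new experience's
    patterns to the episodic memory buffer (illustrative sketch)."""
    rng = rng or random.Random()
    # Keep at most `patterns_per_experience` patterns from this experience.
    k = min(patterns_per_experience, len(experience_patterns))
    memory.extend(rng.sample(experience_patterns, k))
    return memory
```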



supports_distributed

A flag describing whether this plugin supports distributed training.