avalanche.training.plugins.GEMPlugin

class avalanche.training.plugins.GEMPlugin(patterns_per_experience: int, memory_strength: float)[source]

Gradient Episodic Memory Plugin. GEM maintains an external episodic memory of patterns from previous experiences and projects the gradient computed on the current minibatch so that its dot product with the reference gradients of all previous experiences remains non-negative. This plugin does not use task identities.
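A minimal sketch of the constraint GEM enforces, independent of Avalanche: the flattened gradient of the current minibatch is compared against one reference gradient per previous experience, and a projection is only needed when some dot product is negative. The function name and shapes below are illustrative assumptions, not the plugin's internals.

    import torch

    def violates_gem_constraint(g: torch.Tensor, G: torch.Tensor) -> bool:
        # g: flattened gradient of the current minibatch, shape (n_params,)
        # G: one flattened reference gradient per previous experience,
        #    shape (n_experiences, n_params)
        return bool((G @ g < 0).any())

    # Toy check: the current gradient conflicts with the second reference
    # gradient, so GEM would project it (see solve_quadprog below);
    # otherwise the gradient is left unchanged.
    g = torch.tensor([1.0, -1.0, 0.5])
    G = torch.tensor([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])
    print(violates_gem_constraint(g, G))  # True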

__init__(patterns_per_experience: int, memory_strength: float)[source]
Parameters
  • patterns_per_experience – number of patterns per experience in the memory.

  • memory_strength – offset to add to the projection direction in order to favour backward transfer (gamma in original paper).
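A hedged usage sketch: the plugin is typically passed to a strategy through its plugins argument so that the BaseStrategy triggers the callbacks listed below. The Naive strategy, its import path, and the hyperparameter values are assumptions and may differ across Avalanche versions.

    import torch
    from torch.nn import CrossEntropyLoss
    from torch.optim import SGD
    from avalanche.training.plugins import GEMPlugin
    from avalanche.training.strategies import Naive  # assumed import path

    model = torch.nn.Linear(32, 10)  # placeholder model for illustration
    gem = GEMPlugin(patterns_per_experience=256, memory_strength=0.5)

    strategy = Naive(
        model,
        SGD(model.parameters(), lr=0.01),
        CrossEntropyLoss(),
        train_mb_size=32,
        plugins=[gem],  # BaseStrategy invokes the plugin callbacks listed below
    )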

Methods

__init__(patterns_per_experience, memory_strength)

param patterns_per_experience: number of patterns per experience in the memory.

after_backward(strategy, **kwargs)

Project the current gradient based on the reference gradients from memory.

after_eval(strategy, **kwargs)

Called after eval by the BaseStrategy.

after_eval_dataset_adaptation(strategy, **kwargs)

Called after eval_dataset_adaptation by the BaseStrategy.

after_eval_exp(strategy, **kwargs)

Called after eval_exp by the BaseStrategy.

after_eval_forward(strategy, **kwargs)

Called after model.forward() by the BaseStrategy.

after_eval_iteration(strategy, **kwargs)

Called after the end of an eval iteration by the BaseStrategy.

after_forward(strategy, **kwargs)

Called after model.forward() by the BaseStrategy.

after_train_dataset_adaptation(strategy, ...)

Called after train_dataset_adaptation by the BaseStrategy.

after_training(strategy, **kwargs)

Called after train by the BaseStrategy.

after_training_epoch(strategy, **kwargs)

Called after train_epoch by the BaseStrategy.

after_training_exp(strategy, **kwargs)

Update the episodic memory with patterns from the current experience.

after_training_iteration(strategy, **kwargs)

Called after the end of a training iteration by the BaseStrategy.

after_update(strategy, **kwargs)

Called after optimizer.update() by the BaseStrategy.

before_backward(strategy, **kwargs)

Called before criterion.backward() by the BaseStrategy.

before_eval(strategy, **kwargs)

Called before eval by the BaseStrategy.

before_eval_dataset_adaptation(strategy, ...)

Called before eval_dataset_adaptation by the BaseStrategy.

before_eval_exp(strategy, **kwargs)

Called before eval_exp by the BaseStrategy.

before_eval_forward(strategy, **kwargs)

Called before model.forward() by the BaseStrategy.

before_eval_iteration(strategy, **kwargs)

Called before the start of an eval iteration by the BaseStrategy.

before_forward(strategy, **kwargs)

Called before model.forward() by the BaseStrategy.

before_train_dataset_adaptation(strategy, ...)

Called before train_dataset_adaptation by the BaseStrategy.

before_training(strategy, **kwargs)

Called before train by the BaseStrategy.

before_training_epoch(strategy, **kwargs)

Called before train_epoch by the BaseStrategy.

before_training_exp(strategy, **kwargs)

Called before train_exp by the BaseStrategy.

before_training_iteration(strategy, **kwargs)

Compute gradient constraints on previous memory samples from all experiences.

before_update(strategy, **kwargs)

Called before optimizer.update() by the BaseStrategy.

solve_quadprog(g)

Solve the quadratic program given the current gradient g and the matrix G of reference gradients from previous tasks (a sketch follows the method list).

update_memory(dataset, t, batch_size)

Update replay memory with patterns from current experience.
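A sketch of the quadratic program behind solve_quadprog, mirroring the original GEM reference code rather than Avalanche's internals (the quadprog package, the eps regularizer, and the variable names are assumptions). Rows of memories hold the reference gradients computed by before_training_iteration on the stored patterns; memory_strength is the margin added to the constraints to favour backward transfer.

    import numpy as np
    import quadprog

    def project_gradient(g, memories, memory_strength=0.5, eps=1e-3):
        # Dual QP: minimize 0.5 * v^T (G G^T) v + (G g)^T v
        # subject to v >= memory_strength, then recover g~ = G^T v* + g.
        t = memories.shape[0]
        P = memories @ memories.T              # Gram matrix of reference gradients
        P = 0.5 * (P + P.T) + np.eye(t) * eps  # symmetrize and regularize
        q = -(memories @ g)                    # quadprog minimizes 0.5 v^T P v - q^T v
        C = np.eye(t)                          # constraints C^T v >= b, i.e. v >= memory_strength
        b = np.full(t, memory_strength)
        v = quadprog.solve_qp(P, q, C, b)[0]   # first element is the dual solution
        return memories.T @ v + g              # projected gradient

    g = np.array([1.0, -1.0, 0.5])             # current gradient (conflicts with row 2)
    G = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])            # reference gradients from memory
    print(project_gradient(g, G, memory_strength=0.0))
    # ~[1.0, 0.0, 0.5]: the negative dot product is removed (up to the eps regularizer)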