class avalanche.training.plugins.GenerativeReplayPlugin(generator_strategy=None, untrained_solver: bool = True, replay_size: int | None = None, increasing_replay_size: bool = False)[source]

Experience generative replay plugin.

Updates the current mbatch of a strategy before each training iteration by sampling the generator model and concatenating the replay data to the current batch.

In this version of the plugin, the number of replay samples grows with each new experience. An alternative way to implement the algorithm is to weight the loss function, giving the replayed data more importance as the number of experiences increases; this will be offered as an option in a future release.

  • generator_strategy – In case the plugin is applied to a non-generative model (e.g. a simple classifier), this should contain an Avalanche strategy for a model that implements a ‘generate’ method (see avalanche.models.generator.Generator). Defaults to None.

  • untrained_solver – if True, we assume training starts at the beginning of a continual learning sequence and add replay data only from the second experience onwards; otherwise, generative replay data is sampled and added before training the first experience as well. Defaults to True.

  • replay_size – The batch size of the replay samples added to each data batch. By default, each data batch is matched with an equal number of replay samples.

  • increasing_replay_size – If set to True, the amount of replay data added to each data batch doubles with every new experience. The effect is that older experiences gradually gain importance in the final loss.

__init__(generator_strategy=None, untrained_solver: bool = True, replay_size: int | None = None, increasing_replay_size: bool = False)[source]



__init__([generator_strategy, ...])


after_backward(strategy, *args, **kwargs)

Called after criterion.backward() by the BaseTemplate.

after_eval(strategy, *args, **kwargs)

Called after eval by the BaseTemplate.

after_eval_dataset_adaptation(strategy, ...)

Called after eval_dataset_adaptation by the BaseTemplate.

after_eval_exp(strategy, *args, **kwargs)

Called after eval_exp by the BaseTemplate.

after_eval_forward(strategy, *args, **kwargs)

Called after model.forward() by the BaseTemplate.

after_eval_iteration(strategy, *args, **kwargs)

Called after the end of an iteration by the BaseTemplate.

after_forward(strategy, *args, **kwargs)

Called after model.forward() by the BaseTemplate.

after_train_dataset_adaptation(strategy, ...)

Called after train_dataset_adaptation by the BaseTemplate.

after_training(strategy, *args, **kwargs)

Called after train by the BaseTemplate.

after_training_epoch(strategy, *args, **kwargs)

Called after train_epoch by the BaseTemplate.

after_training_exp(strategy[, num_workers, ...])

Set the untrained_solver flag to False after the first experience, so that training starts with replay data from the second experience onwards.

after_training_iteration(strategy, *args, ...)

Called after the end of a training iteration by the BaseTemplate.

after_update(strategy, *args, **kwargs)

Called after optimizer.update() by the BaseTemplate.

before_backward(strategy, *args, **kwargs)

Called before criterion.backward() by the BaseTemplate.

before_eval(strategy, *args, **kwargs)

Called before eval by the BaseTemplate.

before_eval_dataset_adaptation(strategy, ...)

Called before eval_dataset_adaptation by the BaseTemplate.

before_eval_exp(strategy, *args, **kwargs)

Called before eval_exp by the BaseTemplate.

before_eval_forward(strategy, *args, **kwargs)

Called before model.forward() by the BaseTemplate.

before_eval_iteration(strategy, *args, **kwargs)

Called before the start of an eval iteration by the BaseTemplate.

before_forward(strategy, *args, **kwargs)

Called before model.forward() by the BaseTemplate.

before_train_dataset_adaptation(strategy, ...)

Called before train_dataset_adaptation by the BaseTemplate.

before_training(strategy, *args, **kwargs)

Checks whether a user-defined external generator is used, or whether the strategy's own model serves as the generator.

before_training_epoch(strategy, *args, **kwargs)

Called before train_epoch by the BaseTemplate.

before_training_exp(strategy[, num_workers, ...])

Make deep copies of the generator and solver before training on a new experience.

before_training_iteration(strategy, **kwargs)

Generate and append replay data to the current minibatch before each training iteration.
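
The augmentation step can be sketched like this (a plain-Python illustration with a hypothetical `generate` callable; the real plugin operates on the strategy's tensor minibatch):

```python
def augment_minibatch(batch_x, batch_y, generate, n_replay):
    """Append n_replay generated samples to the current minibatch.

    generate is a stand-in for the generator model's 'generate'
    method; it returns replay inputs and their (pseudo-)labels.
    """
    replay_x, replay_y = generate(n_replay)
    return batch_x + replay_x, batch_y + replay_y
```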

before_update(strategy, *args, **kwargs)

Called before optimizer.update() by the BaseTemplate.



supports_distributed

A flag describing whether this plugin supports distributed training.