avalanche.training.GenerativeReplay

class avalanche.training.GenerativeReplay(model: ~torch.nn.modules.module.Module, optimizer: ~torch.optim.optimizer.Optimizer, criterion=CrossEntropyLoss(), train_mb_size: int = 1, train_epochs: int = 1, eval_mb_size: ~typing.Optional[int] = None, device=None, plugins: ~typing.Optional[~typing.List[~avalanche.core.SupervisedPlugin]] = None, evaluator: ~avalanche.training.plugins.evaluation.EvaluationPlugin = <avalanche.training.plugins.evaluation.EvaluationPlugin object>, eval_every=-1, generator_strategy: ~typing.Optional[~avalanche.training.templates.base.BaseTemplate] = None, replay_size: ~typing.Optional[int] = None, increasing_replay_size: bool = False, **base_kwargs)[source]

Generative Replay Strategy

This implements Deep Generative Replay for a Scholar consisting of a Solver and Generator as described in https://arxiv.org/abs/1705.08690.

The model parameter should contain the solver. Optionally, a generator wrapped in a trainable strategy can be passed via the generator_strategy parameter. By default, a simple VAE is used as the generator.

If the generator itself is the model to be trained, simply add GenerativeReplayPlugin() to the plugins list when instantiating the generator's strategy.

See GenerativeReplayPlugin for more details. This strategy does not use task identities.
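
A minimal usage sketch. SplitMNIST and SimpleMLP are standard Avalanche components; the hyperparameter values are illustrative only and may need adjusting for your Avalanche version:

    import torch
    from torch.nn import CrossEntropyLoss
    from torch.optim import Adam

    from avalanche.benchmarks import SplitMNIST
    from avalanche.models import SimpleMLP
    from avalanche.training import GenerativeReplay

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    benchmark = SplitMNIST(n_experiences=5, seed=0)

    # The solver is the classifier. With generator_strategy=None, a simple
    # VAE generator is created and trained internally alongside the solver.
    solver = SimpleMLP(num_classes=10)
    strategy = GenerativeReplay(
        solver,
        Adam(solver.parameters(), lr=1e-3),
        criterion=CrossEntropyLoss(),
        train_mb_size=64,
        train_epochs=1,
        eval_mb_size=64,
        device=device,
    )

    for experience in benchmark.train_stream:
        strategy.train(experience)
        strategy.eval(benchmark.test_stream)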

__init__(model: ~torch.nn.modules.module.Module, optimizer: ~torch.optim.optimizer.Optimizer, criterion=CrossEntropyLoss(), train_mb_size: int = 1, train_epochs: int = 1, eval_mb_size: ~typing.Optional[int] = None, device=None, plugins: ~typing.Optional[~typing.List[~avalanche.core.SupervisedPlugin]] = None, evaluator: ~avalanche.training.plugins.evaluation.EvaluationPlugin = <avalanche.training.plugins.evaluation.EvaluationPlugin object>, eval_every=-1, generator_strategy: ~typing.Optional[~avalanche.training.templates.base.BaseTemplate] = None, replay_size: ~typing.Optional[int] = None, increasing_replay_size: bool = False, **base_kwargs)[source]

Creates an instance of Generative Replay Strategy for a solver-generator pair.

Parameters
  • model – The solver model.

  • optimizer – The optimizer to use.

  • criterion – The loss criterion to use.

  • train_mb_size – The train minibatch size. Defaults to 1.

  • train_epochs – The number of training epochs. Defaults to 1.

  • eval_mb_size – The eval minibatch size. Defaults to None, in which case train_mb_size is used.

  • device – The device to use. Defaults to None (cpu).

  • plugins – Plugins to be added. Defaults to None.

  • evaluator – (optional) instance of EvaluationPlugin for logging and metric computations.

  • eval_every – the frequency of the calls to eval inside the training loop. -1 disables the evaluation. 0 means eval is called only at the end of the learning experience. Values >0 mean that eval is called every eval_every epochs and at the end of the learning experience.

  • generator_strategy – A trainable strategy with a generative model, which employs GenerativeReplayPlugin. Defaults to None, in which case a default VAE generator strategy is created internally (see the sketch after this parameter list).

  • replay_size – Number of replay samples added to each training mini-batch. Defaults to None, in which case the replay batch matches the size of the current data batch.

  • increasing_replay_size – If True, the amount of replay data added per mini-batch grows with the number of experiences seen so far. Defaults to False.

  • **base_kwargs – any additional BaseTemplate constructor arguments.
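
A sketch of passing an explicit generator_strategy. It assumes the MlpVAE model from avalanche.models.generator, the VAETraining strategy, and GenerativeReplayPlugin; exact names and signatures may differ between Avalanche versions, and solver/device are the objects from the sketch above:

    from torch.optim import Adam

    from avalanche.models.generator import MlpVAE
    from avalanche.training import VAETraining, GenerativeReplay
    from avalanche.training.plugins import GenerativeReplayPlugin

    # Generator: a VAE with its own training strategy. The plugin makes the
    # generator rehearse its own samples, so it does not forget either.
    generator = MlpVAE((1, 28, 28), nhid=2, device=device)
    generator_strategy = VAETraining(
        generator,
        Adam(generator.parameters(), lr=1e-3),
        train_mb_size=64,
        train_epochs=1,
        device=device,
        plugins=[GenerativeReplayPlugin()],
    )

    strategy = GenerativeReplay(
        solver,
        Adam(solver.parameters(), lr=1e-3),
        train_mb_size=64,
        train_epochs=1,
        device=device,
        generator_strategy=generator_strategy,
    )

    # If the generator itself is the model being trained (no separate solver),
    # just add GenerativeReplayPlugin() to that strategy's plugins instead.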

Methods

__init__(model, optimizer[, criterion, ...])

Creates an instance of Generative Replay Strategy for a solver-generator pair.

backward()

Run the backward pass.

criterion()

Loss function.

eval(exp_list, **kwargs)

Evaluates the current model on a series of experiences and returns the last recorded value for each metric.

eval_dataset_adaptation(**kwargs)

Initialize self.adapted_dataset.

eval_epoch(**kwargs)

Evaluation loop over the current self.dataloader.

forward()

Compute the model's output given the current mini-batch.

make_eval_dataloader([num_workers, ...])

Initializes the eval data loader. num_workers sets how many subprocesses to use for data loading (0 means the data is loaded in the main process; default: 0). pin_memory, if True, makes the data loader copy Tensors into CUDA pinned memory before returning them (defaults to True).

make_optimizer()

Optimizer initialization.

make_train_dataloader([num_workers, ...])

Data loader initialization.

model_adaptation([model])

Adapts the model to the current data.

optimizer_step()

Execute the optimizer step (weights update).

stop_training()

Signals to stop training at the next iteration.

train(experiences[, eval_streams])

Training loop (see the usage sketch after this methods list).

train_dataset_adaptation(**kwargs)

Initialize self.adapted_dataset.

training_epoch(**kwargs)

Training epoch.
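
A short sketch of the training loop with periodic evaluation, assuming eval_every was set at construction time and reusing the benchmark/strategy objects from the sketches above:

    # With eval_every >= 0, the streams passed via eval_streams are evaluated
    # during training according to the eval_every schedule.
    for experience in benchmark.train_stream:
        strategy.train(experience, eval_streams=[benchmark.test_stream])

    # eval() returns the last recorded value for each metric as a dictionary.
    results = strategy.eval(benchmark.test_stream)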

Attributes

is_eval

True if the strategy is in evaluation mode.

mb_task_id

Current mini-batch task labels.

mb_x

Current mini-batch input.

mb_y

Current mini-batch target.
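
The mini-batch attributes are mostly useful inside plugin callbacks or overridden methods. A small, hypothetical plugin sketch (BatchShapeLogger is not part of Avalanche; only the SupervisedPlugin callback API is assumed):

    from avalanche.core import SupervisedPlugin

    class BatchShapeLogger(SupervisedPlugin):
        """Hypothetical plugin that inspects the strategy's current mini-batch."""

        def after_training_iteration(self, strategy, **kwargs):
            # mb_x / mb_y / mb_task_id hold the current mini-batch tensors;
            # is_eval is False while the training loop is running.
            print(strategy.mb_x.shape, strategy.mb_y.shape,
                  strategy.mb_task_id, strategy.is_eval)

    # Pass an instance through the plugins argument of GenerativeReplay.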