avalanche.training.templates.SupervisedTemplate

class avalanche.training.templates.SupervisedTemplate(model: torch.nn.Module, optimizer: torch.optim.Optimizer, criterion=CrossEntropyLoss(), train_mb_size: int = 1, train_epochs: int = 1, eval_mb_size: Optional[int] = 1, device='cpu', plugins: Optional[Sequence[avalanche.core.SupervisedPlugin]] = None, evaluator=<default EvaluationPlugin instance>, eval_every=-1, peval_mode='epoch')

Base class for continual learning strategies.

SupervisedTemplate is the superclass of all task-based continual learning strategies. It implements a basic training loop and a callback system that allows executing code at each experience of the training loop. Plugins can be used to implement callbacks that augment the training loop with additional behavior (e.g. a memory buffer for replay).
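As an illustration, a minimal plugin might look like the sketch below (PrintPlugin is a made-up name; before_training_exp is one of the callbacks exposed by SupervisedPlugin):

    from avalanche.core import SupervisedPlugin

    class PrintPlugin(SupervisedPlugin):
        """Toy plugin: report the start of each training experience."""

        def before_training_exp(self, strategy, *args, **kwargs):
            # `strategy` is the SupervisedTemplate instance, so a callback
            # can read (or modify) its state: model, optimizer, loss, ...
            print("Starting experience", strategy.experience.current_experience)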

Scenarios

This strategy supports several continual learning scenarios:

  • class-incremental scenarios (no task labels)

  • multi-task scenarios, where task labels are provided

  • multi-incremental scenarios, where the same task may be revisited

The exact scenario depends on the data stream and whether it provides task labels.

Training loop

The training loop is organized as follows:

train
    train_exp  # for each experience
        train_dataset_adaptation
        make_train_dataloader
        train_epoch  # for each epoch
            # forward
            # backward
            # model update
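Each step in this sketch corresponds to an overridable method of the template. The default epoch loop has roughly the following shape (a simplification: the real training_epoch also fires plugin callbacks around every step, and _unpack_minibatch is an internal helper that populates mb_x, mb_y and mb_task_id):

    def training_epoch(self, **kwargs):
        for self.mbatch in self.dataloader:
            self._unpack_minibatch()
            self.optimizer.zero_grad()
            self.mb_output = self.forward()  # forward
            self.loss = self.criterion()     # loss
            self.backward()                  # backward
            self.optimizer_step()            # model update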

Evaluation loop

The evaluation loop is organized as follows:

eval
    eval_exp  # for each experience
        eval_dataset_adaptation
        make_eval_dataloader
        eval_epoch  # for each epoch
            # forward
            # (no backward pass or model update during evaluation)
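Evaluation runs no backward pass: the epoch loop reduces to roughly the following (again omitting the plugin callbacks around each step):

    def eval_epoch(self, **kwargs):
        for self.mbatch in self.dataloader:
            self._unpack_minibatch()
            self.mb_output = self.forward()  # forward only
            self.loss = self.criterion()     # loss, recorded for metrics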
__init__(model: torch.nn.Module, optimizer: torch.optim.Optimizer, criterion=CrossEntropyLoss(), train_mb_size: int = 1, train_epochs: int = 1, eval_mb_size: Optional[int] = 1, device='cpu', plugins: Optional[Sequence[avalanche.core.SupervisedPlugin]] = None, evaluator=<default EvaluationPlugin instance>, eval_every=-1, peval_mode='epoch')

Init.

Parameters
  • model – PyTorch model.

  • optimizer – PyTorch optimizer.

  • criterion – loss function.

  • train_mb_size – mini-batch size for training.

  • train_epochs – number of training epochs.

  • eval_mb_size – mini-batch size for eval.

  • device – PyTorch device where the model will be allocated.

  • plugins – (optional) list of SupervisedPlugin instances.

  • evaluator – (optional) instance of EvaluationPlugin for logging and metric computations. None to remove logging.

  • eval_every – the frequency of the calls to eval inside the training loop. -1 disables the evaluation. 0 means eval is called only at the end of the learning experience. Values >0 mean that eval is called every eval_every epochs and at the end of the learning experience.

  • peval_mode – one of {‘epoch’, ‘iteration’}. Decides whether the periodic evaluation during training should execute every eval_every epochs or iterations (Default=’epoch’).
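For illustration, a minimal instantiation could look as follows (SimpleMLP from avalanche.models serves purely as a stand-in model; any nn.Module works):

    import torch
    from torch.nn import CrossEntropyLoss
    from torch.optim import SGD

    from avalanche.models import SimpleMLP
    from avalanche.training.templates import SupervisedTemplate

    model = SimpleMLP(num_classes=10)
    strategy = SupervisedTemplate(
        model=model,
        optimizer=SGD(model.parameters(), lr=0.01),
        criterion=CrossEntropyLoss(),
        train_mb_size=32,
        train_epochs=2,
        eval_mb_size=64,
        device="cuda" if torch.cuda.is_available() else "cpu",
        eval_every=1,        # periodic eval after every training epoch
        peval_mode="epoch",
    )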

Methods

__init__(model, optimizer[, criterion, ...])

Init.

backward()

Run the backward pass.

criterion()

Loss function.

eval(exp_list, **kwargs)

Evaluate the current model on a series of experiences and return the last recorded value for each metric.

eval_dataset_adaptation(**kwargs)

Initialize self.adapted_dataset.

eval_epoch(**kwargs)

Evaluation loop over the current self.dataloader.

forward()

Compute the model's output given the current mini-batch.

make_eval_dataloader([num_workers, ...])

Initializes the eval data loader.

make_optimizer()

Optimizer initialization.

make_train_dataloader([num_workers, ...])

Data loader initialization.

model_adaptation([model])

Adapts the model to the current data.

optimizer_step()

Execute the optimizer step (weights update).

stop_training()

Signals to stop training at the next iteration.

train(experiences[, eval_streams])

Training loop.

train_dataset_adaptation(**kwargs)

Initialize self.adapted_dataset.

training_epoch(**kwargs)

Training epoch.
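Putting train and eval together, a typical driver iterates the streams of a benchmark (continuing the instantiation sketch above; SplitMNIST is used only as an example benchmark):

    from avalanche.benchmarks.classic import SplitMNIST

    benchmark = SplitMNIST(n_experiences=5)
    for experience in benchmark.train_stream:
        strategy.train(experience)                      # one experience at a time
        metrics = strategy.eval(benchmark.test_stream)  # last value of each metric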

Attributes

is_eval

True if the strategy is in evaluation mode.

mb_task_id

Current mini-batch task labels.

mb_x

Current mini-batch input.

mb_y

Current mini-batch target.

adapted_dataset

Data used to train.
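These attributes are meant to be read inside overridden loop steps or plugin callbacks. A hypothetical subclass, just to show the access pattern (note that for brevity it bypasses the task-label routing the default forward performs for multi-task models):

    import torch
    from avalanche.training.templates import SupervisedTemplate

    class NoisyInputTemplate(SupervisedTemplate):
        """Hypothetical subclass: perturb the mini-batch input."""

        def forward(self):
            # self.mb_x holds the input tensor of the current mini-batch.
            noisy_x = self.mb_x + 0.01 * torch.randn_like(self.mb_x)
            return self.model(noisy_x)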