avalanche.training.DER
- class avalanche.training.DER(*, model: torch.nn.modules.module.Module, optimizer: torch.optim.optimizer.Optimizer, criterion: torch.nn.modules.module.Module | typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor] = CrossEntropyLoss(), mem_size: int = 200, batch_size_mem: int | None = None, alpha: float = 0.1, beta: float = 0.5, train_mb_size: int = 1, train_epochs: int = 1, eval_mb_size: int | None = 1, device: str | torch.device = 'cpu', plugins: typing.List[avalanche.core.SupervisedPlugin] | None = None, evaluator: avalanche.training.plugins.evaluation.EvaluationPlugin | typing.Callable[[], avalanche.training.plugins.evaluation.EvaluationPlugin] = <function default_evaluator>, eval_every=-1, peval_mode='epoch', **kwargs)[source]
Implements the DER and DER++ strategies from the "Dark Experience for General Continual Learning" paper, Buzzega et al., https://arxiv.org/abs/2004.07211
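In sketch form, the loss being optimized is the DER++ objective from the paper (DER is the special case β = 0): the usual CE loss on the current mini-batch (x, y), plus an α-weighted MSE term matching the network's logits on a buffer sample x′ against its stored logits z′, plus a β-weighted CE term on a second buffer sample (x″, y″):

\mathcal{L} = \mathrm{CE}(f_\theta(x), y) + \alpha \,\lVert f_\theta(x') - z' \rVert_2^2 + \beta \,\mathrm{CE}(f_\theta(x''), y'')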
- __init__(*, model: torch.nn.modules.module.Module, optimizer: torch.optim.optimizer.Optimizer, criterion: torch.nn.modules.module.Module | typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor] = CrossEntropyLoss(), mem_size: int = 200, batch_size_mem: int | None = None, alpha: float = 0.1, beta: float = 0.5, train_mb_size: int = 1, train_epochs: int = 1, eval_mb_size: int | None = 1, device: str | torch.device = 'cpu', plugins: typing.List[avalanche.core.SupervisedPlugin] | None = None, evaluator: avalanche.training.plugins.evaluation.EvaluationPlugin | typing.Callable[[], avalanche.training.plugins.evaluation.EvaluationPlugin] = <function default_evaluator>, eval_every=-1, peval_mode='epoch', **kwargs)[source]
- Parameters:
model – PyTorch model.
optimizer – PyTorch optimizer.
criterion – loss function.
mem_size – int : fixed size of the replay buffer.
batch_size_mem – int : size of the batch sampled from the buffer.
alpha – float : hyperparameter weighting the MSE loss between the current logits and the logits stored in the buffer.
beta – float : hyperparameter weighting the CE loss on buffer labels; when greater than 0, DER++ is used instead of DER.
transforms – Callable : transformations to apply to both the dataset and the buffer data, on top of the already existing test transformations; any supplementary transformations applied to the input data are overridden by this argument.
train_mb_size – mini-batch size for training.
train_epochs – number of training epochs.
eval_mb_size – mini-batch size for eval.
device – PyTorch device where the model will be allocated.
plugins – (optional) list of SupervisedPlugins.
evaluator – (optional) instance of EvaluationPlugin for logging and metric computations. None to remove logging.
eval_every – the frequency of the calls to eval inside the training loop. -1 disables the evaluation. 0 means eval is called only at the end of the learning experience. Values >0 mean that eval is called every eval_every epochs or iterations (depending on peval_mode) and at the end of the learning experience.
peval_mode – one of {'epoch', 'iteration'}. Decides whether the periodic evaluation during training should execute every eval_every epochs or iterations (Default='epoch').
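A minimal usage sketch (the SplitMNIST benchmark, SimpleMLP model, and hyperparameter values below are illustrative assumptions, not prescribed by this class):

    import torch
    from avalanche.benchmarks.classic import SplitMNIST
    from avalanche.models import SimpleMLP
    from avalanche.training import DER

    benchmark = SplitMNIST(n_experiences=5)  # assumed benchmark
    model = SimpleMLP(num_classes=10)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # beta > 0 adds the CE replay term, i.e. DER++; beta = 0 gives plain DER.
    strategy = DER(
        model=model,
        optimizer=optimizer,
        mem_size=500,
        alpha=0.1,
        beta=0.5,
        train_mb_size=32,
        train_epochs=1,
        eval_mb_size=32,
        device="cuda" if torch.cuda.is_available() else "cpu",
    )

    for experience in benchmark.train_stream:
        strategy.train(experience)
        strategy.eval(benchmark.test_stream)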
Methods
__init__(*, model, optimizer[, criterion, ...]) – Initialize the strategy; see the parameter list above.
backward() – Run the backward pass.
check_model_and_optimizer([...])
criterion() – Loss function for supervised problems.
eval(exp_list, **kwargs) – Evaluate the current model on a series of experiences and return the last recorded value for each metric.
eval_dataset_adaptation(**kwargs) – Initialize self.adapted_dataset.
eval_epoch(**kwargs) – Evaluation loop over the current self.dataloader.
forward() – Compute the model's output given the current mini-batch.
make_eval_dataloader([num_workers, shuffle, ...]) – Initializes the eval data loader.
make_optimizer([reset_optimizer_state, ...]) – Optimizer initialization.
make_train_dataloader([num_workers, ...]) – Data loader initialization.
model_adaptation([model]) – Adapts the model to the current data.
optimizer_step() – Execute the optimizer step (weights update).
stop_training() – Signals to stop training at the next iteration.
train(experiences[, eval_streams]) – Training loop.
train_dataset_adaptation(**kwargs) – Initialize self.adapted_dataset.
training_epoch(**kwargs) – Training epoch.
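A sketch of how train and eval compose with periodic evaluation, continuing the assumed benchmark and strategy from the earlier example; passing eval_streams to train lets the periodic evaluation controlled by eval_every and peval_mode run on those streams during training:

    # With eval_every=1 and peval_mode='epoch', the strategy would also
    # evaluate `eval_streams` after every training epoch.
    for experience in benchmark.train_stream:
        strategy.train(experience, eval_streams=[benchmark.test_stream])
        metrics = strategy.eval(benchmark.test_stream)  # last recorded values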
Attributes
is_eval – True if the strategy is in evaluation mode.
mb_task_id – Current mini-batch task labels.
mb_x – Current mini-batch input.
mb_y – Current mini-batch target.
mbatch – Current mini-batch.
mb_output – Model's output computed on the current mini-batch.
dataloader – Dataloader.
optimizer – PyTorch optimizer.
loss – Loss of the current mini-batch.
train_epochs – Number of training epochs.
train_mb_size – Training mini-batch size.
eval_mb_size – Eval mini-batch size.
retain_graph – Retain graph when calling loss.backward().
evaluator – EvaluationPlugin used for logging and metric computations.
clock – Incremental counters for strategy events.
adapted_dataset – Data used to train.
model – PyTorch model.
device – PyTorch device where the model will be allocated.
plugins – List of SupervisedPlugins.
experience – Current experience.
is_training – True if the strategy is in training mode.
current_eval_stream – Current evaluation stream.
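Many of these attributes are meant to be read from plugin callbacks. A minimal sketch of a custom plugin (the class name and printed fields are illustrative) that inspects strategy state after each training iteration:

    from avalanche.core import SupervisedPlugin

    class IterationInspector(SupervisedPlugin):
        # Hypothetical plugin: reads the strategy attributes documented above.
        def after_training_iteration(self, strategy, *args, **kwargs):
            print(
                f"task ids: {strategy.mb_task_id.unique().tolist()}, "
                f"batch size: {strategy.mb_x.shape[0]}, "
                f"loss: {strategy.loss.item():.4f}"
            )

    # Passed at construction time, e.g. DER(..., plugins=[IterationInspector()]).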