avalanche.training.ICaRL
- class avalanche.training.ICaRL(feature_extractor: torch.nn.modules.module.Module, classifier: torch.nn.modules.module.Module, optimizer: torch.optim.optimizer.Optimizer, memory_size, buffer_transform, fixed_memory, criterion=<avalanche.training.losses.ICaRLLossPlugin object>, train_mb_size: int = 1, train_epochs: int = 1, eval_mb_size: typing.Optional[int] = None, device=None, plugins: typing.Optional[typing.List[avalanche.core.SupervisedPlugin]] = None, evaluator: avalanche.training.plugins.evaluation.EvaluationPlugin = <avalanche.training.plugins.evaluation.EvaluationPlugin object>, eval_every=-1)[source]
iCaRL Strategy.
This strategy does not use task identities.
- __init__(feature_extractor: torch.nn.modules.module.Module, classifier: torch.nn.modules.module.Module, optimizer: torch.optim.optimizer.Optimizer, memory_size, buffer_transform, fixed_memory, criterion=<avalanche.training.losses.ICaRLLossPlugin object>, train_mb_size: int = 1, train_epochs: int = 1, eval_mb_size: typing.Optional[int] = None, device=None, plugins: typing.Optional[typing.List[avalanche.core.SupervisedPlugin]] = None, evaluator: avalanche.training.plugins.evaluation.EvaluationPlugin = <avalanche.training.plugins.evaluation.EvaluationPlugin object>, eval_every=-1)[source]
Init.
- Parameters
feature_extractor – The feature extractor.
classifier – The differentiable classifier that takes as input the output of the feature extractor.
optimizer – The optimizer to use.
memory_size – The number of patterns saved in the memory.
buffer_transform – Transform applied to buffer elements, which have already been modified by test_transform (if specified), before they are used for replay.
fixed_memory – If True, a memory of size memory_size is allocated and partitioned among samples from the observed experiences. If False, memory_size samples of each newly observed class are added to the memory.
train_mb_size – The train minibatch size. Defaults to 1.
train_epochs – The number of training epochs. Defaults to 1.
eval_mb_size – The eval minibatch size. Defaults to 1.
device – The device to use. Defaults to None (cpu).
plugins – Plugins to be added. Defaults to None.
evaluator – (optional) Instance of EvaluationPlugin for logging and metric computation.
eval_every – The frequency of calls to eval inside the training loop. -1 disables evaluation. 0 means eval is called only at the end of the learning experience. Values >0 mean that eval is called every eval_every epochs and at the end of the learning experience.
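A minimal construction sketch follows. Only the import path avalanche.training.ICaRL and the constructor arguments documented above come from this page; the SplitMNIST benchmark, the small fully connected modules, and all hyperparameter values are illustrative assumptions, not recommendations.

import torch
from torch import nn
from torch.optim import SGD

from avalanche.benchmarks.classic import SplitMNIST
from avalanche.training import ICaRL

benchmark = SplitMNIST(n_experiences=5)  # illustrative benchmark choice

# iCaRL takes the feature extractor and the classifier as separate modules;
# these tiny fully connected networks are placeholders.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
classifier = nn.Linear(128, 10)  # 10 classes in MNIST

optimizer = SGD(
    list(feature_extractor.parameters()) + list(classifier.parameters()),
    lr=0.01,
)

strategy = ICaRL(
    feature_extractor,
    classifier,
    optimizer,
    memory_size=2000,       # number of patterns kept in the rehearsal memory
    buffer_transform=None,  # no extra transform on replayed buffer elements
    fixed_memory=True,      # one memory of size memory_size, partitioned among observed experiences
    train_mb_size=32,
    train_epochs=1,
    device="cuda" if torch.cuda.is_available() else "cpu",
    eval_every=-1,          # no eval calls inside the training loop
)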
Methods
- __init__(feature_extractor, classifier, ...)
  Init.
- backward()
  Run the backward pass.
- criterion()
  Loss function.
- eval(exp_list, **kwargs)
  Evaluate the current model on a series of experiences and return the last recorded value for each metric.
- eval_dataset_adaptation(**kwargs)
  Initialize self.adapted_dataset.
- eval_epoch(**kwargs)
  Evaluation loop over the current self.dataloader.
- forward()
  Compute the model's output given the current mini-batch.
- make_eval_dataloader([num_workers, ...])
  Initializes the eval data loader. num_workers sets how many subprocesses are used for data loading (0, the default, loads data in the main process); if pin_memory is True (the default), the data loader copies Tensors into CUDA pinned memory before returning them.
- make_optimizer()
  Optimizer initialization.
- make_train_dataloader([num_workers, ...])
  Data loader initialization.
- model_adaptation([model])
  Adapts the model to the current data.
- optimizer_step()
  Execute the optimizer step (weights update).
- stop_training()
  Signals to stop training at the next iteration.
- train(experiences[, eval_streams])
  Training loop.
- train_dataset_adaptation(**kwargs)
  Initialize self.adapted_dataset.
- training_epoch(**kwargs)
  Training epoch.
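The train and eval methods above are typically driven over the benchmark streams. A short sketch, hypothetically reusing the benchmark and strategy objects from the construction sketch after the parameter list:

results = []
for experience in benchmark.train_stream:
    # train() runs the training loop on one experience; eval_streams selects
    # which streams are evaluated according to the eval_every setting.
    strategy.train(experience, eval_streams=[benchmark.test_stream])
    # eval() returns the last recorded value for each metric.
    results.append(strategy.eval(benchmark.test_stream))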
Attributes
- is_eval
  True if the strategy is in evaluation mode.
- mb_task_id
  Current mini-batch task labels.
- mb_x
  Current mini-batch input.
- mb_y
  Current mini-batch target.
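These attributes are usually read from plugin callbacks while the strategy is running. A small illustrative sketch (the plugin class and its logging are hypothetical; SupervisedPlugin and the attribute names come from this page):

from avalanche.core import SupervisedPlugin

class MinibatchLogger(SupervisedPlugin):
    # Hypothetical plugin: after each training iteration, inspect the
    # strategy's current mini-batch through the attributes listed above.
    def after_training_iteration(self, strategy, **kwargs):
        print(
            "inputs:", tuple(strategy.mb_x.shape),
            "targets:", tuple(strategy.mb_y.shape),
            "task ids:", strategy.mb_task_id.unique().tolist(),
        )

# An instance would be passed to ICaRL through the plugins argument.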