avalanche.training.GDumb

class avalanche.training.GDumb(model: torch.nn.modules.module.Module, optimizer: torch.optim.optimizer.Optimizer, criterion, mem_size: int = 200, train_mb_size: int = 1, train_epochs: int = 1, eval_mb_size: typing.Optional[int] = None, device=None, plugins: typing.Optional[typing.List[avalanche.training.plugins.strategy_plugin.StrategyPlugin]] = None, evaluator: avalanche.training.plugins.evaluation.EvaluationPlugin = <avalanche.training.plugins.evaluation.EvaluationPlugin object>, eval_every=-1)[source]

GDumb strategy.

See GDumbPlugin for more details. This strategy does not use task identities.

__init__(model: torch.nn.modules.module.Module, optimizer: torch.optim.optimizer.Optimizer, criterion, mem_size: int = 200, train_mb_size: int = 1, train_epochs: int = 1, eval_mb_size: typing.Optional[int] = None, device=None, plugins: typing.Optional[typing.List[avalanche.training.plugins.strategy_plugin.StrategyPlugin]] = None, evaluator: avalanche.training.plugins.evaluation.EvaluationPlugin = <avalanche.training.plugins.evaluation.EvaluationPlugin object>, eval_every=-1)[source]

Init.

Parameters
  • model – The model.

  • optimizer – The optimizer to use.

  • criterion – The loss criterion to use.

  • mem_size – replay buffer size.

  • train_mb_size – The train minibatch size. Defaults to 1.

  • train_epochs – The number of training epochs. Defaults to 1.

  • eval_mb_size – The eval minibatch size. Defaults to None, in which case the train minibatch size is used.

  • device – The device to use. Defaults to None (cpu).

  • plugins – Plugins to be added. Defaults to None.

  • evaluator – (optional) instance of EvaluationPlugin for logging and metric computations.

  • eval_every – The frequency of calls to eval inside the training loop. -1 disables evaluation. 0 means eval is called only at the end of the learning experience. Values >0 mean that eval is called every eval_every epochs and at the end of the learning experience.
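A minimal construction sketch using the parameters above (SimpleMLP, SplitMNIST, accuracy_metrics and InteractiveLogger are standard Avalanche helpers assumed to be available in this version):

    import torch
    from torch.nn import CrossEntropyLoss
    from torch.optim import SGD

    from avalanche.benchmarks.classic import SplitMNIST
    from avalanche.evaluation.metrics import accuracy_metrics
    from avalanche.logging import InteractiveLogger
    from avalanche.models import SimpleMLP
    from avalanche.training import GDumb
    from avalanche.training.plugins import EvaluationPlugin

    benchmark = SplitMNIST(n_experiences=5)
    model = SimpleMLP(num_classes=benchmark.n_classes)
    optimizer = SGD(model.parameters(), lr=0.01)

    evaluator = EvaluationPlugin(
        accuracy_metrics(experience=True, stream=True),
        loggers=[InteractiveLogger()],
    )

    strategy = GDumb(
        model,
        optimizer,
        CrossEntropyLoss(),
        mem_size=500,      # replay buffer size
        train_mb_size=32,
        train_epochs=1,
        eval_mb_size=32,
        device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
        evaluator=evaluator,
        eval_every=-1,     # no eval calls inside the training loop
    )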

Methods

__init__(model, optimizer, criterion[, ...])

Init.

criterion()

Loss function.

eval(exp_list, **kwargs)

Evaluates the current model on a series of experiences and returns the last recorded value for each metric.

eval_dataset_adaptation(**kwargs)

Initialize self.adapted_dataset.

eval_epoch(**kwargs)

Evaluation loop over the current self.dataloader.

forward()

Compute the model's output given the current mini-batch.

make_eval_dataloader([num_workers, pin_memory])

Initializes the eval data loader. num_workers sets how many subprocesses to use for data loading; 0 means the data is loaded in the main process (default: 0). If pin_memory is True, the data loader copies tensors into CUDA pinned memory before returning them (default: True).

make_optimizer()

Optimizer initialization.

make_train_dataloader([num_workers, ...])

Data loader initialization.

model_adaptation([model])

Adapts the model to the current data.

stop_training()

Signals to stop training at the next iteration.

train(experiences[, eval_streams])

Training loop.

train_dataset_adaptation(**kwargs)

Initialize self.adapted_dataset.

train_exp(experience[, eval_streams])

Training loop over a single Experience object.

training_epoch(**kwargs)

Training epoch.
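The train and eval methods above are typically called in a loop over the benchmark streams; a short sketch, reusing the strategy and benchmark objects from the construction example:

    for experience in benchmark.train_stream:
        # train() accepts a single experience (or a list of experiences);
        # eval_streams is only used when eval_every >= 0
        strategy.train(experience, eval_streams=[benchmark.test_stream])

        # eval() returns the last recorded value of each metric as a dict
        results = strategy.eval(benchmark.test_stream)
        print(results)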

Attributes

DISABLED_CALLBACKS

Internal class attribute used to disable some callbacks if a strategy does not support them.

epoch

Epoch counter.

is_eval

True if the strategy is in evaluation mode.

mb_it

Iteration counter.

mb_task_id

Current mini-batch task labels.

mb_x

Current mini-batch input.

mb_y

Current mini-batch target.

training_exp_counter

Counts the number of training experiences seen so far; incremented at the end of each experience.
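A hypothetical plugin sketch showing how the mini-batch attributes above and stop_training() can be used from a callback (the StopOnLowLossPlugin name and loss threshold are illustrative, not part of Avalanche):

    from avalanche.training.plugins import StrategyPlugin

    class StopOnLowLossPlugin(StrategyPlugin):
        """Stop the current training experience once the loss is low enough."""

        def __init__(self, threshold: float = 0.05):
            super().__init__()
            self.threshold = threshold

        def after_training_iteration(self, strategy, **kwargs):
            # strategy.mb_x / strategy.mb_y hold the current mini-batch inputs
            # and targets; strategy.loss holds the loss of the last iteration.
            if strategy.loss.item() < self.threshold:
                strategy.stop_training()

    # Plugins are passed at construction time, e.g.:
    # strategy = GDumb(model, optimizer, criterion, plugins=[StopOnLowLossPlugin()])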