avalanche.training.plugins.BiCPlugin

class avalanche.training.plugins.BiCPlugin(mem_size: int = 2000, batch_size: Optional[int] = None, batch_size_mem: Optional[int] = None, task_balanced_dataloader: bool = False, storage_policy: Optional[ExemplarsBuffer] = None, val_percentage: float = 0.1, T: int = 2, stage_2_epochs: int = 200, lamb: float = -1, lr: float = 0.1)[source]

Bias Correction (BiC) plugin.

Technique introduced in: Wu, Yue, et al. “Large Scale Incremental Learning.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

Implementation based on FACIL, as in: https://github.com/mmasana/FACIL/blob/master/src/approach/bic.py
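
BiC trains in two stages. Stage 1 learns the new experience with a classification loss plus a distillation loss against the previous model, with both sets of logits softened by the temperature T. Stage 2 freezes the network and trains a small bias-correction layer on a held-out validation split of the exemplars, compensating for the bias towards newly seen classes. A minimal sketch of the bias-correction idea follows; the class name and structure are illustrative and do not reflect the plugin's internal API:

    import torch
    from torch import nn

    class BiasCorrection(nn.Module):
        # Illustrative two-parameter correction from the BiC paper:
        # stage 2 keeps the backbone frozen and learns only (alpha, beta),
        # which rescale the logits of the classes introduced by the
        # latest experience; logits of old classes pass through unchanged.
        def __init__(self):
            super().__init__()
            self.alpha = nn.Parameter(torch.ones(1))
            self.beta = nn.Parameter(torch.zeros(1))

        def forward(self, logits, new_class_idx):
            corrected = logits.clone()
            corrected[:, new_class_idx] = (
                self.alpha * logits[:, new_class_idx] + self.beta
            )
            return corrected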

__init__(mem_size: int = 2000, batch_size: Optional[int] = None, batch_size_mem: Optional[int] = None, task_balanced_dataloader: bool = False, storage_policy: Optional[ExemplarsBuffer] = None, val_percentage: float = 0.1, T: int = 2, stage_2_epochs: int = 200, lamb: float = -1, lr: float = 0.1)[source]
Parameters
  • mem_size – replay buffer size.

  • batch_size – the size of the data batch. If set to None, it will be set equal to the strategy’s batch size.

  • batch_size_mem – the size of the memory batch. If task_balanced_dataloader is set to True, it must be greater than or equal to the number of tasks. If set to None (the default), it will be set equal to the data batch size.

  • task_balanced_dataloader – if True, buffer data loaders will be task-balanced; otherwise, a single dataloader will be created for the buffer samples.

  • storage_policy – the policy that controls how new exemplars are added to memory.

  • val_percentage – hyperparameter used to set the percentage of exemplars reserved for the validation set on which the stage-2 bias-correction layer is trained.

  • T – hyperparameter used to set the softmax temperature of the stage-1 distillation loss.

  • stage_2_epochs – hyperparameter used to set the number of epochs of stage 2.

  • lamb – hyperparameter used to balance the distillation loss and the classification loss in stage 1.

  • lr – learning rate used for the second (bias-correction) stage of training.
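
Taken together, these parameters configure both stages. As with other Avalanche plugins, BiCPlugin is attached to a strategy through its plugins argument. A minimal usage sketch, assuming the SplitMNIST benchmark, the SimpleMLP model, and the Naive strategy shipped with Avalanche (import paths may vary slightly across Avalanche versions):

    from torch.nn import CrossEntropyLoss
    from torch.optim import SGD

    from avalanche.benchmarks.classic import SplitMNIST
    from avalanche.models import SimpleMLP
    from avalanche.training.plugins import BiCPlugin
    from avalanche.training.supervised import Naive

    benchmark = SplitMNIST(n_experiences=5)
    model = SimpleMLP(num_classes=10)

    # The plugin manages the replay buffer, the stage-1 distillation
    # loss, and the stage-2 bias-correction training.
    bic = BiCPlugin(mem_size=2000, val_percentage=0.1, T=2,
                    stage_2_epochs=200, lamb=-1, lr=0.1)

    strategy = Naive(
        model,
        SGD(model.parameters(), lr=0.01, momentum=0.9),
        CrossEntropyLoss(),
        train_mb_size=64,
        train_epochs=1,
        eval_mb_size=64,
        plugins=[bic],
    )

    for experience in benchmark.train_stream:
        strategy.train(experience)
        strategy.eval(benchmark.test_stream)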

Methods

__init__([mem_size, batch_size, ...])

Initialize the BiC plugin; see the Parameters section above.

after_backward(strategy, *args, **kwargs)

Called after criterion.backward() by the BaseTemplate.

after_eval(strategy, *args, **kwargs)

Called after eval by the BaseTemplate.

after_eval_dataset_adaptation(strategy, ...)

Called after eval_dataset_adaptation by the BaseTemplate.

after_eval_exp(strategy, *args, **kwargs)

Called after eval_exp by the BaseTemplate.

after_eval_forward(strategy, **kwargs)

Called after model.forward() by the BaseTemplate.

after_eval_iteration(strategy, *args, **kwargs)

Called after the end of an eval iteration by the BaseTemplate.

after_forward(strategy, **kwargs)

Called after model.forward() by the BaseTemplate.

after_train_dataset_adaptation(strategy, ...)

Called after train_dataset_adaptation by the BaseTemplate.

after_training(strategy, *args, **kwargs)

Called after train by the BaseTemplate.

after_training_epoch(strategy, *args, **kwargs)

Called after train_epoch by the BaseTemplate.

after_training_exp(strategy, **kwargs)

Called after train_exp by the BaseTemplate.

after_training_iteration(strategy, *args, ...)

Called after the end of a training iteration by the BaseTemplate.

after_update(strategy, *args, **kwargs)

Called after optimizer.update() by the BaseTemplate.

before_backward(strategy, **kwargs)

Called before criterion.backward() by the BaseTemplate.

before_eval(strategy, *args, **kwargs)

Called before eval by the BaseTemplate.

before_eval_dataset_adaptation(strategy, ...)

Called before eval_dataset_adaptation by the BaseTemplate.

before_eval_exp(strategy, *args, **kwargs)

Called before eval_exp by the BaseTemplate.

before_eval_forward(strategy, *args, **kwargs)

Called before model.forward() by the BaseTemplate.

before_eval_iteration(strategy, *args, **kwargs)

Called before the start of an eval iteration by the BaseTemplate.

before_forward(strategy, *args, **kwargs)

Called before model.forward() by the BaseTemplate.

before_train_dataset_adaptation(strategy, ...)

Called before train_dataset_adaptation by the BaseTemplate.

before_training(strategy, *args, **kwargs)

Called before train by the BaseTemplate.

before_training_epoch(strategy, *args, **kwargs)

Called before train_epoch by the BaseTemplate.

before_training_exp(strategy[, num_workers, ...])

Builds a dataloader whose batches contain examples from both the memory buffer and the current training dataset.

before_training_iteration(strategy, *args, ...)

Called before the start of a training iteration by the BaseTemplate.

before_update(strategy, *args, **kwargs)

Called before optimizer.update() by the BaseTemplate.

cross_entropy(outputs, targets)

Calculates the cross-entropy with temperature scaling (see the sketch after this table).

get_group_lengths(num_groups)

Compute group lengths given the number of groups num_groups (see the sketch after this table).
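
The stage-1 distillation term compares the current model's logits (outputs) with the previous model's logits (targets) after softening both with the temperature T. Below is a minimal sketch in standard knowledge-distillation form; the actual implementation follows FACIL and may soften the distributions differently:

    import torch.nn.functional as F

    def distillation_cross_entropy(outputs, targets, T=2.0):
        # Soften both distributions with temperature T, then take the
        # cross-entropy of the teacher (old model) distribution against
        # the student (current model) distribution.
        log_p = F.log_softmax(outputs / T, dim=1)
        q = F.softmax(targets / T, dim=1)
        return -(q * log_p).sum(dim=1).mean()

get_group_lengths computes how many slots each group receives. A hypothetical standalone version of an even split, matching what the summary describes (the names and signature are illustrative):

    def get_group_lengths(total_size, num_groups):
        # Split total_size across num_groups; the first
        # (total_size % num_groups) groups get one extra slot.
        base, rem = divmod(total_size, num_groups)
        return [base + 1 if i < rem else base for i in range(num_groups)]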

Attributes

ext_mem