avalanche.training.LearningToPrompt

class avalanche.training.LearningToPrompt(model_name: str, criterion: ~torch.nn.modules.module.Module = CrossEntropyLoss(), train_mb_size: int = 1, train_epochs: int = 1, eval_mb_size: int | None = 1, device: str | ~torch.device = 'cpu', plugins: ~typing.List[~avalanche.core.SupervisedPlugin] | None = None, evaluator: ~avalanche.training.plugins.evaluation.EvaluationPlugin | ~typing.Callable[[], ~avalanche.training.plugins.evaluation.EvaluationPlugin] = <function default_evaluator>, eval_every: int = -1, peval_mode: str = 'epoch', prompt_pool: bool = True, pool_size: int = 20, prompt_length: int = 5, top_k: int = 5, lr: float = 0.03, sim_coefficient: float = 0.1, prompt_key: bool = True, pretrained: bool = True, num_classes: int = 10, drop_rate: float = 0.0, drop_path_rate: float = 0.0, embedding_key: str = 'cls', prompt_init: str = 'uniform', batchwise_prompt: bool = False, head_type: str = 'prompt', use_prompt_mask: bool = False, train_prompt_mask: bool = False, use_cls_features: bool = True, use_mask: bool = True, use_vit: bool = True, **kwargs)[source]

Learning to Prompt (L2P) strategy.

Technique introduced in: Wang, Zifeng, et al. “Learning to prompt for continual learning.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.

Implementation based on:

  • https://github.com/JH-LEE-KR/l2p-pytorch

  • implementations by Dario Salvati

As model_name, we expect one of the models listed in avalanche.models.vit.

These models are based on the timm library.
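
A minimal construction sketch is shown below. The model name "vit_base_patch16_224" is assumed here for illustration only; use a name that is actually registered in avalanche.models.vit.

    import torch
    from avalanche.training import LearningToPrompt

    # Minimal sketch: model_name must be one of the models registered in
    # avalanche.models.vit; "vit_base_patch16_224" is assumed for illustration.
    strategy = LearningToPrompt(
        model_name="vit_base_patch16_224",
        criterion=torch.nn.CrossEntropyLoss(),
        train_mb_size=16,
        train_epochs=1,
        eval_mb_size=16,
        device="cuda" if torch.cuda.is_available() else "cpu",
        num_classes=10,
    )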

__init__(model_name: str, criterion: ~torch.nn.modules.module.Module = CrossEntropyLoss(), train_mb_size: int = 1, train_epochs: int = 1, eval_mb_size: int | None = 1, device: str | ~torch.device = 'cpu', plugins: ~typing.List[~avalanche.core.SupervisedPlugin] | None = None, evaluator: ~avalanche.training.plugins.evaluation.EvaluationPlugin | ~typing.Callable[[], ~avalanche.training.plugins.evaluation.EvaluationPlugin] = <function default_evaluator>, eval_every: int = -1, peval_mode: str = 'epoch', prompt_pool: bool = True, pool_size: int = 20, prompt_length: int = 5, top_k: int = 5, lr: float = 0.03, sim_coefficient: float = 0.1, prompt_key: bool = True, pretrained: bool = True, num_classes: int = 10, drop_rate: float = 0.0, drop_path_rate: float = 0.0, embedding_key: str = 'cls', prompt_init: str = 'uniform', batchwise_prompt: bool = False, head_type: str = 'prompt', use_prompt_mask: bool = False, train_prompt_mask: bool = False, use_cls_features: bool = True, use_mask: bool = True, use_vit: bool = True, **kwargs)[source]

Init.

Parameters:
  • model_name – Name of the model to use. For the complete list, check models.vit.py.

  • criterion – Loss function used during training. Defaults to CrossEntropyLoss.

  • train_mb_size – The train minibatch size. Defaults to 1.

  • train_epochs – The number of training epochs. Defaults to 1.

  • eval_mb_size – The eval minibatch size. Defaults to 1.

  • device – The device to use. Defaults to 'cpu'.

  • plugins – Plugins to be added. Defaults to None.

  • evaluator – (optional) instance of EvaluationPlugin for logging and metric computations.

  • eval_every – the frequency of the calls to eval inside the training loop. -1 disables the evaluation. 0 means eval is called only at the end of the learning experience. Values >0 mean that eval is called every eval_every epochs and at the end of the learning experience.

  • use_cls_features – Use an external pre-trained model to obtain the features used to select the prompts from the pool. Defaults to True.

  • use_mask – Use a mask so that only the classification-head rows corresponding to the classes of the current task are trained. Defaults to True.

  • use_vit – Whether the backbone model is a Vision Transformer. Defaults to True.
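
The sketch below shows how the evaluator, eval_every, and the prompt-pool hyperparameters above might be combined. The metric and logger choices are illustrative, and "vit_base_patch16_224" is again an assumed model name.

    import torch
    from avalanche.evaluation.metrics import accuracy_metrics
    from avalanche.logging import InteractiveLogger
    from avalanche.training import LearningToPrompt
    from avalanche.training.plugins import EvaluationPlugin

    # Evaluator that tracks accuracy per experience and over the whole stream.
    evaluator = EvaluationPlugin(
        accuracy_metrics(experience=True, stream=True),
        loggers=[InteractiveLogger()],
    )

    strategy = LearningToPrompt(
        model_name="vit_base_patch16_224",  # assumed name, see avalanche.models.vit
        criterion=torch.nn.CrossEntropyLoss(),
        train_mb_size=16,
        train_epochs=5,
        eval_mb_size=16,
        device="cpu",
        evaluator=evaluator,
        eval_every=1,      # evaluate after every training epoch
        pool_size=20,      # number of prompts in the pool
        prompt_length=5,   # tokens per prompt
        top_k=5,           # prompts selected for each input
        lr=0.03,
        num_classes=10,
    )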

Methods

__init__(model_name[, criterion, ...])

Init.

backward()

Run the backward pass.

check_model_and_optimizer([...])

criterion()

Loss function for supervised problems.

eval(exp_list, **kwargs)

Evaluates the current model on a series of experiences and returns the last recorded value for each metric.

eval_dataset_adaptation(**kwargs)

Initialize self.adapted_dataset.

eval_epoch(**kwargs)

Evaluation loop over the current self.dataloader.

forward()

Compute the model's output given the current mini-batch.

make_eval_dataloader([num_workers, shuffle, ...])

Initializes the eval data loader. num_workers sets how many subprocesses to use for data loading; 0 means the data is loaded in the main process (default: 0). If pin_memory is True, the data loader copies Tensors into CUDA pinned memory before returning them (defaults to True).

make_optimizer([reset_optimizer_state])

Optimizer initialization.

make_train_dataloader([num_workers, ...])

Data loader initialization.

model_adaptation([model])

Adapts the model to the current data.

optimizer_step()

Execute the optimizer step (weights update).

stop_training()

Signals to stop training at the next iteration.

train(experiences[, eval_streams])

Training loop.

train_dataset_adaptation(**kwargs)

Initialize self.adapted_dataset.

training_epoch(**kwargs)

Training epoch.
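
The typical use of train() and eval() is sketched below, assuming benchmark is an Avalanche benchmark whose inputs match the chosen ViT backbone (e.g. 224x224 RGB images) and strategy is a LearningToPrompt instance as constructed above.

    results = []
    for experience in benchmark.train_stream:
        # Training loop on the current experience.
        strategy.train(experience)
        # Returns the last recorded value of each metric on the test stream.
        results.append(strategy.eval(benchmark.test_stream))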

Attributes

is_eval

True if the strategy is in evaluation mode.

mb_task_id

Current mini-batch task labels.

mb_x

Current mini-batch input.

mb_y

Current mini-batch target.

mbatch

Current mini-batch.

mb_output

Model's output computed on the current mini-batch.

dataloader

Dataloader.

optimizer

PyTorch optimizer.

loss

Loss of the current mini-batch.

train_epochs

Number of training epochs.

train_mb_size

Training mini-batch size.

eval_mb_size

Eval mini-batch size.

retain_graph

Retain graph when calling loss.backward().

evaluator

EvaluationPlugin used for logging and metric computations.

clock

Incremental counters for strategy events.

adapted_dataset

Data used to train.

model

PyTorch model.

device

PyTorch device where the model will be allocated.

plugins

List of `SupervisedPlugin`s.

experience

Current experience.

is_training

True if the strategy is in training mode.

current_eval_stream

Current evaluation stream.
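
The attributes above can be read from a custom plugin while the strategy runs. A minimal sketch is shown below; this plugin is hypothetical and not part of Avalanche.

    from avalanche.core import SupervisedPlugin

    class LossLoggerPlugin(SupervisedPlugin):
        """Illustrative plugin that inspects the strategy state after each iteration."""

        def after_training_iteration(self, strategy, *args, **kwargs):
            # strategy.loss is the loss of the current mini-batch,
            # strategy.mb_y the current targets, strategy.mb_output the model output.
            batch_size = strategy.mb_y.shape[0]
            print(f"iteration loss: {strategy.loss.item():.4f} (batch of {batch_size})")

Such a plugin would be passed to the strategy through the plugins argument, e.g. plugins=[LossLoggerPlugin()].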