avalanche.training.StreamingLDA

class avalanche.training.StreamingLDA(slda_model, criterion, input_size, num_classes, output_layer_name=None, shrinkage_param=0.0001, streaming_update_sigma=True, train_epochs: int = 1, train_mb_size: int = 1, eval_mb_size: int = 1, device='cpu', plugins: typing.Optional[typing.Sequence[avalanche.training.plugins.strategy_plugin.StrategyPlugin]] = None, evaluator=<avalanche.training.plugins.evaluation.EvaluationPlugin object>, eval_every=-1)[source]

Deep Streaming Linear Discriminant Analysis.

This strategy does not use backpropagation. Minibatches are first passed to the pretrained feature extractor, and the resulting features are processed one element at a time to fit the LDA. Original paper: “Hayes et al., Lifelong Machine Learning with Deep Streaming Linear Discriminant Analysis, CVPR Workshop, 2020” https://openaccess.thecvf.com/content_CVPRW_2020/papers/w15/Hayes_Lifelong_Machine_Learning_With_Deep_Streaming_Linear_Discriminant_Analysis_CVPRW_2020_paper.pdf
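
To make the data flow concrete, here is a toy sketch of one training pass: a frozen backbone maps a minibatch to features, which are then handed to the LDA fit step one sample at a time. All names, shapes, and the placeholder fit function are illustrative, not strategy attributes.

    import torch
    from torch import nn

    # Toy illustration of the flow described above. A real setup would use a
    # pretrained backbone and the strategy's own fit() instead of these stubs.
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512)).eval()
    x_mb = torch.randn(8, 3, 32, 32)          # minibatch of 8 toy images
    y_mb = torch.randint(0, 10, (8,))         # matching integer labels

    with torch.no_grad():                     # no backpropagation is involved
        features = backbone(x_mb)             # (8, 512) feature batch
    for f, y in zip(features, y_mb):
        pass  # each (f, y) pair would be passed to the strategy's fit(f, y)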

__init__(slda_model, criterion, input_size, num_classes, output_layer_name=None, shrinkage_param=0.0001, streaming_update_sigma=True, train_epochs: int = 1, train_mb_size: int = 1, eval_mb_size: int = 1, device='cpu', plugins: typing.Optional[typing.Sequence[avalanche.training.plugins.strategy_plugin.StrategyPlugin]] = None, evaluator=<avalanche.training.plugins.evaluation.EvaluationPlugin object>, eval_every=-1)[source]

Init function for the SLDA model.

Parameters
  • slda_model – a PyTorch model

  • criterion – loss function

  • output_layer_name – if not None, wrap the model so that only the output of output_layer_name is retrieved. If None, the strategy assumes that the model already produces a valid output. You can use the FeatureExtractorBackbone class to create your own SLDA-compatible model.

  • input_size – feature dimension

  • num_classes – number of total classes in stream

  • train_mb_size – batch size for feature extractor during training. Fit will be called on a single pattern at a time.

  • eval_mb_size – batch size for inference

  • shrinkage_param – value of the shrinkage parameter

  • streaming_update_sigma – True to keep sigma (the shared covariance estimate) plastic, i.e. updated during streaming; False to keep it fixed

  • plugins – list of StrategyPlugins

  • evaluator – Evaluation Plugin instance

  • eval_every – run eval every eval_every epochs.

See BaseStrategy for details.
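
For concreteness, the following construction sketch uses a toy feature extractor whose output is already a flat (batch, input_size) tensor, so output_layer_name is left as None; all sizes and hyperparameter values are illustrative, not recommendations.

    import torch
    from torch import nn
    from avalanche.training import StreamingLDA

    # Illustrative frozen feature extractor: any module producing a flat
    # (batch, input_size) output works here; a real setup would use a
    # pretrained backbone instead of this toy MLP.
    feature_dim = 512
    backbone = nn.Sequential(
        nn.Flatten(),
        nn.Linear(3 * 32 * 32, feature_dim),
        nn.ReLU(),
    )

    strategy = StreamingLDA(
        slda_model=backbone,
        criterion=nn.CrossEntropyLoss(),
        input_size=feature_dim,       # dimensionality of the extracted features
        num_classes=10,               # total number of classes in the stream
        output_layer_name=None,       # the model already outputs valid features
        shrinkage_param=1e-4,
        streaming_update_sigma=True,  # keep updating the shared covariance
        train_mb_size=32,
        eval_mb_size=32,
        device="cuda" if torch.cuda.is_available() else "cpu",
    )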

Methods

__init__(slda_model, criterion, input_size, ...)

Init function for the SLDA model.

criterion()

Loss function.

eval(exp_list, **kwargs)

Evaluate the current model on a series of experiences and return the last recorded value for each metric.

eval_dataset_adaptation(**kwargs)

Initialize self.adapted_dataset.

eval_epoch(**kwargs)

Evaluation loop over the current self.dataloader.

fit(x, y)

Fit the SLDA model to a new sample (x,y).
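
As a rough guide to what a single fit step computes, the sketch below reproduces the streaming updates described in the Deep SLDA paper: a running mean and count per class, and (when streaming_update_sigma is enabled) an online update of the shared covariance. It is a simplified illustration, not the library's exact implementation; muK, cK, Sigma, and num_updates stand in for the strategy's internal statistics.

    import torch

    def slda_fit_sketch(x, y, muK, cK, Sigma, num_updates, plastic_sigma=True):
        # One streaming update for a single feature vector x with integer label y.
        # muK: (num_classes, d) running class means; cK: (num_classes,) counts;
        # Sigma: (d, d) shared covariance; num_updates: samples seen so far.
        x = x.view(-1)
        if plastic_sigma:
            # Online update of the shared covariance around the current class mean.
            diff = (x - muK[y]).unsqueeze(1)                       # (d, 1)
            delta = (num_updates / (num_updates + 1)) * (diff @ diff.T)
            Sigma = (num_updates * Sigma + delta) / (num_updates + 1)
        # Running mean and count update for class y.
        muK[y] = (cK[y] * muK[y] + x) / (cK[y] + 1)
        cK[y] += 1
        return muK, cK, Sigma, num_updates + 1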

fit_base(X, y)

Fit the SLDA model to the base data.

forward([return_features])

Compute the model's output given the current mini-batch.

load_model(save_path, save_name)

Load the model parameters into StreamingLDA object.

make_eval_dataloader([num_workers, pin_memory])

Initializes the eval data loader. num_workers sets how many subprocesses to use for data loading; 0 means the data is loaded in the main process (default: 0). If pin_memory is True, the data loader copies Tensors into CUDA pinned memory before returning them (default: True).

make_optimizer()

Empty function; SLDA does not use an optimizer, since no backpropagation is performed.

make_train_dataloader([num_workers, ...])

Data loader initialization.

model_adaptation([model])

Adapts the model to the current data.

predict(X)

Make predictions on test data X.
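
For intuition, SLDA prediction reduces to a closed-form linear classifier built from the class means and the shrinkage-regularized covariance. The sketch below shows this computation under the same illustrative names as the fit sketch above; it is not the library's exact code.

    import torch

    def slda_predict_sketch(X, muK, Sigma, shrinkage=1e-4):
        # Closed-form linear scores for a feature batch X of shape (N, d).
        d = Sigma.shape[0]
        # Shrinkage-regularized precision matrix.
        Lambda = torch.linalg.pinv((1.0 - shrinkage) * Sigma + shrinkage * torch.eye(d))
        W = Lambda @ muK.T                                  # (d, num_classes)
        b = -0.5 * torch.sum((muK @ Lambda) * muK, dim=1)   # (num_classes,)
        return X @ W + b                                    # class scores; argmax gives predictions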

save_model(save_path, save_name)

Save the model parameters to a torch file.
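
A minimal usage sketch of the persistence helpers, with an illustrative directory and file name:

    # Persist the SLDA parameters after training, then restore them into a
    # strategy instance (paths and names are placeholders).
    strategy.save_model("./checkpoints", "slda_params")
    strategy.load_model("./checkpoints", "slda_params")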

stop_training()

Signals to stop training at the next iteration.

train(experiences[, eval_streams])

Training loop.
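
A minimal usage sketch of the training loop, assuming benchmark is an Avalanche benchmark object and strategy is a StreamingLDA instance like the one constructed above:

    # Train on each experience of the stream, evaluating on the test stream
    # after every experience.
    for experience in benchmark.train_stream:
        strategy.train(experience)
        results = strategy.eval(benchmark.test_stream)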

train_dataset_adaptation(**kwargs)

Initialize self.adapted_dataset.

train_exp(experience[, eval_streams])

Training loop over a single Experience object.

training_epoch(**kwargs)

Training epoch.

Attributes

DISABLED_CALLBACKS

Internal class attribute used to disable some callbacks if a strategy does not support them.

epoch

Epoch counter.

is_eval

True if the strategy is in evaluation mode.

mb_it

Iteration counter.

mb_task_id

Current mini-batch task labels.

mb_x

Current mini-batch input.

mb_y

Current mini-batch target.

training_exp_counter

Counts the number of training steps.