avalanche.models

The models module provides a set of (possibly pre-trained) models that can be used in your continual learning experiments and applications. These models mostly come from torchvision.models and pytorchcv, but we plan to add more architectures in the near future.
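
For example, a model can be instantiated and used like any plain PyTorch module. A minimal sketch (the MNIST-like input shape is illustrative):

import torch
from avalanche.models import SimpleMLP

# SimpleMLP flattens its input, so any tensor whose trailing dimensions
# multiply to input_size works (28 * 28 is the MNIST default).
model = SimpleMLP(num_classes=10, input_size=28 * 28)

x = torch.randn(32, 1, 28, 28)  # a fake mini-batch of MNIST-like images
logits = model(x)               # shape: (32, 10)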

Submodules

Package Contents

Classes

SimpleCNN

A simple convolutional neural network.

MTSimpleCNN

Multi-task CNN with multi-head classifier.

SimpleMLP

A simple multi-layer perceptron.

MTSimpleMLP

Multi-task MLP with multi-head classifier.

SimpleMLP_TinyImageNet

A simple MLP for the TinyImageNet dataset.

MobilenetV1

MobileNet v1 architecture.

DynamicModule

Dynamic Modules are Avalanche modules that can be incrementally expanded.

MultiTaskModule

Multi-task modules are `torch.nn.Module`s for multi-task scenarios.

IncrementalClassifier

Output layer that incrementally adds units whenever new classes are encountered.

MultiHeadClassifier

Multi-head classifier with separate heads for each task.

TrainEvalModel

Wraps a feature extractor together with a training-time classifier and an evaluation-time classifier.

FeatureExtractorBackbone

This PyTorch module allows us to extract features from a backbone network given a layer name.

SLDAResNetModel

Model wrapper to reproduce experiments from the original Deep Streaming Linear Discriminant Analysis paper.

IcarlNet

The network architecture used by the iCaRL strategy.

NCMClassifier

Nearest Class Mean (NCM) classifier.

BaseModel

A base abstract class for models

Functions

avalanche_forward(model, x, task_labels)

initialize_icarl_net(m: Module)

make_icarl_net(num_classes: int, n=5, c=3) → IcarlNet

as_multitask(model: nn.Module, classifier_name: str) → MultiTaskModule

Wraps around a model to make it a multitask model

class avalanche.models.SimpleCNN(num_classes=10)[source]

Bases: torch.nn.Module

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Variables

training (bool) – Boolean represents whether this module is in training or evaluation mode.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(self, x)[source]
class avalanche.models.MTSimpleCNN[source]

Bases: avalanche.models.simple_cnn.SimpleCNN, avalanche.models.dynamic_modules.MultiTaskModule

Multi-task CNN with multi-head classifier.

forward(self, x, task_labels)[source]

Compute the output given the input x and task labels.

Parameters
  • x

  • task_labels – task labels for each sample. If None, the computation returns all possible outputs as a dictionary, with task IDs as keys and the corresponding task outputs as values.

Returns

class avalanche.models.SimpleMLP(num_classes=10, input_size=28 * 28, hidden_size=512, hidden_layers=1, drop_rate=0.5)[source]

Bases: torch.nn.Module, avalanche.models.base_model.BaseModel

Multi-Layer Perceptron with configurable hidden layers and dropout.

forward(self, x)[source]
get_features(self, x)[source]

Get features from model given input
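
A short sketch of get_features on SimpleMLP, which returns the hidden representation instead of the class logits (shapes follow the constructor defaults):

import torch
from avalanche.models import SimpleMLP

model = SimpleMLP(num_classes=10, input_size=28 * 28, hidden_size=512)
x = torch.randn(8, 28 * 28)
features = model.get_features(x)  # hidden representation, shape (8, 512)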

class avalanche.models.MTSimpleMLP(input_size=28 * 28, hidden_size=512)[source]

Bases: avalanche.models.dynamic_modules.MultiTaskModule

Multi-task MLP with multi-head classifier.

forward(self, x, task_labels)[source]

Compute the output given the input x and task labels.

Parameters
  • x

  • task_labels – task labels for each sample. If None, the computation returns all possible outputs as a dictionary, with task IDs as keys and the corresponding task outputs as values.

Returns
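
A sketch of calling the model with per-sample task labels; here every sample belongs to task 0, whose head exists from initialization (new heads are created by adaptation when new tasks are encountered):

import torch
from avalanche.models import MTSimpleMLP

model = MTSimpleMLP(input_size=28 * 28)
x = torch.randn(4, 28 * 28)
task_labels = torch.zeros(4, dtype=torch.long)  # one task label per sample

# The mini-batch is split by task and routed to the matching head.
out = model(x, task_labels)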

class avalanche.models.SimpleMLP_TinyImageNet(num_classes=200, num_channels=3)[source]

Bases: torch.nn.Module

A simple MLP for the TinyImageNet dataset.

forward(self, x)[source]
class avalanche.models.MobilenetV1(pretrained=True, latent_layer_num=20)[source]

Bases: torch.nn.Module

MobileNet v1 architecture, optionally initialized with pre-trained weights.

forward(self, x, latent_input=None, return_lat_acts=False)[source]
class avalanche.models.DynamicModule[source]

Bases: torch.nn.Module

Dynamic Modules are Avalanche modules that can be incrementally expanded to allow architectural modifications (multi-head classifiers, progressive networks, …).

Compared to PyTorch Modules, they provide an additional method, model_adaptation, which adapts the model given data from the current experience.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

adaptation(self, dataset: AvalancheDataset = None)[source]

Adapt the module (freeze units, add units…) using the current data. Optimizers must be updated after the model adaptation.

Avalanche strategies call this method to adapt the architecture before processing each experience. Strategies also update the optimizer automatically.

Warning

As a general rule, you should NOT use this method to train the model. The dataset should be used only to check conditions which require the model’s adaptation, such as the discovery of new classes or tasks.

Parameters

dataset – data from the current experience.

Returns

train_adaptation(self, dataset: AvalancheDataset)[source]

Module’s adaptation at training time.

Avalanche strategies automatically call this method before training on each experience.

eval_adaptation(self, dataset: AvalancheDataset)[source]

Module’s adaptation at evaluation time.

Avalanche strategies automatically call this method before evaluating on each experience.

Warning

This method receives the experience’s data at evaluation time because some dynamic models need it for adaptation. For example, an incremental classifier needs to be expanded even at evaluation time if new classes are available. However, you should never use this data to train the module’s parameters.
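
A sketch of the adaptation pattern with a dynamic output layer: the module is adapted before each experience and the optimizer is then re-created so that newly added parameters are covered (benchmark and hyper-parameters are illustrative):

from torch.optim import SGD
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP, IncrementalClassifier

# Replace the fixed output layer with a dynamically expanding one.
model = SimpleMLP(num_classes=2)
model.classifier = IncrementalClassifier(in_features=512)

benchmark = SplitMNIST(n_experiences=5)
for experience in benchmark.train_stream:
    # Expand the classifier if the experience contains unseen classes.
    model.classifier.adaptation(experience.dataset)
    # Re-create the optimizer so the new units are optimized as well.
    optimizer = SGD(model.parameters(), lr=0.01)
    # ... training on experience.dataset goes here ...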

class avalanche.models.MultiTaskModule[source]

Bases: avalanche.models.dynamic_modules.DynamicModule

Multi-task modules are `torch.nn.Module`s for multi-task scenarios. The `forward` method accepts task labels, one for each sample in the mini-batch.

By default the `forward` method splits the mini-batch by task and calls `forward_single_task`. Subclasses must implement `forward_single_task` or override `forward`.

If task_labels is None, the output is computed in parallel for each task.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

known_train_tasks_labels

Set of task labels encountered up to now.

train_adaptation(self, dataset: AvalancheDataset = None)[source]

Update known task labels.

forward(self, x: torch.Tensor, task_labels: torch.Tensor) → torch.Tensor[source]

Compute the output given the input x and task labels.

Parameters
  • x

  • task_labels – task labels for each sample. If None, the computation returns all possible outputs as a dictionary, with task IDs as keys and the corresponding task outputs as values.

Returns

abstract forward_single_task(self, x: torch.Tensor, task_label: int) → torch.Tensor[source]

Compute the output given the input x and task label.

Parameters
  • x

  • task_label – a single task label.

Returns

forward_all_tasks(self, x: torch.Tensor)[source]

Compute the output given the input x for all known tasks. By default, it considers only tasks seen at training time.

Parameters

x

Returns

all possible outputs, as a dictionary with task IDs as keys and the corresponding task outputs as values.
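
A minimal sketch of a custom multi-task module: only forward_single_task needs to be implemented, while the inherited forward takes care of splitting the mini-batch by task (the architecture is illustrative):

import torch
import torch.nn as nn
from avalanche.models import MultiTaskModule, MultiHeadClassifier

class MyMTModule(MultiTaskModule):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(28 * 28, 64)
        self.heads = MultiHeadClassifier(in_features=64)

    def forward_single_task(self, x, task_label):
        # All samples in x share the same task label here; the base
        # class's forward() has already split the mini-batch by task.
        h = torch.relu(self.backbone(x))
        return self.heads.forward_single_task(h, task_label)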

class avalanche.models.IncrementalClassifier(in_features, initial_out_features=2)[source]

Bases: avalanche.models.dynamic_modules.DynamicModule

Output layer that incrementally adds units whenever new classes are encountered.

Typically used in class-incremental benchmarks where the number of classes grows over time.

Parameters
  • in_features – number of input features.

  • initial_out_features – initial number of classes (can be dynamically expanded).

adaptation(self, dataset: AvalancheDataset)[source]

If the dataset contains unseen classes, the classifier is expanded.

Parameters

dataset – data from the current experience.

Returns

forward(self, x, **kwargs)[source]

Compute the output given the input x. This module does not use the task label.

Parameters

x

Returns
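
A sketch of the expansion behavior on a class-incremental stream (a fixed class order is used so the printed sizes are predictable; the feature size is illustrative):

from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import IncrementalClassifier

clf = IncrementalClassifier(in_features=512, initial_out_features=2)
benchmark = SplitMNIST(n_experiences=5, fixed_class_order=list(range(10)))

for experience in benchmark.train_stream:
    # The output layer grows whenever unseen classes appear.
    clf.adaptation(experience.dataset)
    print(clf.classifier.out_features)  # 2, 4, 6, 8, 10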

class avalanche.models.MultiHeadClassifier(in_features, initial_out_features=2)[source]

Bases: avalanche.models.dynamic_modules.MultiTaskModule

Multi-head classifier with separate heads for each task.

Typically used in task-incremental benchmarks where task labels are available and provided to the model.

Note

Each output head may have a different shape, and the number of classes can be determined automatically.

However, since PyTorch does not support jagged tensors, when you compute a minibatch's output you must ensure that each sample has the same output size; otherwise, the model will fail to concatenate the samples.

This can be ensured in two ways:

  • each minibatch contains a single task, which is the case in most common benchmarks in Avalanche. Some exceptions to this setting are multi-task replay or cumulative strategies;

  • each head has the same size, which can be enforced by setting a large enough initial_out_features.

Parameters
  • in_features – number of input features.

  • initial_out_features – initial number of classes (can be dynamically expanded).

adaptation(self, dataset: AvalancheDataset)[source]

If dataset contains new tasks, a new head is initialized.

Parameters

dataset – data from the current experience.

Returns

forward_single_task(self, x, task_label)[source]

Compute the output given the input x. This module uses the task label to activate the correct head.

Parameters
  • x

  • task_label

Returns
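
A sketch of direct use: the task label selects the head (the feature size is illustrative; the head for task 0 exists at initialization, others are added by adaptation):

import torch
from avalanche.models import MultiHeadClassifier

heads = MultiHeadClassifier(in_features=64)
features = torch.randn(8, 64)
task_labels = torch.zeros(8, dtype=torch.long)  # all samples from task 0

out = heads(features, task_labels)  # logits from task 0's head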

class avalanche.models.TrainEvalModel(feature_extractor, train_classifier, eval_classifier)[source]

Bases: avalanche.models.dynamic_modules.DynamicModule

TrainEvalModel. This module wraps together a common feature extractor and two classifiers: one used at training time and another used at test time. The classifier is switched when self.adaptation() is called.

Parameters
  • feature_extractor – a differentiable feature extractor

  • train_classifier – a differentiable classifier used during training

  • eval_classifier – a classifier used during testing. Doesn’t have to be differentiable.

forward(self, x)[source]
train_adaptation(self, dataset: AvalancheDataset = None)[source]

Module’s adaptation at training time.

Avalanche strategies automatically call this method before training on each experience.

eval_adaptation(self, dataset: AvalancheDataset = None)[source]

Module’s adaptation at evaluation time.

Avalanche strategies automatically call this method before evaluating on each experience.

Warning

This method receives the experience’s data at evaluation time because some dynamic models need it for adaptation. For example, an incremental classifier needs to be expanded even at evaluation time if new classes are available. However, you should never use this data to train the module’s parameters.
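
A sketch of the intended composition, pairing a differentiable training head with a non-parametric evaluation head, similar to how iCaRL combines a linear layer with an NCM classifier (shapes are illustrative):

import torch.nn as nn
from avalanche.models import TrainEvalModel, NCMClassifier

feature_extractor = nn.Sequential(
    nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())
train_head = nn.Linear(64, 10)  # differentiable, used while training
eval_head = NCMClassifier()     # non-parametric, used at evaluation time

model = TrainEvalModel(feature_extractor,
                       train_classifier=train_head,
                       eval_classifier=eval_head)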

avalanche.models.avalanche_forward(model, x, task_labels)[source]
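
This helper hides the difference between single-task and multi-task models: to the best of our understanding, it forwards the task labels only when the model is a MultiTaskModule and calls model(x) otherwise. A sketch:

import torch
from avalanche.models import SimpleMLP, MTSimpleMLP, avalanche_forward

x = torch.randn(4, 28 * 28)
task_labels = torch.zeros(4, dtype=torch.long)

single = SimpleMLP(num_classes=10)
multi = MTSimpleMLP()

# Same call site for both kinds of model.
out_single = avalanche_forward(single, x, task_labels)
out_multi = avalanche_forward(multi, x, task_labels)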
class avalanche.models.FeatureExtractorBackbone(model, output_layer_name)[source]

Bases: torch.nn.Module

This PyTorch module allows us to extract features from a backbone network given a layer name.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(self, x)[source]
get_name_to_module(self, model)[source]
get_activation(self)[source]
add_hooks(self, model)[source]
Registers forward hooks on the given model so that outputs from the layers specified in output_layer_name are stored in the output variable.

Parameters

  • model – the model on which to register the hooks.
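
A sketch with a torchvision ResNet-18, extracting the output of its avgpool layer (the layer name follows torchvision's module naming):

import torch
from torchvision.models import resnet18
from avalanche.models import FeatureExtractorBackbone

backbone = FeatureExtractorBackbone(resnet18(pretrained=False), 'avgpool')
x = torch.randn(2, 3, 224, 224)
features = backbone(x)  # output of 'avgpool', shape (2, 512, 1, 1)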

class avalanche.models.SLDAResNetModel(arch='resnet18', output_layer_name='layer4.1', imagenet_pretrained=True, device='cpu')[source]

Bases: torch.nn.Module

This is a model wrapper to reproduce experiments from the original paper of Deep Streaming Linear Discriminant Analysis by using a pretrained ResNet model.

Parameters

  • arch – backbone architecture (default is resnet-18, but others can be used by modifying the layer used for feature extraction in self.feature_extraction_wrapper).

  • imagenet_pretrained – True to initialize the backbone with ImageNet pre-trained weights, False otherwise.

  • output_layer_name – name of the feature extractor layer whose output is used.

  • device – cpu, gpu or other device.

static pool_feat(features)[source]
forward(self, x)[source]
Parameters

x – raw input data.

avalanche.models.initialize_icarl_net(m: Module)[source]
avalanche.models.make_icarl_net(num_classes: int, n=5, c=3) → IcarlNet[source]
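
The two functions are typically used together, as in the Avalanche iCaRL example: make_icarl_net builds the network and initialize_icarl_net is applied to initialize its weights (the arguments shown are the defaults; c is the number of input channels):

from avalanche.models import make_icarl_net, initialize_icarl_net

model = make_icarl_net(num_classes=10, n=5, c=3)  # c: input channels
model.apply(initialize_icarl_net)                 # custom weight init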
class avalanche.models.IcarlNet(num_classes: int, n=5, c=3)[source]

Bases: torch.nn.Module

The network architecture used by the iCaRL strategy.

forward(self, x)[source]
class avalanche.models.NCMClassifier(class_mean=None)[source]

Bases: torch.nn.Module

NCM Classifier. NCMClassifier performs nearest class mean classification, measuring the distance between the input tensor and the class means stored in self.class_means.

Parameters

class_mean – tensor of dimension (num_classes x feature_size) used to classify input patterns.

forward(self, x)[source]
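
A sketch of nearest class mean classification: the class means are computed elsewhere (e.g., from a feature extractor) and handed to the classifier; the tiny feature vectors below are illustrative:

import torch
from avalanche.models import NCMClassifier

# Two classes with 4-dimensional feature means
# (num_classes x feature_size, as expected by the classifier).
class_means = torch.tensor([[1., 0., 0., 0.],
                            [0., 1., 0., 0.]])
ncm = NCMClassifier(class_mean=class_means)

x = torch.tensor([[0.9, 0.1, 0.0, 0.0]])
scores = ncm(x)  # one score per class, based on the distance to each mean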
class avalanche.models.BaseModel[source]

Bases: abc.ABC

A base abstract class for models

abstract get_features(self, x)[source]

Get features from model given input

avalanche.models.as_multitask(model: nn.Module, classifier_name: str) → MultiTaskModule[source]

Wraps around a model to make it a multitask model

Parameters
  • model – model to be converted into MultiTaskModule

  • classifier_name – the name of the attribute containing the classification layer (nn.Linear). It can also be an instance of nn.Sequential containing multiple layers as long as the classification layer is the last layer.

Returns

the decorated model, now subclassing MultiTaskModule and accepting task_labels as a forward() method argument.
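
A sketch of converting a single-task model (SimpleMLP keeps its classification layer in the classifier attribute, so that name is passed here; task 0's head exists after conversion):

import torch
from avalanche.models import SimpleMLP, as_multitask

model = SimpleMLP(num_classes=10)
mt_model = as_multitask(model, 'classifier')

x = torch.randn(4, 28 * 28)
task_labels = torch.zeros(4, dtype=torch.long)
out = mt_model(x, task_labels)  # routed through task 0's head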