avalanche.models.FeCAMClassifier

class avalanche.models.FeCAMClassifier(tukey=True, shrinkage=True, shrink1: float = 1.0, shrink2: float = 1.0, tukey1: float = 0.5, covnorm=True)[source]

Similar to NCM, but uses the Mahalanobis distance instead of the L2 distance.

This approach was proposed for continual learning in “FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning”, Goswami et al. (NeurIPS 2023).

This requires storing a full covariance matrix for each class.
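
To make the distance concrete, here is a minimal sketch of Mahalanobis-distance classification (an illustration of the idea only, not Avalanche's implementation; the function name and shapes are assumptions):

    import torch

    def mahalanobis_predict(x, class_means, class_covs):
        # x: (batch_size, feature_size); class_means / class_covs map a
        # class id to its mean (D,) and covariance matrix (D, D).
        scores = []
        for c in sorted(class_means):
            diff = x - class_means[c]                   # (B, D)
            inv_cov = torch.linalg.pinv(class_covs[c])  # (D, D)
            d2 = (diff @ inv_cov * diff).sum(dim=1)     # squared Mahalanobis distance
            scores.append(-d2)                          # closer = higher score
        return torch.stack(scores, dim=1).argmax(dim=1)

With identity covariances this reduces to the NCM (L2) rule; FeCAM instead keeps one covariance matrix per class.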

__init__(tukey=True, shrinkage=True, shrink1: float = 1.0, shrink2: float = 1.0, tukey1: float = 0.5, covnorm=True)[source]
Parameters:
  • tukey – whether to use the Tukey transforms (helps bring the feature distribution closer to a multivariate Gaussian)

  • shrinkage – whether to shrink the covariance matrices

  • shrink1 – first covariance shrinkage coefficient

  • shrink2 – second covariance shrinkage coefficient

  • tukey1 – power used in the Tukey transforms

  • covnorm – whether to normalize the covariance matrix
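
Example (a minimal construction sketch; the values shown are simply the documented defaults, not tuned recommendations):

    from avalanche.models import FeCAMClassifier

    classifier = FeCAMClassifier(
        tukey=True,       # apply the Tukey transforms to features
        shrinkage=True,   # shrink the per-class covariance matrices
        shrink1=1.0,
        shrink2=1.0,
        tukey1=0.5,       # power used in the Tukey transforms
        covnorm=True,     # normalize the covariance matrices
    )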

Methods

__init__([tukey, shrinkage, shrink1, ...])

param tukey:

whether to use the Tukey transforms

adaptation(experience)

Adapt the module (freeze units, add units...) using the current data.

add_module(name, module)

Add a child module to the current module.

apply(fn)

Apply fn recursively to every submodule (as returned by .children()) as well as self.

apply_cov_transforms(class_cov)

Apply the configured covariance transforms (shrinkage and normalization) to class_cov; see the Examples below.

apply_invert_transforms(features)

Invert the feature transforms applied by apply_transforms.

apply_transforms(features)

Apply the configured feature transforms (e.g. the Tukey transform) to features.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Return an iterator over module buffers.

children()

Return an iterator over immediate children modules.

compile(*args, **kwargs)

Compile this Module's forward using torch.compile().

cpu()

Move all model parameters and buffers to the CPU.

cuda([device])

Move all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Set the module in evaluation mode.

extra_repr()

Set the extra representation of the module.

float()

Casts all floating point parameters and buffers to float datatype.

forward(x)

param x:

a tensor of shape (batch_size, feature_size); see the usage sketch in the Examples below

get_buffer(target)

Return the buffer given by target if it exists, otherwise throw an error.

get_extra_state()

Return any extra state to include in the module's state_dict.

get_parameter(target)

Return the parameter given by target if it exists, otherwise throw an error.

get_submodule(target)

Return the submodule given by target if it exists, otherwise throw an error.

half()

Casts all floating point parameters and buffers to half datatype.

init_missing_classes(classes, class_size, device)

Initialize statistics for classes that do not yet have stored means/covariances.

ipu([device])

Move all model parameters and buffers to the IPU.

load_state_dict(state_dict[, strict])

Copy parameters and buffers from state_dict into this module and its descendants.

modules()

Return an iterator over all modules in the network.

named_buffers([prefix, recurse, ...])

Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse, ...])

Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Return an iterator over module parameters.

pre_adapt(agent, experience)

Calls self.adaptation recursively across the hierarchy of PyTorch module children.

register_backward_hook(hook)

Register a backward hook on the module.

register_buffer(name, tensor[, persistent])

Add a buffer to the module.

register_forward_hook(hook, *[, prepend, ...])

Register a forward hook on the module.

register_forward_pre_hook(hook, *[, ...])

Register a forward pre-hook on the module.

register_full_backward_hook(hook[, prepend])

Register a backward hook on the module.

register_full_backward_pre_hook(hook[, prepend])

Register a backward pre-hook on the module.

register_load_state_dict_post_hook(hook)

Register a post hook to be run after module's load_state_dict is called.

register_module(name, module)

Alias for add_module().

register_parameter(name, param)

Add a parameter to the module.

register_state_dict_pre_hook(hook)

Register a pre-hook for the state_dict() method.

replace_class_cov_dict(class_cov_dict)

Replace the stored dictionary of per-class covariance matrices.

replace_class_means_dict(class_means_dict)

Replace the stored dictionary of per-class means.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

Set extra state contained in the loaded state_dict.

share_memory()

See torch.Tensor.share_memory_().

state_dict(*args[, destination, prefix, ...])

Return a dictionary containing references to the whole state of the module.

to(*args, **kwargs)

Move and/or cast the parameters and buffers.

to_empty(*, device[, recurse])

Move the parameters and buffers to the specified device without copying storage.

train([mode])

Set the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

update_class_cov_dict(class_cov_dict[, momentum])

Update the stored per-class covariance matrices (optionally with momentum).

update_class_means_dict(class_means_dict[, ...])

Update the stored per-class means; see the Examples below.

xpu([device])

Move all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Reset gradients of all model parameters.
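
Examples

The sketches below illustrate the FeCAM-specific methods above. They are illustrations under stated assumptions, not the Avalanche source: the transform formulas follow the FeCAM paper, and the {class_id: tensor} dictionary format expected by the update/replace methods is inferred from the method names.

A plausible form of the transforms behind apply_transforms and apply_cov_transforms (hypothetical standalone helpers, shown for intuition):

    import torch

    def tukey_transform(features, lam=0.5):
        # Tukey's power transform (lam plays the role of tukey1): pushes
        # the feature distribution closer to a multivariate Gaussian.
        return features.pow(lam)

    def shrink_cov(cov, shrink1=1.0, shrink2=1.0):
        # Covariance shrinkage as described in the FeCAM paper: add the
        # mean diagonal value on the diagonal (scaled by shrink1) and the
        # mean off-diagonal value elsewhere (scaled by shrink2).
        d = cov.shape[0]
        eye = torch.eye(d, device=cov.device)
        diag_mean = cov.diagonal().mean()
        off_mean = (cov * (1 - eye)).sum() / (d * (d - 1))
        return cov + shrink1 * diag_mean * eye + shrink2 * off_mean * (1 - eye)

    def normalize_cov(cov, eps=1e-8):
        # Correlation-style normalization: divide by the outer product of
        # per-dimension standard deviations.
        std = cov.diagonal().clamp_min(eps).sqrt()
        return cov / std.outer(std)

Populating the per-class statistics and running inference (the dictionary format and the reading of the forward output as per-class scores are assumptions):

    import torch
    from avalanche.models import FeCAMClassifier

    classifier = FeCAMClassifier()

    # Feature bank from a frozen backbone: (N, D) features, (N,) labels.
    features = torch.randn(1000, 512)
    labels = torch.randint(0, 10, (1000,))

    means, covs = {}, {}
    for c in labels.unique().tolist():
        fc = features[labels == c]   # (n_c, D)
        means[c] = fc.mean(dim=0)    # (D,)
        covs[c] = torch.cov(fc.T)    # (D, D)

    classifier.update_class_means_dict(means)
    classifier.update_class_cov_dict(covs)

    test_feats = torch.randn(32, 512)  # (batch_size, feature_size)
    scores = classifier(test_feats)    # per-class scores (assumption)
    preds = scores.argmax(dim=1)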

Attributes

T_destination

call_super_init

dump_patches

training