- avalanche.benchmarks.utils.make_classification_dataset(dataset: Union[IDatasetWithTargets, ITensorDataset, Subset, ConcatDataset, ClassificationDataset], *, transform: Optional[Union[XTransformDef, XComposedTransformDef]] = None, target_transform: Optional[YTransformDef] = None, transform_groups: Optional[Dict[str, Union[None, XTransformDef, XComposedTransformDef, Tuple[Optional[Union[XTransformDef, XComposedTransformDef]], Optional[YTransformDef]]]]] = None, initial_transform_group: Optional[str] = None, task_labels: Optional[Union[int, Sequence[int]]] = None, targets: Optional[Sequence[int]] = None, collate_fn: Optional[Callable[[List], Any]] = None)
Avalanche Classification Dataset.
Supervised continual learning benchmarks in Avalanche return instances of this dataset, but it can also be used in a completely standalone manner.
This dataset applies input/target transformations and supports slicing and advanced indexing. It also exposes useful fields such as targets, which contains the pattern labels, and targets_task_labels, which contains the pattern task labels. The task_set field can be used to obtain the subset of patterns labeled with a given task label.
This dataset can also be used to apply several advanced operations involving transformations. For instance, it allows the user to add and replace transformations, freeze them so that they can’t be changed, etc.
This dataset also allows the user to keep distinct transformation groups. Simply put, a transformation group is a pair of transform + target_transform (exactly as in torchvision datasets). This dataset natively supports keeping two transformation groups: the first, ‘train’, contains the transformations applied to training patterns, which usually involve some kind of data augmentation; the second, ‘eval’, contains the transformations applied to test patterns. Having both groups can be useful when, for instance, one needs to test on the training data (as this process usually involves removing data augmentation operations). Switching between transformations can be easily achieved by using the train() and eval() methods.
Moreover, arbitrary transformation groups can be added and used. For more info, see the constructor and the with_transforms(group_name) method.
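The transform-group mechanism described above can be sketched in plain Python. This is an illustrative stand-in, not Avalanche's actual implementation: the class name, fields, and the toy transforms are all hypothetical, but the group-switching behavior mirrors the train()/eval()/with_transforms(group_name) semantics the documentation describes.

```python
# Illustrative sketch (NOT Avalanche's implementation): each group maps
# to a (transform, target_transform) pair, and switching groups changes
# which pair is applied when items are fetched.
class GroupedDataset:
    def __init__(self, data, transform_groups, initial_group="train"):
        self.data = data  # list of (x, y) pairs
        self.transform_groups = transform_groups
        self.current_group = initial_group

    def with_transforms(self, group_name):
        # Return a view of the same data using another group's transforms.
        return GroupedDataset(self.data, self.transform_groups, group_name)

    def train(self):
        return self.with_transforms("train")

    def eval(self):
        return self.with_transforms("eval")

    def __getitem__(self, idx):
        x, y = self.data[idx]
        transform, target_transform = self.transform_groups[self.current_group]
        if transform is not None:
            x = transform(x)
        if target_transform is not None:
            y = target_transform(y)
        return x, y

# 'train' doubles inputs (a stand-in for augmentation); 'eval' is identity.
groups = {"train": (lambda x: x * 2, None), "eval": (None, None)}
ds = GroupedDataset([(1, 0), (2, 1)], groups)
print(ds[0])          # (2, 0): train transform applied
print(ds.eval()[0])   # (1, 0): eval group, no augmentation
```

Note that with_transforms returns a new view rather than mutating the dataset in place, so a training pipeline and an evaluation pipeline can hold views of the same data with different transforms.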
This dataset will try to inherit the task labels from the input dataset. If none are available and none are given via the task_labels parameter, each pattern will be assigned a default task label 0.
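The task-label resolution rule above can be summarized as: explicit task_labels win, then labels inherited from the wrapped dataset, then the default task label 0. A minimal sketch of that rule (illustrative only; the function name is hypothetical and this is not Avalanche's code):

```python
# Sketch of the task-label fallback: explicit argument, then inherited
# labels, then the default task label 0 for every instance.
def resolve_task_labels(dataset_len, task_labels=None, inherited=None):
    if task_labels is not None:
        if isinstance(task_labels, int):
            # A single int is broadcast to every instance.
            return [task_labels] * dataset_len
        return list(task_labels)
    if inherited is not None:
        # Inherit from the wrapped dataset when available.
        return list(inherited)
    return [0] * dataset_len  # default task label

print(resolve_task_labels(3))                       # [0, 0, 0]
print(resolve_task_labels(3, task_labels=2))        # [2, 2, 2]
print(resolve_task_labels(3, inherited=[1, 1, 0]))  # [1, 1, 0]
```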
dataset – The dataset to decorate. Beware that AvalancheDataset will not overwrite transformations already applied by this dataset.
transform – A function/transform that takes the X value of a pattern from the original dataset and returns a transformed version.
target_transform – A function/transform that takes in the target and transforms it.
transform_groups – A dictionary containing the transform groups. Transform groups are used to quickly switch between training and eval (test) transformations. This becomes useful when in need to test on the training dataset, as test transformations usually don’t contain random augmentations. AvalancheDataset natively supports the ‘train’ and ‘eval’ groups by calling the train() and eval() methods. When using custom groups, one can use the with_transforms(group_name) method instead. Defaults to None, which means that the current transforms will be used to handle both ‘train’ and ‘eval’ groups (just like in standard torchvision datasets).
initial_transform_group – The name of the initial transform group to be used. Defaults to None, which means that the current group of the input dataset will be used (if an AvalancheDataset). If the input dataset is not an AvalancheDataset, then ‘train’ will be used.
task_labels – The task label of each instance. Must be a sequence of ints, one for each instance in the dataset. Alternatively, it can be a single int value, in which case that value will be used as the task label for all instances. Defaults to None, which means that the dataset will try to obtain the task labels from the original dataset. If no task labels can be found, a default task label 0 will be applied to all instances.
targets – The label of each pattern. Defaults to None, which means that the targets will be retrieved from the dataset (if possible).
collate_fn – The function to use when slicing to merge single patterns. This function is also used in the data loading process. If None, the constructor will check whether a collate_fn field exists in the dataset; if no such field exists, the default collate function will be used.
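The collate_fn fallback order can be sketched as follows. This is an illustrative approximation, not Avalanche's code: the helper names and the toy default collate (which merges (x, y) samples into parallel lists) are hypothetical.

```python
# Sketch of the collate_fn fallback: an explicit collate_fn wins;
# otherwise a collate_fn attribute on the dataset is used; otherwise
# a default that merges samples field by field.
def default_collate(batch):
    # Merge a list of (x, y) samples into ([x0, x1, ...], [y0, y1, ...]).
    xs, ys = zip(*batch)
    return list(xs), list(ys)

def pick_collate(dataset, collate_fn=None):
    if collate_fn is not None:
        return collate_fn
    # Fall back to the dataset's own collate_fn field, if any.
    return getattr(dataset, "collate_fn", default_collate)

class ToyDataset:  # toy dataset without a collate_fn field
    pass

collate = pick_collate(ToyDataset())
print(collate([(1, 0), (2, 1)]))  # ([1, 2], [0, 1])
```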