avalanche.benchmarks.utils.avalanche_dataset.AvalancheSubset

class avalanche.benchmarks.utils.avalanche_dataset.AvalancheSubset(dataset: Union[avalanche.benchmarks.utils.dataset_definitions.IDatasetWithTargets, avalanche.benchmarks.utils.dataset_definitions.ITensorDataset, torch.utils.data.dataset.Subset, torch.utils.data.dataset.ConcatDataset], indices: Optional[Sequence[int]] = None, *, class_mapping: Optional[Sequence[int]] = None, transform: Optional[Callable[[Any], Any]] = None, target_transform: Optional[Callable[[int], int]] = None, transform_groups: Optional[Dict[str, Tuple[Optional[Callable[[Any], Any]], Optional[Callable[[Any], avalanche.benchmarks.utils.avalanche_dataset.TTargetType]]]]] = None, initial_transform_group: Optional[str] = None, task_labels: Optional[Union[int, Sequence[int]]] = None, targets: Optional[Sequence[avalanche.benchmarks.utils.avalanche_dataset.TTargetType]] = None, dataset_type: Optional[avalanche.benchmarks.utils.avalanche_dataset.AvalancheDatasetType] = None, collate_fn: Optional[Callable[[List], Any]] = None, targets_adapter: Optional[Callable[[Any], avalanche.benchmarks.utils.avalanche_dataset.TTargetType]] = None)[source]

A Dataset that behaves like a PyTorch torch.utils.data.Subset. This Dataset also supports transformations, slicing, advanced indexing, the targets field, class mapping and all the other goodies listed in AvalancheDataset.
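The indexing semantics can be illustrated with a minimal plain-Python sketch. The `MiniSubset` class below is hypothetical (not part of Avalanche or PyTorch); it only shows how a subset view maps its own positions onto positions of the wrapped dataset, including the `indices=None` case in which the whole dataset is exposed.

```python
# Minimal sketch of the subset indexing semantics (hypothetical MiniSubset
# class, not part of Avalanche): position i of the subset maps to position
# indices[i] of the wrapped dataset.
class MiniSubset:
    def __init__(self, dataset, indices=None):
        # indices=None means "expose the whole dataset", as in AvalancheSubset.
        self.dataset = dataset
        self.indices = list(range(len(dataset))) if indices is None else list(indices)

    def __len__(self):
        return len(self.indices)

    def __getitem__(self, idx):
        # Each subset position resolves to a position in the wrapped dataset.
        return self.dataset[self.indices[idx]]

data = [("x0", 0), ("x1", 1), ("x2", 2), ("x3", 0)]
sub = MiniSubset(data, indices=[3, 1])
print(len(sub))  # 2
print(sub[0])    # ('x3', 0)
```

The real class layers transformations, the targets field, and task labels on top of this mapping.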

__init__(dataset: Union[avalanche.benchmarks.utils.dataset_definitions.IDatasetWithTargets, avalanche.benchmarks.utils.dataset_definitions.ITensorDataset, torch.utils.data.dataset.Subset, torch.utils.data.dataset.ConcatDataset], indices: Optional[Sequence[int]] = None, *, class_mapping: Optional[Sequence[int]] = None, transform: Optional[Callable[[Any], Any]] = None, target_transform: Optional[Callable[[int], int]] = None, transform_groups: Optional[Dict[str, Tuple[Optional[Callable[[Any], Any]], Optional[Callable[[Any], avalanche.benchmarks.utils.avalanche_dataset.TTargetType]]]]] = None, initial_transform_group: Optional[str] = None, task_labels: Optional[Union[int, Sequence[int]]] = None, targets: Optional[Sequence[avalanche.benchmarks.utils.avalanche_dataset.TTargetType]] = None, dataset_type: Optional[avalanche.benchmarks.utils.avalanche_dataset.AvalancheDatasetType] = None, collate_fn: Optional[Callable[[List], Any]] = None, targets_adapter: Optional[Callable[[Any], avalanche.benchmarks.utils.avalanche_dataset.TTargetType]] = None)[source]

Creates an AvalancheSubset instance.

Parameters
  • dataset – The whole dataset.

  • indices – Indices of the patterns to select from the whole dataset. Can be None, in which case the whole dataset will be used.

  • class_mapping – A list that, for each possible target (Y) value, contains its corresponding remapped value. Can be None. Beware that setting this parameter will force the final dataset type to be CLASSIFICATION or UNDEFINED.
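The class_mapping lookup can be sketched in a few lines. The `remap_targets` helper below is hypothetical (not the Avalanche API); it only shows the semantics: for every target value `y`, the remapped label is `class_mapping[y]`.

```python
# Sketch of the class_mapping semantics (hypothetical helper, not part of
# Avalanche): class_mapping[y] is the remapped value of target y.
def remap_targets(targets, class_mapping):
    return [class_mapping[y] for y in targets]

# Map original classes {0, 1, 2} to {2, 0, 1}:
class_mapping = [2, 0, 1]
print(remap_targets([0, 1, 2, 1], class_mapping))  # [2, 0, 1, 0]
```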

  • transform – A function/transform that takes the X value of a pattern from the original dataset and returns a transformed version.

  • target_transform – A function/transform that takes in the target and transforms it.

  • transform_groups – A dictionary containing the transform groups. Transform groups are used to quickly switch between training and eval (test) transformations. This is useful, for instance, when testing on the training dataset, since eval transformations usually don’t contain random augmentations. AvalancheDataset natively supports the ‘train’ and ‘eval’ groups by calling the train() and eval() methods. When using custom groups, use the with_transforms(group_name) method instead. Defaults to None, which means that the current transforms will be used to handle both the ‘train’ and ‘eval’ groups (just like in standard torchvision datasets).
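The group-switching behaviour can be sketched with a small dict-based stand-in. The `GroupedTransforms` class below is hypothetical (not the actual Avalanche implementation); it shows the idea of mapping group names to (transform, target_transform) pairs and switching the active group via train()/eval()/with_transforms().

```python
# Sketch of the transform-groups mechanism (hypothetical GroupedTransforms
# class, not the actual Avalanche implementation): group names map to
# (transform, target_transform) pairs, and switching returns a new view.
class GroupedTransforms:
    def __init__(self, transform_groups, initial_group="train"):
        self.groups = transform_groups
        self.current = initial_group

    def with_transforms(self, group_name):
        # Like AvalancheDataset.with_transforms: a view using another group.
        return GroupedTransforms(self.groups, group_name)

    def train(self):
        return self.with_transforms("train")

    def eval(self):
        return self.with_transforms("eval")

    def apply(self, x, y):
        transform, target_transform = self.groups[self.current]
        if transform is not None:
            x = transform(x)
        if target_transform is not None:
            y = target_transform(y)
        return x, y

groups = {
    "train": (lambda x: x * 2, None),  # stand-in for random augmentations
    "eval": (None, None),              # no augmentation at evaluation time
}
g = GroupedTransforms(groups)
print(g.apply(3, 1))         # (6, 1)
print(g.eval().apply(3, 1))  # (3, 1)
```

Note that switching groups never mutates the original object: each call returns a new view, mirroring how the Avalanche methods return a new dataset.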

  • initial_transform_group – The name of the initial transform group to be used. Defaults to None, which means that the current group of the input dataset will be used (if an AvalancheDataset). If the input dataset is not an AvalancheDataset, then ‘train’ will be used.

  • task_labels – The task label of each instance. Must be a sequence of ints, one for each instance in the dataset, or a single int, in which case that value will be used as the task label for all instances. A sequence can describe either the original dataset or the subset (an automatic detection is made based on its length); when the original dataset and the subset contain the same number of instances, the sequence is interpreted as describing the subset. Defaults to None, which means that the task labels will be obtained from the original dataset. If no task labels can be found, the default task label 0 will be applied to all instances.
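The length-based detection described above can be sketched as follows. The `resolve_task_labels` helper is hypothetical (not the actual Avalanche implementation); it shows the three cases: a single int broadcast to the subset, a sequence matching the subset length (which also wins the ambiguous tie), and a sequence matching the original dataset length, selected through the subset indices.

```python
# Sketch of the automatic detection applied to task_labels (and, analogously,
# targets). Hypothetical helper, not the actual Avalanche implementation:
# the sequence may describe either the original dataset or the subset,
# disambiguated by its length; ties are resolved in favor of the subset.
def resolve_task_labels(task_labels, indices, original_len):
    if isinstance(task_labels, int):
        # A single int is broadcast to every instance of the subset.
        return [task_labels] * len(indices)
    if len(task_labels) == len(indices):
        # Matches the subset length (even when it also matches the original
        # dataset length): interpret as per-subset labels.
        return list(task_labels)
    if len(task_labels) == original_len:
        # Describes the original dataset: select the entries via indices.
        return [task_labels[i] for i in indices]
    raise ValueError("task_labels length matches neither subset nor dataset")

# Original dataset has 5 instances; the subset keeps indices [4, 0].
print(resolve_task_labels([0, 0, 1, 1, 2], indices=[4, 0], original_len=5))  # [2, 0]
print(resolve_task_labels(7, indices=[4, 0], original_len=5))                # [7, 7]
```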

  • targets – The label of each pattern. Defaults to None, which means that the targets will be retrieved from the dataset (if possible). A sequence can describe either the original dataset or the subset (an automatic detection is made based on its length); when the original dataset and the subset contain the same number of instances, the sequence is interpreted as describing the subset.

  • dataset_type – The type of the dataset. Defaults to None, which means that the type will be inferred from the input dataset. When dataset_type is not UNDEFINED, proper values for collate_fn and targets_adapter will be set automatically, and those parameters must not be passed explicitly. The only exception to this rule regards class_mapping: if class_mapping is set, the final dataset_type (as set by this parameter or detected from the subset) must be CLASSIFICATION or UNDEFINED.

  • collate_fn – The function used when slicing to merge single patterns. In the future, this function may also become the one used in the data loading process. If None and dataset_type is UNDEFINED, the constructor will check whether a collate_fn field exists in the input dataset; if no such field exists, the default collate function will be used.

  • targets_adapter – A function used to convert the values of the targets field. Defaults to None. Note: the adapter will not change the value of the second element returned by __getitem__. The adapter is used to adapt the values of the targets field only.

Methods

__init__(dataset[, indices, class_mapping, ...])

Creates an AvalancheSubset instance.

add_transforms([transform, target_transform])

Returns a new dataset with the given transformations added to the existing ones.

add_transforms_group(group_name, transform, ...)

Returns a new dataset with a new transformations group.

add_transforms_to_group(group_name[, ...])

Returns a new dataset with the given transformations added to the existing ones for a certain group.

eval()

Returns a new dataset with the transformations of the 'eval' group loaded.

freeze_group_transforms(group_name)

Returns a new dataset where the transformations for a specific group are frozen.

freeze_transforms()

Returns a new dataset where the current transformations are frozen.

get_transforms([transforms_group])

Returns the transformations given a group.

register_datapipe_as_function(function_name, ...)

register_function(function_name, function)

replace_transforms(transform, target_transform)

Returns a new dataset with the existing transformations replaced with the given ones.

train()

Returns a new dataset with the transformations of the 'train' group loaded.

with_transforms(group_name)

Returns a new dataset with the transformations of a different group loaded.

Attributes

functions