avalanche.benchmarks.utils.concat_classification_datasets

avalanche.benchmarks.utils.concat_classification_datasets(datasets: List[Union[IDatasetWithTargets, ITensorDataset, Subset, ConcatDataset, ClassificationDataset]], *, transform: Optional[Callable[[Any], Any]] = None, target_transform: Optional[Callable[[int], int]] = None, transform_groups: Optional[Dict[str, Tuple[Optional[Union[XTransformDef, XComposedTransformDef]], Optional[YTransformDef]]]] = None, initial_transform_group: Optional[str] = None, task_labels: Optional[Union[int, Sequence[int], Sequence[Sequence[int]]]] = None, targets: Optional[Union[Sequence[int], Sequence[Sequence[int]]]] = None, collate_fn: Optional[Callable[[List], Any]] = None)[source]

Creates an AvalancheConcatDataset instance.

For simple concatenation operations, use the method dataset.concat(other) or concat_datasets from avalanche.benchmarks.utils.utils. Use this function only if you need to redefine transformations or class/task labels.

The resulting dataset behaves like a PyTorch torch.utils.data.ConcatDataset, but it also supports transformations, slicing, advanced indexing, the targets field, and the other features of AvalancheDataset.

This dataset guarantees that operations involving transformations and transform groups are consistent across the concatenated datasets (when they are subclasses of AvalancheDataset).
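To illustrate the ConcatDataset-style behavior described above, here is a minimal pure-Python sketch of how a global index is mapped onto the concatenated datasets. This is an illustration of the index-mapping semantics only, not the actual Avalanche or PyTorch implementation:

```python
import bisect

class MiniConcat:
    """Minimal mimic of torch.utils.data.ConcatDataset index mapping
    (illustration only; not the Avalanche implementation)."""

    def __init__(self, datasets):
        self.datasets = list(datasets)
        # Cumulative sizes let us locate a global index with bisect.
        self.cumulative_sizes = []
        total = 0
        for d in self.datasets:
            total += len(d)
            self.cumulative_sizes.append(total)

    def __len__(self):
        return self.cumulative_sizes[-1]

    def __getitem__(self, idx):
        # Find which dataset the global index falls into,
        # then translate it to a local index.
        ds_idx = bisect.bisect_right(self.cumulative_sizes, idx)
        local = idx if ds_idx == 0 else idx - self.cumulative_sizes[ds_idx - 1]
        return self.datasets[ds_idx][local]

# Two toy "datasets" of (x, y) pairs
a = [(0, "a0"), (1, "a1")]
b = [(2, "b0")]
cat = MiniConcat([a, b])
```

Indexing into `cat` transparently crosses the boundary between the two underlying datasets, which is the behavior concat_classification_datasets builds upon.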

Parameters
  • datasets – A collection of datasets.

  • transform – A function/transform that takes the X value of a pattern from the original dataset and returns a transformed version.

  • target_transform – A function/transform that takes in the target and transforms it.

  • transform_groups – A dictionary containing the transform groups. Transform groups are used to quickly switch between training and eval (test) transformations. This becomes useful when one needs to evaluate on the training dataset, as test transformations usually don’t contain random augmentations. AvalancheDataset natively supports the ‘train’ and ‘eval’ groups by calling the train() and eval() methods. When using custom groups, one can use the with_transforms(group_name) method instead. Defaults to None, which means that the current transforms will be used to handle both the ‘train’ and ‘eval’ groups (just like in standard torchvision datasets).
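The group-switching behavior described above can be sketched with a small hypothetical mimic. The class below is not the AvalancheDataset implementation; it only shows the pattern of a dictionary mapping group names to (transform, target_transform) pairs, with train(), eval(), and with_transforms() selecting the current group:

```python
class GroupedTransforms:
    """Hypothetical mimic of transform groups (illustration only)."""

    def __init__(self, transform_groups, initial_group="train"):
        # e.g. {"train": (augment_fn, None), "eval": (None, None)}
        self.groups = dict(transform_groups)
        self.current = initial_group

    def with_transforms(self, group_name):
        if group_name not in self.groups:
            raise KeyError(group_name)
        self.current = group_name
        return self

    def train(self):
        return self.with_transforms("train")

    def eval(self):
        return self.with_transforms("eval")

    def apply(self, x, y):
        # Apply the currently selected group's transforms, if any.
        transform, target_transform = self.groups[self.current]
        if transform is not None:
            x = transform(x)
        if target_transform is not None:
            y = target_transform(y)
        return x, y

# 'train' doubles the input; 'eval' leaves it untouched.
g = GroupedTransforms({"train": (lambda x: x * 2, None),
                       "eval": (None, None)})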

  • initial_transform_group – The name of the initial transform group to be used. Defaults to None, which means that if all AvalancheDatasets in the input datasets list agree on a common group (the “current group” is the same for all datasets), then that group will be used as the initial one. If the list of input datasets does not contain an AvalancheDataset or if the AvalancheDatasets do not agree on a common group, then ‘train’ will be used.

  • targets – The label of each pattern. Can either be a sequence of labels or, alternatively, a sequence containing sequences of labels (one for each dataset to be concatenated). Defaults to None, which means that the targets will be retrieved from the datasets (if possible).

  • task_labels – The task labels for each pattern. Must be a sequence of ints, one for each pattern in the dataset. Alternatively, task labels can be expressed as a sequence containing sequences of ints (one for each dataset to be concatenated) or even as a single int, in which case that value will be used as the task label for all instances. Defaults to None, which means that the dataset will try to obtain the task labels from the original datasets. If no task labels can be found for a dataset, the default task label 0 will be applied to all patterns of that dataset.
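The three accepted forms of task_labels (a single int, one label per pattern, or one sequence per dataset) can be sketched as a normalization step. The helper below is hypothetical, written only to make the rules above concrete; the fallback branch also shows the default-0 behavior for the None case:

```python
def normalize_task_labels(task_labels, dataset_lengths):
    """Hypothetical helper illustrating the task_labels rules:
    a single int, one label per pattern, or one sequence per dataset."""
    total = sum(dataset_lengths)
    if task_labels is None:
        # Fall back to a default task label of 0 for every pattern
        # (the real function first tries the original datasets).
        return [0] * total
    if isinstance(task_labels, int):
        # A single int is broadcast to all patterns.
        return [task_labels] * total
    task_labels = list(task_labels)
    if task_labels and isinstance(task_labels[0], (list, tuple)):
        # One sequence of labels per concatenated dataset: flatten.
        flat = [t for seq in task_labels for t in seq]
    else:
        flat = task_labels
    if len(flat) != total:
        raise ValueError("one task label per pattern is required")
    return flat
```

For example, passing `1` with two datasets of lengths 2 and 3 yields `[1, 1, 1, 1, 1]`, while passing `[[0, 0], [1, 1, 1]]` yields `[0, 0, 1, 1, 1]`.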

  • collate_fn – The function to use when slicing to merge single patterns. In the future this function may also become the function used in the data loading process. If None, the constructor will check whether a collate_fn field exists in the first dataset; if no such field exists, the default collate function will be used. Beware that the chosen collate function will be applied to all the concatenated datasets, even if different datasets define different collate functions.
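The collate_fn resolution order described above (explicit argument, then a collate_fn field on the first dataset, then the default) can be sketched as follows. This is a hypothetical helper, and the zip-based fallback is a stand-in for the real default collate function:

```python
def resolve_collate_fn(datasets, collate_fn=None):
    """Sketch of the collate_fn lookup described above (hypothetical).
    A zip-based default stands in for the real default collate."""
    if collate_fn is not None:
        # An explicit argument always wins.
        return collate_fn
    # Otherwise, check whether the first dataset exposes a collate_fn field.
    first_collate = getattr(datasets[0], "collate_fn", None)
    if first_collate is not None:
        return first_collate
    # Default: transpose a batch of (x, y) tuples into (xs, ys).
    return lambda batch: tuple(zip(*batch))

# A plain list has no collate_fn field, so the default is used.
collate = resolve_collate_fn([[(0, "z")]])
```

Note how the resolved function is then applied uniformly to all concatenated datasets, matching the caveat above.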