avalanche.benchmarks.task_incremental_benchmark

avalanche.benchmarks.task_incremental_benchmark(bm: CLScenario, reset_task_labels=False) -> CLScenario

Creates a task-incremental benchmark from a dataset scenario.

Adds progressive task labels to each stream (experience i has task label i). Task labels are also added to each AvalancheDataset and will be returned by __getitem__. For example, if your datasets return <x, y> samples (input, class), the new datasets will return <x, y, t> triplets, where t is the task label.

Example of usage - SplitMNIST with task labels:

from avalanche.benchmarks import task_incremental_benchmark
from avalanche.benchmarks.classic import SplitMNIST
bm = SplitMNIST(2)  # create class-incremental splits
bm = task_incremental_benchmark(bm)  # adds task labels to the benchmark
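
After the conversion, the datasets in each experience return <x, y, t> triplets, so the task label can be read directly from a sample. The snippet below is a minimal sketch continuing the example above; it assumes the usual stream/experience access (train_stream, exp.dataset), which may vary slightly across Avalanche versions.

# the first experience's dataset now yields (input, class, task label)
first_exp = bm.train_stream[0]
x, y, t = first_exp.dataset[0]
print(t)  # 0 for the first experience, 1 for the second, and so on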

If reset_task_labels is False (the default), the datasets must not already have task labels set. If the datasets already have task labels, use:

with_task_labels(benchmark_from_datasets(**dataset_streams))
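
Below is a minimal sketch of that pattern for datasets that already carry task labels. The dataset names (train_exp0, train_exp1, test_exp0, test_exp1) are placeholders, the datasets are assumed to already be AvalancheDatasets with task labels set, and the import path for with_task_labels may differ across Avalanche versions.

from avalanche.benchmarks import benchmark_from_datasets, with_task_labels

# keys are stream names, values are lists of datasets (one per experience)
dataset_streams = {
    "train": [train_exp0, train_exp1],
    "test": [test_exp0, test_exp1],
}
bm = with_task_labels(benchmark_from_datasets(**dataset_streams))
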
Parameters:
  • bm – the input CLScenario (e.g. a class-incremental benchmark) whose streams will receive task labels.

  • reset_task_labels – whether existing task labels should be ignored. If False (the default), the function raises a ValueError when any dataset already has task labels. If True, existing task labels are reset.

Returns:

a CLScenario in the task-incremental setting.