- avalanche.benchmarks.generators.ni_benchmark(train_dataset: Union[Sequence[Union[IDatasetWithTargets, ITensorDataset, Subset, ConcatDataset, ClassificationDataset]], IDatasetWithTargets, ITensorDataset, Subset, ConcatDataset, ClassificationDataset], test_dataset: Union[Sequence[Union[IDatasetWithTargets, ITensorDataset, Subset, ConcatDataset, ClassificationDataset]], IDatasetWithTargets, ITensorDataset, Subset, ConcatDataset, ClassificationDataset], n_experiences: int, *, task_labels: bool = False, shuffle: bool = True, seed: Optional[int] = None, balance_experiences: bool = False, min_class_patterns_in_exp: int = 0, fixed_exp_assignment: Optional[Sequence[Sequence[int]]] = None, train_transform=None, eval_transform=None, reproducibility_data: Optional[Dict[str, Any]] = None) → NIScenario
This is the high-level benchmark instances generator for the “New Instances” (NI) case. Given a sequence of train and test datasets, it creates the continual stream of data as a series of experiences.
This is the reference helper function for creating instances of Domain-Incremental benchmarks.
The task_labels parameter determines whether each incremental experience has an increasing task label or whether, on the contrary, a default task label 0 has to be assigned to all experiences. This can be useful when differentiating between Single-Incremental-Task and Multi-Task scenarios.
There are other important parameters that can be specified in order to tweak the behaviour of the resulting benchmark. Please take a few minutes to read and understand them as they may save you a lot of work.
This generator features an integrated reproducibility mechanism that allows the user to store and later re-load a benchmark. For more info see the reproducibility_data parameter.
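The core idea of an NI split is that every experience contains new instances of the same classes, so the generator only has to distribute pattern indexes across experiences. The sketch below is an illustrative simplification, not Avalanche's actual implementation; the function name ni_split is hypothetical and mirrors only the shuffle/seed/n_experiences semantics described here.

```python
import random

def ni_split(num_patterns, n_experiences, seed=None, shuffle=True):
    """Illustrative sketch (NOT Avalanche's implementation) of how a
    New Instances benchmark could assign pattern indexes to experiences."""
    indices = list(range(num_patterns))
    if shuffle:
        # A fixed seed makes the split reproducible, mirroring the
        # `seed` parameter of ni_benchmark.
        random.Random(seed).shuffle(indices)
    # Distribute the indexes across experiences as evenly as possible.
    base, extra = divmod(num_patterns, n_experiences)
    assignment, start = [], 0
    for exp_id in range(n_experiences):
        size = base + (1 if exp_id < extra else 0)
        assignment.append(indices[start:start + size])
        start += size
    return assignment

exps = ni_split(10, 3, seed=1234)
# Every pattern lands in exactly one experience.
assert sorted(i for exp in exps for i in exp) == list(range(10))
```

Because the same seed always yields the same shuffle, calling the sketch twice with identical arguments produces identical splits, which is the property the integrated reproducibility mechanism relies on.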
train_dataset – A list of training datasets, or a single dataset.
test_dataset – A list of test datasets, or a single test dataset.
n_experiences – The number of experiences.
task_labels – If True, each experience will have an ascending task label. If False, the task label will be 0 for all the experiences.
shuffle – If True, patterns order will be shuffled.
seed – A valid int used to initialize the random number generator. Can be None.
balance_experiences – If True, patterns of each class will be equally spread across all experiences. If False, patterns will be assigned to experiences in a completely random way. Defaults to False.
min_class_patterns_in_exp – The minimum amount of patterns of every class that must be assigned to every experience. Compatible with the balance_experiences parameter. An exception will be raised if this constraint can’t be satisfied. Defaults to 0.
fixed_exp_assignment – If not None, the pattern assignment to use. It must be a list with an entry for each experience. Each entry is a list that contains the indexes of patterns belonging to that experience. Overrides the shuffle, balance_experiences and min_class_patterns_in_exp parameters.
train_transform – The transformation to apply to the training data, e.g. a random crop, a normalization or a concatenation of different transformations (see torchvision.transforms documentation for a comprehensive list of possible transformations). Defaults to None.
eval_transform – The transformation to apply to the test data, e.g. a random crop, a normalization or a concatenation of different transformations (see torchvision.transforms documentation for a comprehensive list of possible transformations). Defaults to None.
reproducibility_data – If not None, overrides all the other benchmark definition options, including fixed_exp_assignment. This is usually a dictionary containing data used to reproduce a specific experiment. One can use the get_reproducibility_data method to get (and even distribute) the experiment setup so that it can be loaded by passing it as this parameter. In this way one can be sure that the same specific experimental setup is being used (for reproducibility purposes). Beware that, in order to reproduce an experiment, the same train and test datasets must be used. Defaults to None.
Returns: A properly initialized NIScenario instance.
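As a concrete illustration of the fixed_exp_assignment format, the snippet below builds an assignment for a hypothetical 6-pattern dataset split into 2 experiences and runs a couple of sanity checks one may want to perform before passing it to ni_benchmark. The dataset size and index choices are purely illustrative assumptions.

```python
# Hypothetical assignment: 6 patterns, 2 experiences. Each inner list
# holds the dataset indexes of the patterns belonging to that experience.
fixed_exp_assignment = [
    [0, 2, 4],  # experience 0
    [1, 3, 5],  # experience 1
]

# Sanity checks worth running before passing this as the
# fixed_exp_assignment argument of ni_benchmark:
flat = [i for exp in fixed_exp_assignment for i in exp]
assert len(flat) == len(set(flat)), "a pattern appears in two experiences"
assert min(flat) >= 0, "negative dataset index"
```

Remember that when fixed_exp_assignment is given it takes precedence over the random-assignment parameters (shuffle, balance_experiences, min_class_patterns_in_exp), so the split above would be used verbatim.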