Benchmarks module
Popular benchmarks (like SplitMNIST, PermutedMNIST, SplitCIFAR, …) are contained in the classic sub-module. Dataset implementations are available in the datasets sub-module. New benchmarks can be created with the utilities found in the generators sub-module. Avalanche uses custom dataset and dataloader implementations, contained in the utils sub-module. More information can be found in the dedicated How-Tos.
avalanche.benchmarks
Continual Learning Scenarios
Scenarios
- Continual Learning benchmark.
- Ex-Model CL Scenario.
- This class defines a "New Classes" scenario.
- This class defines a "New Instances" scenario.
- Helper to obtain a benchmark with a validation stream.
Streams
- A CL stream is a named iterator of experiences.
- A CL stream built from a pre-initialized list of experiences.
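To make the "named iterator of experiences" idea concrete, here is a minimal pure-Python sketch of a sequence-based stream. All names below are illustrative stand-ins, not Avalanche's actual implementation:

```python
# Illustrative sketch (not the Avalanche implementation): a CL stream
# as a named, indexable iterator over a pre-initialized list of
# experiences, mirroring the stream classes described above.
from dataclasses import dataclass
from typing import List, Sequence


@dataclass
class Experience:
    """A minimal stand-in for an Avalanche experience."""
    current_experience: int  # position of this experience in the stream
    dataset: Sequence        # the data belonging to this experience


class SequenceCLStream:
    """A named stream built from a pre-initialized list of experiences."""

    def __init__(self, name: str, experiences: List[Experience]):
        self.name = name
        self.experiences = experiences

    def __iter__(self):
        return iter(self.experiences)

    def __len__(self):
        return len(self.experiences)

    def __getitem__(self, i):
        return self.experiences[i]


# Three toy experiences of ten samples each.
train_stream = SequenceCLStream(
    "train",
    [Experience(i, list(range(i * 10, i * 10 + 10))) for i in range(3)],
)
for exp in train_stream:
    print(train_stream.name, exp.current_experience, len(exp.dataset))
```

In Avalanche, benchmarks expose such streams (e.g. a train and a test stream) and training strategies iterate over them one experience at a time.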
Experiences
- Base Experience.
- Definition of a learning experience based on a …
- Defines a "New Classes" experience.
- Defines a "New Instances" experience.
- Online CL (OCL) Experience.
- Ex-Model CL Experience.
- Experience attributes are used to define data belonging to an experience which may only be available at train or eval time.
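The last point, attributes available only at train or eval time, can be sketched in pure Python. The class and error names below are hypothetical, chosen only to illustrate the masking behavior; they are not Avalanche's API:

```python
# Sketch of the "experience attribute" idea: some attributes (e.g. the
# classes present in this experience) are accessible only in training
# mode and masked at eval time. Illustrative names, not Avalanche code.
class MaskedAttributeError(RuntimeError):
    """Raised when a train-only attribute is read in eval mode."""


class SketchExperience:
    def __init__(self, classes_in_this_experience):
        self._classes = classes_in_this_experience
        self._mode = "train"

    def train(self):
        self._mode = "train"
        return self

    def eval(self):
        self._mode = "eval"
        return self

    @property
    def classes_in_this_experience(self):
        # In this sketch the attribute is only visible at training time.
        if self._mode != "train":
            raise MaskedAttributeError("attribute not available in eval mode")
        return self._classes


exp = SketchExperience([0, 1])
print(exp.classes_in_this_experience)  # fine: we are in train mode
```

Masking attributes this way prevents a strategy from accidentally reading information (such as future class labels) that the scenario does not allow at evaluation time.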
Classic Benchmarks
CORe50-based benchmarks
Benchmarks based on the CORe50 dataset.
- Creates a CL benchmark for CORe50.
CIFAR-based benchmarks
Benchmarks based on the CIFAR-10 and CIFAR-100 datasets.
- Creates a CL benchmark using the CIFAR10 dataset.
- Creates a CL benchmark using the CIFAR100 dataset.
- Creates a CL benchmark using both the CIFAR100 and CIFAR10 datasets.
CUB200-based benchmarks
Benchmarks based on the Caltech-UCSD Birds 200 dataset.
- Creates a CL benchmark using the CUB-200 dataset.
EndlessCLSim-based benchmarks
Benchmarks based on the EndlessCLSim derived datasets.
- Creates a CL scenario for the Endless-Continual-Learning Simulator's derived datasets, or for custom datasets created with the simulator's standalone application (https://zenodo.org/record/4899294).
FashionMNIST-based benchmarks
Benchmarks based on the Fashion MNIST dataset.
- Creates a CL benchmark using the Fashion MNIST dataset.
ImageNet-based benchmarks
Benchmarks based on the ImageNet ILSVRC-2012 dataset.
- Creates a CL benchmark using the ImageNet dataset.
- Creates a CL benchmark using the Tiny ImageNet dataset.
iNaturalist-based benchmarks
Benchmarks based on the iNaturalist-2018 dataset.
- Creates a CL benchmark using the iNaturalist2018 dataset.
MNIST-based benchmarks
Benchmarks based on the MNIST dataset.
- Creates a CL benchmark using the MNIST dataset.
- Creates a Permuted MNIST benchmark.
- Creates a Rotated MNIST benchmark.
Omniglot-based benchmarks
Benchmarks based on the Omniglot dataset.
- Creates a CL benchmark using the Omniglot dataset.
OpenLORIS-based benchmarks
Benchmarks based on the OpenLORIS dataset.
- Creates a CL benchmark for OpenLORIS.
Stream51-based benchmarks
Benchmarks based on the Stream-51 dataset.
- Creates a CL benchmark for Stream-51.
CLEAR-based benchmarks
Benchmarks based on the CLEAR dataset.
- Creates a Domain-Incremental benchmark for CLEAR 10 & 100, with 10 & 100 illustrative classes and an (n+1)-th background class.
Ex-Model benchmarks
Benchmarks for learning from pretrained models or multi-agent continual learning scenarios. Based on the Ex-Model paper. Pretrained models are downloaded automatically.
- ExML scenario on MNIST data.
- ExML scenario on CORe50.
- ExML scenario on CIFAR10.
Datasets
- CORe50 PyTorch Dataset.
- Basic CUB200 PathsDataset to be used as a standard PyTorch Dataset.
- Endless Continual Learning Simulator Dataset.
- INATURALIST PyTorch Dataset.
- The MiniImageNet dataset.
- Custom class used to adapt Omniglot (from Torchvision) and make it compatible with the Avalanche API.
- OpenLORIS PyTorch Dataset.
- Stream-51 PyTorch Dataset.
- Tiny ImageNet PyTorch Dataset.
- CLEAR Base Dataset for downloading / loading metadata.
- Parameters:
  - root: dataset root location.
  - url: version name of the dataset.
  - download: automatically download the dataset, if not present.
  - subset: one of 'training', 'validation', 'testing'.
  - mfcc_preprocessing: an optional torchaudio.transforms.MFCC instance to preprocess each audio sample. Warning: this may slow down execution, since preprocessing is applied on the fly each time a sample is retrieved from the dataset.
Benchmark Generators
- Creates a benchmark given a list of datasets for each stream.
- Splits datasets according to a class-incremental scenario.
- Benchmark generator for "New Instances" (NI) scenarios.
- Creates a task-incremental benchmark from a dataset scenario.
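The class-incremental split can be illustrated with a pure-Python sketch: partition a labeled dataset into experiences, each holding a disjoint subset of the classes. This mirrors what the class-incremental generator does conceptually; it is not the Avalanche implementation, and all names are illustrative:

```python
# Conceptual sketch of a class-incremental ("New Classes") split:
# each experience receives a disjoint group of classes.
from collections import defaultdict


def class_incremental_split(samples, n_experiences):
    """samples: list of (x, y) pairs; returns one sample list per experience."""
    by_class = defaultdict(list)
    for x, y in samples:
        by_class[y].append((x, y))
    classes = sorted(by_class)
    assert len(classes) % n_experiences == 0, "classes must divide evenly"
    per_exp = len(classes) // n_experiences
    experiences = []
    for i in range(n_experiences):
        exp_classes = classes[i * per_exp:(i + 1) * per_exp]
        exp_data = [s for c in exp_classes for s in by_class[c]]
        experiences.append(exp_data)
    return experiences


# Toy dataset: 4 classes, 4 samples per class, split into 2 experiences.
data = [(f"img{i}", i % 4) for i in range(16)]
exps = class_incremental_split(data, n_experiences=2)
```

The real generators additionally handle shuffling, per-experience class counts, task labels, and class-order options; the core idea of assigning disjoint class groups to experiences is the same.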
If you want to add attributes to experiences (such as classes_in_this_experience or task_labels) you can use the generic decorators:
- Add ClassesTimeline attributes.
- Add TaskAware attributes.
Online streams where experiences are made of small minibatches:
- Split a stream of large batches to create an online stream of small mini-batches.
- Creates a stream of sub-experiences from a list of overlapped …
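The first splitting strategy can be sketched in pure Python: each large experience is cut into fixed-size sub-experiences, and the resulting chunks form the online stream. This is a conceptual sketch with illustrative names, not Avalanche's implementation:

```python
# Sketch: turn a stream of large experiences into an online stream of
# small fixed-size sub-experiences (the last chunk may be smaller).
def split_online_sketch(experience_data, experience_size):
    """Split one large experience into fixed-size sub-experiences."""
    return [
        experience_data[i:i + experience_size]
        for i in range(0, len(experience_data), experience_size)
    ]


# Two large experiences of 10 and 5 samples, online experiences of size 4.
online_stream = [
    sub
    for exp in [list(range(10)), list(range(10, 15))]
    for sub in split_online_sketch(exp, experience_size=4)
]
```

In an online CL setting the model sees each small sub-experience once, in order, instead of receiving the whole large batch at training time.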
Train/Validation splits for streams:
- Helper to obtain a benchmark with a validation stream.
- Class-balanced dataset split.
- Splits an AvalancheDataset in two splits.
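A class-balanced split takes the same fraction of samples from every class, so the validation set preserves the class distribution. The function below is a pure-Python sketch of that idea, not the Avalanche splitter:

```python
# Sketch of a class-balanced train/validation split: every class
# contributes the same fraction of its samples to the validation set.
from collections import defaultdict


def class_balanced_split(samples, validation_fraction):
    """samples: list of (x, y) pairs; returns (train, valid) lists."""
    by_class = defaultdict(list)
    for s in samples:
        by_class[s[1]].append(s)
    train, valid = [], []
    for c in sorted(by_class):
        group = by_class[c]
        n_valid = int(len(group) * validation_fraction)
        valid.extend(group[:n_valid])
        train.extend(group[n_valid:])
    return train, valid


# Toy dataset: 2 classes, 10 samples each; keep 20% per class for validation.
data = [(i, i % 2) for i in range(20)]
train, valid = class_balanced_split(data, validation_fraction=0.2)
```

A real splitter would also shuffle within each class before slicing; it is omitted here to keep the result deterministic.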
Utils (Data Loading and AvalancheDataset)
- Task-balanced data loader for Avalanche's datasets.
- Data loader that balances data from multiple datasets.
- Custom data loader for rehearsal/replay strategies.
- Data loader that balances data from multiple datasets, emitting an infinite stream.
- Avalanche Dataset.
- Avalanche Dataset.
- A lazy mapping for <task-label -> task dataset>.
- Data attributes manage sample-wise information such as task or class labels.
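To illustrate what "task-balanced" loading means, here is a pure-Python sketch that draws an equal number of samples per task in every batch, cycling (oversampling) the shorter task datasets. The function and variable names are illustrative; this is not the Avalanche dataloader:

```python
# Sketch of task-balanced batching: each batch contains the same number
# of samples from every task; shorter task datasets are cycled.
from itertools import cycle, islice


def task_balanced_batches(task_datasets, batch_size):
    """task_datasets: {task_label: list of samples}; yields batches."""
    tasks = sorted(task_datasets)
    per_task = batch_size // len(tasks)
    iters = {t: cycle(task_datasets[t]) for t in tasks}
    # Stop once the largest task dataset has been consumed.
    n_batches = max(len(d) for d in task_datasets.values()) // per_task
    for _ in range(n_batches):
        batch = []
        for t in tasks:
            batch.extend(islice(iters[t], per_task))
        yield batch


# Task 0 has 8 samples, task 1 only 4: task 1 gets oversampled.
datasets = {0: list(range(8)), 1: list(range(100, 104))}
batches = list(task_balanced_batches(datasets, batch_size=4))
```

Replay dataloaders follow the same pattern, with one "task" being the rehearsal buffer and the other the current experience.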