avalanche.evaluation.metrics.ClassAccuracy

class avalanche.evaluation.metrics.ClassAccuracy(classes: Optional[Union[Dict[int, Iterable[int]], Iterable[int]]] = None)[source]

The Class Accuracy metric. This is a standalone metric used to compute more specific ones.

Instances of this metric keep the running average accuracy over multiple <prediction, target> pairs of Tensors, provided incrementally. The “prediction” and “target” tensors may contain plain labels or one-hot/logit vectors.

Each time result is called, this metric emits the average accuracy of each tracked class, computed over all predictions made since the last reset. The set of classes to track can be restricted (please refer to the constructor parameters).

The reset method will bring the metric back to its initial state. If a set of classes was passed to the constructor, the metric in its initial state returns a {task_id -> {class_id -> accuracy}} dictionary in which all accuracies are set to 0; otherwise, it returns an empty dictionary.
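A minimal standalone-usage sketch follows. The tensors, the single integer task label, and the printed output are illustrative assumptions based on the signatures above, not examples taken from the Avalanche documentation:

    import torch

    from avalanche.evaluation.metrics import ClassAccuracy

    metric = ClassAccuracy()

    # Plain labels; one-hot/logit tensors are also accepted.
    predicted_y = torch.tensor([0, 1, 1, 2])
    true_y = torch.tensor([0, 1, 2, 2])

    metric.update(predicted_y, true_y, task_labels=0)

    # Returns {task_id -> {class_id -> accuracy}}; here, e.g.,
    # {0: {0: 1.0, 1: 1.0, 2: 0.5}} (class 2 was predicted
    # correctly for one of its two samples).
    print(metric.result())

    metric.reset()  # back to the initial (empty) state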

__init__(classes: Optional[Union[Dict[int, Iterable[int]], Iterable[int]]] = None)[source]

Creates an instance of the standalone ClassAccuracy metric.

When classes is None (the default), this metric in its initial state will return an empty dictionary. The metric can be updated by using the update method, while the running accuracies can be retrieved using the result method.

The classes parameter can be used to restrict the set of tracked classes and to immediately create plot entries for yet-to-be-seen classes.

Parameters

classes – The classes to keep track of. If None (default), all encountered classes are tracked. Otherwise, it can be a dict of classes to be tracked (as “task-id” -> “list of class ids”) or, if running a task-free benchmark (with only task 0), a plain list of class ids. When this parameter is passed, the plot entry of each listed class is created immediately (with a default value of 0.0) and plots will be aligned across all classes. In addition, this restricts the classes for which the accuracy is logged.
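Both accepted formats of the classes argument, as a short sketch (the task and class ids are made up for illustration):

    from avalanche.evaluation.metrics import ClassAccuracy

    # Dict form: task id -> iterable of class ids to track for that task.
    per_task_metric = ClassAccuracy(classes={0: [0, 1, 2], 1: [3, 4]})

    # List form, for a task-free benchmark (only task 0).
    task_free_metric = ClassAccuracy(classes=[0, 1, 2])

    # Listed classes are reported immediately, with a default accuracy of 0.0:
    print(task_free_metric.result())  # {0: {0: 0.0, 1: 0.0, 2: 0.0}}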

Methods

__init__([classes])

Creates an instance of the standalone ClassAccuracy metric.

reset()

Resets the metric.

result()

Retrieves the running accuracy for each class.

update(predicted_y, true_y, task_labels)

Updates the running accuracy given the true and predicted labels for each class.

Attributes

classes

The list of tracked classes.

dynamic_classes

If True, newly encountered classes will be tracked.
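A sketch of how these attributes relate to the constructor, under the assumption (consistent with the descriptions above) that omitting classes enables dynamic tracking while passing a fixed set disables it:

    import torch

    from avalanche.evaluation.metrics import ClassAccuracy

    dynamic_metric = ClassAccuracy()  # dynamic_classes should be True
    dynamic_metric.update(torch.tensor([7]), torch.tensor([7]), 0)
    print(dynamic_metric.result())  # class 7 is tracked once encountered

    fixed_metric = ClassAccuracy(classes=[0, 1])  # dynamic_classes should be False
    fixed_metric.update(torch.tensor([7]), torch.tensor([7]), 0)
    print(fixed_metric.result())  # class 7 is not logged; only classes 0 and 1 appear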