avalanche.evaluation.metrics.MeanScoresEvalPluginMetric

class avalanche.evaluation.metrics.MeanScoresEvalPluginMetric(image_creator: typing.Optional[typing.Callable[[typing.Dict[typing.Literal['new', 'old'], typing.Dict[int, int]]], matplotlib.figure.Figure]] = <function default_mean_scores_image_creator>)
Plugin to show the scores of the true class during evaluation, averaged separately over new and old classes.
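As a rough illustration of the quantity this plugin tracks, the sketch below (a hypothetical standalone function, not Avalanche source code) averages the score the model assigned to each example's true class, bucketed into "new" classes (introduced in the current experience) and "old" classes (seen in earlier experiences). The function name, argument layout, and the `new_classes` parameter are assumptions for illustration only.

```python
from typing import Dict, List, Literal, Set, Tuple


def mean_true_class_scores(
    samples: List[Tuple[int, float]],  # (true class label, score given to it)
    new_classes: Set[int],             # labels introduced in this experience
) -> Dict[Literal["new", "old"], float]:
    """Average the true-class scores separately for new and old classes."""
    buckets: Dict[str, List[float]] = {"new": [], "old": []}
    for label, score in samples:
        buckets["new" if label in new_classes else "old"].append(score)
    return {
        key: sum(vals) / len(vals) if vals else 0.0
        for key, vals in buckets.items()
    }


# Class 2 is new in this experience; classes 0 and 1 are old.
result = mean_true_class_scores(
    [(0, 0.75), (1, 0.25), (2, 0.25)],
    new_classes={2},
)
```

A falling "old" average relative to the "new" one is the kind of signal this metric is meant to surface during continual-learning evaluation.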

__init__(image_creator: typing.Optional[typing.Callable[[typing.Dict[typing.Literal['new', 'old'], typing.Dict[int, int]]], matplotlib.figure.Figure]] = <function default_mean_scores_image_creator>)

Creates an instance of a plugin metric.

Child classes can safely invoke this (super) constructor as the first operation.

Methods

__init__([image_creator])

Creates an instance of a plugin metric.

after_backward(strategy)

after_eval(strategy)

after_eval_dataset_adaptation(strategy)

after_eval_exp(strategy)

after_eval_forward(strategy)

after_eval_iteration(strategy)

after_forward(strategy)

after_train_dataset_adaptation(strategy)

after_training(strategy)

after_training_epoch(strategy)

after_training_exp(strategy)

after_training_iteration(strategy)

after_update(strategy)

before_backward(strategy)

before_eval(strategy)

before_eval_dataset_adaptation(strategy)

before_eval_exp(strategy)

before_eval_forward(strategy)

before_eval_iteration(strategy)

before_forward(strategy)

before_train_dataset_adaptation(strategy)

before_training(strategy)

before_training_epoch(strategy)

before_training_exp(strategy)

before_training_iteration(strategy)

before_update(strategy)

reset()

Resets the metric internal state.

result()

Obtains the value of the metric.

update(strategy)

update_new_classes(strategy)
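Per the signature shown above, a custom `image_creator` receives a mapping from `"new"`/`"old"` to a per-step dictionary of mean scores and must return a `matplotlib.figure.Figure`. Below is a hedged sketch of such a callable; the plot layout, labels, and the function name `my_mean_scores_image_creator` are illustrative assumptions, not the behavior of the built-in `default_mean_scores_image_creator`.

```python
from typing import Dict, Literal

import matplotlib

matplotlib.use("Agg")  # headless backend; an assumption for non-GUI use
import matplotlib.pyplot as plt
from matplotlib.figure import Figure


def my_mean_scores_image_creator(
    scores: Dict[Literal["new", "old"], Dict[int, float]]
) -> Figure:
    """Plot mean true-class scores per step, one line per class group."""
    fig, ax = plt.subplots()
    for group, per_step in scores.items():
        steps = sorted(per_step)
        ax.plot(steps, [per_step[s] for s in steps], label=group)
    ax.set_xlabel("step")
    ax.set_ylabel("mean true-class score")
    ax.legend()
    return fig


fig = my_mean_scores_image_creator(
    {"new": {0: 0.2, 1: 0.4}, "old": {0: 0.8, 1: 0.7}}
)
```

The callable would then be passed at construction time, e.g. `MeanScoresEvalPluginMetric(image_creator=my_mean_scores_image_creator)`.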