avalanche.evaluation.metrics.mean_scores_metrics

avalanche.evaluation.metrics.mean_scores_metrics(*, on_train: bool = True, on_eval: bool = True, image_creator: ~typing.Callable[[~typing.Dict[~typing.Literal['new', 'old'], ~typing.Dict[int, float]]], ~matplotlib.figure.Figure] | None = <function default_mean_scores_image_creator>) List[PluginMetric][source]
Helper to create plugins that show the scores of the true class, averaged separately over new and old classes. The plugins are available during training (for the last epoch of each experience) and during evaluation.

Parameters:
  • on_train – If True, the training plugin is created.

  • on_eval – If True, the evaluation plugin is created.

  • image_creator – The function used to create an image of the history of the mean scores, grouped by new and old classes.

Returns:

The list of requested plugins.
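
As a sketch of how a custom image_creator might look: per the signature above, it receives a dict mapping the keys 'new' and 'old' to per-step mean-score histories ({step: score}) and must return a matplotlib Figure. The function name below is a hypothetical example, not part of the Avalanche API.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe for headless runs
import matplotlib.pyplot as plt


def my_mean_scores_image_creator(scores):
    # scores: {'new': {step: mean score}, 'old': {step: mean score}}
    fig, ax = plt.subplots()
    for label, history in scores.items():
        steps = sorted(history)
        ax.plot(steps, [history[s] for s in steps], label=f"{label} classes")
    ax.set_xlabel("step")
    ax.set_ylabel("mean score of the true class")
    ax.legend()
    return fig
```

It could then be passed as `mean_scores_metrics(image_creator=my_mean_scores_image_creator)` when building the metric list for an evaluation plugin (assuming a standard Avalanche training setup).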