avalanche.evaluation.metrics.mean_scores_metrics

avalanche.evaluation.metrics.mean_scores_metrics(*, on_train: bool = True, on_eval: bool = True, image_creator: typing.Optional[typing.Callable[[typing.Dict[typing.Literal['new', 'old'], typing.Dict[int, int]]], matplotlib.figure.Figure]] = <function default_mean_scores_image_creator>) → List[avalanche.evaluation.metric_definitions.PluginMetric]
Helper to create plugins that show the scores of the true class, averaged over new and old classes. The plugins are available during training (for the last epoch of each experience) and during evaluation.

Parameters
  • on_train – If True, the train plugin is created.

  • on_eval – If True, the eval plugin is created.

  • image_creator – The function used to create an image from the history of the mean scores, grouped by old and new classes (see the sketch at the end of this section).

Returns

The list of plugins specified by the on_train and on_eval flags.
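
Example

A minimal usage sketch, assuming the standard Avalanche pattern of passing metric helpers to an EvaluationPlugin; the logger choice and the strategy that consumes the evaluator are placeholders:

    from avalanche.evaluation.metrics import mean_scores_metrics
    from avalanche.logging import InteractiveLogger
    from avalanche.training.plugins import EvaluationPlugin

    # Create the train and eval mean-score plugins and attach them to an evaluator.
    eval_plugin = EvaluationPlugin(
        mean_scores_metrics(on_train=True, on_eval=True),
        loggers=[InteractiveLogger()],
    )

    # The evaluator is then passed to a strategy,
    # e.g. Naive(..., evaluator=eval_plugin).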
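
For a custom image_creator, the callable receives the mean-score history keyed by 'new'/'old' (per the type hint in the signature above) and must return a matplotlib Figure. The sketch below is hypothetical: the function name is illustrative, and it assumes the inner dict maps a training step to the mean score for that group of classes:

    from typing import Dict, Literal

    import matplotlib.pyplot as plt
    from matplotlib.figure import Figure

    def my_mean_scores_image_creator(
        group_to_history: Dict[Literal["new", "old"], Dict[int, int]]
    ) -> Figure:
        # One curve per group ("new" vs. "old" classes); x-axis is the training step.
        fig, ax = plt.subplots()
        for group, history in group_to_history.items():
            steps = sorted(history)
            ax.plot(steps, [history[s] for s in steps], label=group)
        ax.set_xlabel("step")
        ax.set_ylabel("mean score of the true class")
        ax.legend()
        fig.tight_layout()
        return fig

    # Pass it to the helper:
    # plugins = mean_scores_metrics(image_creator=my_mean_scores_image_creator)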