avalanche.training.plugins.EvaluationPlugin
- class avalanche.training.plugins.EvaluationPlugin(*metrics: PluginMetric | Sequence[PluginMetric], loggers: BaseLogger | Sequence[BaseLogger] | Callable[[], Sequence[BaseLogger]] | None = None, collect_all=True, strict_checks=False)[source]
Manager for logging and metrics.
An evaluation plugin that obtains relevant data from the training and eval loops of the strategy through callbacks. The plugin keeps a dictionary with the last recorded value for each metric; this dictionary is returned by the train and eval methods of the strategy. It is also possible to keep a dictionary with all recorded metric values by specifying collect_all=True; that dictionary can be retrieved via the get_all_metrics method.
This plugin also logs metrics using the provided loggers.
- __init__(*metrics: PluginMetric | Sequence[PluginMetric], loggers: BaseLogger | Sequence[BaseLogger] | Callable[[], Sequence[BaseLogger]] | None = None, collect_all=True, strict_checks=False)[source]
Creates an instance of the evaluation plugin.
- Parameters:
metrics – The metrics to compute.
loggers – The loggers to be used to log the metric values.
collect_all – if True, collect all metric curve values in a separate dictionary, accessible via the get_all_metrics method.
strict_checks – if True, check that the full evaluation stream is used when calling eval; an error is raised otherwise.
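The snippet below is a minimal usage sketch: it builds an EvaluationPlugin from metric helpers and a logger and attaches it to a strategy through its evaluator argument. The SplitMNIST benchmark, SimpleMLP model, Naive strategy, accuracy_metrics, loss_metrics and InteractiveLogger used here are standard Avalanche components, but their module paths and default signatures can differ between Avalanche versions, so treat the imports as indicative rather than exact.

    # A minimal sketch, not a canonical recipe: module paths and signatures
    # can vary between Avalanche versions.
    import torch
    from avalanche.benchmarks.classic import SplitMNIST
    from avalanche.evaluation.metrics import accuracy_metrics, loss_metrics
    from avalanche.logging import InteractiveLogger
    from avalanche.models import SimpleMLP
    from avalanche.training import Naive
    from avalanche.training.plugins import EvaluationPlugin

    model = SimpleMLP(num_classes=10)
    benchmark = SplitMNIST(n_experiences=5)

    # The metrics to compute and the loggers that will receive their values.
    eval_plugin = EvaluationPlugin(
        accuracy_metrics(minibatch=True, epoch=True, experience=True, stream=True),
        loss_metrics(epoch=True, experience=True),
        loggers=[InteractiveLogger()],
        collect_all=True,     # also keep the full metric curves
        strict_checks=False,  # do not require the full eval stream
    )

    # The plugin is attached to a strategy through the `evaluator` argument.
    strategy = Naive(
        model,
        torch.optim.SGD(model.parameters(), lr=0.01),
        torch.nn.CrossEntropyLoss(),
        train_mb_size=32,
        train_epochs=1,
        eval_mb_size=32,
        evaluator=eval_plugin,
    )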
Methods
__init__(*metrics[, loggers, collect_all, ...])
    Creates an instance of the evaluation plugin.
before_eval(strategy, **kwargs)
get_all_metrics()
    Return the dictionary of all collected metrics.
get_last_metrics()
    Return a shallow copy of the dictionary with metric names as keys and the last metric values as values.
publish_metric_value(mval)
    Publish a MetricValue to be processed by the loggers.
reset_last_metrics()
    Set the dictionary storing the last value for each metric to an empty dict.
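Continuing the sketch above, the snippet below shows how the recorded values can be read back through the methods listed here; it assumes the strategy, benchmark and eval_plugin objects defined previously, and the exact structure of the returned dictionaries may differ between Avalanche versions.

    # A sketch of reading back the collected values; it assumes the `strategy`,
    # `benchmark` and `eval_plugin` objects from the sketch above.
    for experience in benchmark.train_stream:
        # train() and eval() return the last recorded value for each metric.
        last_train_metrics = strategy.train(experience)
        last_eval_metrics = strategy.eval(benchmark.test_stream)

    # The same "last value" dictionary is available from the plugin itself.
    print(eval_plugin.get_last_metrics())

    # With collect_all=True (the default), the full metric curves collected so
    # far are available as well.
    all_metrics = eval_plugin.get_all_metrics()

    # Clear the stored last values, e.g. before starting a new run.
    eval_plugin.reset_last_metrics()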
Attributes
active