Evaluation module
Metrics subclass the PluginMetric class, which provides all the callbacks needed to include custom metric logic at specific points of the continual learning workflow.
evaluation.metrics
Metrics helper functions
High-level functions used to obtain ready-made plugin metric objects (to be passed to the EvaluationPlugin); a usage sketch follows the list below.
Helper methods that can be used to obtain the desired set of plugin metrics, one helper per metric family (accuracy, loss, timing, resource usage, forgetting and transfer, confusion matrix, and so on).
Helper method that can be used to obtain the desired set of standalone metrics.
Create the plugins to log some image samples in grids.
Create plugins to monitor the labels repartition.
Helper to create plugins to show the scores of the true class, averaged by new and old classes.
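As a concrete illustration, the sketch below builds a few plugin metrics with these helper functions and passes them to the EvaluationPlugin together with a logger. The helper names (accuracy_metrics, loss_metrics, timing_metrics), the keyword arguments, and the InteractiveLogger import path are assumptions based on recent Avalanche releases; verify them against the installed version.

```python
# Sketch: building plugin metrics with helper functions and the EvaluationPlugin.
# Helper names and keyword arguments are assumed from recent Avalanche releases.
from avalanche.evaluation.metrics import (
    accuracy_metrics,
    loss_metrics,
    timing_metrics,
)
from avalanche.logging import InteractiveLogger
from avalanche.training.plugins import EvaluationPlugin

eval_plugin = EvaluationPlugin(
    # Each helper returns a list of plugin metrics, one per requested level
    # (minibatch, epoch, experience, stream).
    accuracy_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    loss_metrics(epoch=True, experience=True, stream=True),
    timing_metrics(epoch=True),
    loggers=[InteractiveLogger()],
)

# The plugin is then typically passed to a training strategy through its
# `evaluator` argument, e.g. Naive(model, optimizer, criterion, evaluator=eval_plugin).
```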
Stream Metrics
At the end of the entire stream of experiences, this plugin metric reports the average accuracy over all patterns seen in all experiences.
At the end of the entire stream of experiences, this plugin metric reports the average accuracy over all patterns seen in all experiences (separately for each class).
Plugin metric for the Average Mean Class Accuracy (AMCA).
At the end of each experience, this plugin metric reports the average accuracy for only the experiences that the model has been trained on so far.
At the end of the entire stream of experiences, this metric reports the average loss over all patterns seen in all experiences.
The StreamBWT metric, emitting the average BWT across all experiences encountered during training.
The StreamForgetting metric, describing the average evaluation accuracy loss detected over all experiences observed during training.
The Forward Transfer averaged over all the evaluation experiences.
The Stream Confusion Matrix metric.
Confusion Matrix metric compatible with the Weights and Biases logger.
The average stream CPU usage metric.
The average stream Disk usage metric.
The stream time metric.
The Stream Max RAM metric.
The Stream Max GPU metric.
At the end of the entire stream of experiences, this plugin metric reports the average top-k accuracy over all patterns seen in all experiences.
Plugin to show the scores of the true class during evaluation, averaged by new and old classes.
Experience Metrics
At the end of each experience, this plugin metric reports the average accuracy over all patterns seen in that experience.
At the end of each experience, this plugin metric reports the average accuracy over all patterns seen in that experience (separately for each class).
At the end of each experience, this metric reports the average loss over all patterns seen in that experience.
The Experience Backward Transfer metric.
The ExperienceForgetting metric, describing the accuracy loss detected for a certain experience.
The Forward Transfer computed on each experience separately.
The average experience CPU usage metric.
The average experience Disk usage metric.
The experience time metric.
At the end of each experience, this metric reports the MAC computed on a single pattern.
The Experience Max RAM metric.
The Experience Max GPU metric.
At the end of each experience, this plugin metric reports the average top-k accuracy over all patterns seen in that experience.
The WeightCheckpoint Metric.
Metric used to sample random images.
Epoch Metrics
The average accuracy over a single training epoch.
The average class accuracy over a single training epoch.
The average loss over a single training epoch.
The Epoch CPU usage metric.
The Epoch Disk usage metric.
The epoch elapsed time metric.
The MAC at the end of each epoch, computed on a single pattern.
The Epoch Max RAM metric.
The Epoch Max GPU metric.
Plugin to show the scores of the true class during the last training epochs of each experience, averaged by new and old classes.
The average top-k accuracy over a single training epoch.
RunningEpoch Metrics
The average accuracy across all minibatches up to the current epoch iteration.
The average class accuracy across all minibatches up to the current epoch iteration.
The average top-k accuracy across all minibatches up to the current epoch iteration.
The average loss across all minibatches up to the current epoch iteration.
The running epoch CPU usage metric.
The running epoch time metric.
Minibatch Metrics
The minibatch plugin accuracy metric.
The minibatch plugin class accuracy metric.
The minibatch loss metric.
The minibatch CPU usage metric.
The minibatch Disk usage metric.
The minibatch time metric.
The minibatch MAC metric.
The Minibatch Max RAM metric.
The Minibatch Max GPU metric.
The minibatch plugin top-k accuracy metric.
Other Plugin Metrics
The WeightCheckpoint Metric.
Standalone Metrics
The Accuracy metric.
The Average Mean Class Accuracy (AMCA) metric.
The standalone Backward Transfer metric.
The standalone CPU usage metric.
The Class Accuracy metric.
The standalone confusion matrix metric.
The standalone disk usage metric.
The standalone Elapsed Time metric.
The standalone Forgetting metric.
The standalone Forward Transfer metric.
Metric used to monitor the labels repartition.
The standalone Loss metric.
Standalone Multiply-and-accumulate metric.
The standalone GPU usage metric.
The standalone RAM usage metric.
The standalone mean metric.
Average the scores of the true class by old and new classes.
Average the scores of the true class by label.
An extension of the Average Mean Class Accuracy (AMCA) metric (AverageMeanClassAccuracy) able to separate the computation of the AMCA based on the current stream.
The standalone sum metric.
The Top-k Accuracy metric.
At the end of each experience, this plugin metric reports the average top-k accuracy for only the experiences that the model has been trained on so far.
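Standalone metrics can also be used directly, outside of a training strategy. The fragment below is a minimal sketch that assumes the update()/result()/reset() interface exposed by the standalone Accuracy and Mean metrics in recent Avalanche versions; older releases used slightly different (task-aware) signatures, so check the installed version.

```python
# Sketch: using standalone metrics directly, outside the training loop.
# Assumes the update()/result()/reset() interface of recent Avalanche versions.
import torch
from avalanche.evaluation.metrics import Accuracy, Mean

acc = Accuracy()
predicted_y = torch.tensor([0, 1, 1, 2])   # model predictions (class indices)
true_y = torch.tensor([0, 1, 2, 2])        # ground-truth labels
acc.update(predicted_y, true_y)
print(acc.result())                        # running accuracy so far (0.75 here)

running_loss = Mean()
running_loss.update(0.8, weight=32)        # e.g. minibatch loss, weighted by batch size
running_loss.update(0.6, weight=32)
print(running_loss.result())               # weighted mean -> 0.7
running_loss.reset()                       # clear the accumulated state
```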
evaluation.metrics.detection
Returns an instance of the dataset-specific detection API.
Adapted from: https://github.com/pytorch/vision/blob/main/references/detection/engine.py
Metric used to compute the detection and segmentation metrics using the dataset-specific API.
evaluation.metric_definitions
General interfaces on which metrics are built.
Standalone metric.
A metric that can be used together with the EvaluationPlugin.
This class provides a generic implementation of a Plugin Metric.
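Since these interfaces are what custom metrics build on, a rough sketch of a user-defined plugin metric is shown below. The callback names (before_training_epoch, after_training_iteration), the strategy attributes (mb_output, mb_y, clock.train_iterations), and the MetricValue constructor are assumptions based on recent Avalanche versions, not a verbatim API guarantee.

```python
# Sketch of a custom plugin metric built on PluginMetric.
# Callback names, strategy attributes and the MetricValue signature are
# assumptions based on recent Avalanche versions; check them before use.
import torch
from avalanche.evaluation import PluginMetric
from avalanche.evaluation.metrics import Mean
from avalanche.evaluation.metric_results import MetricValue


class RunningTrainAccuracy(PluginMetric[float]):
    """Emits the running training accuracy after every training iteration."""

    def __init__(self):
        super().__init__()
        self._mean_acc = Mean()

    def result(self) -> float:
        return self._mean_acc.result()

    def reset(self) -> None:
        self._mean_acc.reset()

    def before_training_epoch(self, strategy) -> None:
        # Restart the running average at the beginning of each epoch.
        self.reset()

    def after_training_iteration(self, strategy):
        # Compare the argmax of the minibatch output with the minibatch targets.
        preds = torch.argmax(strategy.mb_output, dim=1)
        correct = (preds == strategy.mb_y).float().mean().item()
        self._mean_acc.update(correct, weight=len(strategy.mb_y))
        # Wrap the value so loggers know its origin, name and x-axis position.
        return [
            MetricValue(
                self,
                "Top1_Acc_Running/train",
                self.result(),
                strategy.clock.train_iterations,
            )
        ]
```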
evaluation.metric_results
Metric result types
The result of a Metric.
A type for MetricValues.