avalanche.evaluation.metrics.detection.DetectionMetrics

class avalanche.evaluation.metrics.detection.DetectionMetrics(*, evaluator_factory: ~typing.Callable[[~typing.Any, ~typing.List[str]], ~avalanche.evaluation.metrics.detection.DetectionEvaluator] = <class 'avalanche.evaluation.metrics.detection_evaluators.coco_evaluator.CocoEvaluator'>, gt_api_def: ~typing.Sequence[~typing.Tuple[str, ~typing.Union[~typing.Tuple[~typing.Type], ~typing.Type]]] = (('coco', <class 'pycocotools.coco.COCO'>), ('lvis_api', <class 'lvis.lvis.LVIS'>)), default_to_coco=False, save_folder=None, filename_prefix='model_output', save_stream='test', iou_types: ~typing.Union[str, ~typing.List[str]] = 'bbox', summarize_to_stdout: bool = True)[source]

Metric used to compute the detection and segmentation metrics using the dataset-specific API.

Metrics are returned after each evaluation experience.

This metric can also be used to serialize model outputs to JSON files, producing one file for each evaluation experience. This can be useful if outputs have to be processed later (for instance, in a competition).

If no dataset-specific API is used, the COCO API (pycocotools) will be used.
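
The snippet below is a minimal, hedged sketch of how this metric is typically plugged into an evaluation plugin. DetectionMetrics and its keyword arguments come from this page; EvaluationPlugin, InteractiveLogger, and the surrounding strategy/benchmark setup are standard Avalanche components assumed to be available and are not shown here:

    # Minimal wiring sketch; requires pycocotools (and lvis for the
    # default gt_api_def) to be installed.
    from avalanche.evaluation.metrics.detection import DetectionMetrics
    from avalanche.training.plugins import EvaluationPlugin
    from avalanche.logging import InteractiveLogger

    detection_metric = DetectionMetrics(
        iou_types="bbox",          # box-level metrics only
        summarize_to_stdout=True,  # print the per-experience summary table
    )

    eval_plugin = EvaluationPlugin(
        detection_metric,
        loggers=[InteractiveLogger()],
    )

    # `eval_plugin` is then passed as the `evaluator` argument of a
    # detection-capable strategy; metrics are emitted after each
    # evaluation experience, as described above.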

__init__(*, evaluator_factory: ~typing.Callable[[~typing.Any, ~typing.List[str]], ~avalanche.evaluation.metrics.detection.DetectionEvaluator] = <class 'avalanche.evaluation.metrics.detection_evaluators.coco_evaluator.CocoEvaluator'>, gt_api_def: ~typing.Sequence[~typing.Tuple[str, ~typing.Union[~typing.Tuple[~typing.Type], ~typing.Type]]] = (('coco', <class 'pycocotools.coco.COCO'>), ('lvis_api', <class 'lvis.lvis.LVIS'>)), default_to_coco=False, save_folder=None, filename_prefix='model_output', save_stream='test', iou_types: ~typing.Union[str, ~typing.List[str]] = 'bbox', summarize_to_stdout: bool = True)[source]

Creates an instance of DetectionMetrics.

Parameters
  • evaluator_factory – The factory for the evaluator to use. By default, the COCO evaluator will be used. The factory should accept two parameters, the API object containing the test annotations and the list of IoU types to consider, and must return an instance of DetectionEvaluator (a construction sketch is shown after this parameter list).

  • gt_api_def – The name and type of the ground-truth API to search for. The name must be the name of the field of the original dataset holding the API object, while the type must be the type of that API object. For instance, for LvisDataset this is (‘lvis_api’, lvis.LVIS). Defaults to the dataset APIs explicitly supported by Avalanche (COCO and LVIS).

  • default_to_coco – If True, it will try to convert the dataset to the COCO format.

  • save_folder – Path to the folder where model output files will be written. Defaults to None, which means that the model outputs of test instances will not be stored.

  • filename_prefix – Prefix common to all model output files. Ignored if save_folder is None. Defaults to “model_output”.

  • save_stream – The stream for which the model outputs should be saved. Defaults to “test”.

  • iou_types – A list of strings (or a single string) describing the IoU types to use when computing metrics. Defaults to “bbox”. Valid values are usually “bbox” and “segm”, but this may vary depending on the dataset.

  • summarize_to_stdout – If True, a summary of evaluation metrics will be printed to stdout (as a table) using the Lvis API. Defaults to True.
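
As a construction sketch covering the parameters above (all names below appear in the signature of this class; the save_folder path is a placeholder, and passing CocoEvaluator explicitly simply restates the default factory):

    from avalanche.evaluation.metrics.detection import DetectionMetrics
    from avalanche.evaluation.metrics.detection_evaluators.coco_evaluator import (
        CocoEvaluator,
    )

    metric = DetectionMetrics(
        evaluator_factory=CocoEvaluator,    # factory(api, iou_types) -> DetectionEvaluator
        default_to_coco=False,              # keep the dataset-specific API if present
        save_folder="./detection_outputs",  # one JSON file per evaluation experience
        filename_prefix="model_output",
        save_stream="test",
        iou_types=["bbox", "segm"],         # box and (if supported) mask metrics
        summarize_to_stdout=True,
    )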

Methods

  • __init__(*[, evaluator_factory, gt_api_def, ...]) – Creates an instance of DetectionMetrics.
  • after_backward(strategy)
  • after_eval(strategy)
  • after_eval_dataset_adaptation(strategy)
  • after_eval_exp(strategy)
  • after_eval_forward(strategy)
  • after_eval_iteration(strategy)
  • after_forward(strategy)
  • after_train_dataset_adaptation(strategy)
  • after_training(strategy)
  • after_training_epoch(strategy)
  • after_training_exp(strategy)
  • after_training_iteration(strategy)
  • after_update(strategy)
  • before_backward(strategy)
  • before_eval(strategy)
  • before_eval_dataset_adaptation(strategy)
  • before_eval_exp(strategy)
  • before_eval_forward(strategy)
  • before_eval_iteration(strategy)
  • before_forward(strategy)
  • before_train_dataset_adaptation(strategy)
  • before_training(strategy)
  • before_training_epoch(strategy)
  • before_training_exp(strategy)
  • before_training_iteration(strategy)
  • before_update(strategy)
  • reset() – Resets the metric internal state.
  • result() – Obtains the value of the metric.
  • update(res)

Attributes

  • save_folder – The folder to use when storing the model outputs (a retrieval sketch follows this list).
  • filename_prefix – The file name prefix to use when storing the model outputs.
  • save_stream – The stream for which the model outputs should be saved.
  • iou_types – The IoU types for which metrics will be computed.
  • summarize_to_stdout – If True, a summary of evaluation metrics will be printed to stdout.
  • evaluator_factory – The factory of the evaluator object.
  • evaluator – Main evaluator object used to compute metrics.
  • gt_api_def – The name and type of the dataset API object containing the ground truth test annotations.
  • default_to_coco – If True, it will try to convert the dataset to the COCO format.
  • current_filename – File containing the current model outputs.
  • current_outputs – List of dictionaries containing the current model outputs.
  • current_additional_metrics – The current additional metrics.
  • save – If True, model outputs will be written to file.
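
As a usage note for the serialization attributes above, here is a hedged sketch that loads back the per-experience JSON files written when save_folder is set. The exact file naming and record schema are not documented on this page, so the snippet only globs the folder and inspects the parsed content:

    import glob
    import json
    import os

    save_folder = "./detection_outputs"  # same folder passed to DetectionMetrics

    # One JSON file is written per evaluation experience of the saved stream.
    for path in sorted(glob.glob(os.path.join(save_folder, "*.json"))):
        with open(path, "r") as f:
            outputs = json.load(f)
        # `outputs` is expected to be a list of per-detection dictionaries;
        # inspect the first record to confirm the actual schema.
        print(path, len(outputs), outputs[0] if outputs else None)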