avalanche.evaluation.metrics.detection.make_lvis_metrics

avalanche.evaluation.metrics.detection.make_lvis_metrics(save_folder=None, filename_prefix='model_output', iou_types: str | ~typing.List[str] = 'bbox', summarize_to_stdout: bool = True, evaluator_factory: ~typing.Callable[[~typing.Any, ~typing.List[str]], ~avalanche.evaluation.metrics.detection.DetectionEvaluator] = <function lvis_evaluator_factory>, gt_api_def: ~typing.Sequence[~typing.Tuple[str, ~typing.Tuple[~typing.Type] | ~typing.Type]] = (('coco', <class 'pycocotools.coco.COCO'>), ('lvis_api', <class 'lvis.lvis.LVIS'>)))[source]

Returns an instance of DetectionMetrics initialized for the LVIS dataset.

Parameters:
  • save_folder – path to the folder where model output files will be written. Defaults to None, which means that the model output of test instances will not be stored.

  • filename_prefix – prefix common to all model output files. Ignored if save_folder is None. Defaults to “model_output”.

  • iou_types – a single string, or a list of strings, describing the IoU types to use when computing metrics. Valid values are “bbox” and “segm”. Defaults to “bbox”.

  • summarize_to_stdout – if True, a summary of evaluation metrics will be printed to stdout (as a table) using the LVIS API. Defaults to True.

  • evaluator_factory – the factory callable used to create the detection evaluator. Defaults to the LvisEvaluator constructor.

  • gt_api_def – the definitions of the supported ground-truth APIs. Defaults to the list of supported dataset APIs (LVIS is supported in Avalanche through LvisDataset).

Returns:

A metric plugin that can compute metrics on the LVIS dataset.