avalanche.evaluation.metrics.labels_repartition_metrics
- avalanche.evaluation.metrics.labels_repartition_metrics(*, on_train: bool = True, emit_train_at: Literal['stream', 'experience', 'epoch'] = 'epoch', on_eval: bool = False, emit_eval_at: Literal['stream', 'experience'] = 'stream', image_creator: Callable[[Dict[int, List[int]], List[int]], Figure] | None = default_history_repartition_image_creator) → List[PluginMetric]
Create plugins to monitor the labels repartition, i.e. how many samples of each label have been seen at each step. A usage sketch is given after this section.
- Parameters:
on_train – If True, emit the metrics during training.
emit_train_at – (only if on_train is True) when to emit the training metrics.
on_eval – If True, emit the metrics during evaluation.
emit_eval_at – (only if on_eval is True) when to emit the evaluation metrics.
image_creator – The function used to create an image from the history of the labels repartition. It receives a dictionary of the form {label_id: [count_at_step_0, count_at_step_1, …], …} and the list of the corresponding steps [step_0, step_1, …], and must return a matplotlib Figure. If set to None, only the raw data is emitted. A sketch of a custom creator is shown after this section.
- Returns:
The list of corresponding plugins.
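
Since the function returns a list of plugins, a typical pattern is to unpack it into an EvaluationPlugin. The following is a minimal sketch, assuming a recent Avalanche version in which strategies accept an evaluator argument; model, optimizer, and criterion are placeholders:

    from avalanche.evaluation.metrics import labels_repartition_metrics
    from avalanche.logging import TensorboardLogger
    from avalanche.training.plugins import EvaluationPlugin

    # labels_repartition_metrics returns a list of PluginMetric objects,
    # so it is unpacked into the EvaluationPlugin.
    eval_plugin = EvaluationPlugin(
        *labels_repartition_metrics(
            on_train=True,
            emit_train_at="epoch",
            on_eval=True,
            emit_eval_at="stream",
        ),
        loggers=[TensorboardLogger()],  # a logger that can render figures
    )

    # The evaluator is then passed to a strategy, e.g. (assuming Naive):
    # strategy = Naive(model, optimizer, criterion, evaluator=eval_plugin)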
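
Any callable matching the image_creator signature described above can replace the default. Below is a minimal sketch of an alternative creator that renders the history as stacked bars; the name stacked_bar_image_creator is hypothetical, and the sketch assumes each per-label counts list is aligned with the steps list:

    from typing import Dict, List

    import matplotlib.pyplot as plt
    from matplotlib.figure import Figure

    def stacked_bar_image_creator(
        label_counts: Dict[int, List[int]], steps: List[int]
    ) -> Figure:
        # Hypothetical alternative to default_history_repartition_image_creator.
        # Assumes every counts list has one entry per step; pad shorter
        # lists beforehand if a label appears only in later steps.
        fig, ax = plt.subplots()
        bottom = [0] * len(steps)
        for label_id, counts in sorted(label_counts.items()):
            # Stack each label's counts on top of the previous labels.
            ax.bar(steps, counts, bottom=bottom, label=f"label {label_id}")
            bottom = [b + c for b, c in zip(bottom, counts)]
        ax.set_xlabel("step")
        ax.set_ylabel("sample count")
        ax.legend(fontsize="small")
        return fig

The sketch would be passed as image_creator=stacked_bar_image_creator when calling labels_repartition_metrics.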