avalanche.logging

The logging module provides a set of utilities for reporting your experiment metric results on the standard output, to log files, and to browser-based dashboards such as TensorBoard and Weights & Biases. These loggers are provided in the interactive_logging, text_logging, and tensorboard_logger submodules, respectively. Loggers do not specify which metrics to monitor, only how metrics are reported to the user. Please see the evaluation module for the list of available metrics and how to use them. Loggers should be passed as parameters to the EvaluationPlugin in order to properly monitor the training and evaluation flows.
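As a minimal sketch of the intended wiring (assuming an Avalanche release matching this API; accuracy_metrics and loss_metrics come from avalanche.evaluation.metrics):

    from avalanche.evaluation.metrics import accuracy_metrics, loss_metrics
    from avalanche.logging import InteractiveLogger, TensorboardLogger
    from avalanche.training.plugins import EvaluationPlugin

    # The metric helpers decide *what* is tracked; the loggers decide
    # *where* the results are reported.
    eval_plugin = EvaluationPlugin(
        accuracy_metrics(epoch=True, experience=True),
        loss_metrics(epoch=True, experience=True),
        loggers=[InteractiveLogger(), TensorboardLogger()],
    )
    # Pass the plugin to a strategy, e.g. Naive(..., evaluator=eval_plugin).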

Submodules

Package Contents

Classes

StrategyLogger

The base class for the strategy loggers.

TensorboardLogger

The TensorboardLogger provides an easy integration with Tensorboard logging.

WandBLogger

The WandBLogger provides an easy integration with Weights & Biases logging.

InteractiveLogger

The InteractiveLogger class provides logging facilities for the console standard output.

CSVLogger

The CSVLogger logs accuracy and loss metrics into a CSV file.

class avalanche.logging.StrategyLogger[source]

Bases: StrategyCallbacks[None], abc.ABC

The base class for the strategy loggers.

Strategy loggers receive events, in the form of callback calls, from the EvaluationPlugin, carrying a reference to the strategy as well as the values emitted by the metrics.

Each child class should implement the log_single_metric method, which specifies how to report the metrics gathered during training and evaluation flows to the user. The log_metric method, which is invoked by default on each callback, dispatches each received value to log_single_metric. In addition, child classes may override the desired callbacks to customize the logger behavior.

Make sure, when overriding callbacks, to call the proper super method.

log_single_metric(self, name, value, x_plot)[source]

This abstract method must be implemented by each subclass. It takes a metric name, a metric value, and an x value, and decides how to display the metric value.

Parameters
  • name – str, the metric name

  • value – the metric value; it will be ignored if not supported by the logger

  • x_plot – an integer representing the x value associated with the metric value
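For illustration, a minimal sketch of a custom logger (PrintLogger is a hypothetical name; only log_single_metric is implemented, so every metric value routed through log_metric is printed as one line):

    from avalanche.logging import StrategyLogger

    class PrintLogger(StrategyLogger):
        """Hypothetical logger printing one line per metric value."""

        def log_single_metric(self, name, value, x_plot):
            # Values that are not plain numbers (e.g. images or tensors)
            # are not supported by this logger and are silently ignored.
            if isinstance(value, (int, float)):
                print(f"{name} (x={x_plot}): {value}")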

log_metric(self, metric_value: MetricValue, callback: str) → None[source]

This method will be invoked on each callback. The callback parameter describes the callback from which the metric value originates.

Parameters
  • metric_value – The value to be logged.

  • callback – The name of the callback (event) from which the metric value was obtained.

Returns

None

before_training(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called before train by the BaseStrategy.

before_training_exp(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called before train_exp by the BaseStrategy.

after_train_dataset_adaptation(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called after train_dataset_adaptation by the BaseStrategy.

before_training_epoch(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called before train_epoch by the BaseStrategy.

before_training_iteration(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called before the start of a training iteration by the BaseStrategy.

before_forward(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called before model.forward() by the BaseStrategy.

after_forward(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called after model.forward() by the BaseStrategy.

before_backward(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called before criterion.backward() by the BaseStrategy.

after_backward(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called after criterion.backward() by the BaseStrategy.

after_training_iteration(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called after the end of a training iteration by the BaseStrategy.

before_update(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called before optimizer.update() by the BaseStrategy.

after_update(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called after optimizer.update() by the BaseStrategy.

after_training_epoch(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called after train_epoch by the BaseStrategy.

after_training_exp(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called after train_exp by the BaseStrategy.

after_training(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called after train by the BaseStrategy.

before_eval(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called before eval by the BaseStrategy.

after_eval_dataset_adaptation(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called after eval_dataset_adaptation by the BaseStrategy.

before_eval_exp(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called before eval_exp by the BaseStrategy.

after_eval_exp(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called after eval_exp by the BaseStrategy.

after_eval(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called after eval by the BaseStrategy.

before_eval_iteration(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called before the start of an evaluation iteration by the BaseStrategy.

before_eval_forward(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called before model.forward() by the BaseStrategy.

after_eval_forward(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called after model.forward() by the BaseStrategy.

after_eval_iteration(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called after the end of an evaluation iteration by the BaseStrategy.

class avalanche.logging.TensorboardLogger(tb_log_dir: Union[str, Path] = './tb_data', filename_suffix: str = '')[source]

Bases: avalanche.logging.StrategyLogger

The TensorboardLogger provides an easy integration with TensorBoard logging. Each monitored metric is automatically logged to TensorBoard. The user can inspect results in real time by launching TensorBoard with tensorboard --logdir=/path/to/tb_log_exp_name.

AWS S3 buckets and (if TensorFlow is installed) Google Cloud Storage URLs are supported.

If no parameters are provided, the default folder in which TensorBoard log files are placed is "./tb_data".

Note

We rely on the PyTorch implementation of TensorBoard. If you don't have TensorFlow installed in your environment, TensorBoard will report that it is running with a reduced feature set. This should not impact the logger's performance.

Creates an instance of the TensorboardLogger.

Parameters
  • tb_log_dir – path to the directory where TensorBoard log files will be stored. Defaults to "./tb_data".

  • filename_suffix – string suffix to append at the end of the TensorBoard log file name. Defaults to ''.
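A minimal usage sketch (the log directory name is illustrative):

    from avalanche.logging import TensorboardLogger

    # Log files are written under ./tb_data; pass the logger to the
    # EvaluationPlugin via loggers=[tb_logger].
    tb_logger = TensorboardLogger(tb_log_dir="./tb_data")

    # Results can then be inspected in real time from a shell:
    #   tensorboard --logdir=./tb_data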

__del__(self)[source]
log_single_metric(self, name, value, x_plot)[source]

This abstract method must be implemented by each subclass. It takes a metric name, a metric value, and an x value, and decides how to display the metric value.

Parameters
  • name – str, the metric name

  • value – the metric value; it will be ignored if not supported by the logger

  • x_plot – an integer representing the x value associated with the metric value

class avalanche.logging.WandBLogger(project_name: str = 'Avalanche', run_name: str = 'Test', log_artifacts: bool = False, path: Union[str, Path] = 'Checkpoints', uri: str = None, sync_tfboard: bool = False, save_code: bool = True, config: object = None, dir: Union[str, Path] = None, params: dict = None)[source]

Bases: avalanche.logging.StrategyLogger

The WandBLogger provides an easy integration with Weights & Biases logging. Each monitored metric is automatically logged to a dedicated Weights & Biases project dashboard.

External storage URIs for W&B Artifacts (for instance, AWS S3 and GCS buckets) are supported.

The wandb log files are placed in "./wandb/" unless otherwise specified.

Note

TensorBoard logs can be synced to the dedicated W&B dashboard.

Creates an instance of the WandBLogger.

Parameters
  • project_name – Name of the W&B project.

  • run_name – Name of the W&B run.

  • log_artifacts – Option to log model weights as W&B Artifacts.

  • path – Path to locally save the model checkpoints.

  • uri – URI identifier for external storage buckets (GCS, S3).

  • sync_tfboard – Syncs TensorBoard to the W&B dashboard UI.

  • save_code – Saves the main training script to W&B.

  • config – Syncs hyper-parameters and config values used to W&B.

  • dir – Path to the local log directory for W&B logs to be saved at.

  • params – All arguments for the wandb.init() function call.

Visit https://docs.wandb.ai/ref/python/init to learn about all wandb.init() parameters.
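A minimal usage sketch (the project and run names are illustrative; a configured wandb login is assumed):

    from avalanche.logging import WandBLogger

    # Metrics are streamed to the "Avalanche" project dashboard on W&B.
    wandb_logger = WandBLogger(
        project_name="Avalanche",
        run_name="baseline-run",
        log_artifacts=False,
    )
    # Pass it to the EvaluationPlugin via loggers=[wandb_logger].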

import_wandb(self)[source]
args_parse(self)[source]
before_run(self)[source]
log_single_metric(self, name, value, x_plot)[source]

This abstract method must be implemented by each subclass. It takes a metric name, a metric value, and an x value, and decides how to display the metric value.

Parameters
  • name – str, the metric name

  • value – the metric value; it will be ignored if not supported by the logger

  • x_plot – an integer representing the x value associated with the metric value

class avalanche.logging.InteractiveLogger[source]

Bases: avalanche.logging.TextLogger

The InteractiveLogger class provides logging facilities for the console standard output. The logger shows a progress bar during training and evaluation flows and interactively displays metric results as soon as they become available. The logger writes metric results after each training epoch, each evaluation experience, and at the end of the entire evaluation stream.

Note

To avoid an excessive amount of printed lines, this logger will not print results after each iteration. If the user is monitoring metrics which emit results after each minibatch (e.g., MinibatchAccuracy), only the last recorded value of such metrics will be reported at the end of the epoch.

Note

Since this logger works on the standard output, metrics producing images or more complex visualizations will be converted to a textual format suitable for console printing. You may want to add more loggers to your EvaluationPlugin to better support different formats.

Creates an instance of the TextLogger class.

Parameters

file – destination file to which metrics are printed (default=sys.stdout).
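A minimal usage sketch (InteractiveLogger writes to sys.stdout, so no arguments are needed):

    from avalanche.logging import InteractiveLogger

    # Shows a progress bar plus metric values on the console.
    interactive_logger = InteractiveLogger()
    # Pass it to the EvaluationPlugin via loggers=[interactive_logger].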

before_training_epoch(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called before train_epoch by the BaseStrategy.

after_training_epoch(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called after train_epoch by the BaseStrategy.

before_eval_exp(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called before eval_exp by the BaseStrategy.

after_eval_exp(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called after eval_exp by the BaseStrategy.

after_training_iteration(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called after the end of a training iteration by the BaseStrategy.

after_eval_iteration(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called after the end of an iteration by the BaseStrategy.

class avalanche.logging.CSVLogger(log_folder=None)[source]

Bases: avalanche.logging.StrategyLogger

The CSVLogger logs accuracy and loss metrics into a CSV file. Metrics are logged separately for training and evaluation in the files training_results.csv and eval_results.csv, respectively. This logger assumes that the user is evaluating on only one experience during training (see below for an example of a train call).

Through the EvaluationPlugin, the user should monitor at least EpochAccuracy/Loss and ExperienceAccuracy/Loss. If monitored, the logger will also record Experience Forgetting. In order to monitor the performance on the held-out experience associated with the current training experience, set eval_every=1 (or a larger value) in the strategy constructor and pass the eval experience to the train method:

    for i, exp in enumerate(benchmark.train_stream):
        strategy.train(exp, eval_streams=[benchmark.test_stream[i]])

When not provided, validation loss and validation accuracy will be logged as zero.

The training file header is composed of: training_exp_id, epoch, training_accuracy, val_accuracy, training_loss, val_loss.

The evaluation file header is composed of: eval_exp, training_exp, eval_accuracy, eval_loss, forgetting.

Creates an instance of CSVLogger class.

Parameters

log_folder – folder in which to create log files. If None, a "csvlogs" folder in the current working directory will be used.
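A minimal usage sketch tying the pieces together (the folder name is illustrative; the surrounding benchmark and strategy setup is assumed):

    from avalanche.evaluation.metrics import accuracy_metrics, loss_metrics
    from avalanche.logging import CSVLogger
    from avalanche.training.plugins import EvaluationPlugin

    csv_logger = CSVLogger(log_folder="./csvlogs")
    eval_plugin = EvaluationPlugin(
        accuracy_metrics(epoch=True, experience=True),
        loss_metrics(epoch=True, experience=True),
        loggers=[csv_logger],
    )

    # With eval_every=1 set in the strategy constructor:
    # for i, exp in enumerate(benchmark.train_stream):
    #     strategy.train(exp, eval_streams=[benchmark.test_stream[i]])

    csv_logger.close()  # flush and close the CSV files when done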

log_single_metric(self, name, value, x_plot) → None[source]

This abstract method must be implemented by each subclass. It takes a metric name, a metric value, and an x value, and decides how to display the metric value.

Parameters
  • name – str, the metric name

  • value – the metric value; it will be ignored if not supported by the logger

  • x_plot – an integer representing the x value associated with the metric value

print_train_metrics(self, training_exp, epoch, train_acc, val_acc, train_loss, val_loss)[source]
print_eval_metrics(self, eval_exp, training_exp, eval_acc, eval_loss, forgetting)[source]
after_training_epoch(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called after train_epoch by the BaseStrategy.

after_eval_exp(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called after eval_exp by the BaseStrategy.

before_training_exp(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called before train_exp by the BaseStrategy.

before_eval(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Manages the case in which eval is called before any training has occurred.

before_training(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called before train by the BaseStrategy.

after_training(self, strategy: BaseStrategy, metric_values: List['MetricValue'], **kwargs)[source]

Called after train by the BaseStrategy.

close(self)[source]