avalanche.logging.WandBLogger
- class avalanche.logging.WandBLogger(project_name: str = 'Avalanche', run_name: str = 'Test', log_artifacts: bool = False, path: str | Path = 'Checkpoints', uri: str | None = None, sync_tfboard: bool = False, save_code: bool = True, config: object | None = None, dir: Path | str | None = None, params: dict | None = None)[source]
Weights and Biases logger.
The WandBLogger provides an easy integration with Weights & Biases logging. Each monitored metric is automatically logged to a dedicated Weights & Biases project dashboard.
External storage locations for W&B Artifacts (for instance, AWS S3 and GCS buckets) are supported via the uri parameter.
The wandb log files are placed in “./wandb/” by default, unless a different directory is specified.
Note
TensorBoard logs can be synced to the dedicated W&B dashboard by setting sync_tfboard=True.
- __init__(project_name: str = 'Avalanche', run_name: str = 'Test', log_artifacts: bool = False, path: str | Path = 'Checkpoints', uri: str | None = None, sync_tfboard: bool = False, save_code: bool = True, config: object | None = None, dir: Path | str | None = None, params: dict | None = None)[source]
Creates an instance of the WandBLogger.
- Parameters:
project_name – Name of the W&B project.
run_name – Name of the W&B run.
log_artifacts – Option to log model weights as W&B Artifacts. Note that, in order for model weights to be logged, the WeightCheckpoint metric must be added to the evaluation plugin.
path – Path where model checkpoints are saved locally.
uri – URI identifier for external storage buckets (GCS, S3).
sync_tfboard – Syncs TensorBoard to the W&B dashboard UI.
save_code – Saves the main training script to W&B.
config – Hyper-parameters and configuration values to sync to W&B.
dir – Path to the local log directory for W&B logs to be saved at.
params – Additional keyword arguments passed directly to the wandb.init() call. Visit https://docs.wandb.ai/ref/python/init to learn about all wandb.init() parameters.
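As a minimal usage sketch, the logger is typically attached to an EvaluationPlugin alongside the monitored metrics. This assumes avalanche and wandb are installed and a W&B account is configured; the metric helpers and project/run names below are illustrative, not prescribed by this page.

```python
# Sketch: wiring a WandBLogger into an Avalanche evaluation plugin.
from avalanche.evaluation.metrics import accuracy_metrics, loss_metrics
from avalanche.logging import WandBLogger
from avalanche.training.plugins import EvaluationPlugin

# Each monitored metric will be logged to the "my-project" dashboard.
wandb_logger = WandBLogger(
    project_name="my-project",    # W&B project dashboard
    run_name="naive-splitmnist",  # run name shown in the W&B UI
    config={"lr": 0.001},         # hyper-parameters synced to W&B
)

# The plugin dispatches metric values to every registered logger.
eval_plugin = EvaluationPlugin(
    accuracy_metrics(epoch=True, experience=True),
    loss_metrics(epoch=True, experience=True),
    loggers=[wandb_logger],
)
```

The resulting eval_plugin is then passed to a strategy via its evaluator argument, so that training and evaluation metrics stream to the W&B run automatically.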
Methods
__init__([project_name, run_name, ...]) – Creates an instance of the WandBLogger.
after_backward(strategy, *args, **kwargs) – Called after criterion.backward() by the BaseTemplate.
after_eval(strategy, *args, **kwargs) – Called after eval by the BaseTemplate.
after_eval_dataset_adaptation(strategy, ...) – Called after eval_dataset_adaptation by the BaseTemplate.
after_eval_exp(strategy, *args, **kwargs) – Called after eval_exp by the BaseTemplate.
after_eval_forward(strategy, *args, **kwargs) – Called after model.forward() by the BaseTemplate.
after_eval_iteration(strategy, *args, **kwargs) – Called after the end of an iteration by the BaseTemplate.
after_forward(strategy, *args, **kwargs) – Called after model.forward() by the BaseTemplate.
after_train_dataset_adaptation(strategy, ...) – Called after train_dataset_adaptation by the BaseTemplate.
after_training(strategy, *args, **kwargs) – Called after train by the BaseTemplate.
after_training_epoch(strategy, *args, **kwargs) – Called after train_epoch by the BaseTemplate.
after_training_exp(strategy, metric_values, ...) – Called after train_exp by the BaseTemplate.
after_training_iteration(strategy, *args, ...) – Called after the end of a training iteration by the BaseTemplate.
after_update(strategy, *args, **kwargs) – Called after optimizer.update() by the BaseTemplate.
args_parse()
before_backward(strategy, *args, **kwargs) – Called before criterion.backward() by the BaseTemplate.
before_eval(strategy, *args, **kwargs) – Called before eval by the BaseTemplate.
before_eval_dataset_adaptation(strategy, ...) – Called before eval_dataset_adaptation by the BaseTemplate.
before_eval_exp(strategy, *args, **kwargs) – Called before eval_exp by the BaseTemplate.
before_eval_forward(strategy, *args, **kwargs) – Called before model.forward() by the BaseTemplate.
before_eval_iteration(strategy, *args, **kwargs) – Called before the start of an eval iteration by the BaseTemplate.
before_forward(strategy, *args, **kwargs) – Called before model.forward() by the BaseTemplate.
before_run()
before_train_dataset_adaptation(strategy, ...) – Called before train_dataset_adaptation by the BaseTemplate.
before_training(strategy, *args, **kwargs) – Called before train by the BaseTemplate.
before_training_epoch(strategy, *args, **kwargs) – Called before train_epoch by the BaseTemplate.
before_training_exp(strategy, *args, **kwargs) – Called before train_exp by the BaseTemplate.
before_training_iteration(strategy, *args, ...) – Called before the start of a training iteration by the BaseTemplate.
before_update(strategy, *args, **kwargs) – Called before optimizer.update() by the BaseTemplate.
import_wandb()
log_metrics(metric_values) – Receive a list of MetricValues to log.
log_single_metric(name, value, x_plot) – Log a metric value.
Attributes
supports_distributed
A flag describing whether this plugin supports distributed training.