Why not for wandb?
I’m looking at the trainer code, and the interval check is applied to whatever logger is used:
```python
should_log_metrics = (batch_idx + 1) % self.row_log_interval == 0 or self.should_stop
if should_log_metrics or self.fast_dev_run:
    # logs user requested information to logger
    metrics = batch_output.batch_log_metrics
    grad_norm_dic = batch_output.grad_norm_dic
    if len(metrics) > 0 or len(grad_norm_dic) > 0:
        self.log_metrics(metrics, grad_norm_dic)
```
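To see what that check does in isolation, here is a small standalone sketch of the same modulo logic. The names mirror the trainer snippet above, but nothing here is Lightning code:

```python
# Standalone illustration of the interval check above; the names
# mirror the trainer snippet but this is not Lightning code.
row_log_interval = 50

for batch_idx in range(200):
    should_log_metrics = (batch_idx + 1) % row_log_interval == 0
    if should_log_metrics:
        # a metrics row would be handed to the logger here
        print(f"step {batch_idx + 1}: logging metrics row")
# prints at steps 50, 100, 150, and 200
```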
`row_log_interval` controls how many steps are skipped before a new row of metrics is added to the summary.
`log_save_interval` is the interval at which that summary is actually written to disk. This is the slowest part, and it determines how often you can “refresh” your TensorBoard.
`log_save_interval` may not apply to all loggers, since some handle persistence themselves, for example by sending the data to the cloud.
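For completeness, here is how the two intervals are set together, a minimal sketch assuming a Lightning version where the Trainer still accepts these argument names (later releases renamed them to `log_every_n_steps` and `flush_logs_every_n_steps`):

```python
import pytorch_lightning as pl

# Add a metrics row every 50 steps, but flush the buffered summary
# to disk only every 200 steps (relevant for file-based loggers such
# as TensorBoard; cloud loggers may ignore the save interval).
trainer = pl.Trainer(
    row_log_interval=50,
    log_save_interval=200,
)
```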