How to modify the default progress bar

Hello,
I’m clueless when it comes to modifying the progress bar to tailor it to my needs.
Here is how it currently looks:

Epoch 9: 100%|█████████████████████████████████| 470/470 [00:34<00:00, 13.77it/s, loss=0.001, v_num=0, max_mem=1.38e+3]

How should I proceed to:

  1. Remove the number of iterations per second (“13.77it/s”)
  2. Similarly, remove the version number (“v_num=0”)
  3. Print the loss in a different format, e.g. {loss:.3e}

There is a callback for this; please check the documentation here. Hope this helps :smile:

For this, you need to override the init_train_tqdm, init_validation_tqdm, … methods of ProgressBar as per your requirement, as @asvskartheek mentioned.
PS: you can use this bar_format to remove the it/s:

tqdm(
    ...,
    bar_format='{l_bar}{bar}|{n_fmt}/{total_fmt} [{elapsed}<{remaining}{postfix}]'
)
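To see the effect outside of Lightning, here is a minimal stand-alone tqdm sketch using that bar_format: since the format string omits `{rate_fmt}` (the part of the default `{r_bar}` that prints “13.77it/s”), the rendered bar keeps the counts, elapsed/remaining time, and postfix, but no rate. The `io.StringIO` buffer is only there to capture the output for inspection.

```python
import io
from tqdm import tqdm

# Capture tqdm's output in a buffer instead of stderr so we can inspect it.
buf = io.StringIO()

# Custom bar_format: same fields as the default r_bar, minus {rate_fmt},
# so the "it/s" rate is no longer displayed.
for _ in tqdm(
    range(10),
    file=buf,
    bar_format="{l_bar}{bar}|{n_fmt}/{total_fmt} [{elapsed}<{remaining}{postfix}]",
):
    pass

output = buf.getvalue()
print("it/s" in output)   # the rate no longer appears
print("10/10" in output)  # counts are still shown
```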

For these, you need to override the get_progress_bar_dict method.

Thank you so much @asvskartheek and @goku!
I will keep you updated on my definitive solution.

I was able to modify the progress bar to tailor it to my needs.

Here is how I did it:

  1. Remove the number of iterations per second

@goku was right: I created a LitProgressBar class that inherits from pytorch_lightning.callbacks.progress.ProgressBar and overrode the init_[sanity|train|validation|test]_tqdm methods so that the tqdm bar is built with:

bar_format='{l_bar}{bar}|{n_fmt}/{total_fmt} [{elapsed}<{remaining}{postfix}]'
  2. Remove the version number
  3. Print the loss in a different format (e.g. {loss:.3e})

To solve these two problems I did override the get_progress_bar_dict in my custom LightningModule in the following way:

    def get_progress_bar_dict(self):
        items = super().get_progress_bar_dict()
        # discard the version number
        items.pop("v_num", None)
        # discard the default-formatted loss
        items.pop("loss", None)
        # call .item() only once and store the value detached from the graph
        running_train_loss = self.trainer.running_loss.mean()
        avg_training_loss = (
            running_train_loss.cpu().item()
            if running_train_loss is not None
            else float("NaN")
        )
        # re-add the loss in the desired format
        items["loss"] = f"{avg_training_loss:.3e}"
        return items
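The dict manipulation in that override can be illustrated without Lightning at all. The sketch below uses a hypothetical dict standing in for what super().get_progress_bar_dict() would return (the keys mirror the bar shown at the top of this thread); the pop/re-add logic and the {:.3e} formatting are the same as in the override above.

```python
# Hypothetical stand-in for the dict returned by
# super().get_progress_bar_dict() in the override above.
items = {"loss": "0.001", "v_num": 0, "max_mem": "1.38e+3"}

# Drop the version number and the default-formatted loss.
items.pop("v_num", None)
items.pop("loss", None)

# Re-add the loss in scientific notation with 3 decimal places.
avg_training_loss = 0.000123
items["loss"] = f"{avg_training_loss:.3e}"

print(items["loss"])       # → 1.230e-04
print("v_num" in items)    # → False
```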

Hope that helps!

By the way, the loss format could be made configurable directly in PL’s get_progress_bar_dict (line 1350), where it is currently hard-coded as {loss:.3f}, which I find quite limiting. Do you think this warrants a change/PR?