Pause at end of every epoch?

I have a training loop where epochs are very fast. Training and validation each take on the order of seconds.

However, there is a long pause after each epoch ends. Why does this happen, and how can I disable it so I can train faster?

My guess was that checkpointing or logging is happening, but I poked around and couldn't determine whether that was the case or how to decrease its frequency.
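
If it is checkpointing or logging, I assume the fix looks roughly like the sketch below (untested; `MyLightningModule` is just a stand-in for my actual module, and flags such as `enable_checkpointing` may depend on the PL version):

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint

# Stand-in for my actual LightningModule (defines its own dataloaders).
model = MyLightningModule()

# Write a checkpoint only every 10 epochs instead of after every epoch.
checkpoint_cb = ModelCheckpoint(every_n_epochs=10, save_top_k=1, monitor="val_loss")

trainer = pl.Trainer(
    max_epochs=100,
    callbacks=[checkpoint_cb],
    log_every_n_steps=50,          # flush logged metrics less often
    # enable_checkpointing=False,  # or turn off checkpointing entirely (PL >= 1.5)
)
trainer.fit(model)
```

But I couldn't confirm whether checkpointing or logging is actually what causes the pause.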

Yeah, I'm having the same issue here.

I've also run into this situation. Have you managed to solve it?

I got a logging error after the first epoch. I downgraded PL to 1.3.1 and wandb (Weights & Biases) to 0.10.30, and that seems to have fixed my problem, at least.

My suggestion would be to change your PL version.
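
If you want to try the same downgrade, pinning the versions mentioned above should be enough:

```
pip install pytorch-lightning==1.3.1 wandb==0.10.30
```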