In particular, if:
1. I am training with epochs, or
2. I am training with iterations instead,
For case 1 (epochs), is calling the scheduler once per epoch the most common approach?
For case 2 (iterations), is calling the scheduler every 150 iterations the most common (to approximate stepping once per epoch)?
Or is the usual policy to step the scheduler only once the validation loss stops decreasing?
You might want to set up the scheduler configuration accordingly. Check out the examples here: LightningModule — PyTorch Lightning 1.6.0dev documentation
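As a minimal sketch of what that configuration looks like: Lightning lets you return a scheduler dict from `configure_optimizers`, and its `"interval"`, `"frequency"`, and `"monitor"` keys cover the three cases from the question. The function name, optimizer, and scheduler below are just placeholders for illustration.

```python
# Sketch of a LightningModule.configure_optimizers return value.
# The dict keys "interval", "frequency", and "monitor" are Lightning's
# documented scheduler options; the optimizer/scheduler choices are examples.
import torch


def configure_optimizers_sketch(parameters):
    optimizer = torch.optim.Adam(parameters, lr=1e-3)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
    return {
        "optimizer": optimizer,
        "lr_scheduler": {
            "scheduler": scheduler,
            "interval": "step",  # step the scheduler per training step...
            "frequency": 150,    # ...but only every 150 steps (case 2)
            # Case 1: use interval="epoch" with frequency=1 (the default),
            # so the scheduler is stepped once per epoch.
            # "Val loss stopped decreasing": return a ReduceLROnPlateau
            # scheduler instead and add "monitor": "val_loss".
        },
    }
```

So the answer to "which is most common" is mostly a matter of which `interval`/`frequency` combination you pick; Lightning handles the actual `scheduler.step()` calls for you.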
Also, we have moved the discussions to GitHub Discussions. You might want to check that out instead to get a quick response. The forums will be marked read-only soon.