What is currently the best/correct way to train in stages? By that I mean, for example, first training the model with one loss and then, after some time, switching to a different one. Or changing the augmentations.
There is an issue about this, but it is stale: https://github.com/PyTorchLightning/pytorch-lightning/issues/2006
I see three possible ways (rough sketches of what I mean follow the list):
- Write a callback that checks the training status in `on_epoch_start` (or some other hook) and changes parameters of the LightningModule. But I’m not sure this is safe. For example, is it okay to change dataloaders from such a callback?
- Wait until training is finished, then load the checkpoint and continue from it. But I’m not sure how to change the optimizers/dataloaders and other things in this case.
- Save the weights directly (a plain PyTorch .pth file), then create a new instance of the LightningModule with the new parameters and load the weights from that file. Then train.
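
For option 1, here is roughly what I have in mind (the `LossSwitch` name, `switch_epoch`, and the `self.criterion` attribute are all just my own placeholders, not anything Lightning prescribes):

```python
import torch.nn as nn
import pytorch_lightning as pl
from pytorch_lightning.callbacks import Callback

class LossSwitch(Callback):
    """Swap the module's loss function once a given epoch is reached."""

    def __init__(self, switch_epoch):
        self.switch_epoch = switch_epoch

    def on_epoch_start(self, trainer, pl_module):
        if trainer.current_epoch == self.switch_epoch:
            # assumes training_step computes the loss via self.criterion
            pl_module.criterion = nn.L1Loss()
            # For dataloaders I suspect this alone is not enough, since
            # Lightning builds them once by default; presumably it would need
            # Trainer(reload_dataloaders_every_epoch=True) to pick up a change.

trainer = pl.Trainer(max_epochs=20, callbacks=[LossSwitch(switch_epoch=10)])
```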
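Option 3 would look something like this. The toy `LitModel` and the random data are only there to make the sketch self-contained:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    """Toy stand-in for my real module; only `criterion` matters here."""

    def __init__(self, criterion):
        super().__init__()
        self.net = nn.Linear(8, 1)
        self.criterion = criterion

    def training_step(self, batch, batch_idx):
        x, y = batch
        return self.criterion(self.net(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

def loader():
    # random data, just so the sketch runs end to end
    x, y = torch.randn(64, 8), torch.randn(64, 1)
    return DataLoader(TensorDataset(x, y), batch_size=16)

# stage 1: train with MSE, then dump plain weights (no Lightning metadata)
model = LitModel(nn.MSELoss())
pl.Trainer(max_epochs=5).fit(model, loader())
torch.save(model.state_dict(), "stage1.pth")

# stage 2: fresh module with the new loss, same weights; a fresh Trainer
# also means configure_optimizers runs again, so the optimizer resets too
model = LitModel(nn.L1Loss())
model.load_state_dict(torch.load("stage1.pth"))
pl.Trainer(max_epochs=5).fit(model, loader())
```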
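And for option 2, if I understand the API correctly, `load_from_checkpoint` forwards extra keyword arguments to the module’s `__init__`, so the stage-2 parameters could be swapped in there (reusing the toy `LitModel` and `loader` from the previous sketch; `stage1.ckpt` is whatever checkpoint stage 1 produced):

```python
import torch.nn as nn
import pytorch_lightning as pl

# restores the stage-1 weights but constructs the module with the new loss
model = LitModel.load_from_checkpoint("stage1.ckpt", criterion=nn.L1Loss())

# a fresh Trainer rebuilds optimizers and dataloaders from the module,
# which may be the answer to how to change them between stages
pl.Trainer(max_epochs=5).fit(model, loader())
```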
Which of these approaches is better, or is there a different, better way to do this?