How to change optimizer and lr scheduler in the middle of training

I need to train a model in multiple phases with a pre-trained backbone.

For the first 10 epochs, I want to keep the backbone frozen and train the classifier only. After epoch 10, I want to start training the whole network. After a certain point (e.g. epoch 100), I need to enable certain blocks in the network. In plain PyTorch, I would instantiate a new optimizer that adds the backbone params and the params of the blocks I now want to train, then swap both the optimizer and the lr_scheduler.

I know I could split this into multiple training runs, but what's the recommended way to do something like this in PL, in a callback like BaseFinetuning?

Here's my sample code in a callback:

    def freeze_before_training(self, pl_module):
        # Here, we are freezing `backbone`
        if not pl_module.shared_weights:
            self.freeze(pl_module.backbone)

    def on_train_start(self, trainer, pl_module) -> None:
        if trainer.current_epoch == self._unfreeze_at_epoch:
            print("unfreeze and add param group...")
            new_optimizer = optim.Adam(
                filter(lambda p: p.requires_grad, pl_module.parameters()),
            )
            new_scheduler = optim.lr_scheduler.ReduceLROnPlateau(new_optimizer)
            # not sure if it's correct or safe to do this
            trainer.optimizers = [new_optimizer]
            trainer.lr_schedulers = [new_scheduler]
        if not pl_module.shared_weights and trainer.current_epoch == self._enable_left_view_at_epoch:
            # do the same process:
            # unfreeze, and change opt and scheduler
            pass
        if not pl_module.shared_weights and trainer.current_epoch == self._enable_right_view_at_epoch:
            # do the same process:
            # unfreeze, and change opt and scheduler
            pass
        # ... and more conditions and blocks

I would avoid doing what you’re doing.

Either way, read this: Optimization — PyTorch Lightning 1.5.9 documentation

And this: lightning — PyTorch Lightning 1.5.9 documentation

Instead of overwriting the optimizers that way, write your own custom configure_optimizers to return the correct optimizers AND schedulers based on your conditions.
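A minimal sketch of that pattern, with illustrative names (`MultiPhaseModel`, the layer sizes) that are not from the original post. In real code this would subclass `pl.LightningModule`; a plain `nn.Module` is used here just to keep the sketch self-contained:

```python
import torch
from torch import nn, optim

class MultiPhaseModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(8, 8)
        self.classifier = nn.Linear(8, 2)

    def configure_optimizers(self):
        # Build the optimizer from whatever is trainable *right now*.
        # Each phase freezes/unfreezes modules first, and this method then
        # returns the matching optimizer and scheduler for that phase.
        trainable = [p for p in self.parameters() if p.requires_grad]
        optimizer = optim.Adam(trainable, lr=1e-3)
        scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min")
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": scheduler, "monitor": "val_loss"},
        }
```

Because only `requires_grad=True` parameters are collected, re-running `configure_optimizers` after unfreezing naturally picks up the new parameter set.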

The simplest way is to just have 2 separate calls to `trainer.fit`, like you said. Re-use the same torch model in between. But either way, the advice above applies: write a custom `configure_optimizers` that returns the correct values. You probably don't need an `on_train_start` hook at all.
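To make the phase switch concrete, here is a hedged sketch using plain torch objects (in Lightning each phase would be its own `trainer.fit` call, with `configure_optimizers` doing the rebuilding). All names here are illustrative, not from the post:

```python
import torch
from torch import nn, optim

backbone = nn.Linear(8, 8)
classifier = nn.Linear(8, 2)
model = nn.Sequential(backbone, classifier)

def make_optimizer(model):
    # Rebuild optimizer + scheduler from the currently trainable params,
    # mirroring what a custom configure_optimizers would do per phase.
    params = [p for p in model.parameters() if p.requires_grad]
    opt = optim.Adam(params, lr=1e-3)
    sched = optim.lr_scheduler.ReduceLROnPlateau(opt)
    return opt, sched

# Phase 1: backbone frozen, classifier only.
for p in backbone.parameters():
    p.requires_grad = False
phase1_opt, phase1_sched = make_optimizer(model)

# Phase 2: unfreeze everything and rebuild for the next fit call.
for p in backbone.parameters():
    p.requires_grad = True
phase2_opt, phase2_sched = make_optimizer(model)
```

The model object is shared across phases, so the weights learned in phase 1 carry straight into phase 2; only the optimizer and scheduler are new.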