How to change optimizer and lr scheduler in the middle of training

I need to train a model in multiple phases with a pre-trained backbone.

For the first 10 epochs, I want to keep the backbone frozen and train only the classifier. After epoch 10, I want to start training the whole network. After a certain point (e.g. 100 epochs), I need to enable certain blocks in the network. In regular PyTorch, I would instantiate a new optimizer that adds the backbone params and the params of the blocks I want to train, then swap both the optimizer and the lr_scheduler.
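In plain PyTorch the swap would look roughly like this (a rough sketch; `model`, `train_one_epoch`, and the exact module names are stand-ins for my real code):

    import torch.optim as optim

    # Phase 1: backbone frozen, train the classifier only
    for p in model.backbone.parameters():
        p.requires_grad = False
    optimizer = optim.Adam(model.classifier.parameters(), lr=1e-3)
    scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min")

    for epoch in range(max_epochs):
        if epoch == 10:
            # Phase 2: unfreeze the backbone and rebuild optimizer + scheduler
            for p in model.backbone.parameters():
                p.requires_grad = True
            optimizer = optim.Adam(
                filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
            scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min")
        val_loss = train_one_epoch(model, optimizer)  # placeholder training loop
        scheduler.step(val_loss)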

I know I can make multiple trainer.fit() calls. But what's the recommended way to do something like this in PL, in a callback like BaseFinetuning?

Here's my sample code in a callback:

    
    import torch.optim as optim
    from pytorch_lightning.callbacks import BaseFinetuning


    class MultiPhaseFinetuning(BaseFinetuning):  # class name just for illustration

        def freeze_before_training(self, pl_module):
            # Here, we are freezing `backbone` (and the per-view encoders
            # when weights are not shared)
            self.freeze(pl_module.net.encoder)
            if not pl_module.shared_weights:
                self.freeze(pl_module.net.left_encoder)
                self.freeze(pl_module.net.right_encoder)

        def on_train_epoch_start(self, trainer, pl_module) -> None:
            if trainer.current_epoch == self._unfreeze_at_epoch:
                print("unfreeze and add param group...")
                pl_module.net.freeze_backbone(False)
                new_optimizer = optim.Adam(
                    filter(
                        lambda p: p.requires_grad,
                        pl_module.net.parameters()),
                    lr=pl_module.lr,
                    weight_decay=pl_module.weight_decay)
                new_scheduler = optim.lr_scheduler.ReduceLROnPlateau(
                    new_optimizer,
                    mode="min",
                    factor=0.1,
                    patience=pl_module.scheduler_patience,
                    cooldown=3,
                )
                # not sure if it's correct or safe to do this
                trainer.optimizers = [new_optimizer]
                trainer.lr_schedulers = [new_scheduler]
            if not pl_module.shared_weights and trainer.current_epoch == self._enable_left_view_at_epoch:
                # do the same process: unfreeze, and swap optimizer and scheduler
                ...
            if not pl_module.shared_weights and trainer.current_epoch == self._enable_right_view_at_epoch:
                # do the same process: unfreeze, and swap optimizer and scheduler
                ...
            # and more conditions and blocks

I would avoid doing what you’re doing.

Either way, read this: https://pytorch-lightning.readthedocs.io/en/stable/common/optimizers.html

And this: https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.core.lightning.html#pytorch_lightning.core.lightning.LightningModule.configure_optimizers

Instead of overwriting the optimizers that way, write your own custom configure_optimizers to return the correct optimizers AND schedulers based on your conditions.
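For example, something roughly like this (a minimal sketch; the `phase` attribute, `net.classifier`, and the hyperparameter names are illustrative, not taken from your code):

    import torch.optim as optim
    import pytorch_lightning as pl


    class MyModel(pl.LightningModule):
        ...

        def configure_optimizers(self):
            # `self.phase` is a hypothetical flag you set before each trainer.fit() call
            if self.phase == "classifier_only":
                params = self.net.classifier.parameters()
            else:
                params = filter(lambda p: p.requires_grad, self.net.parameters())
            optimizer = optim.Adam(params, lr=self.lr, weight_decay=self.weight_decay)
            scheduler = optim.lr_scheduler.ReduceLROnPlateau(
                optimizer, mode="min", factor=0.1, patience=self.scheduler_patience)
            return {
                "optimizer": optimizer,
                "lr_scheduler": {"scheduler": scheduler, "monitor": "val_loss"},
            }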

The simplest way is to just have two separate calls to trainer.fit(), like you said, and re-use the same torch model in between. But either way, the advice above applies: write a custom configure_optimizers that returns the correct values. You probably don't need the on_train_epoch_start hook at all.
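A rough sketch of the two-call version (names like `phase`, `freeze_backbone`, and `dm` are illustrative; adapt them to your module):

    # Phase 1: backbone frozen, classifier only
    model = MyModel(phase="classifier_only")  # hypothetical flag read by configure_optimizers
    model.net.freeze_backbone(True)

    trainer = pl.Trainer(max_epochs=10)
    trainer.fit(model, datamodule=dm)  # dm: your LightningDataModule

    # Phase 2: re-use the same torch model, unfreeze, and let
    # configure_optimizers build the new optimizer/scheduler
    model.phase = "full_network"
    model.net.freeze_backbone(False)

    trainer = pl.Trainer(max_epochs=100)
    trainer.fit(model, datamodule=dm)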