Weird behavior in Colab when using Trainer.tune

I have two cells in a Colab notebook:

Cell1:

litmodel = LitModel(num_classes, 10.00017, len(dstrain), 64, model)
trainer = Trainer(gpus=1, auto_lr_find=True, max_epochs=epochs)
trainer.tune(litmodel, train_dataloader=dltrain, val_dataloaders=dlval)
trainer.fit(litmodel, dltrain, dlval)

Cell2:

litmodel = LitModel(num_classes, 10.00017, len(dstrain), 64, model)
trainer = Trainer(gpus=1, auto_lr_find=True, max_epochs=epochs)
trainer.tune(litmodel, train_dataloader=dltrain, val_dataloaders=dlval)
trainer2 = Trainer(gpus=1, auto_lr_find=True, max_epochs=epochs)
trainer2.fit(litmodel, dltrain, dlval)

Cell2 produces much better results: the loss right at the beginning of the first epoch is about 8 for Cell1 and about 0.8 for Cell2. Furthermore, I get the following warning when running Cell1:

/usr/local/lib/python3.6/dist-packages/torch/optim/lr_scheduler.py:136: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
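
For reference, here is a minimal sketch of what I believe Cell2 is effectively doing: running the learning-rate finder explicitly, applying its suggestion, and then fitting with a fresh Trainer so no scheduler state is carried over. I'm assuming trainer.tuner.lr_find and lr_finder.suggestion() are available in my Lightning version, so please treat this as an illustration rather than tested code:

litmodel = LitModel(num_classes, 10.00017, len(dstrain), 64, model)

# Run the LR finder on a throwaway Trainer (assumed API: trainer.tuner.lr_find / suggestion()).
tune_trainer = Trainer(gpus=1, max_epochs=epochs)
lr_finder = tune_trainer.tuner.lr_find(litmodel, train_dataloader=dltrain, val_dataloaders=dlval)
litmodel.learning_rate = lr_finder.suggestion()

# Fit with a fresh Trainer so the OneCycleLR scheduler starts from a clean state.
fit_trainer = Trainer(gpus=1, max_epochs=epochs)
fit_trainer.fit(litmodel, dltrain, dlval)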

This is the configure_optimizers function I use:

def configure_optimizers(self):
    optimizer = torch.optim.Adam(self.parameters(), lr=self.learning_rate)
    scheduler = {
        # step the OneCycleLR scheduler every batch rather than every epoch
        'scheduler': torch.optim.lr_scheduler.OneCycleLR(
            optimizer,
            max_lr=self.learning_rate,
            total_steps=2 * self.stepsize),
        'interval': 'step',
        'frequency': 1
    }
    return [optimizer], [scheduler]
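
From what I understand, the tuner writes the suggested learning rate back onto the model (the learning_rate attribute in my case), and configure_optimizers picks it up the next time it is called. A quick sanity check along these lines should show whether the value actually changes (nothing here beyond the objects defined above):

print(litmodel.learning_rate)  # value passed to the constructor
trainer.tune(litmodel, train_dataloader=dltrain, val_dataloaders=dlval)
print(litmodel.learning_rate)  # should now be the LR suggested by the finder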

Hello, my apologies for the late reply. We are slowly converging on deprecating this forum in favor of the GitHub built-in version… Could we kindly ask you to recreate your question there: Lightning Discussions