Manual backward: how to get the parameter gradients at the end of the fit method

Hi, I’d like to call .manual_backward() without doing the usual optimizer step.
Then, once the Trainer().fit() call has finished, I’d like to read the parameter gradients, but unfortunately I get all zeros as gradient values. Here is the situation:

def training_step(self, batch, batch_idx):
    opt = self.optimizers()
    x, y = batch
    logits = self(x)
    loss = F.nll_loss(logits, y)
    # backward only; the optimizer step is intentionally skipped
    self.manual_backward(loss, opt, retain_graph=True)
    ...
dm = MNISTDataModule()

# Init model from the datamodule's attributes
model = MNISTNet(*dm.size(), dm.num_classes)

# Init trainer
trainer = pl.Trainer(gpus=-1,max_epochs=3, progress_bar_refresh_rate=20)
# Train
trainer.fit(model, dm)
post_trained_parameters = model.named_parameters()

Finally, the result here is all zeros 🙁
[p.grad for (_,p) in post_trained_parameters]
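
To be precise about what I’m seeing, here is a slightly more detailed check (just a sketch; grad_summary is an arbitrary name) that tells apart parameters whose .grad was never populated (None) from gradients that were computed and then zeroed:

post_trained_parameters = dict(model.named_parameters())  # materialize the generator once
grad_summary = {
    name: None if p.grad is None else p.grad.abs().sum().item()
    for name, p in post_trained_parameters.items()
}
print(grad_summary)  # in my case every entry comes back as 0.0, not None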

Any suggestions are welcome, thanks in advance!

You need to set automatic_optimization = False:

class LitModule(LightningModule):
    def __init__(self, ...):
        ...
        self.automatic_optimization = False
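
Roughly like this (a minimal sketch, not your exact model: ManualOptModel, the layer sizes and the SGD optimizer are placeholders, and note that recent Lightning versions take only the loss in manual_backward, while older ones also accepted the optimizer):

import torch
import torch.nn.functional as F
import pytorch_lightning as pl


class ManualOptModel(pl.LightningModule):
    def __init__(self, in_features=784, num_classes=10):
        super().__init__()
        self.automatic_optimization = False  # turn off Lightning's optimizer loop
        self.layer = torch.nn.Linear(in_features, num_classes)

    def forward(self, x):
        return F.log_softmax(self.layer(x.flatten(1)), dim=1)

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        x, y = batch
        loss = F.nll_loss(self(x), y)
        opt.zero_grad()             # keep only the last batch's gradients
        self.manual_backward(loss)  # older versions: self.manual_backward(loss, opt)
        # deliberately no opt.step(): the weights stay untouched, only .grad is filled
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

With that in place, trainer.fit(model, dm) should leave the last batch's gradients on the parameters.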

Thanks for your reply. Yes, of course I’ve set it… I think there is probably something under the hood in PL that drops the computational graph linked to the model parameters, hence .grad returns all zeros.
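
In the meantime, the workaround I’m considering is to snapshot the gradients inside the module before fit() returns, e.g. in the on_train_end hook, so that whatever teardown happens afterwards cannot touch them (a sketch; captured_grads is just a name I picked):

def on_train_end(self):
    # clone the gradients while they are still attached to the parameters
    self.captured_grads = {
        name: (None if p.grad is None else p.grad.detach().clone())
        for name, p in self.named_parameters()
    }

and then read model.captured_grads after trainer.fit(model, dm).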

Mind reproducing it with the code sample below? Will check.