Is it possible to return a gradient instead of the loss in training_step?

Hi, the LightningModule page of the PyTorch Lightning 1.6.5 documentation states that `training_step` should return a loss Tensor, because under the hood the logic is:

outs = []
for batch_idx, batch in enumerate(train_dataloader):
    # forward
    loss = training_step(batch, batch_idx)
    outs.append(loss)

    # clear gradients
    optimizer.zero_grad()

    # backward
    loss.backward()

    # update parameters
    optimizer.step()

epoch_metric = torch.mean(torch.stack([x for x in outs]))

However, in PyTorch `loss.backward()` can also take a `gradient` argument (this is required whenever the tensor you call `backward()` on is non-scalar). If I want to call `loss.backward(gradient)`, I would have to return the gradient from `training_step` and use different logic under the hood.
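For context, this is what the `gradient` argument does in plain PyTorch (a minimal sketch; the tensors here are just placeholders):

```python
import torch

# backward() on a non-scalar tensor requires an explicit gradient argument:
# it computes the vector-Jacobian product grad^T @ J rather than d(scalar)/dx.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x * 2                               # non-scalar output
grad = torch.tensor([0.1, 1.0, 10.0])   # arbitrary per-element weighting
y.backward(gradient=grad)
print(x.grad)                           # tensor([ 0.2000,  2.0000, 20.0000])
```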

Is it possible?
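For reference, this is the kind of training loop I have in mind, sketched in plain PyTorch (a `training_step` that returns `(out, gradient)` is my own invention here, not an existing Lightning API):

```python
import torch

model = torch.nn.Linear(3, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def training_step(batch, batch_idx):
    # Hypothetical: return a non-scalar output together with the gradient
    # that should be fed into backward(), instead of a scalar loss.
    out = model(batch)
    gradient = torch.ones_like(out)  # placeholder weighting
    return out, gradient

train_dataloader = [torch.randn(4, 3) for _ in range(2)]  # dummy data

for batch_idx, batch in enumerate(train_dataloader):
    out, gradient = training_step(batch, batch_idx)

    # clear gradients
    optimizer.zero_grad()

    # backward with an explicit gradient instead of loss.backward()
    out.backward(gradient)

    # update parameters
    optimizer.step()
```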