Switching between loss functions

In my application, I will sometimes want to train the network against a different loss function (essentially training toward an initial condition). In addition, the neural network inside the module may be applied several times within a single training step. What is the ideal architecture for this? I don't have a mental model of what is valid in PyTorch or PyTorch Lightning. Does something like the following work?

Note that u defines the application of the network and is called inside forward, and forward itself checks the boolean that switches the loss function.

class MyModule(pl.LightningModule):
    def __init__(self, use_alternative_loss=False):
        super().__init__()
        # and some neural network stuff.
        self.layer_1 = torch.nn.Linear(....) # etc.
        self.layer_2 = torch.nn.Linear(....) # etc.
        self.use_alternative_loss = use_alternative_loss

    def u(self, x): # evaluation of the neural network.
        x = self.layer_1(x)  # or whatever....
        x = self.layer_2(x)
        return x

    def forward(self, x):
        val = self.u(x)        
        val2 = self.u(x + stuff) # etc.
        if self.use_alternative_loss:
            # grossly simplified but along these lines.
            # where the "goal" would be that it is zero, for example.
            return stuff_with_val_and_val2(val, val2)
        else:
            return other_stuff_with_val(val)

    def training_step(self, batch, batch_idx):
        resid = self(batch)
        return torch.sum(resid ** 2)
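
To make the sketch concrete, here is a self-contained version of what I have in mind that runs end to end. The layer sizes, the x + 0.1 shift, the particular residual expressions, the optimizer, and the toy data are all illustrative placeholders, not part of my real application; they are only there to show the structure (reusing self.u several times inside forward, and branching on a plain Python boolean).

import torch
import pytorch_lightning as pl

class MyModuleConcrete(pl.LightningModule):
    def __init__(self, use_alternative_loss=False):
        super().__init__()
        self.layer_1 = torch.nn.Linear(1, 32)
        self.layer_2 = torch.nn.Linear(32, 1)
        self.use_alternative_loss = use_alternative_loss

    def u(self, x):  # plain evaluation of the network, reusable anywhere
        return self.layer_2(torch.tanh(self.layer_1(x)))

    def forward(self, x):
        val = self.u(x)
        if self.use_alternative_loss:
            # placeholder: a finite-difference-style residual built from two
            # evaluations of the same network; autograd tracks both calls.
            val2 = self.u(x + 0.1)
            return (val2 - val) / 0.1
        # placeholder: an initial-condition residual, driving u(x) toward zero.
        return val

    def training_step(self, batch, batch_idx):
        (x,) = batch  # TensorDataset yields one-element tuples
        resid = self(x)
        return torch.sum(resid ** 2)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

x = torch.linspace(0.0, 1.0, 64).unsqueeze(1)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(x), batch_size=16
)

# Phase 1: train against the initial-condition residual.
model = MyModuleConcrete(use_alternative_loss=False)
pl.Trainer(max_epochs=5).fit(model, loader)

# Phase 2: same weights, switch the objective by flipping the flag.
model.use_alternative_loss = True
pl.Trainer(max_epochs=5).fit(model, loader)

As far as I can tell, the only structural requirement from Lightning is that training_step returns the loss; whether forward computes a residual or the raw network output is a design choice, and flipping the attribute between fit calls is enough to switch objectives.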

Hello, my apologies for the late reply. We are slowly converging on deprecating this forum in favor of the built-in GitHub version… Could we kindly ask you to recreate your question there: Lightning Discussions