How to customize trainer in order to restrict parameter range during training?


I am currently using PyTorch Lightning to train my CNN for object detection. I want to add a customized layer whose parameters are restricted to the range [0, 1]. The idea I came up with was to clip these parameters, e.g. with clamp(), after each gradient update, but I am not sure how to do this with PyTorch Lightning. I don't think the default Trainer class has such functionality. Should I customize the Trainer itself, or is there another way to reach my goal without changing the default Trainer class?

Best regards

hey @Unseo

You can override the optimizer_step hook in your LightningModule to achieve this.

class LitModel(LightningModule):
    def optimizer_step(self, *args, **kwargs):
        super().optimizer_step(*args, **kwargs)  # <- parameters are updated here
        with torch.no_grad():                    # <- now clamp them in place,
            self.layer.weight.clamp_(0.0, 1.0)   #    outside of autograd
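For reference, here is a minimal sketch of what that hook does, in plain PyTorch (the parameter values and target are made up for illustration): take an optimizer step, which may push the parameter outside [0, 1], then clamp it back in place under torch.no_grad().

```python
import torch

# A single parameter standing in for the custom layer's weights.
w = torch.nn.Parameter(torch.tensor([0.9, 0.1]))
opt = torch.optim.SGD([w], lr=1.0)

# A loss whose gradient pushes w toward 2.0, i.e. above the allowed range.
loss = ((w - 2.0) ** 2).sum()
loss.backward()
opt.step()  # w is now [3.1, 3.9] -- outside [0, 1]

# Clamp in place without recording the operation in the autograd graph.
with torch.no_grad():
    w.clamp_(0.0, 1.0)

print(w.tolist())  # -> [1.0, 1.0]
```

The same pattern works for any parameter tensor; in the LightningModule above it just runs automatically after every optimizer step.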

Also, we have moved discussions to GitHub Discussions. You might want to check that out instead to get a quicker response, as the forums will be marked read-only soon.

Thank you

Thanks a lot for the solution :slight_smile: