I am currently using PyTorch Lightning to train a CNN for object detection. I want to add a customized layer whose parameters are restricted to the range [0, 1]. The idea I came up with is to clip these parameters, e.g. with clamp(), after every single gradient update, but I am not sure how to implement this in PyTorch Lightning. I don’t think the default Trainer class has such functionality. Should I customize the Trainer itself, or is there another way to reach my goal without changing the default Trainer class?