How to train PyTorch on multiple GPUs

Suppose we need to train a PyTorch model on multiple GPUs. How can this be done?

With PyTorch Lightning you can set a flag on the Trainer and not worry about making CUDA calls, moving tensors between devices, and so on:

import pytorch_lightning as pl

model = MyModel()  # MyModel must subclass pl.LightningModule
trainer = pl.Trainer(accelerator="gpu", devices=2)  # in Lightning < 2.0 this was Trainer(gpus=2)
trainer.fit(model)