DistributedSampler and LightningDataModule

When creating data loaders for DDP training, is it OK to set the DistributedSampler myself when instantiating the dataloader inside the LightningDataModule?

Something like the following -

import pytorch_lightning as pl
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler


class MyData(pl.LightningDataModule):
    def train_dataloader(self):
        # explicitly attach a DistributedSampler rather than letting the Trainer add one
        return DataLoader(
            self.trainset,
            batch_size=self.hparams.batch_size,
            sampler=DistributedSampler(self.trainset, shuffle=True),
        )

The Multi-GPU docs recommend not using a DistributedSampler explicitly (as I understand it, the Trainer is supposed to insert one automatically under DDP). In my normal workflow I implement LightningDataModule.train_dataloader() to provide the Trainer with my dataloader, so it seems natural to set the DistributedSampler explicitly when instantiating the loader, as in the snippet above. However, that contradicts the advice in the docs, hence my question. For comparison, the docs-recommended version is sketched below.
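For reference, this is roughly what I understand the docs to suggest instead: no explicit sampler, relying on the Trainer's sampler replacement (replace_sampler_ddp in the versions I've used) to wrap the dataset in a DistributedSampler when running DDP. The class name here is just for illustration; self.trainset and self.hparams.batch_size are assumed to be set up elsewhere, as in my module above.

import pytorch_lightning as pl
from torch.utils.data import DataLoader


class MyDataNoSampler(pl.LightningDataModule):  # hypothetical name, same module as above otherwise
    def train_dataloader(self):
        # no explicit sampler; under DDP the Trainer is expected to replace
        # the default sampler with a DistributedSampler automatically
        return DataLoader(
            self.trainset,
            batch_size=self.hparams.batch_size,
            shuffle=True,
        )

If the automatic replacement really does the equivalent of what I wrote explicitly, I'm happy to use this simpler form; I mainly want to confirm the two are interchangeable.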

Thanks in advance.