Val dataloader batch size overrides train dataloader batch size

I am using two different dataloaders for `train_dataloader` and `val_dataloader`, backed by two different datasets.
Mysteriously, the batch size of `val_dataloader` (which is declared after `train_dataloader`) overwrites the batch size of `train_dataloader`. As a result, the training loop ends up using the validation batch size. Is this a known bug?
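For reference, here is a minimal sketch of the setup I would expect to work (dataset shapes and batch sizes are illustrative, not my actual values): each `DataLoader` is constructed with its own `batch_size`, so the two values should stay independent.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Illustrative datasets; my real datasets are different.
train_ds = TensorDataset(torch.randn(100, 4))
val_ds = TensorDataset(torch.randn(40, 4))

# Each loader gets its own batch_size; these should not interfere.
train_loader = DataLoader(train_ds, batch_size=32, shuffle=True)
val_loader = DataLoader(val_ds, batch_size=8)

print(train_loader.batch_size)  # expected: 32
print(val_loader.batch_size)    # expected: 8
```

In my case, however, batches drawn during training come out with the validation batch size instead of the one I passed to the training loader.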