Hey, I need a way to combine multiple batches of validation data into a single val step. The GPU can only fit a batch size of 8 in memory, but this isn't enough granularity for monitoring training metrics — 64 or 128 is often used as our val set size.
Unfortunately, the validation signature is `def validation_step(self, batch, batch_idx):`.
Is there any native way to access the validation dataloader from inside the `validation_step` function? Are there other solutions to this problem? Thanks.
To access the dataloaders from inside `validation_step`, you can use `self.trainer.val_dataloaders`, which contains a list of the configured val dataloaders. Also, for your use-case: you are monitoring training metrics, but how would increasing the val batch_size help here?
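If the goal is a metric computed over 64–128 validation samples rather than per batch of 8, one alternative (a sketch, not part of the question's code) is to keep `validation_step` at batch size 8, return each batch's mean metric and size, and combine them at epoch end (e.g. in a `validation_epoch_end`-style hook). A size-weighted average recovers the mean over the whole set:

```python
def aggregate_batch_means(batch_means, batch_sizes):
    """Combine per-batch mean metrics into one mean over all samples.

    batch_means: mean metric value for each micro-batch
    batch_sizes: number of samples in each micro-batch
    """
    total = sum(m * n for m, n in zip(batch_means, batch_sizes))
    count = sum(batch_sizes)
    return total / count

# 16 micro-batches of 8 then behave like one effective batch of 128
# for logging purposes, without the 128-sample batch ever fitting on GPU.
```

Note the weighting matters if the last batch is smaller than the rest; a plain average of the per-batch means would bias the result.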
Also, we have moved the discussions to GitHub Discussions — you might want to check that out instead to get a quick response. The forums will be marked read-only soon; we can continue the discussion there.