Hi, it is currently not possible to return multiple dataloaders for training (that only works for validation).
A feature for this is in progress in #1959.
However, in your case, I think a more elegant approach is the following:
Step 1:
Return the right dataloader in each epoch:
def train_dataloader(self):
    # even epochs: labeled data; odd epochs: unlabeled data
    if self.current_epoch % 2 == 0:
        labeled_dataloader = ...  # build the dataloader for the labeled dataset
        return labeled_dataloader
    else:
        unlabeled_dataloader = ...  # build the dataloader for the unlabeled dataset
        return unlabeled_dataloader
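For concreteness, here is a minimal sketch of what the elided parts might look like, assuming the module stores two datasets in hypothetical self.labeled_dataset and self.unlabeled_dataset attributes (both names are placeholders, not part of the original answer):

from torch.utils.data import DataLoader

def train_dataloader(self):
    # even epochs train on labeled data, odd epochs on unlabeled data
    if self.current_epoch % 2 == 0:
        return DataLoader(self.labeled_dataset, batch_size=32, shuffle=True)
    else:
        return DataLoader(self.unlabeled_dataset, batch_size=32, shuffle=True)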
Step 2:
Modify your training_step like this:
def training_step(self, batch, batch_idx):
    if self.current_epoch % 2 == 0:
        loss = ...  # apply supervised loss with labels
    else:
        loss = ...  # apply unsupervised loss
    return loss
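As an illustration only, the two branches could look like this for a classification model, using cross-entropy on labeled epochs and, purely as an example, entropy minimization on unlabeled epochs (both loss choices are assumptions, not part of the original answer):

import torch
import torch.nn.functional as F

def training_step(self, batch, batch_idx):
    if self.current_epoch % 2 == 0:
        x, y = batch                       # labeled batch: inputs and targets
        logits = self(x)
        loss = F.cross_entropy(logits, y)  # supervised loss with labels
    else:
        x = batch                          # unlabeled batch: inputs only
        probs = torch.softmax(self(x), dim=-1)
        # example unsupervised objective: minimize prediction entropy
        loss = -(probs * torch.log(probs + 1e-8)).sum(dim=-1).mean()
    return loss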
Step 3:
Finally, tell the Trainer to call the train_dataloader method every epoch, so it switches to the other dataset:
trainer = Trainer(..., reload_dataloaders_every_epoch=True) # False by default
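Putting it all together, a hedged usage sketch (MyModel is a placeholder for your LightningModule implementing the two methods above):

import pytorch_lightning as pl

model = MyModel()  # placeholder LightningModule with the methods above
trainer = pl.Trainer(max_epochs=10, reload_dataloaders_every_epoch=True)
trainer.fit(model)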