I am implementing a module that uses a labeled and an unlabeled dataset for semi-supervised classification. The solution provided here ( switch between multiple train dataloaders ) on how to load 2 dataloaders is a big help. As I understand it, that solution computes the loss alternately for the labeled and unlabeled batches. In my problem, however, I have the following requirement:
- The losses obtained from the labeled and unlabeled dataloaders need to be added together per batch
- This combined loss is then optimized with a single loss.backward()
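To make the requirement concrete, here is a minimal sketch of the per-batch combination in plain PyTorch (outside Lightning). The data, model, and entropy-based unsupervised loss are all hypothetical placeholders; the point is pairing two dataloaders with different batch sizes via `itertools.cycle` and calling `backward()` once on the summed loss:

```python
from itertools import cycle

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Toy data: 64 labeled samples, 96 unlabeled samples (placeholder values).
labeled_ds = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
unlabeled_ds = TensorDataset(torch.randn(96, 10))

# The two loaders deliberately use different batch sizes.
labeled_dl = DataLoader(labeled_ds, batch_size=8)     # 8 batches
unlabeled_dl = DataLoader(unlabeled_ds, batch_size=32)  # 3 batches

model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
ce = nn.CrossEntropyLoss()

def unsup_loss_fn(logits):
    # Placeholder unsupervised objective: entropy minimization.
    p = logits.softmax(dim=1)
    return -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean()

steps = 0
# cycle() restarts the shorter unlabeled loader, so every labeled batch
# gets paired with some unlabeled batch even though the loaders have
# different lengths. Note: cycle() replays cached batches (no reshuffle).
for (x_l, y_l), (x_u,) in zip(labeled_dl, cycle(unlabeled_dl)):
    sup = ce(model(x_l), y_l)
    unsup = unsup_loss_fn(model(x_u))
    loss = sup + unsup          # combined per-batch loss
    opt.zero_grad()
    loss.backward()             # one backward pass on the sum
    opt.step()
    steps += 1

print(steps)  # → 8 (one optimizer step per labeled batch)
```

In Lightning itself, I believe the analogous behavior comes from the `max_size_cycle` mode for multiple train dataloaders, which cycles the shorter loader so `training_step` receives one batch from each loader per step.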
I tried the solution here (https://pytorch-lightning.readthedocs.io/en/latest/multiple_loaders.html), but in my case the batch sizes of the two dataloaders are different, so it doesn't work.
Any help is appreciated. Please let me know if anything is unclear.