My dataset is large.
Is there a way for Lightning to use the first 1/10 of the dataset for the first epoch, the second 1/10 for the second epoch, etc.?
Using distributed_backend='ddp' and gpus=[0,1]
@John_Grabner you could write your train dataloader so that it serves a different partition of the dataset each epoch, cycling through the partitions in order.
How does one tell the loader to “iteratively cycle through”?
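One possible sketch (not the only approach): compute which slice of the dataset belongs to the current epoch, and have `train_dataloader()` return only that slice. The helper below is illustrative; the name `partition_bounds` and the constant `NUM_PARTS` are not from any library.

```python
NUM_PARTS = 10  # split the dataset into 10 partitions, one per epoch

def partition_bounds(epoch: int, dataset_len: int, num_parts: int = NUM_PARTS):
    """Return the index range [lo, hi) of the partition to train on in `epoch`.

    Wraps around after `num_parts` epochs, so epoch 10 reuses partition 0.
    Integer division guarantees the partitions tile the whole dataset even
    when dataset_len is not divisible by num_parts.
    """
    part = epoch % num_parts
    lo = part * dataset_len // num_parts
    hi = (part + 1) * dataset_len // num_parts
    return lo, hi
```

To hook this into Lightning, one way is to return `DataLoader(Subset(full_dataset, range(lo, hi)), ...)` from your `LightningModule.train_dataloader()` using `self.current_epoch`, and pass `reload_dataloaders_every_epoch=True` to the `Trainer` (newer versions spell this `reload_dataloaders_every_n_epochs=1`) so the loader is rebuilt at the start of each epoch instead of being cached. Under DDP, Lightning still attaches a `DistributedSampler`, so each epoch's partition is further split across your two GPUs as usual.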