Training on single-node multi-GPU VM

I originally posted this on the Optuna GitHub page, thinking it might be a nuance of that package (which it may well be).

I am trying to run a hyperparameter search on a multi-GPU VM so that each GPU trains different parameters. I am wondering whether the packages are trying to grab the same resources, or if there is something I need to set in PyTorch/PyTorch Lightning.

If this has no relevance to pytorch-lightning, please note/flag it and I am happy to delete. Thanks.

Hi, I would suggest setting the Trainer's `gpus` flag to a particular index, e.g. `gpus=[0]` for the first GPU, so that each search worker runs on its own device.
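As a minimal sketch of that idea (assuming a hypothetical 4-GPU VM and one search worker per GPU), the device index can be derived from the Optuna trial number so concurrent trials don't contend for the same GPU; the helper name `gpu_for_trial` and the GPU count are my own, not from the original post:

```python
N_GPUS = 4  # assumption: number of GPUs on the VM


def gpu_for_trial(trial_number: int) -> int:
    """Round-robin assignment: trial 0 -> GPU 0, trial 1 -> GPU 1, ..."""
    return trial_number % N_GPUS


# Inside an Optuna objective you would then pin the Lightning Trainer
# to that single device, e.g.:
#
#     def objective(trial):
#         gpu = gpu_for_trial(trial.number)
#         trainer = pl.Trainer(gpus=[gpu], ...)
#         ...
```

An alternative is to export a different `CUDA_VISIBLE_DEVICES` per worker process before launching it, which achieves the same isolation without touching the Trainer arguments.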