Hi everyone, just a small question here.
I am trying to use Lightning with 4 GPUs, and I am getting some errors. We would like to know how to prepare a setup function that uses multiple CPUs and GPUs. Here is what we have done:
class MyDataset(object):
    def __init__(self, cfg):
        super().__init__()
        self.cfg = cfg
        self.dm = LocalDataManager(None)
        self.rast = build_rasterizer(self.cfg, self.dm)

    def chunked_dataset(self, key: str):
        dl_cfg = self.cfg[key]
        dataset_path = self.dm.require(dl_cfg["key"])
        zarr_dataset = ChunkedDataset(dataset_path)
        zarr_dataset.open()
        return zarr_dataset
    # Here we define a custom 'train_data_loader' method for this dataset:
    def train_data_loader(self):
        key = "train_data_loader"
        dl_cfg = self.cfg[key]
        zarr_dataset = self.chunked_dataset(key)
        agent_dataset = AgentDataset(self.cfg, zarr_dataset, self.rast)
        return DataLoader(
            agent_dataset,
            shuffle=dl_cfg["shuffle"],
            batch_size=dl_cfg["batch_size"],
            num_workers=dl_cfg["num_workers"],
        )
We have also tried the following Trainer setup:
num_cpus = 19
gpus = 4
trainer = pl.Trainer(
    num_processes=num_cpus,
    gpus=gpus,
    max_steps=500,
    min_epochs=3,
    max_epochs=10,
    default_root_dir=resul_dir,
)
Thank you very much!