CPU count during training

Is there a way to choose the number of CPUs/threads used when running trainer.fit()?

Right now it seems to use all that are available, but I would like to limit this in some way.

I see there are options to choose the number of GPUs, but is there nothing for CPUs?

Is this what you need? Trainer — PyTorch Lightning 1.1.2 documentation

Hmm, I'm not sure. The docs say that only works with distributed_backend="ddp_cpu"?

num_processes (int) – number of processes for distributed training with distributed_backend="ddp_cpu"

Yes, num_processes is only used with ddp_cpu as the distributed backend. In 1.2 we will always default it to the maximum unless you specify another number; for now, you can use num_processes=os.cpu_count().
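To make that concrete, here is a minimal sketch. The Trainer call is shown commented out so the snippet stays self-contained; distributed_backend and num_processes are the Trainer arguments from the 1.1.x API quoted above, and halving the CPU count is just an illustrative choice:

```python
import os

# Pick how many processes to use; os.cpu_count() is the maximum,
# but any smaller positive number works too.
n_procs = max(1, os.cpu_count() // 2)  # e.g. use half the available CPUs

# Assumed usage with the 1.1.x Trainer API discussed in this thread:
# from pytorch_lightning import Trainer
# trainer = Trainer(distributed_backend="ddp_cpu", num_processes=n_procs)
# trainer.fit(model)
```

Separately, note that torch.set_num_threads() (or the OMP_NUM_THREADS environment variable) limits PyTorch's intra-op thread count, which is often what actually caps CPU usage during single-process training.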

Will there be a way to set the number of CPUs to an arbitrary number (not just the max) in the future?