Why is PyTorch sometimes running on multiple CPUs and sometimes on only one?


I’ve noticed that when I run the same PyTorch/Lightning code on my laptop, it uses all 8 CPU cores,
while when I run it on my desktop, it only uses 1 of the 16 cores, so it’s much slower on my desktop.

Any idea why?
I don’t have a GPU, and it’s vanilla/basic Lightning code with nothing fancy (no parallelization, no Accelerate…).

Thank you!

Do you use the same PyTorch Lightning version?

Thank you for your reply. Yes, it’s the same Lightning version, but on second thought I’m no longer sure about the precise behavior of the two runs: I was watching the load with htop, but it’s actually not clear that all cores were really active, or for which exact task. I’m wondering whether it’s simply related to built-in CPU optimizations, hyper-threading, and all that, so it’s very likely not related to PyTorch at all… Sorry about the initial question; I think it’s off topic, so I propose we close it if you don’t mind!

Thanks!

I am also not sure. However, this might be interesting for you: Using multiple CPU cores for training - #9 by Murtaza_Basu - distributed - PyTorch Forums
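In case it helps with debugging, one common cause of a single busy core is that PyTorch’s intra-op thread pool got capped (e.g. by an environment variable like `OMP_NUM_THREADS=1`). A minimal sketch to inspect and set the thread count on each machine (the numbers here are just examples; your defaults will differ):

```python
import torch

# Number of threads PyTorch uses for intra-op parallelism (e.g. inside
# a single matmul). By default this usually matches the physical core
# count, but env vars such as OMP_NUM_THREADS can force it down to 1.
print("intra-op threads:", torch.get_num_threads())

# Number of threads for inter-op parallelism (running independent ops
# concurrently).
print("inter-op threads:", torch.get_num_interop_threads())

# Explicitly request more intra-op threads, e.g. on the 16-core desktop.
# Call this early, before heavy tensor work starts.
torch.set_num_threads(16)
print("intra-op threads now:", torch.get_num_threads())
```

Comparing the printed values on the laptop and the desktop would quickly show whether the two environments are configured differently, independent of what htop displays.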

I guess if you don’t want to continue this thread any more, then that’s definitely a good reason to close it :wink:
Have a nice day :slight_smile:

Thank you for the link!
Sure, we can close it… though I’m not sure how to, I’ll try :wink:
Thanks!