Training using DDP and SLURM

The current scenario is two nodes with different numbers of free GPUs. For instance, node1 has 5 free GPUs and node2 has 3 free GPUs. I can request the 8 free GPUs through SLURM without caring about the number of nodes. Is there any way I can use PL to train on the 8 available GPUs in this context? I read the documentation, and it looks like one constraint is that every node must always have the same number of free GPUs.
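For context, this is a minimal sketch of the homogeneous layout I understand the docs to assume (the same number of GPUs on every node), written against a recent Lightning release where the Trainer accepts accelerator/devices/num_nodes/strategy arguments. The SLURM directives, script names, and the toy model are illustrative placeholders, not my actual setup:

```python
# Assumed SLURM batch script (submit.sh), homogeneous allocation: 2 nodes x 4 GPUs.
#   #!/bin/bash
#   #SBATCH --nodes=2
#   #SBATCH --ntasks-per-node=4   # one task per GPU
#   #SBATCH --gres=gpu:4          # same GPU count on every node
#   srun python train.py

# train.py
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class ToyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.cross_entropy(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


def main():
    dataset = TensorDataset(torch.randn(512, 32), torch.randint(0, 2, (512,)))
    trainer = pl.Trainer(
        accelerator="gpu",
        devices=4,       # GPUs per node; must match the per-node SLURM allocation
        num_nodes=2,     # must match --nodes in the SLURM script
        strategy="ddp",  # Lightning picks up the SLURM environment to set up ranks
        max_epochs=1,
    )
    trainer.fit(ToyModel(), DataLoader(dataset, batch_size=64))


if __name__ == "__main__":
    main()
```

My question is about the uneven case (5 + 3 GPUs), which this layout does not seem to cover.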

Hello, my apologies for the late reply. We are slowly converging on deprecating this forum in favor of the built-in GitHub version… Could we kindly ask you to recreate your question there - Lightning Discussions