Is it possible to cap GPU utilization when training a model in Jupyter?

Does anyone know if it is possible to cap GPU utilization (e.g. at 80%) when training a model in a Jupyter Notebook with PyTorch Lightning? (not memory)
On Ubuntu 18.04, training hogs 95+% of the GPU and my (single-GPU) PC becomes barely usable. I'd rather have the GPU leave about 20% for the system and train a bit slower, so I can keep doing things like browsing and programming tasks that only need the CPU.

I think it depends on whether you are running your code inside Jupyter, as

model = ...
trainer = Trainer(...)
trainer.fit(model)

or calling an external script like

! python

Then it also depends on the backend used, and the lack of performance can be caused by some IO issue. For example, in my case, running some training from scripts, the GPU utilisation peaks around 75% with 98% memory used…
BTW, in both cases are you using the same machine with the same environment (the same package versions)?

I’m running my code like that. I’m hitting GPU utilization levels between 94–98% (according to the Allegro Trains tracker). About 85% of my GPU’s memory is being used.
The problem is that due to this high GPU utilization, when I type a reply here it sometimes takes a while for the letters to appear. Basically everything I want to do is slowed down by about 4×.

The Jupyter Notebook is using a conda env with PyTorch Lightning 0.9.0. Data is read from an internal SSD. I have to do other tasks on this same machine, but it would be no issue to use a different conda environment.

I just want to limit how much GPU performance is available for training the model, so I still have a usable machine while waiting for training to finish (also for things like browsing the internet).

According to

this is not possible in PyTorch (the answer is a bit older, but it still holds), and thus also not possible in PyTorch Lightning.
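There is no real utilization cap, but a crude workaround sometimes used is to sleep briefly after each batch so the GPU idles part of the time; in Lightning this could be wired into a callback's `on_train_batch_start`/`on_train_batch_end` hooks. Below is a framework-agnostic sketch of the idea; the `BatchThrottler` name and `duty_cycle` parameter are my own, purely illustrative, and this throttles the wall-clock duty cycle rather than instantaneous GPU load:

```python
import time


class BatchThrottler:
    """Illustrative sketch (not a PyTorch/Lightning feature): sleep after each
    batch so that busy time is roughly `duty_cycle` of total wall time."""

    def __init__(self, duty_cycle: float = 0.8):
        assert 0.0 < duty_cycle <= 1.0
        self.duty_cycle = duty_cycle
        self._start = None

    def batch_start(self) -> None:
        # Call at the beginning of each training batch.
        self._start = time.perf_counter()

    def batch_end(self) -> None:
        # Call at the end of each batch: sleep so that
        # busy / (busy + sleep) == duty_cycle.
        busy = time.perf_counter() - self._start
        time.sleep(busy * (1.0 - self.duty_cycle) / self.duty_cycle)
```

Lowering `duty_cycle` trades training speed for desktop responsiveness. It only frees up time slices between batches, so long individual kernels will still stall the display momentarily.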