How to resume training in detectron2 with pl

In this script (detectron2/lightning_train_net.py at main · facebookresearch/detectron2 · GitHub),
it says that to enable resuming, PL checkpointing should be used. Is there a specific way to use this feature with this script? Below is roughly what I imagine it might look like.
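
Here is a minimal sketch of what I'm guessing the PL-based resume would involve — attaching a `ModelCheckpoint` callback and passing `ckpt_path` to `Trainer.fit` — but the output directory and the commented-out module/datamodule names are my assumptions, not what lightning_train_net.py actually does:

```python
# Sketch only: assumes the script's Trainer can be given extra callbacks
# and that a "last" checkpoint is written to the output directory.
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint_callback = ModelCheckpoint(
    dirpath="./output",   # assumed output directory
    save_last=True,       # keep a last.ckpt to resume from
    every_n_epochs=1,
)

trainer = pl.Trainer(
    max_epochs=10,
    callbacks=[checkpoint_callback],
)

# module = ...      # however the script builds its LightningModule
# datamodule = ...  # and its data
# First run:
# trainer.fit(module, datamodule)
# Resuming (newer PL versions take ckpt_path in fit(); older ones used
# Trainer(resume_from_checkpoint=...)):
# trainer.fit(module, datamodule, ckpt_path="./output/last.ckpt")
```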

Also, detectron2's checkpointer supports keeping only the most recent max_to_keep models. Is something equivalent supported by PL's checkpointer?
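
From the PL docs, `ModelCheckpoint`'s `save_top_k` looks like the closest analogue to max_to_keep, though it keeps the best k by a monitored metric rather than simply the most recent k. Something like this is what I have in mind (the metric name is just a placeholder for whatever the module logs):

```python
from pytorch_lightning.callbacks import ModelCheckpoint

# Keep at most 3 checkpoints, ranked by a logged metric.
checkpoint_callback = ModelCheckpoint(
    dirpath="./output",
    save_top_k=3,
    monitor="val_loss",  # placeholder: assumed to be logged by the module
    mode="min",
)
```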