ModelCheckpoint docs for every_n_epochs==None

The docs for ModelCheckpoint say:

every_n_epochs: Number of epochs between checkpoints.
If every_n_epochs == None or every_n_epochs == 0, we skip saving when the epoch ends.
To disable, set every_n_epochs = 0. This value must be None or non-negative.

However, every_n_epochs defaults to None, and in the simplest case of

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint_callback = ModelCheckpoint(monitor="val_loss")
trainer = Trainer(callbacks=[checkpoint_callback])

it seems like checkpoints are still saved. Are the docs correct?

hey @lap

It looks like the docs have been updated on master, so this should be good now.
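
For reference, here is a minimal sketch of how I understand the resolved behavior (assuming the current ModelCheckpoint API; the exact wording and resolution logic on master may differ):

import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint

# With the defaults (every_n_epochs=None and no step- or time-based trigger),
# a checkpoint is saved at the end of every epoch.
default_ckpt = ModelCheckpoint(monitor="val_loss")

# To explicitly disable saving at the end of each epoch, pass every_n_epochs=0.
disabled_ckpt = ModelCheckpoint(monitor="val_loss", every_n_epochs=0)

trainer = pl.Trainer(callbacks=[default_ckpt])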

Also, we have moved the discussions to GitHub Discussions. You might want to check that out instead to get a quick response. The forums will be marked read-only soon.

Thank you