The docs for `ModelCheckpoint` say:

> `every_n_epochs`: Number of epochs between checkpoints. This value must be `None` or non-negative. To disable, set `every_n_epochs = 0`. If `every_n_epochs == None` or `every_n_epochs == 0`, we skip saving when the epoch ends.
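
Taken literally, the quoted passage amounts to the following skip rule (a hypothetical sketch of the documented behavior, not Lightning's actual implementation):

```python
def saves_at_epoch_end(every_n_epochs):
    """Return True if the documented rule says a checkpoint is saved at epoch end."""
    # Documented rule: skip saving when every_n_epochs is None or 0.
    if every_n_epochs is None or every_n_epochs == 0:
        return False
    return True
```

Under this reading, the default (`every_n_epochs=None`) should save nothing at the end of an epoch.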
However, `every_n_epochs` defaults to `None`, and in the simplest case,

```python
checkpoint_callback = ModelCheckpoint(monitor="val_loss")
trainer = Trainer(callbacks=[checkpoint_callback])
```

checkpoints still seem to be saved at the end of each epoch. Are the docs correct?