Does ModelCheckpoint callback take priority over EarlyStopping callback?

Here’s my runner code snippet:

    from pytorch_lightning.callbacks import ModelCheckpoint
    checkpoint_callback = ModelCheckpoint(
        save_top_k=5,
        verbose=True,
        monitor='avg_val_loss',
        mode='min'
    )

    from pytorch_lightning.callbacks.early_stopping import EarlyStopping
    early_stop_callback = EarlyStopping(
        monitor='avg_val_accuracy',
        min_delta=0.00,
        patience=10,
        verbose=True,
        mode='max'
    )

    from pytorch_lightning import Trainer

    model = Model()
    trainer = Trainer(
        gpus=1,
        callbacks=[early_stop_callback],
        checkpoint_callback=checkpoint_callback,
    )

    trainer.fit(model)

My training stops with the following output:

    Epoch 29: avg_val_loss was not in top 5

Does this mean the ModelCheckpoint callback also performs some kind of early stopping?
If not, why is my EarlyStopping callback not being effective?

That message comes from `ModelCheckpoint`: because you set `save_top_k=5` and `verbose=True`, it logs a line whenever the monitored value (`avg_val_loss`) at the end of an epoch is not among the five best values seen so far. It is purely informational and does not stop training. If training actually stopped at epoch 29, the likely cause is your `EarlyStopping` callback: with `patience=10` and `mode='max'`, it stops training once `avg_val_accuracy` has failed to improve for 10 consecutive validation checks. So both callbacks are working as configured; neither takes priority over the other.

ref: https://github.com/PyTorchLightning/pytorch-lightning/blob/5c1eff351b035db5881d0cff81b1d9c9e150e2d0/pytorch_lightning/callbacks/model_checkpoint.py#L493-L498
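The top-k bookkeeping behind that log line can be sketched roughly as follows. This is a plain-Python illustration, not Lightning's actual implementation; the function name `check_top_k` and the loop are made up for the example:

    import heapq  # not needed here, a sorted list is enough for small k

    def check_top_k(best_k_models, current, k, epoch, monitor="avg_val_loss"):
        """Mimic ModelCheckpoint's mode='min' bookkeeping.

        Keeps at most k monitored values (the lowest ones) and reports
        when a new value does not make the cut. Returns True if `current`
        entered the top k, False otherwise.
        """
        if len(best_k_models) < k or current < max(best_k_models):
            best_k_models.append(current)
            best_k_models.sort()
            del best_k_models[k:]  # drop everything beyond the k best values
            return True
        # This mirrors the verbose message you saw; note that nothing
        # here interrupts training -- it only logs.
        print(f"Epoch {epoch}: {monitor} was not in top {k}")
        return False

    best = []
    losses = [0.9, 0.8, 0.7, 0.6, 0.5, 0.95]
    for epoch, loss in enumerate(losses):
        check_top_k(best, loss, k=5, epoch=epoch)
    # the last value (0.95) is worse than all five saved values, so it
    # triggers the "was not in top 5" message while the loop keeps running

The key point the sketch makes: hitting the "not in top k" branch only skips saving a checkpoint; the training loop itself is untouched.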