Stop training if accuracy hasn't reached a threshold by a given epoch

Hi, I know that there is EarlyStopping for stopping when a validation metric deteriorates. But I was wondering whether it's possible to stop training if, say, the accuracy hasn't reached 20% by epoch 10. If such a callback doesn't exist, any thoughts on how I could get started on implementing one?

For context: I'm running a distributed hyper-parameter optimizer, and I know that a "good" hyper-parameter set will reach 50% accuracy by epoch 5, so I'd like to abandon unpromising trials early.
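
Here's a minimal sketch of what I have in mind, assuming the Keras callback API (the class name `ThresholdStopping` and its parameters are just placeholders I made up; on another framework the equivalent hook would look different):

```python
import tensorflow as tf

class ThresholdStopping(tf.keras.callbacks.Callback):
    """Stop training if `monitor` hasn't reached `threshold` by `epoch_limit`."""

    def __init__(self, monitor="val_accuracy", threshold=0.2, epoch_limit=10):
        super().__init__()
        self.monitor = monitor          # metric key to look up in `logs`
        self.threshold = threshold      # minimum acceptable value (0.2 = 20%)
        self.epoch_limit = epoch_limit  # check once this many epochs have run

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        current = logs.get(self.monitor)
        if current is None:
            return  # metric wasn't logged this epoch
        # `epoch` is 0-indexed, so `epoch + 1` is the number of epochs run
        if epoch + 1 >= self.epoch_limit and current < self.threshold:
            self.model.stop_training = True
            print(f"Stopping: {self.monitor}={current:.3f} is below "
                  f"{self.threshold} after {epoch + 1} epochs.")

# usage (illustrative):
# model.fit(x, y, epochs=50,
#           callbacks=[ThresholdStopping("val_accuracy", 0.2, 10)])
```

If this is Keras, the built-in `EarlyStopping` also takes a `baseline` argument that may cover this case, but a custom callback like the above keeps the intent explicit.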