Weird number of steps per epoch

Hello, I’m facing an issue where a weird number of steps per epoch is displayed and processed while training.
The number of steps per epoch should be what my code defines, `len(train_dataloader) // BATCH_SIZE`, but I’m getting a different number that corresponds neither to my `train_dataloader` nor to `len(train_dataloader) // BATCH_SIZE`.

Below is a Colab link to my code:

Any thoughts on why I’m getting this?

If you are referring to the steps displayed in the progress bar: the total shown there is actually `total_train_steps + total_val_steps`. In your case the displayed value is 1076, and `train_batches = 112`, `val_batches = 964`, so the total is 112 + 964 = 1076.
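The arithmetic from the thread can be sketched directly (the batch counts 112 and 964 are the ones reported above; the summing behaviour is the progress-bar totalling described in the reply):

```python
# Batch counts reported in the thread; the progress-bar total per epoch
# is the sum of training and validation batches, not training alone.
train_batches = 112
val_batches = 964

total_progress_bar_steps = train_batches + val_batches
print(total_progress_bar_steps)  # 1076, the "weird" number in the question
```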


Indeed, I’m referring to the steps displayed in the progress bar, so it’s normal to see the value 1076, right?
What if I only want to display `train_batches`?


That is not possible out of the box, since the progress bar doesn’t work that way. However, if you disable validation, it will display just the training batches.
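For example, assuming the notebook uses a PyTorch Lightning `Trainer` (which matches the progress-bar behaviour described above), validation can be disabled with the `limit_val_batches=0` flag, after which the bar counts only training batches. A config sketch, not taken from the original notebook; `model` and `train_dataloader` stand in for the notebook’s own objects:

```python
# Hypothetical sketch: skip validation entirely so the progress bar
# shows only the training batches for each epoch.
import pytorch_lightning as pl

trainer = pl.Trainer(
    max_epochs=3,
    limit_val_batches=0,  # run no validation batches
)
# trainer.fit(model, train_dataloader)
```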

Thank you for your reply; I couldn’t find this explanation anywhere in the docs.

Still working with the same code I provided: I’m now seeing the loss decrease while the F1 metric doesn’t increase. Any idea where this behaviour originates?

The notebook is big, so I can’t look through the complete code right now. But I’d suggest checking the metrics package thoroughly and whether you are using it correctly in your code. If F1 isn’t increasing, you might find an error there.
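One common culprit is feeding raw logits or probabilities to an F1 metric that expects hard class labels. A quick sanity check is to compute F1 by hand on a tiny batch and compare it with what your metrics package reports. Here is a minimal binary F1 helper for that purpose (my own sketch, not code from the notebook):

```python
def f1_score(y_true, y_pred):
    """Minimal binary F1 for sanity-checking a metrics package.

    Assumes y_true and y_pred are sequences of 0/1 integer labels
    (i.e. predictions already argmax'd or thresholded, not logits).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0  # no true positives: precision or recall is zero
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Tiny worked example: 2 true positives, 0 false positives, 1 false negative.
print(f1_score([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.8
```

If this hand-computed value disagrees with what the metrics package returns on the same labels, the metric call (inputs, averaging mode, or reset between epochs) is the place to look.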