Multiple data loaders get stuck at epoch 1

Hi, can I get some help with this problem?

I defined two data loaders that have the same length, and I pass them to the trainer with

trainer.fit(model,
            [loader1, loader2],
            ...)
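
For context, this is roughly how the two loaders are built (the dataset contents and sizes here are made-up placeholders, not my real data):

import torch
from torch.utils.data import DataLoader, TensorDataset

# Two datasets with the same number of samples (100 is a placeholder)
dataset1 = TensorDataset(torch.randn(100, 8))
dataset2 = TensorDataset(torch.randn(100, 8))

# Same batch size, so len(loader1) == len(loader2) == 10
loader1 = DataLoader(dataset1, batch_size=10, shuffle=True)
loader2 = DataLoader(dataset2, batch_size=10, shuffle=True)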

in the training step:

def training_step(self, batch, batch_idx):  # type: ignore
    # With two train loaders, `batch` is a list:
    # batch[0] comes from loader1 and batch[1] from loader2.
    if some_condition:
        out_1 = self.forward(batch[0])
        out_2 = self.forward(batch[1])
        loss = my_loss_function(out_1, out_2)
        return loss
    # Returning None makes Lightning skip the optimizer step for this batch.
    return None
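
Putting it together, a stripped-down version of my module looks like this (the linear layer, the MSE loss, and the optimizer are placeholders standing in for my real network, my_loss_function, and my real settings; loader1 and loader2 are the ones from the sketch above):

import torch
from torch import nn
import pytorch_lightning as pl


class TwoLoaderModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(8, 8)  # placeholder for my real network

    def forward(self, x):
        return self.net(x)

    def training_step(self, batch, batch_idx):
        # batch[0] holds the tensors from loader1, batch[1] those from loader2
        x1 = batch[0][0]
        x2 = batch[1][0]
        out_1 = self.forward(x1)
        out_2 = self.forward(x2)
        # placeholder loss; my real my_loss_function also compares the two outputs
        return nn.functional.mse_loss(out_1, out_2)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


model = TwoLoaderModel()
trainer = pl.Trainer(max_epochs=1)
trainer.fit(model, [loader1, loader2])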

Compared with using only one loader, I see two problems:
1. training is about 5 times slower than before
2. training gets stuck in epoch 1 and never moves on

Training   -> Epoch 1, batch 71120: running loss = 0.003,

(There are actually very few training examples, so with one data loader an epoch finishes within 10 batches.)
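
Since both loaders have the same length, I expected one epoch over the pair to behave roughly like this (just my mental model of how the batches get combined, not Lightning's actual implementation):

# Using loader1 / loader2 from the sketches above.
# I expected the epoch to end after min(len(loader1), len(loader2))
# batches, i.e. 10 in this example, just like the single-loader case.
for batch_idx, (b1, b2) in enumerate(zip(loader1, loader2)):
    batch = [b1, b2]
    # ... one training_step / optimizer step per combined batch ...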

Thanks