Logger in Lightning

Hey! I have been seeing a discrepancy between the final-epoch validation loss I get from self.log() in PyTorch Lightning (loss: 1.65) and the mean of the per-sample losses over the validation epoch that I compute myself (loss: 1.73). So I wanted to ask: do self.log() and validation_epoch_end() work correctly with multi-GPU training?
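
To illustrate the kind of mismatch I mean (toy numbers, not my actual losses): a mean of per-batch means only equals the mean over all samples when every batch has the same size, so e.g. an uneven last batch already pushes the two numbers apart:

import numpy as np

# Two batches of different sizes (e.g. a smaller last batch)
batch_losses = [np.array([1.0, 2.0]), np.array([4.0])]

mean_of_batch_means = np.mean([b.mean() for b in batch_losses])  # (1.5 + 4.0) / 2 = 2.75
mean_per_sample = np.concatenate(batch_losses).mean()            # (1 + 2 + 4) / 3 = 2.33...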

This is my code (in __init__ I have set self.val_losses = []):

import numpy as np
from collections import OrderedDict

def validation_step(self, batch, batch_idx):
    x, y = batch

    # Collapse any extra leading dimensions into one flat batch
    x = x.reshape(-1, 3, 256, 256)
    y = y.reshape(-1, 3)

    preds = self(x)

    # angular_loss returns one loss value per sample
    val_loss = self.angular_loss(preds, y)

    # self.log() expects a scalar, so log the batch mean;
    # Lightning averages these logged values over the epoch
    self.log('val_loss', val_loss.mean())

    # Keep every per-sample loss so I can compute my own statistics
    self.val_losses += val_loss.cpu().tolist()

    return val_loss

def validation_epoch_end(self, outputs):
    # `outputs` collects the validation_step returns, but I use the
    # per-sample losses accumulated in self.val_losses instead
    losses = np.array(self.val_losses)

    result_summary = OrderedDict()
    result_summary["angular_error_mean"] = np.mean(losses)
    result_summary["angular_error_median"] = np.median(losses)
    result_summary["angular_error_sem"] = np.std(losses) / np.sqrt(losses.size)
    result_summary["angular_error_max"] = np.max(losses)

    print(result_summary)

    # Reset for the next epoch so statistics do not accumulate across epochs
    self.val_losses = []
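
For reference, this is the variant I was going to try next, based on my (possibly wrong) reading of the docs that passing sync_dist=True to self.log() is what reduces the logged value across GPUs; everything else is unchanged from above:

def validation_step(self, batch, batch_idx):
    x, y = batch
    x = x.reshape(-1, 3, 256, 256)
    y = y.reshape(-1, 3)

    preds = self(x)
    val_loss = self.angular_loss(preds, y)

    # sync_dist=True should reduce the logged value across all processes,
    # so the epoch-level 'val_loss' is averaged over every GPU, not just rank 0
    self.log('val_loss', val_loss.mean(), on_epoch=True, sync_dist=True)

    self.val_losses += val_loss.cpu().tolist()
    return val_loss

Even with that, I assume self.val_losses would still only hold the samples seen by the local process, so my manual statistics could still differ from the synced log under DDP.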