I'm having a hard time understanding how to use the return values of validation_step and validation_epoch_end (this also applies to training and test).

First of all, when do I want to use validation_epoch_end? I have seen some examples that don't use it at all.
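To make my mental model concrete, here is a plain-Python stand-in (no tensors, hardcoded loss values) for what I think validation_epoch_end is for, i.e. reducing the list of per-batch dicts returned by validation_step. Is this roughly right?

```python
def validation_epoch_end(outputs):
    # Plain-Python equivalent of torch.stack([x['val_loss'] for x in outputs]).mean()
    avg_loss = sum(o['val_loss'] for o in outputs) / len(outputs)
    return {'val_loss': avg_loss, 'log': {'val_loss': avg_loss}}

# One dict per validation batch, as returned by validation_step:
outputs = [{'val_loss': 1.0}, {'val_loss': 0.5}, {'val_loss': 0.0}]
print(validation_epoch_end(outputs))  # {'val_loss': 0.5, 'log': {'val_loss': 0.5}}
```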

Second, I do not understand how the logging works or how to use it, e.g.:

```
def training_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self.forward(x)
    loss = F.cross_entropy(y_hat, y)
    # 'log' is supposed to be a dict of metrics, not a bare tensor
    return {'loss': loss, 'log': {'train_loss': loss}}

def validation_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self.forward(x)
    loss = F.cross_entropy(y_hat, y)
    return {'val_loss': loss}

def validation_epoch_end(self, outputs):
    avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
    log = {'val_loss': avg_loss}
    return {'val_loss': avg_loss, 'log': log}
```

Where does 'log' go? I understand returning 'loss', but I don't understand where 'log' goes or how to use it.
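From reading around, my guess is that the trainer pulls the 'log' dict out of the returned dict and forwards it to whatever logger is attached (TensorBoard by default), while 'loss' itself is what gets backpropagated. A toy stand-in of that routing (FakeLogger and process_step_output are names I made up, not Lightning internals):

```python
class FakeLogger:
    """Stand-in for e.g. the TensorBoard logger: just records what it's given."""
    def __init__(self):
        self.metrics = []

    def log_metrics(self, metrics, step):
        self.metrics.append((step, metrics))

def process_step_output(output, logger, step):
    # My guess at what the trainer does: everything under 'log' is
    # forwarded to the logger, keyed by metric name; 'loss' is returned
    # for the optimization step.
    if 'log' in output:
        logger.log_metrics(output['log'], step)
    return output['loss']

logger = FakeLogger()
loss = process_step_output({'loss': 0.42, 'log': {'train_loss': 0.42}}, logger, step=0)
print(loss)            # 0.42
print(logger.metrics)  # [(0, {'train_loss': 0.42})]
```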

Third, from what I understand there is a new way to log by calling self.log. I get warnings when I don't use it. So what is the difference?
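If I understand the warnings right, the new style is to call self.log(...) inside the step instead of returning a 'log' dict, and Lightning then does the epoch-level averaging itself. A toy sketch of what I mean (FakeModule is my stand-in for pl.LightningModule, only recording calls; the real self.log also handles on_step/on_epoch reduction):

```python
class FakeModule:
    """Made-up stand-in for pl.LightningModule so this runs without Lightning."""
    def __init__(self):
        self.logged = {}

    def log(self, name, value, on_epoch=True, prog_bar=False):
        # The real self.log would also aggregate per-epoch (mean by default).
        self.logged[name] = value

class MyModule(FakeModule):
    def validation_step(self, batch, batch_idx):
        loss = 0.5  # stand-in for F.cross_entropy(self(x), y)
        self.log('val_loss', loss, on_epoch=True, prog_bar=True)
        # No dict return needed with this style.

m = MyModule()
m.validation_step(None, 0)
print(m.logged)  # {'val_loss': 0.5}
```

If that is accurate, it seems validation_epoch_end could be dropped entirely, since the on_epoch reduction replaces the manual torch.stack(...).mean(). Is that the intended difference?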