Metrics in TensorBoard's HParams View

Prior to PyTorch Lightning 0.9, I had been using the workaround proposed in issue #1228 to have metrics shown correctly in TensorBoard's hparams view:

from pytorch_lightning import LightningModule

class MyModule(LightningModule):
    def on_fit_start(self):
        # register a 'test_loss' placeholder before the fit routine starts,
        # so the metric shows up in TensorBoard's hparams view
        metric_placeholder = {'test_loss': 0}
        self.logger.log_hyperparams(self.hparams, metrics=metric_placeholder)

    # at some method later
    def test_epoch_end(self, outputs):
        metrics_log = {'test_loss': something}  # 'something' = the aggregated test loss
        return {'log': metrics_log}

I noticed that if you want to show metrics from both the latest validation epoch and the test loop, you have to manually hand the validation metrics over and log them once more in the test loop; otherwise the validation metrics are reset to the 0 placeholder during the testing phase. A rough sketch of that hand-over is below.
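For reference, this is roughly what the manual hand-over looks like in my setup (old dict-style logging; the attribute name val_loss_cache is just something I made up to carry the value over, and it assumes the step methods return dicts with 'val_loss' / 'test_loss' keys):

import torch

def validation_epoch_end(self, outputs):
    avg_val_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
    self.val_loss_cache = avg_val_loss  # remember it for the test phase
    return {'log': {'val_loss': avg_val_loss}}

def test_epoch_end(self, outputs):
    avg_test_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
    # re-log the cached validation metric, otherwise the hparams view
    # shows the 0 placeholder for it again after testing
    return {'log': {'test_loss': avg_test_loss,
                    'val_loss': self.val_loss_cache}}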

Now with the changes to logging in 0.9 using the result objects (e.g.
result = pl.EvalResult(checkpoint_on=val_loss)), I wonder what the current best practice is to get a working TensorBoard hparams view while still keeping the new, sleek logging style without manual aggregation in validation_epoch_end.
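For context, the 0.9 style I mean looks roughly like this (a sketch; self(x) is assumed to return logits for a classification task):

import pytorch_lightning as pl
import torch.nn.functional as F

def validation_step(self, batch, batch_idx):
    x, y = batch
    val_loss = F.cross_entropy(self(x), y)
    result = pl.EvalResult(checkpoint_on=val_loss)
    # logged value is reduced over the epoch without a manual validation_epoch_end
    result.log('val_loss', val_loss)
    return result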

Hello,
Did you find a solution to this issue? I am trying to log the test results in TensorBoard's HParams view. I have read a lot of posts across multiple GitHub issues, and it is still unclear to me how to do this correctly:

UserWarning: The {log:dict keyword} was deprecated in 0.9.1 and will be removed in 1.0.0 Please use self.log(...) inside the lightningModule instead.
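My understanding of the replacement for the dict return is something like the following (my own sketch, not verified; whether this alone populates the hparams view is exactly what I'm unsure about):

import torch.nn.functional as F

def test_step(self, batch, batch_idx):
    x, y = batch
    test_loss = F.cross_entropy(self(x), y)
    self.log('test_loss', test_loss)  # replaces return {'log': {'test_loss': ...}}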
