Allow logging of non-scalar values and provide more information to the logger

Moving forward, I was wondering if there are any plans to allow the logging of non-scalar values through the self.log(...) method provided to pytorch lightning modules. For custom logging the user can currently access the underlying logger through self.logger.experiment, which custom loggers are free to overload. The disadvantages of this are:

  1. It adds additional lines of code to an otherwise beautifully clean experiment definition
  2. It ties a pytorch lightning module to a particular logger implementation

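To make the drawback concrete, here is a minimal sketch of the current workaround. The model, the batch structure and the dummy image are purely illustrative, and the add_image call assumes the TensorBoard logger is being used:

    import pytorch_lightning as pl
    import torch

    class LitModel(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(32, 1)

        def training_step(self, batch, batch_idx):
            x, y = batch
            loss = torch.nn.functional.mse_loss(self.layer(x), y)
            self.log('train_loss', loss)  # scalars go through self.log just fine

            # Non-scalars currently mean reaching into the concrete backend
            # (here the TensorBoard SummaryWriter exposed by TensorBoardLogger),
            # which ties this module to that particular logger implementation.
            self.logger.experiment.add_image(
                'example_image', torch.rand(3, 64, 64), self.global_step)
            return loss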
A potentially better alternative would be to allow the pytorch lightning module to expose variables to a logger (scalars, tensors or potentially anything else), with the logger class responsible for processing them into the correct format for logging. This would allow the logging of images / videos / whatever else, provided the input to the logger can be generated from the exposed variables. One way to achieve this is to let the log method take arbitrary data types and, instead of converting them to scalars before passing them to the logger as self.trainer.logger.agg_and_log_metrics(scalar_metrics, step=step) in LoggerConnector.log_metrics(), convert them afterwards, changing the metrics_to_scalars defined in TrainerLoggingMixin to filter out / convert any object that cannot be converted to a scalar for downstream tasks. I am not sure if this last step is necessary. From looking at the code there are two places where side effects might arise from this change in logger.agg_and_log_metrics(...):

    self.logged_metrics.update(scalar_metrics)
    self.trainer.dev_debugger.track_logged_metrics_history(scalar_metrics)

In the first case, this seems to simply update a dictionary of logged variables, which is probably fine unless the variables in scalar_metrics are very large. In the second case scalar_metrics are appended to a history, which is not ideal, although from what I can tell this only happens when running in debug mode.
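To make the "convert / filter afterwards" idea concrete, here is a rough sketch (the function name and the exact checks are my own, not existing Lightning code) of stripping anything non-scalar before it reaches logged_metrics and the dev debugger, while the logger itself still receives the full dictionary:

    import numbers
    import torch

    def filter_to_scalars(metrics):
        """Keep only values that can safely be reduced to a plain float."""
        scalars = {}
        for name, value in metrics.items():
            if isinstance(value, torch.Tensor) and value.numel() == 1:
                scalars[name] = value.item()
            elif isinstance(value, numbers.Number):
                scalars[name] = float(value)
            # anything else (images, full tensors, ...) is left to the logger
        return scalars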

My question would be whether it is actually necessary to preserve logged variables in state. What is this used for? I was struggling to figure it out from the code. If it is not necessary, then the proposed change is pretty simple to implement. I have a working proof of concept for this approach that simply overrides metrics_to_scalars(...) in the Trainer class to look something like:

    def metrics_to_scalars(self, metrics):
        """Overridden to let the logger receive tensor variables as-is."""
        # `_dont_reduce` is a custom flag set on this Trainer subclass; when it
        # is enabled, metrics are passed through untouched and a couple of
        # extra pieces of run information are attached for the logger.
        if self._dont_reduce:
            metrics.update({
                'n_steps': len(self.train_dataloader),
                'n_epochs': self.max_epochs})
            return metrics
        else:
            return super().metrics_to_scalars(metrics)

Then the user is able to do what they want with the tensor variables by defining their own def log_metrics(self, metrics, step) on the logger.
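As a rough illustration of what such a logger could look like (the class, the dimensionality check and the _save_image helper are all hypothetical; a real implementation would subclass LightningLoggerBase and fill in its remaining abstract methods):

    import torch

    class TensorAwareLogger:
        def log_metrics(self, metrics, step):
            for name, value in metrics.items():
                if isinstance(value, torch.Tensor) and value.dim() >= 2:
                    # treat multi-dimensional tensors as images / videos / etc.
                    self._save_image(name, value, step)
                else:
                    # scalars keep the existing behaviour
                    print(f'step {step}: {name} = {value}')

        def _save_image(self, name, tensor, step):
            # placeholder: write the tensor out however the backend expects
            torch.save(tensor.cpu(), f'{name}_{step}.pt')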

At the moment the logger is also passed 'epoch' in the scalar_metrics dictionary, but there are cases where you might want the logger to log additional information that is not known until after the Trainer object has been instantiated (such as the length of the dataloader, etc.). For example, in the above I extract two other bits of information ("n_steps" and "n_epochs"). A better option might be to also give loggers access to the trainer object (just as the trainer object has access to the logger).
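Sketched out, the idea would be something like the following. Note that neither set_trainer nor this wiring exists in Lightning today; it is purely a sketch of the proposal:

    class TrainerAwareLogger:
        def set_trainer(self, trainer):
            # hypothetical hook: the Trainer would call this during setup
            self._trainer = trainer

        def log_metrics(self, metrics, step):
            # the logger can now read run-level information on its own
            n_steps = len(self._trainer.train_dataloader)
            n_epochs = self._trainer.max_epochs
            print(f'step {step} of {n_steps * n_epochs}: {metrics}')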

I would be more than happy to help contribute to this. I am a newcomer but keen to learn and happy to spend some time helping out!

Hello, my apologies for the late reply. We are slowly converging towards deprecating this forum in favor of the GitHub built-in version… Could we kindly ask you to recreate your question there - Lightning Discussions

Hello! Of course. I have just moved it across :slight_smile:
