Default Logging + MLFlow

I know you can create a list of loggers and pass it to the Trainer, but creating a TensorBoard logger and an MLflow logger isn't giving me what I want.

Is there a way to keep the default logging (as if I didn't pass in a logger) and add the MLflow logger?

Specifically, I want everything stored locally in ./lightning_logs (logs, checkpoints, the usual default stuff) while also logging to my MLflow server.

You can try:

import os
from pytorch_lightning.loggers import MLFlowLogger, TensorBoardLogger

logger=[
    MLFlowLogger(...),
    TensorBoardLogger(save_dir=os.getcwd(), name='lightning_logs')
],

If you are changing default_root_dir or weights_save_path in the Trainer, you may need to update save_dir in the TensorBoardLogger accordingly.
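For instance, a sketch of keeping the two in sync (the root path, experiment name, and tracking URI below are placeholder assumptions, not values from the thread):

```python
import os
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import MLFlowLogger, TensorBoardLogger

# Hypothetical root directory; if you move it, mirror the change in save_dir.
root = os.path.join(os.getcwd(), "runs")

trainer = Trainer(
    default_root_dir=root,
    logger=[
        # experiment_name and tracking_uri are placeholders for your own setup
        MLFlowLogger(experiment_name="demo", tracking_uri="http://localhost:5000"),
        # save_dir points at the same root as the Trainer, so TensorBoard
        # output lands alongside the default lightning_logs layout
        TensorBoardLogger(save_dir=root, name="lightning_logs"),
    ],
)
```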

Thanks so much for responding.
I tried your suggestion and got really close; however, it created two directories, "lightning_logs" and "1_lightning_logs".
It seems that "1_lightning_logs" has all the checkpoints, while "lightning_logs" has a version folder with an events.out.tfevents file and the hparams.yaml.

Is there any way to get the checkpoints into the lightning_logs folder and avoid creating the 1_lightning_logs folder?

Hmm… weird… mind reproducing it in Google Colab?

Will check. It works fine for me locally.
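In the meantime, a possible workaround sketch, assuming a recent PyTorch Lightning where ModelCheckpoint accepts dirpath (the MLflow arguments are placeholders): pin the checkpoint directory explicitly so it no longer depends on how the names and versions of multiple loggers get combined:

```python
import os
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.loggers import MLFlowLogger, TensorBoardLogger

tb_logger = TensorBoardLogger(save_dir=os.getcwd(), name="lightning_logs")

# Point the checkpoint callback at the TensorBoard logger's own version
# directory, so checkpoints land inside lightning_logs instead of a
# combined "1_lightning_logs" folder.
checkpoint_cb = ModelCheckpoint(
    dirpath=os.path.join(tb_logger.log_dir, "checkpoints"),
)

trainer = Trainer(
    # experiment_name is a placeholder for your MLflow setup
    logger=[MLFlowLogger(experiment_name="demo"), tb_logger],
    callbacks=[checkpoint_cb],
)
```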