Use the same logger when resuming from a checkpoint

Hey,

I have trained a model and saved a checkpoint, and now I want to resume training. For this, I use the resume_from_checkpoint argument of the Trainer class. For logging, I use the TensorBoardLogger. Unfortunately, a new logger is created for the new training run, but ideally I want logging to continue with the old logger. I tried saving the logger in the LightningModule of the checkpoint, but it still creates a new logging directory. Is there a way to reuse the old logger?
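For reference, this is roughly what my setup looks like (the paths, names, and model class are placeholders, not my actual code):

```python
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger

model = MyLightningModule()  # placeholder for my actual LightningModule

# This creates a fresh version_N directory on every run,
# even when resuming from a checkpoint.
logger = TensorBoardLogger(save_dir="lightning_logs", name="my_model")

trainer = pl.Trainer(
    logger=logger,
    resume_from_checkpoint="lightning_logs/my_model/version_0/checkpoints/last.ckpt",
)
trainer.fit(model)
```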

hello :slight_smile:

I had the same issue, but with Wandb. I contacted their community (I even had a video call with them), and I learned that I can't resume logging from the last checkpoint once the run has already finished.
But after a little research, I found this issue, and I think it can help you, since wandb and tensorboard are both just loggers.
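The gist, if I understood it correctly, is to pass an explicit version to TensorBoardLogger so it reuses the existing log directory instead of creating a new version_N one. A minimal sketch, assuming your first run ended up under lightning_logs/my_model/version_0 (all names and paths here are placeholders):

```python
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger

model = MyLightningModule()  # placeholder for your LightningModule

# Pinning the version makes the logger write into the old run's
# directory instead of creating a new version_N one.
logger = TensorBoardLogger(
    save_dir="lightning_logs",
    name="my_model",
    version=0,
)

trainer = pl.Trainer(
    logger=logger,
    max_epochs=20,  # must exceed the epoch stored in the checkpoint
    resume_from_checkpoint="lightning_logs/my_model/version_0/checkpoints/last.ckpt",
)
trainer.fit(model)
```

Since resuming also restores the global step from the checkpoint, the new points should continue the old curves in TensorBoard rather than starting over at step 0.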

I hope this helps :wink: good luck!