Saving/Loading the Model for Inference Later

I am a bit confused here between saving the model and saving a checkpoint.

What I would like to do is save the model (the architecture and weights) into a file (or multiple files) and then load it later without having to define the LightningModule and the model's architecture in the inference code (a different notebook).

Can this be done?

My hope was to save the models in MLflow across a bunch of different experiments and then later run all of the models on hold-out data. I don't really want to have to go back and rewrite the LightningModule and model definition for every saved model just to load its checkpoint.

I hope that makes sense.

Can anyone here offer some help?

Hello, you can export the model to ONNX and then load it without having the Python model code.
But that works only for inference.


As I see it, you already use MLflow for tracking, so you can also use MLflow to save the whole model with the log_model() or save_model() functions after training.