Implement model ensemble

I have been trying to implement model ensembling in PyTorch Lightning, but I haven't found an elegant solution. The problem is simple: I train a ResNet-50 on the same dataset several times with different random seeds, and at inference time on a new dataset I want to average the predictions from all of those models. I can certainly run inference multiple times, each time with a different model, and save the predictions to a CSV, but I feel there should be a more elegant solution. Does anyone have any suggestions?
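Something like the sketch below is roughly what I have in mind: a small wrapper LightningModule (the EnsembleModel name, the MyResNet50Module class, and the checkpoint paths are just placeholders) that loads the trained checkpoints and averages their softmax outputs in a single forward pass. I am just not sure whether this is the idiomatic way to do it:

import torch
import torch.nn as nn
import pytorch_lightning as pl

class EnsembleModel(pl.LightningModule):
    """Hypothetical wrapper that averages the predictions of several trained models."""

    def __init__(self, checkpoint_paths):
        super().__init__()
        # MyResNet50Module stands in for my actual ResNet-50 LightningModule
        self.models = nn.ModuleList(
            [MyResNet50Module.load_from_checkpoint(p) for p in checkpoint_paths]
        )

    def forward(self, x):
        # average the per-model class probabilities
        probs = [torch.softmax(m(x), dim=1) for m in self.models]
        return torch.stack(probs).mean(dim=0)

# placeholder checkpoint paths, one per random seed
ensemble = EnsembleModel(["seed0.ckpt", "seed1.ckpt", "seed2.ckpt"])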

thanks,
Jianxu

Hello, I am not sure whether having the very same model just with different weights makes much of a difference. In theory, your ensemble should be composed of weak classifiers with various architectures to cover the ground…

I tried to create multiple models and train them separately.
It is not necessary to store the results, as long as they are two separate objects.
Here is a snippet of my code:

import pytorch_lightning as pl
from torch.utils.data import DataLoader

autoencoder = LitAutoEncoder(t, signal, basisset)
autoencoder2 = LitAutoEncoder(t, signal, basisset)

def main():
    # one Trainer per model; Lightning moves each model to the GPU itself,
    # so an explicit .to('cuda:0') on the module is not needed
    trainer = pl.Trainer(gpus=1, max_epochs=100)
    trainer.fit(autoencoder, DataLoader(train, batch_size=32), DataLoader(val, batch_size=32))

    trainer2 = pl.Trainer(gpus=1, max_epochs=100)
    trainer2.fit(autoencoder2, DataLoader(train, batch_size=32), DataLoader(val, batch_size=32))

It might be possible to train them on two different GPUs.
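For example, a minimal sketch (reusing the objects from the snippet above and assuming the machine exposes at least two GPUs): each Trainer is pinned to a different device index via a gpus index list. Note that the two fit calls still run one after the other within a single script; to train truly in parallel you would launch them as separate processes.

# assumes at least two visible GPUs; reuses autoencoder/autoencoder2, train, val from above
trainer = pl.Trainer(gpus=[0], max_epochs=100)   # first model on GPU 0
trainer.fit(autoencoder, DataLoader(train, batch_size=32), DataLoader(val, batch_size=32))

trainer2 = pl.Trainer(gpus=[1], max_epochs=100)  # second model on GPU 1
trainer2.fit(autoencoder2, DataLoader(train, batch_size=32), DataLoader(val, batch_size=32))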