Forward method for a Sequence-to-Sequence RNN time series model

Hello folks. This is my first time looking at PyTorch Lightning, and I have a question about the forward() method. I have a Sequence-to-Sequence model that I use for time series analysis: an encoder that is an LSTM, and a decoder that is an LSTM with attention.

My question is about setting up the forward() method. I am not quite clear on what that method is intended to do or what its output is supposed to be. Perhaps I am confusing it with the forward() method of a plain PyTorch nn.Module.

In my own Seq-2-Seq model, inference usually involves taking a time series window (say, 20 days) and then predicting the next 20 days. I take the original time series window, pass it to the encoder, and capture the encoder's hidden state. Then I pass that hidden state to the decoder and iterate step by step, feeding each prediction back in, until I have 20 predictions in my output array.
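To make that concrete, here is a bare-PyTorch sketch of what I mean. The names and sizes are all made up, and I have swapped my attention decoder for a plain LSTMCell just to keep the example short:

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal sketch only -- names/sizes are hypothetical and the
    attention mechanism is omitted for brevity."""

    def __init__(self, n_features: int = 1, hidden_size: int = 64, horizon: int = 20):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(n_features, hidden_size, batch_first=True)
        # Stand-in for my attention decoder; an LSTMCell keeps the loop explicit.
        self.decoder_cell = nn.LSTMCell(n_features, hidden_size)
        self.head = nn.Linear(hidden_size, n_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 20, n_features), the input window
        _, (h, c) = self.encoder(x)        # capture the encoder hidden state
        h, c = h[-1], c[-1]                # last layer's state seeds the decoder
        step_input = x[:, -1, :]           # start from the last observed value
        preds = []
        for _ in range(self.horizon):      # iterate until we have 20 predictions
            h, c = self.decoder_cell(step_input, (h, c))
            step_input = self.head(h)      # feed each prediction back in
            preds.append(step_input)
        return torch.stack(preds, dim=1)   # (batch, 20, n_features)
```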

What was a little confusing was that in the MNIST autoencoder example, the forward() method returned only the encoder's output (the embedding), without the decoder step. I can see that the decoder there exists only to train the encoder, but that part was not clear from the video or the docs.
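For reference, the pattern in that example looks roughly like this (reconstructed from memory, so take the details with a grain of salt): forward() is the inference path and returns just the embedding, while training_step() runs the full encode-decode pipeline to compute the reconstruction loss.

```python
import pytorch_lightning as pl
import torch
import torch.nn.functional as F
from torch import nn

class LitAutoEncoder(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 3))
        self.decoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 28 * 28))

    def forward(self, x):
        # Inference here means "give me the embedding", so only the encoder runs.
        return self.encoder(x)

    def training_step(self, batch, batch_idx):
        # Training needs the reconstruction, so the decoder shows up here instead.
        x, _ = batch
        x = x.view(x.size(0), -1)
        z = self.encoder(x)
        x_hat = self.decoder(z)
        return F.mse_loss(x_hat, x)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```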

Hence I imagine the forward() method for my own Seq-2-Seq model would include both the encoder and the decoder steps, so that it generates the prediction output array, but I want to make sure I am on the right track. Haha, I will also have to get used to a whole new set of error messages, so I am not sure how I will decipher those in the Lightning framework.
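In Lightning terms I picture something like the sketch below, reusing the made-up Seq2Seq module from my first snippet: forward() runs the full encode-then-decode loop and returns the 20 predictions, and training_step() just calls self(x).

```python
import pytorch_lightning as pl
import torch
import torch.nn.functional as F

class LitSeq2Seq(pl.LightningModule):
    def __init__(self, n_features: int = 1, hidden_size: int = 64):
        super().__init__()
        self.model = Seq2Seq(n_features, hidden_size)  # hypothetical module from above

    def forward(self, x):
        # For my use case, inference *is* the full encode-then-decode loop,
        # so forward() returns the 20-step prediction array directly.
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch              # y: the next 20 days, shaped like the predictions
        preds = self(x)           # self(x) dispatches to forward()
        return F.mse_loss(preds, y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```

My understanding is that trainer.predict() falls back to forward() by default, so with this layout both trainer.predict() and calling the model directly would give me the 20-day forecast. Is that right?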