I. Introduction
Advanced and complex applications such as multi-step time series forecasting require sequence-to-sequence learning, in which an LSTM model is trained to map an input sequence to an output sequence. An LSTM-based sequence-to-sequence (seq2seq) model consists of two submodels, known as the encoder and the decoder. The Encoder-Decoder LSTM was designed for sequence-to-sequence forecasting in which the input and output sequences may have different lengths. The encoder summarises the input sequence by encoding it into a fixed-length vector, called the context vector, which represents the model’s interpretation of the input sequence and forms the encoder’s output. To forecast the output sequence, the decoder receives the context vector as input and the final encoder state as its initial state. Various encoder models can be used within the Encoder-Decoder LSTM architecture, including stacked, bidirectional, CNN, and vanilla LSTM encoders.
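As a minimal sketch of this architecture, the following Keras code builds an Encoder-Decoder LSTM in which the context vector is repeated as the decoder input and the final encoder states initialise the decoder; all dimensions, layer sizes, and the forecasting horizon are illustrative assumptions rather than values from this work.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical dimensions: input length, forecast horizon, features, hidden units.
n_steps_in, n_steps_out, n_features, n_units = 14, 7, 1, 64

# Encoder: summarises the input sequence; its final hidden state is the context vector.
enc_in = layers.Input(shape=(n_steps_in, n_features))
enc_out, state_h, state_c = layers.LSTM(n_units, return_state=True)(enc_in)

# Decoder: receives the context vector (repeated once per output step) as input
# and the final encoder states as its initial state.
dec_in = layers.RepeatVector(n_steps_out)(enc_out)
dec_out = layers.LSTM(n_units, return_sequences=True)(
    dec_in, initial_state=[state_h, state_c]
)

# One forecast value per output time step.
forecast = layers.TimeDistributed(layers.Dense(1))(dec_out)

model = models.Model(enc_in, forecast)
model.compile(optimizer="adam", loss="mse")
```

Swapping the single-layer encoder for a stacked, bidirectional, or CNN encoder changes only the encoder branch of this graph; the context-vector interface to the decoder remains the same.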