
Interconnected Traffic Forecasting Using Time Distributed Encoder-Decoder Multivariate Multi-Step LSTM


Abstract:

Long Short-term Memory (LSTM) is a Recurrent Neural Network (RNN) that is widely used in time series traffic forecasting. LSTM captures both short-term and long-term trends and dependencies in sequential data such as time series, as it contains specialized memory cells that store information over long periods. Existing traffic forecasting approaches lack features to forecast the traffic speed of interconnected road links and to provide multivariate (i.e., multi-input and multi-output) and multi-step traffic forecasting in both the short and the long term. We propose an Encoder-Decoder LSTM-based sequence-to-sequence architecture to capture the traffic speed of interconnected road links and provide multivariate multi-step traffic forecasting in both the short term (15 minutes) and the long term (two days). We apply a sliding-window approach that feeds short-term forecasts back into the model as input to project long-term traffic forecasts. Our model can incorporate multiple interconnected road links and provide traffic speed forecasts for multiple future steps. We conducted our experiment at an intersection in Oshawa, ON, Canada, and evaluated performance using the error distribution and Mean Absolute Error. The evaluation shows that the model can forecast traffic speed across interconnected road links with negligible error, both in the short term and in the long term.
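The sliding-window idea described above can be illustrated with a short sketch: the model's short-term forecasts are appended to the observation window and fed back in until the long-term horizon is covered. This is a minimal illustration under assumed shapes and names (e.g., `recursive_forecast`, `n_out_steps`), not the authors' implementation.

```python
import numpy as np

def recursive_forecast(model, history, n_out_steps, total_steps):
    """Extend the forecast horizon by re-feeding the model its own outputs.

    history:     array of shape (n_in_steps, n_links) with the latest observations.
    n_out_steps: steps produced per model call (the short-term horizon).
    total_steps: total number of future steps to cover (the long-term horizon).
    """
    window = history.copy()
    forecasts = []
    while len(forecasts) < total_steps:
        # The model expects a batch dimension: (1, n_in_steps, n_links).
        yhat = model.predict(window[np.newaxis, ...], verbose=0)[0]  # (n_out_steps, n_links)
        forecasts.extend(yhat)
        # Slide the window: drop the oldest steps, append the new forecasts.
        window = np.vstack([window[n_out_steps:], yhat])
    return np.array(forecasts[:total_steps])
```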
Date of Conference: 02-05 June 2024
Date Added to IEEE Xplore: 15 July 2024
Conference Location: Jeju Island, Korea, Republic of

I. Introduction

Advanced and complex applications such as multi-step time series forecasting require sequence-to-sequence learning. An LSTM model can be trained in a sequence-to-sequence setting to map an input sequence to an output sequence. An LSTM-based sequence-to-sequence (seq2seq) model consists of two submodels, known as the encoder and the decoder. The Encoder-Decoder LSTM is designed for sequence-to-sequence forecasting in which the input and output sequences can have different lengths. The encoder summarises the input sequence by encoding it into a fixed-length vector, called the context vector, which represents the model's interpretation of the sequence and serves as the encoder's output. To forecast the output sequence, the decoder receives the context vector as input and the final encoder state as its initial state. Various encoder models can be used in the Encoder-Decoder LSTM architecture, including stacked, bidirectional, CNN, and vanilla LSTM models.
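A minimal Keras sketch of such a time-distributed encoder-decoder LSTM for multivariate multi-step forecasting is shown below. The layer sizes, window lengths, and number of road links are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

n_links = 4        # number of interconnected road links (assumed)
n_in_steps = 12    # input window length (assumed)
n_out_steps = 3    # forecast horizon in steps (assumed)

model = Sequential([
    # Encoder: compress the input window into a fixed-length context vector.
    LSTM(64, input_shape=(n_in_steps, n_links)),
    # Repeat the context vector once per forecast step for the decoder.
    RepeatVector(n_out_steps),
    # Decoder: produce one hidden state per output time step.
    LSTM(64, return_sequences=True),
    # TimeDistributed Dense maps each decoder state to speeds for all links.
    TimeDistributed(Dense(n_links)),
])
model.compile(optimizer="adam", loss="mae")

# Dummy data with the assumed shapes: (samples, time steps, links).
X = np.random.rand(256, n_in_steps, n_links).astype("float32")
y = np.random.rand(256, n_out_steps, n_links).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```

Here the `RepeatVector` layer supplies the context vector to every decoder step, and the `TimeDistributed` wrapper applies the same output layer at each forecast step, which is what allows the model to emit a full multivariate sequence rather than a single vector.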
