I. Introduction
Autonomous vehicles require spatially and semantically rich representations of their environment, and constructing these from cameras alone is challenging. While semantic segmentation in the image plane is a good initial step, it lacks the spatial layout that would make it directly useful for downstream tasks such as trajectory forecasting and path planning. A semantically segmented bird's-eye-view (BEV) map provides a compact way of capturing the spatial configuration of a scene and the agents within it.