I. Introduction
5G and Beyond 5G systems can provide high-resolution measurements of delays and angles, which make them attractive for localization and sensing applications [1]–[4]. Localization of connected devices is important for autonomous vehicles [5], Vehicle-to-Everything (V2X) [6], [7], and spatial signal design [8]. Sensing of passive objects and connected users is an important part of integrated sensing and communication (ISAC) [9]–[11], e.g., to support predictive beam tracking, beam alignment, and communication resource allocation.
Sensing in ISAC relies on standard communication waveforms, such as orthogonal frequency division multiplexing (OFDM), and can be monostatic or bistatic. In monostatic sensing, the transmitter and receiver are co-located, which brings the advantages of a common clock and perfect knowledge of the transmitted data signal [12], thus allowing a common radio signal for communication and sensing. In bistatic sensing, the transmitter and receiver are spatially separated [3]. Hence, the data symbols are unknown to the receiver, and the transmitter and receiver are not synchronized. The former issue can be resolved by sending predetermined pilot signals within the OFDM frame structure, while the latter issue has the serious implication that only delay differences among propagation paths carry information.
Even in a bistatic setting, there are several benefits and challenges to the integration of sensing and communication, beyond the use of the same waveform type and the aforementioned communication enhancements. In particular, the sensing and communication resources must be multiplexed, which leads to interesting trade-offs at the transmit signal level [13]. At the receiver side, the channel estimation routines used as part of communication can be largely reused for sensing, leading to chip area savings. When the transmitter and receiver have known locations (e.g., base stations (BSs)), the problem is referred to as passive localization or mapping.
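To make the clock-offset implication concrete, the following sketch (with purely illustrative path delays and bias values, not taken from any measurement) shows that in a bistatic setup an unknown transmitter–receiver clock offset shifts every measured path delay by the same amount, so absolute delays are uninformative while delay differences remain bias-free.

```python
# Hypothetical example: an unknown clock bias shifts all measured delays
# equally, so only delay differences carry geometric information.

true_delays = [120e-9, 185e-9, 260e-9]  # true propagation delays [s] (illustrative)
clock_bias = 37e-9                      # unknown TX-RX clock offset [s] (illustrative)

# Each measured delay is corrupted by the same unknown bias.
measured = [tau + clock_bias for tau in true_delays]

# Absolute measured delays do not match the true delays.
assert all(abs(m - t) > 1e-9 for m, t in zip(measured, true_delays))

# Delay differences (relative to the first path) cancel the bias exactly,
# up to floating-point error, and recover the true differences.
diffs_measured = [m - measured[0] for m in measured[1:]]
diffs_true = [t - true_delays[0] for t in true_delays[1:]]
assert all(abs(dm - dt) < 1e-15 for dm, dt in zip(diffs_measured, diffs_true))

print(diffs_measured)  # numerically matches diffs_true
```

This is why bistatic sensing pipelines typically anchor estimation on the delay of a reference path (e.g., the line-of-sight path) rather than on absolute time of arrival.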
When the transmitter or the receiver has an unknown location (e.g., a user equipment (UE)), this location must be determined jointly with the map (this also applies to the monostatic case with a UE radar), a problem referred to as radio simultaneous localization and mapping (SLAM) [14], [15]. Here, the UE acts as a sensor with an unknown and time-varying state, while the static objects in the propagation environment act as landmarks with unknown states and unknown cardinality.