1. INTRODUCTION
Autonomous vehicles are poised to revolutionize transportation. One direction toward realizing this vision involves relaying massive volumes of locally collected sensor data, on the order of 10 Gbps, from the vehicle to a central cloud for route optimization and obstacle avoidance. Transmission in the millimeter-wave (mmWave) band makes such high data rates possible [1], though beamforming is required to overcome the high path loss in this band by concentrating the radio frequency (RF) energy in narrow spatial lobes. We have previously used machine learning (ML) over contextual information from the environment, captured via images from a vehicle-mounted camera, to enable beamforming faster than the standards-defined brute-force approach. In this paper, we tackle the key problem of enabling our image-guided beamforming approach to work in environments not seen during training, while minimizing the costly overhead of new data collection.