I. Introduction
The current practice for testing analog/RF devices is specification (or parametric) testing, which involves direct measurement of the performance parameters (e.g., gain, integral nonlinearity, noise figure, third-order intercept point, etc.). While specification testing is highly accurate, it often incurs a very high cost. Indeed, testing the analog/RF functions of a mixed-signal integrated circuit (IC) is typically responsible for the majority of the total test cost, despite the fact that the vast majority of the IC is digital [1]. In particular, the base cost per second of automatic test equipment (ATE) escalates rapidly when mixed-signal and RF features are incorporated. Compounding this problem, specification testing involves long test times: during its course, the device is switched consecutively through numerous test configurations, incurring long setup and settling times. In each test configuration, measurements are performed multiple times and averaged in order to mitigate the effects of thermal noise and crosstalk. In addition, this elaborate procedure is repeated under various operating conditions, such as different temperatures, supply voltages, and output loads.