I. Introduction
According to a recent ITRS report, memory devices will soon occupy over 90% of the total area of an SOC (system on chip). SRAM is widely used as cache in various processors, e.g., CPUs and APs, so reducing SRAM power directly advances these processors. To meet the power and energy saving demands of SRAMs, three major design approaches have been proposed.
Current mode sense amplification [1]: During a read operation, the SA (sense amplifier) resolves the output by sensing the differential current on the two bitlines, enabling both low power and high speed. Notably, the output delay is independent of the bitline capacitances in this scheme.
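The independence from bitline capacitance can be illustrated with a simplified first-order model (our own sketch, not taken from [1]; $C_{BL}$, $I_{cell}$, $\Delta V$, $C_{out}$, and $\Delta I$ are illustrative symbols):

```latex
% Voltage-mode sensing: the cell current I_cell must discharge the
% bitline capacitance C_BL by a detectable swing \Delta V, so
%   t_{sense,V} \approx \frac{C_{BL}\,\Delta V}{I_{cell}},
% which grows linearly with C_BL.
%
% Current-mode sensing: the SA presents a low-impedance input that
% clamps the bitlines at a nearly constant voltage; the differential
% current \Delta I is mirrored onto an internal output node of small
% capacitance C_{out} \ll C_{BL}, giving
%   t_{sense,I} \approx \frac{C_{out}\,\Delta V_{out}}{\Delta I},
% which, to first order, does not depend on C_BL.
```

This is why current-mode sensing scales well with long, heavily loaded bitlines, where voltage-mode sensing slows down.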
Current compensation circuit [2]: When the SRAM begins operation, the current compensation circuit detects the leakage current of each bitline and injects a compensating current into the corresponding bitline. Although this approach does not reduce the leakage current itself, it improves the access speed of the SRAM. It offers no energy saving, however, particularly for standby cells.
Secondary supply [3]: By using an additional, higher supply voltage, access to the SRAM cell is accelerated. The penalty, however, is the extra energy consumed, and the standby power and energy of un-accessed cells are likewise left unaddressed.