I. Introduction
Strained-silicon CMOSFETs have become a vital technology for the continued improvement of transistor performance laid out in the International Technology Roadmap for Semiconductors and have already been incorporated into the 90- and 65-nm CMOS technology nodes [1]–[3]. Strained-Si or channel-stress engineering is therefore being actively pursued to significantly enhance carrier mobility and transistor performance in nanoscale CMOSFETs. Several approaches have been considered for introducing strain into CMOS transistors to improve electron and hole mobility. In one approach, transistors are fabricated on a biaxially tensile-strained Si layer formed on a relaxed silicon–germanium (SiGe) buffer layer [4]. However, issues involving high defect density, Ge diffusion, complex process integration, and cost have hindered its adoption in nanoscale CMOSFETs [5], [6].

More recently, uniaxial tensile and compressive strains have been proposed to enhance the drive currents of NMOS and PMOS devices separately. For example, a highly stressed silicon nitride (SiN) contact etch stop layer (CESL) can be used to induce uniaxial tensile or compressive strain in the transistor channel region [7]–[10]. Similarly, SiC source/drain (S/D) stressors for NMOS and SiGe S/D stressors for PMOS enhance electron and hole mobility through the tensile and compressive stresses they induce in the Si channel, respectively [11], [12].

Alongside the effort to improve device performance through stress engineering, the effect of stress on reliability has also been studied [13]–[19]. However, few reports concurrently investigate how both device performance and reliability [hot-carrier injection (HCI) and negative bias temperature instability (NBTI)] depend on the applied stress. Moreover, the mechanisms of device degradation and interface-trap variation under different film stresses have not yet been fully clarified.