I. Introduction
Robotic knee-ankle prostheses are developed to enable individuals with transfemoral amputations to navigate the various walking environments they encounter daily. Because community ambulation involves different walking modes, such as level walking, ramps, and stairs, recent prosthesis studies have focused on generating lower-limb dynamics that adapt to the environment [1]. Conventional prosthesis systems recognize the terrain condition (i.e., the ambulation mode) and switch to the corresponding mode-specific controller [2], [3], [4], [5]. However, because people take thousands of strides across various modes each day [3], even a 99%-accurate mode classifier would misclassify tens of strides daily, exposing a prosthesis user to dozens of potential falls from the kinematic and kinetic mismatches induced by incorrect mode switches. Furthermore, human dynamics vary not only across ambulation modes but also within each mode [6], [7]. Thus, traditional methods that categorize user intent into discrete modes cannot account for within-mode variations in prosthesis systems. Recent studies have addressed these issues by implementing continuously varying control parameters within specific subsets of modes, such as level ground and ramps [8], [9], [10] or level ground and stairs [11], [12]. However, these works were conducted with limited environmental combinations and have not yet been demonstrated across a fully unified set of ambulation modes.