I. Introduction
Lower-limb wearable robots, such as powered prostheses and exoskeletons, have great potential to enhance mobility for people with lower-limb disabilities. By inferring the user's intent (such as walking or climbing stairs) from onboard sensors and applying the corresponding torques to the user's biological or prosthetic joints, these devices aim to mechanically compensate for the disability and allow users to perform these tasks comfortably. However, because typical onboard sensors, for example, inertial measurement units (IMUs), joint encoders, and force/torque sensors, offer only a limited picture of the user's true intent [1]–[12], inferring human intent is challenging, and failures of this inference can lead to unreliable device behavior. For safety-critical applications such as prosthetic legs, this unreliability is a major obstacle to system acceptance and adoption. As a result, reliably recognizing the various activities of daily living has become a key challenge for lower-limb wearable robot controllers.