I. Introduction
Geometric constraints, such as the plane of a whiteboard during erasing or the arc of an opening door, frequently occur in everyday tasks. Constrained motions differ from unconstrained ones in that they restrict the available degrees of freedom and therefore require different control approaches for execution on a robot. Recognizing constraints in human demonstrations is thus valuable in robot programming by demonstration (PbD) [1], where such demonstrations ultimately serve as the basis for robot programs. While most current PbD work uses kinesthetic demonstrations (i.e., users showing the task by moving the robot), newer approaches consider a more natural input method: recording the user's movements as they perform the task with their hands or a tool. Our goal is to provide methods for interpreting such natural demonstrations, specifically to identify the geometric constraints involved.