I. Introduction
Despite the rapid evolution of planning and robotics, almost all existing systems require at least occasional human support to achieve their goals and meet ethical requirements [2], [9], [17], [20]. These approaches agree that restricting the system's level of autonomy is a straightforward way to ensure that sufficient operator support is obtained before engaging in sensitive actions. However intuitive this restriction may be, almost none of these approaches account for its natural consequences when optimizing policies. These consequences are tied to the dynamics of requesting human support: the time and cost of obtaining and maintaining support, how easily this support can be withdrawn over time, and the risks incurred if support is unexpectedly denied while the system should be under tight human control. As a result, these systems produce ill-fitting plans that are insensitive to the human context, such as taking the shortest path on the assumption that an operator will be available to supervise its sensitive segments, regardless of the hour of the day or of whether the connection is lossy and the segment time-sensitive. Instead, we want a system capable of automatically adjusting its problem solving to the support context, for instance by taking a slightly longer route if it allows for a more affordable level of support.
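To make this trade-off concrete, the following minimal sketch (all names, costs, and probabilities are hypothetical illustrations, not part of any cited system) scores candidate routes by traversal cost plus the expected cost of securing operator support on sensitive segments, rather than by path length alone:

```python
# Hypothetical illustration: route selection that accounts for the cost and
# risk of obtaining operator support, not just traversal cost.
from dataclasses import dataclass

@dataclass
class Segment:
    traversal_cost: float        # e.g., travel time for this segment
    sensitive: bool              # does this segment require operator supervision?
    support_cost: float = 0.0    # cost of securing an operator (context-dependent)
    denial_risk: float = 0.0     # probability the support request is denied
    denial_penalty: float = 0.0  # cost incurred if support is denied mid-segment

def route_cost(route: list[Segment]) -> float:
    """Traversal cost plus expected support cost over sensitive segments."""
    total = 0.0
    for seg in route:
        total += seg.traversal_cost
        if seg.sensitive:
            total += seg.support_cost + seg.denial_risk * seg.denial_penalty
    return total

# The shortest route crosses a sensitive area at a time when support is
# expensive and may be withdrawn; the longer route needs no supervision.
short_route = [Segment(10.0, sensitive=True, support_cost=6.0,
                       denial_risk=0.3, denial_penalty=20.0)]
long_route = [Segment(14.0, sensitive=False)]

best = min([short_route, long_route], key=route_cost)
print(route_cost(short_route), route_cost(long_route))  # 22.0 vs. 14.0
```

Under these illustrative numbers the slightly longer route wins, mirroring the behavior we want: the planner trades a small amount of path cost for a more affordable and more reliable level of support.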