
Long-Horizon Planning and Execution With Functional Object-Oriented Networks



Abstract:

Following work on joint object-action representations, functional object-oriented networks (FOON) were introduced as a knowledge graph representation for robots. A FOON contains symbolic concepts useful to a robot's understanding of tasks and its environment for object-level planning. Prior to this work, little has been done to show how plans acquired from FOON can be executed by a robot, as the concepts in a FOON are too abstract for execution. We thereby introduce the idea of exploiting object-level knowledge as a FOON for task planning and execution. Our approach automatically transforms FOON into PDDL and leverages off-the-shelf planners, action contexts, and robot skills in a hierarchical planning pipeline to generate executable task plans. We demonstrate our entire approach on long-horizon tasks in CoppeliaSim and show how learned action contexts can be extended to never-before-seen scenarios.
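As a concrete illustration of the FOON-to-PDDL translation described in the abstract, the following minimal Python sketch maps one functional unit's input object states to an action's preconditions and its output states to effects. It is a toy under stated assumptions, not the paper's actual encoding: the unit contents, the predicate naming scheme, and the unit_to_pddl_action helper are hypothetical, and a full domain would additionally declare types, objects, and parameters.

# Hypothetical sketch: translating one FOON functional unit into a PDDL action.
# Input-side object states become preconditions; output-side states become effects.
functional_unit = {
    "motion": "pour",
    "inputs":  [("cup", "contains-water"), ("bowl", "empty")],
    "outputs": [("cup", "empty"), ("bowl", "contains-water")],
}

def state_to_predicate(obj, state):
    # e.g. ("bowl", "contains-water") -> "(contains-water bowl)"
    return f"({state} {obj})"

def unit_to_pddl_action(unit):
    pre = " ".join(state_to_predicate(o, s) for o, s in unit["inputs"])
    eff = " ".join(state_to_predicate(o, s) for o, s in unit["outputs"])
    return (f"(:action {unit['motion']}\n"
            f"  :precondition (and {pre})\n"
            f"  :effect (and {eff}))")

print(unit_to_pddl_action(functional_unit))

Printed this way, the unit becomes an action whose precondition requires (contains-water cup) and (empty bowl) and whose effect asserts (empty cup) and (contains-water bowl), a form that an off-the-shelf planner such as Fast Downward [25] can chain into a task plan.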
Published in: IEEE Robotics and Automation Letters (Volume: 8, Issue: 8, August 2023)
Page(s): 4513 - 4520
Date of Publication: 13 June 2023


I. Introduction

An ongoing trend in robotics research is the development of robots that can jointly understand human intention and action and execute manipulations in human domains. A key component for such robots is a knowledge representation that allows a robot to understand its actions in a way that mirrors how humans communicate about action [1]. Inspired by the theory of affordance [2] and prior work on joint object-action representation [3], the functional object-oriented network (FOON) was introduced as a knowledge graph representation for service robots [4], [5]. A FOON describes object-oriented manipulation actions through its nodes and edges and aims to be a high-level planning abstraction closer to human language and understanding. FOONs can be automatically created from video demonstrations [6], and a set of FOONs can be merged into a single network from which knowledge can be quickly retrieved as plan sequences called task trees [4].
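To make the structure and retrieval idea concrete, the following is a minimal Python sketch, not the authors' implementation: a merged FOON is treated as a set of functional units, each mapping input objects to output objects through a motion, and a task tree is gathered by working backward from a goal object until only objects already available remain. The unit contents, object names, and the retrieve_task_tree helper are hypothetical, and real FOON nodes carry richer attributes such as object states and motion identifiers.

# Hypothetical sketch of task tree retrieval over a merged FOON.
# Assumes the network is acyclic and every needed object is either
# available in the environment or produced by some functional unit.
units = [
    {"motion": "pour", "inputs": {"cup", "water"},          "outputs": {"cup of water"}},
    {"motion": "mix",  "inputs": {"cup of water", "flour"}, "outputs": {"batter"}},
]
available = {"cup", "water", "flour"}

def retrieve_task_tree(goal, units, available):
    tree, frontier = [], [goal]
    while frontier:
        target = frontier.pop()
        if target in available:
            continue
        # Pick any unit that produces the target object (no search heuristics here).
        unit = next(u for u in units if target in u["outputs"])
        tree.append(unit)
        frontier.extend(unit["inputs"])
    return list(reversed(tree))  # order units from raw ingredients toward the goal

for u in retrieve_task_tree("batter", units, available):
    print(u["motion"], sorted(u["inputs"]), "->", sorted(u["outputs"]))

On this toy network, retrieval returns the pour unit followed by the mix unit, i.e., a task tree leading from the available objects to the goal object batter.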

References
1.
D. Paulius and Y. Sun, "A survey of knowledge representation in service robotics", Robot. Auton. Syst., vol. 118, pp. 13-30, 2019.
2.
J. Gibson, "The theory of affordances", in Perceiving, Acting, and Knowing: Toward an Ecological Psychology, Hillsdale, NJ, USA: Erlbaum, 1977.
3.
Y. Sun, S. Ren and Y. Lin, "Object-object interaction affordance learning", Robot. Auton. Syst., vol. 62, pp. 487-496, 2013.
4.
D. Paulius, Y. Huang, R. Milton, W. D. Buchanan, J. Sam and Y. Sun, "Functional object-oriented network for manipulation learning", Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., pp. 2655-2662, 2016.
5.
D. Paulius, A. B. Jelodar and Y. Sun, "Functional object-oriented network: Construction and expansion", Proc. IEEE Int. Conf. Robot. Automat., pp. 5935-5941, 2018.
6.
A. B. Jelodar, D. Paulius and Y. Sun, "Long activity video understanding using functional object-oriented network", IEEE Trans. Multimedia, vol. 21, no. 7, pp. 1813-1824, Jul. 2019.
7.
D. McDermott et al., "PDDL–The planning domain definition language", 1998.
8.
D. Paulius, "Object-level planning and abstraction", Proc. Conf. Robot Learn. Workshop Learn. Percep. Abstraction Long- Horiz. Plan., pp. 1-4, 2022.
9.
G. Konidaris, "On the necessity of abstraction", Curr. Opin. Behav. Sci., vol. 29, pp. 1-7, 2019.
10.
O. Kroemer, S. Niekum and G. Konidaris, "A review of robot learning for manipulation: Challenges representations and algorithms", J. Mach. Learn. Res., vol. 22, no. 30, pp. 1-82, 2021.
11.
A. Agostini, M. Saveriano, D. Lee and J. Piater, "Manipulation planning using object-centered predicates and hierarchical decomposition of contextual actions", IEEE Robot. Automat. Lett., vol. 5, no. 4, pp. 5629-5636, Oct. 2020.
12.
M. Ghallab, D. Nau and P. Traverso, Automated Planning and Acting, Cambridge, U.K.: Cambridge Univ. Press, 2016.
13.
M. Tenorth and M. Beetz, "Representations for robot knowledge in the KnowRob framework", Artif. Intell., vol. 247, pp. 151-169, 2017.
14.
K. Ramirez-Amaro, M. Beetz and G. Cheng, "Transferring skills to humanoid robots by extracting semantic representations from observations of human activities", Artif. Intell., vol. 247, pp. 95-118, 2017.
15.
L. P. Kaelbling and T. Lozano-Pérez, "Hierarchical task and motion planning in the now", Proc. IEEE Int. Conf. Robot. Automat., pp. 1470-1477, 2011.
16.
M. Toussaint, "Logic-geometric programming: An optimization-based approach to combined task and motion planning", Proc. Int. Joint Conf. Artif. Intell., pp. 1930-1936, 2015.
17.
N. T. Dantam, Z. K. Kingston, S. Chaudhuri and L. E. Kavraki, "An incremental constraint-based framework for task and motion planning", Int. J. Robot. Res., vol. 37, no. 10, pp. 1134-1151, 2018.
18.
C. R. Garrett, T. Lozano-Pérez and L. P. Kaelbling, "PDDLStream: Integrating symbolic planners and blackbox samplers via optimistic adaptive planning", Proc. Int. Conf. Automated Plan. Scheduling, vol. 30, pp. 440-448, 2020.
19.
A. Agostini, E. Celaya, C. Torras and F. Wörgötter, "Action rule induction from cause-effect pairs learned through robot-teacher interaction", Proc. Int. Conf. Cogn. Syst., pp. 213-218, 2008.
20.
B. Quack, F. Wörgötter and A. Agostini, "Simultaneously learning at different levels of abstraction", Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., pp. 4600-4607, 2015.
21.
Y. Yang, A. Guha, C. Fermuller and Y. Aloimonos, "Manipulation action tree bank: A knowledge resource for humanoids", Proc. IEEE-RAS Int. Conf. Humanoid Robots, pp. 987-992, 2014.
22.
H. Zhang and S. Nikolaidis, "Robot learning and execution of collaborative manipulation plans from YouTube cooking videos", 2019.
23.
A. Agostini and D. Lee, "Efficient state abstraction using object-centered predicates for manipulation planning", 2020.
24.
A. B. Jelodar and Y. Sun, "Joint object and state recognition using language knowledge", Proc. IEEE Int. Conf. Image Process., pp. 3352-3356, 2019.
25.
M. Helmert, "The fast downward planning system", J. Artif. Intell. Res., vol. 26, pp. 191-246, 2006.
26.
A. J. Ijspeert, J. Nakanishi, H. Hoffmann, P. Pastor and S. Schaal, "Dynamical movement primitives: Learning attractor models for motor behaviors", Neural Computation, vol. 25, pp. 328-373, 2013.
27.
A. Agostini, C. Torras and F. Wörgötter, "Efficient interactive decision-making framework for robotic applications", Artif. Intell., vol. 247, pp. 187-212, 2017.
28.
T. Kulvicius, K. Ning, M. Tamosiunaite and F. Wörgötter, "Joining movement sequences: Modified dynamic movement primitives for robotics applications exemplified on handwriting", IEEE Trans. Robot., vol. 28, no. 1, pp. 145-157, Feb. 2012.
29.
E. Rohmer, S. P. N. Singh and M. Freese, "CoppeliaSim (formerly V-REP): A versatile and scalable robot simulation framework", Proc. Int. Conf. Intell. Robots Syst., pp. 1321-1326, 2013, [online] Available: http://www.coppeliarobotics.com.
30.
D. Höller et al., "HDDL: An extension to PDDL for expressing hierarchical planning problems", Proc. AAAI Conf. Artif. Intell., vol. 34, no. 6, pp. 9883-9891, 2020.