
Enhancing Vision and Language Navigation With Prompt-Based Scene Knowledge



Abstract:

A challenging task in embodied artificial intelligence is enabling a robot to carry out a navigational task following a natural language instruction. In this task, the navigator needs to understand objects, directions, and room types, which serve as landmarks for navigation. Although objects and directions are easy to encode with an external encoder such as an object detector, current navigators struggle to encode room-type information properly due to the low accuracy of existing classifiers. This inadequacy creates confusion that navigators find difficult to resolve. Even humans may sometimes fail to determine the exact type of a room, since multiple room types may appear in one panorama. To mitigate this problem, we propose to encode room-type information in a prompt manner. Specifically, we first establish multi-modal, learnable prompt pools containing knowledge of room types. By querying the prompt pools, the navigator can obtain room-type prompts for the current view and incorporate them into the navigator using a prompt-based learning method. Experimental results on the REVERIE, R2R, and SOON datasets demonstrate the effectiveness of our approach.
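The prompt-pool query described above can be illustrated with a minimal sketch. The snippet below assumes an L2P-style key-matching design in PyTorch: learnable keys are compared against a pooled feature of the current view, and the prompt tokens attached to the best-matching keys are prepended to the view tokens before they enter the navigation transformer. All names (PromptPool, pool_size, top_k, and so on) are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of a learnable prompt pool, assuming an L2P-style
# key-matching query; hyperparameters and names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptPool(nn.Module):
    def __init__(self, pool_size=20, prompt_len=4, dim=768, top_k=3):
        super().__init__()
        self.top_k = top_k
        # Learnable keys matched against the current view feature.
        self.keys = nn.Parameter(torch.randn(pool_size, dim))
        # Learnable prompt tokens associated with each key.
        self.prompts = nn.Parameter(torch.randn(pool_size, prompt_len, dim))

    def forward(self, view_feat):
        # view_feat: (batch, dim) pooled feature of the current panoramic view.
        sim = F.cosine_similarity(
            view_feat.unsqueeze(1), self.keys.unsqueeze(0), dim=-1
        )                                      # (batch, pool_size)
        _, idx = sim.topk(self.top_k, dim=-1)  # (batch, top_k)
        selected = self.prompts[idx]           # (batch, top_k, prompt_len, dim)
        b = view_feat.size(0)
        # Flatten the selected prompts into a single token sequence.
        return selected.reshape(b, -1, selected.size(-1))

# Usage: prepend the retrieved room-type prompts to the panoramic view tokens.
pool = PromptPool()
view_feat = torch.randn(2, 768)        # pooled view features for a batch of 2
view_tokens = torch.randn(2, 36, 768)  # 36 panoramic view tokens per sample
prompts = pool(view_feat)              # (2, top_k * prompt_len, 768)
fused = torch.cat([prompts, view_tokens], dim=1)
```

In such a design, the keys and prompts are trained end-to-end with the navigator, so the pool gradually specializes its entries toward recurring room types without requiring an explicit room classifier.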
Page(s): 9745 - 9756
Date of Publication: 15 May 2024



I. Introduction

Vision and Language Navigation (VLN) tasks [5], [6] require an agent to navigate through an unseen environment following a natural language instruction, mirroring the way humans communicate with domestic robots. As vision and language technologies continue to develop rapidly, researchers have studied VLN from various perspectives, including model structure [4], [5], [30], [33], representation learning [23], [29], [24], [31], and data augmentation [8], [32]. This work has led to a better understanding of effective approaches for VLN and to steady improvements in navigation accuracy and efficiency.
