Gaze-Guided Robotic Vascular Ultrasound Leveraging Human Intention Estimation


Abstract:

Medical ultrasound is widely used to examine vascular structures in modern clinical practice. However, traditional ultrasound examination often suffers from inter- and intra-operator variation. The robotic ultrasound system (RUSS) offers a potential solution owing to its stability and reproducibility. Given the complex anatomy of human vasculature, multiple vessels often appear in ultrasound images, or a single vessel bifurcates into branches, complicating the examination process. To tackle this challenge, this work presents a gaze-guided RUSS for vascular applications. A gaze tracker captures the eye movements of the operator, and the extracted gaze signal guides the RUSS to follow the correct vessel when it bifurcates. Additionally, a gaze-guided segmentation network is proposed to enhance segmentation robustness by exploiting gaze information. However, gaze signals are often noisy and must be interpreted to accurately discern the operator's true intentions. To this end, this study proposes a stabilization module to process raw gaze data. The inferred attention heatmap is utilized as a region proposal to aid segmentation and serves as a trigger signal when the operator needs to adjust the scanning target, such as when a bifurcation appears. To ensure appropriate contact between the probe and the surface during scanning, an automatic ultrasound confidence-based orientation correction method is developed. Experiments demonstrate the effectiveness of the proposed gaze-guided segmentation pipeline in comparison with other methods, and the complete gaze-guided RUSS is further validated on a realistic arm phantom with an uneven surface.
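The stabilization step described above, turning noisy raw gaze samples into an attention heatmap usable as a region proposal, can be sketched as follows. This is a minimal illustrative example, not the paper's method: it assumes gaze arrives as (x, y) pixel coordinates and accumulates them into a normalized Gaussian heatmap; the function name, grid size, and `sigma` are hypothetical.

```python
import numpy as np

def gaze_attention_heatmap(gaze_points, img_shape=(64, 64), sigma=4.0):
    """Accumulate raw gaze samples into a normalized Gaussian attention heatmap.

    gaze_points: iterable of (x, y) pixel coordinates (assumed format).
    Clustered fixations reinforce each other, so the heatmap peak lands on
    the region the operator dwells on rather than on isolated saccades.
    """
    h, w = img_shape
    heat = np.zeros((h, w), dtype=np.float64)
    ys, xs = np.mgrid[0:h, 0:w]  # pixel coordinate grids (row = y, col = x)
    for (x, y) in gaze_points:
        # Add an isotropic Gaussian centered on each gaze sample
        heat += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    if heat.max() > 0:
        heat /= heat.max()  # normalize to [0, 1]
    return heat

# Two clustered fixations near (20, 20) plus one isolated sample at (50, 50):
# the cluster dominates the heatmap, so the peak identifies the attended region.
points = [(20, 20), (21, 19), (50, 50)]
hm = gaze_attention_heatmap(points)
peak = np.unravel_index(np.argmax(hm), hm.shape)
```

Thresholding such a heatmap would then yield a region proposal for the segmentation network, and a sustained peak shift could serve as the trigger signal for switching the scanning target.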
Published in: IEEE Robotics and Automation Letters ( Volume: 10, Issue: 4, April 2025)
Page(s): 3078 - 3085
Date of Publication: 07 February 2025


Description

This video presents a gaze-guided robotic ultrasound system. The gaze signal of the operator is captured by a gaze camera mounted on the screen. The actual intention of the operator is estimated by an estimation module and utilized as attention to guide the vessel segmentation. The robot is controlled to centralize the target vessel while optimizing the probe orientation to maintain proper contact with the scanning surface, thus achieving better image quality. The video consists of five parts:
1. System performance with the confidence-based orientation correction deactivated.
2. System performance with the confidence-based orientation correction activated.
3. System performance when switching between vessels.
4. Qualitative comparison of segmentation results across methods.
5. Representative ultrasound images from each volunteer (V1-V5).

I. Introduction

Medical ultrasound, valued for its non-invasiveness, portability, real-time performance, and affordability, is widely used in clinical practice for screening and intra-operative guidance. However, traditional free-hand ultrasound suffers from inter- and intra-operator variance. Image quality depends on acquisition parameters such as contact force, angle, and probe positioning [1], [2], making it highly operator-dependent and reducing the reproducibility of results [3]. Robotic ultrasound systems (RUSS) offer a promising solution to these challenges [4], [5], [6].

