
Multi-sensor Energy Efficient Obstacle Detection



Abstract:

With improvements in technology, both the cost and the power requirements of cameras and other sensors have come down significantly. This has allowed such sensors to be integrated into portable and wearable systems. These systems are usually operated in a hands-free, always-on manner, where they must function continuously across a variety of scenarios. In such situations, relying on a single sensor or a fixed sensor combination can be detrimental to both performance and energy consumption. Consider an obstacle detection task: an RGB camera helps in recognizing the obstacle type but consumes far more energy than an ultrasonic sensor, while an infrared camera can outperform an RGB camera at night but consumes twice the energy. An efficient system must therefore use a combination of sensors, with adaptive control that selects the sensors appropriate to the context. This adaptation must consider both performance and energy, and the trade-off between them. In this paper, we explore the strengths of different sensors, as well as their trade-offs, for developing a wearable device based on a deep neural network. We choose a specific case study: a mobility assistance device for the visually impaired. The device detects obstacles in the path of a visually impaired person and is required to operate both day and night with minimal energy, to increase the usage time on a single charge. The device employs multiple sensors, an ultrasonic sensor, an RGB camera, and an NIR camera, along with a deep neural network accelerator for speeding up computation. We show that by adaptively choosing the sensor appropriate to the context, we can achieve up to a 90% reduction in energy while maintaining performance comparable to a single-sensor system.
Date of Conference: 28-30 August 2019
Date Added to IEEE Xplore: 21 October 2019
Conference Location: Kallithea, Greece
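
As a concrete illustration of the trade-off described in the abstract, the sketch below shows one way the adaptive selection could be expressed: for the current context, pick the lowest-energy sensor whose expected accuracy still meets a target. This is not the authors' implementation; the sensor names mirror the paper, but all accuracy and energy figures are hypothetical placeholders.

# A minimal sketch (not the paper's implementation) of energy-aware
# sensor selection: per context, choose the cheapest sensor that still
# meets an accuracy target. All numbers below are hypothetical.

# Hypothetical per-context estimates: (accuracy, energy in mJ per reading).
SENSOR_PROFILES = {
    "day":   {"ultrasonic": (0.70, 5.0), "rgb": (0.90, 50.0), "nir": (0.85, 100.0)},
    "night": {"ultrasonic": (0.70, 5.0), "rgb": (0.40, 50.0), "nir": (0.80, 100.0)},
}

def select_sensor(context: str, min_accuracy: float) -> str:
    """Return the lowest-energy sensor meeting the accuracy target,
    falling back to the most accurate sensor if none qualifies."""
    profiles = SENSOR_PROFILES[context]
    feasible = [(energy, name) for name, (acc, energy) in profiles.items()
                if acc >= min_accuracy]
    if feasible:
        return min(feasible)[1]
    return max(profiles.items(), key=lambda kv: kv[1][0])[0]

print(select_sensor("day", 0.85))    # -> rgb
print(select_sensor("night", 0.75))  # -> nir

Under these placeholder numbers, the policy wakes the NIR camera only at night, matching the abstract's observation that NIR outperforms RGB in low light despite its higher energy cost.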

I. Introduction

The design of a portable real-time vision system that continuously monitors its surroundings is complex. Such a system must achieve an acceptable level of performance while keeping its energy consumption within limits. Further, due to their always-on nature, these systems experience far more environmental variation than a typical static system, which makes it imperative to employ a variety of sensors to perform efficiently under different conditions. System designers must therefore classify the various contexts and, for each context, choose the sensor that performs best while consuming the least power, ensuring that performance targets are met while also saving energy. Examples of such systems include robots working in collaboration with humans, factory floor robots, assistive devices for the visually impaired, and even battery-operated autonomous vehicles.
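
To make this concrete, here is a minimal sketch of such an always-on loop, assuming a two-stage design in which a cheap, continuously running ranging sensor gates the expensive camera pipeline, and an ambient-light reading decides which camera to wake. The thresholds and the driver stubs (read_ambient_light, read_ultrasonic_range, detect_obstacles) are hypothetical stand-ins for real hardware access; the paper itself does not prescribe this particular loop.

import time

LIGHT_THRESHOLD_LUX = 10.0   # hypothetical day/night boundary
RANGE_THRESHOLD_M = 3.0      # wake a camera only for nearby objects

def read_ambient_light() -> float:
    # Stub: replace with a real light-sensor driver (returns lux).
    return 120.0

def read_ultrasonic_range() -> float:
    # Stub: replace with a ranging-sensor driver (returns metres).
    return 1.5

def detect_obstacles(camera: str) -> None:
    # Stub: capture a frame from `camera` and run DNN inference.
    print(f"running detection on {camera} camera")

def monitor_loop(iterations: int = 3) -> None:
    for _ in range(iterations):  # would be `while True` on the device
        # Stage 1: the low-power ultrasonic sensor runs continuously.
        if read_ultrasonic_range() < RANGE_THRESHOLD_M:
            # Stage 2: classify the lighting context and wake only the
            # camera that performs best per unit energy in that context.
            day = read_ambient_light() > LIGHT_THRESHOLD_LUX
            detect_obstacles("rgb" if day else "nir")
        time.sleep(0.1)  # duty-cycling bounds the average power draw

monitor_loop()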

