Abstract:
Object detection is a critical task in computer vision that involves identifying and localizing objects in images. Deep learning-based object detectors have achieved impressive performance, but their computational requirements limit their deployment in resource-constrained scenarios. Knowledge distillation has emerged as a technique to transfer knowledge from complex teacher models to lightweight student models. However, existing methods have not extensively explored knowledge distillation for object detection. In this paper, we propose a Multiscale Attention-based Knowledge Distillation (MAD) method for object detection. Our approach leverages multiscale feature maps and attention mechanisms to distill knowledge effectively. Experimental results demonstrate that our method enhances the performance of student models while maintaining computational efficiency, making it suitable for real-time applications.
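Since the article itself is not distributed, the exact formulation of MAD is unavailable; the following is only an illustrative sketch of the general idea the abstract describes, attention-weighted feature distillation across multiple scales. The function names (`spatial_attention`, `mad_loss`) and the particular attention form (softmax over the channel-averaged absolute teacher activations) are assumptions, not the authors' method.

```python
import numpy as np

def spatial_attention(feat):
    """Spatial attention map for a (C, H, W) feature map:
    softmax over spatial locations of the channel-averaged
    absolute activation (a common choice in attention transfer)."""
    a = np.abs(feat).mean(axis=0)          # (H, W)
    e = np.exp(a - a.max())                # numerically stable softmax
    return e / e.sum()

def mad_loss(teacher_feats, student_feats):
    """Sum attention-weighted MSE between teacher and student
    feature maps across scales (hypothetical distillation loss)."""
    loss = 0.0
    for t, s in zip(teacher_feats, student_feats):
        w = spatial_attention(t)           # teacher-guided spatial weights
        per_loc = ((t - s) ** 2).mean(axis=0)  # (H, W) per-location MSE
        loss += float((w * per_loc).sum())
    return loss

# Usage: feature maps from two pyramid levels of a detector backbone
rng = np.random.default_rng(0)
teacher = [rng.standard_normal((4, 8, 8)), rng.standard_normal((4, 4, 4))]
student = [f + 0.1 * rng.standard_normal(f.shape) for f in teacher]
print(mad_loss(teacher, student))
```

The teacher-derived attention map focuses the distillation penalty on spatial regions where the teacher responds strongly, which is the usual motivation for attention-guided distillation in detection, where foreground regions matter far more than background.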
Notes: This DOI was registered to an article that was not presented by the author(s) at this conference. As per section 8.2.1.B.13 of IEEE's "Publication Services and Products Board Operations Manual," IEEE has chosen to exclude this article from distribution. We regret any inconvenience.
Published in: ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Date of Conference: 14-19 April 2024
Date Added to IEEE Xplore: 18 March 2024