
Fidelity Estimation Improves Noisy-Image Classification With Pretrained Networks



Abstract:

Image classification has improved significantly with deep learning, mainly owing to convolutional neural networks (CNNs), which learn rich feature extractors from large datasets. However, most deep learning classification methods are trained on clean images and are not robust when handling noisy ones, even if a restoration preprocessing step is applied. Recent methods address this problem, but they rely on modified feature extractors and thus necessitate retraining. We instead propose a method that can be applied to a pretrained classifier. Our method exploits a fidelity map estimate that is fused into the internal representations of the feature extractor, thereby guiding the attention of the network and making it more robust to noisy data. We improve noisy-image classification (NIC) results by significant margins, especially at high noise levels, and come close to fully retrained approaches. Furthermore, as a proof of concept, we show that when using our oracle fidelity map we even outperform the fully retrained methods, whether trained on noisy or restored images.
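To make the abstract's mechanism concrete, the following is a minimal sketch of one plausible fidelity-map fusion scheme: a per-pixel fidelity estimate is derived from the residual between the noisy and restored images, downsampled to the spatial resolution of an intermediate feature map, and used to gate the features multiplicatively. The fidelity formulation, the pooling-based downsampling, and the function names (`fidelity_map`, `fuse`) are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def fidelity_map(noisy, restored):
    # Per-pixel fidelity estimate (assumed formulation): high where the
    # restored image is trusted, low where a large residual suggests
    # remaining noise. Values are normalized to [0, 1].
    residual = np.abs(noisy - restored)
    return 1.0 - residual / (residual.max() + 1e-8)

def fuse(features, fid):
    # Fuse the fidelity map into a (C, H, W) feature tensor: average-pool
    # the fidelity map down to (H, W), then gate every channel with it so
    # low-fidelity regions contribute less to downstream layers.
    # Assumes the fidelity map dimensions are integer multiples of (H, W).
    c, h, w = features.shape
    fh, fw = fid.shape
    pooled = fid.reshape(h, fh // h, w, fw // w).mean(axis=(1, 3))
    return features * pooled[None, :, :]

# Toy usage: a 32x32 image pair gating a 4-channel 8x8 feature map.
rng = np.random.default_rng(0)
noisy = rng.random((32, 32))
restored = np.clip(noisy + 0.1 * rng.standard_normal((32, 32)), 0.0, 1.0)
fid = fidelity_map(noisy, restored)
gated = fuse(rng.random((4, 8, 8)), fid)
```

Multiplicative gating is only one way to inject the map; the paper fuses the estimate into internal representations to guide attention, and other fusion operators (concatenation, learned attention) fit the same interface.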
Published in: IEEE Signal Processing Letters ( Volume: 28)
Page(s): 1719 - 1723
Date of Publication: 13 August 2021


I. Introduction

Traditionally, image classification methods have relied on a limited set of hand-crafted features [1]–[6]. Deep learning approaches, especially supervised methods using convolutional neural networks (CNNs) [7], [8] trained on large annotated datasets [9], extract rich feature representations [9], [10] and thus improve performance. However, obtaining reliable and generalizable feature extractors remains a challenge [11]–[14]. It is therefore desirable to exploit existing, pretrained feature extractors in a modular way across datasets and applications, without retraining them. Here, we enable a feature extractor pretrained on clean images to work with noisy input.

