Abstract:
Fundus images are widely used in clinical diagnosis because they are easy and safe to acquire, but their quality can be degraded by the imaging environment and on-site operations. Since low-quality medical images may lead to misinterpretation in diagnosis and analysis, it is important to improve the quality of improperly acquired fundus images. Unfortunately, existing fundus image enhancement methods either require task-specific prior knowledge or suffer from insufficient generalization ability. To address this issue, a generative adversarial network (GAN) based model is proposed, namely the semi-supervised GAN with anatomical structure preservation (SSGAN-ASP). Specifically, an anatomical structure extraction component is employed in the generator to guide the enhancement process by preserving both retinal and lesion structures, while the color information of the fundus image is also preserved. The SSGAN-ASP model is evaluated against state-of-the-art medical image enhancement methods on three popular datasets. In addition, it is applied as a pre-processing step for retinal vessel segmentation and diabetic retinopathy grading to demonstrate its efficacy in computer-aided diagnosis. Experimental results show that, by adopting the anatomical structure extraction component and preserving color information, the proposed model improves the visual quality of the enhanced images and achieves better performance in downstream clinical diagnosis tasks.
Published in: IEEE Transactions on Emerging Topics in Computational Intelligence (Volume: 8, Issue: 1, February 2024)
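The abstract describes the generator only at a high level. As a rough, assumption-based sketch (not the SSGAN-ASP implementation), the PyTorch snippet below shows one way an anatomical structure extraction branch could condition an enhancement network, together with a simple color-preservation term; the Sobel-based extractor, layer widths, and loss terms are illustrative choices.

# Minimal sketch of the idea in the abstract: condition the enhancement network on a
# structure map extracted from the input, and keep the color statistics of the input.
# The Sobel extractor, layer sizes, and losses are assumptions, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StructureExtractor(nn.Module):
    """Extracts a rough anatomical structure (edge) map with fixed Sobel filters."""
    def __init__(self):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        gy = gx.t()
        self.register_buffer("kernel", torch.stack([gx, gy]).unsqueeze(1))  # (2,1,3,3)

    def forward(self, x):
        gray = x.mean(dim=1, keepdim=True)                  # (N,1,H,W)
        grad = F.conv2d(gray, self.kernel, padding=1)       # (N,2,H,W)
        return grad.pow(2).sum(dim=1, keepdim=True).sqrt()  # gradient magnitude


class Generator(nn.Module):
    """Enhances the input image while being conditioned on its structure map."""
    def __init__(self, ch=32):
        super().__init__()
        self.extract = StructureExtractor()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, low_quality):
        structure = self.extract(low_quality)
        enhanced = self.net(torch.cat([low_quality, structure], dim=1))
        return enhanced, structure


def color_preservation_loss(enhanced, original):
    """Penalize drift of the per-channel mean color (one simple color-keeping term)."""
    return F.l1_loss(enhanced.mean(dim=(2, 3)), original.mean(dim=(2, 3)))


if __name__ == "__main__":
    g = Generator()
    x = torch.rand(1, 3, 256, 256)  # stand-in for a low-quality fundus image
    y, s = g(x)
    # Encourage the enhanced image to keep both the color statistics and the structure map.
    loss = color_preservation_loss(y, x) + F.l1_loss(g.extract(y), s)
    print(y.shape, s.shape, float(loss))

In the paper's setting, the structure component preserves retinal and lesion structures and is learned rather than a fixed edge filter; the sketch only mirrors the conditioning and color-preservation ideas, and adversarial and semi-supervised losses are omitted.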