Abstract:
Scanning electron microscopy (SEM) has been widely used in the semiconductor industry since it provides high-resolution (HR) details of semiconductor structures. However, there is a gap in research on tasks such as image restoration (IR) and structure prediction for SEM datasets collected under varied conditions. Therefore, we introduce a new SEM dataset with diverse acquisition characteristics, covering multiple levels of energy, noise, and current, for IR and structure prediction. Furthermore, we propose a new deep-learning-based method for this dataset. The method consists of two stages: an IR stage and a structure prediction stage. In the IR stage, we design a transformer-based architecture to exploit pixel information over a wide spatial context. In the structure prediction stage, we introduce a novel training algorithm, SEMixup, and a novel CNN-based network, the SEM structure prediction network (SEM-SPNet). Specifically, SEMixup improves the generalization and robustness of SEM-SPNet by implicitly interpolating pairs of samples and their labels. Experiments demonstrate that our method achieves state-of-the-art results across all dataset conditions. This work expands the possibilities of SEM image analysis using deep learning, contributing to the semiconductor industry.
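The abstract does not spell out the SEMixup procedure, but the sample-and-label interpolation it describes follows the classic mixup recipe of Zhang et al. The sketch below illustrates only that generic mechanism; the function name sem_mixup, the one-hot label encoding, and the Beta parameter are illustrative assumptions, not the authors' implementation.

import numpy as np

def sem_mixup(x, y, alpha=0.2, rng=None):
    """Mixup-style interpolation of a batch of SEM images and their labels.

    x: float array of shape (batch, H, W) or (batch, H, W, C) -- input images
    y: float array of shape (batch, num_classes) -- one-hot structure labels (assumed encoding)
    alpha: Beta(alpha, alpha) parameter controlling interpolation strength
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)                # mixing coefficient in (0, 1)
    perm = rng.permutation(len(x))              # random pairing within the batch
    x_mixed = lam * x + (1.0 - lam) * x[perm]   # interpolate image pairs
    y_mixed = lam * y + (1.0 - lam) * y[perm]   # interpolate label pairs
    return x_mixed, y_mixed

In a mixup-style training loop, the network would be fit on (x_mixed, y_mixed) in place of the raw batch, which is what encourages the smoother, more robust decision behavior the abstract attributes to SEMixup.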
Published in: IEEE Transactions on Instrumentation and Measurement (Volume: 73)