I. Introduction
Deep-learning models have developed rapidly in recent years and have achieved impressive performance in many fields, such as autonomous driving [1]–[3] and computer-aided medical diagnosis [4]–[6]. Medical-image-aided diagnosis systems based on artificial intelligence can assist clinical diagnosis and reduce the probability of misdiagnosis: the recognition engine first identifies suspicious lesions in the image, and the doctor then reads the slice, which improves the doctor's efficiency. Common tasks in medical image aided diagnosis include the detection and segmentation of lesion areas. Medical image segmentation, a fundamental step in computer-aided diagnosis, aims to segment the region of interest (RoI) so as to assist doctors in making objective decisions.

Training a medical image segmentation model differs from the natural-image case because of an issue inherent in medical images: data imbalance. The data imbalance in medical images can be roughly divided into two categories, i.e., the imbalance between foreground and background examples, and that between easy and hard examples. Foreground and background examples usually refer to diseased and non-diseased examples, respectively, whereas easy and hard examples are distinguished by the classification difficulty of a single example. When the data are severely imbalanced, even a very powerful network may produce relatively poor predictions, so solving these two imbalance problems is particularly important.
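The two imbalance types above can be made concrete with a small sketch. The toy mask, the pixel counts, and the probabilities below are all synthetic illustrations, and the easy/hard down-weighting uses the standard focal-loss modulating factor (1 - p)^gamma as one common example of handling hard examples, not necessarily the method this paper develops:

```python
import numpy as np

# (1) Foreground/background imbalance: count lesion vs. non-lesion pixels
# in a toy 8x8 binary mask (1 = lesion, 0 = background). Real medical
# masks are far larger, but the ratio is computed the same way.
mask = np.zeros((8, 8), dtype=np.uint8)
mask[3:5, 3:5] = 1                       # a small 2x2 lesion
fg = int(mask.sum())                     # 4 foreground pixels
bg = int(mask.size) - fg                 # 60 background pixels
print(f"background/foreground ratio = {bg / fg:.1f}")  # 15.0

# (2) Easy/hard imbalance: down-weight examples the model already
# classifies confidently, here via the focal modulating factor.
gamma = 2.0
p_easy, p_hard = 0.95, 0.30              # predicted prob. of the true class
w_easy = (1 - p_easy) ** gamma           # near zero -> easy example is almost ignored
w_hard = (1 - p_hard) ** gamma           # much larger -> hard example dominates the loss
print(f"easy weight = {w_easy:.4f}, hard weight = {w_hard:.2f}")
```

Even in this tiny example the background outnumbers the foreground 15:1; in whole-slide or volumetric data the ratio is typically far more extreme, which is why a plain unweighted loss tends to be swamped by easy background pixels.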