I. Introduction
A knowledge base is the product of combining artificial intelligence with database technology. With the development of computer technology, its applications have expanded into a wide range of non-numerical processing fields. As knowledge bases grow, the levels of knowledge they contain become richer, including common-sense knowledge, rational knowledge, empirical knowledge, and meta-knowledge. Knowledge management has therefore become a prominent problem: how to effectively utilize, organize, store, manage, maintain, and update large-scale knowledge, and how to make effective use of stored knowledge for reasoning and problem solving. Knowledge maintenance is thus an important part of knowledge base management and directly determines the quality of the knowledge system.
Natural language processing (NLP) technology, driven by deep learning, is booming and has surpassed statistics-based machine learning methods in tasks such as machine translation and text classification. On the one hand, recurrent neural network models such as LSTM, combined with word embedding models such as Word2vec and GloVe, can mine sequence-level information in text and avoid the feature loss of traditional machine learning models [1]. On the other hand, pretrained language models represented by BERT and XLNet, trained on massive data sets with unsupervised methods, have deeper network structures and larger scale; they further address the problem of polysemy and achieve strong performance across multiple NLP tasks, becoming a new milestone in the field of NLP. From the NLP perspective, concept matching is based on semantic similarity, and concept matching is key to text understanding and online information retrieval. Semantic similarity calculation, in turn, relies on the support of a large thesaurus and corpus.
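As background for the embedding-based similarity mentioned above, the sketch below illustrates how semantic similarity between words is typically computed from Word2vec- or GloVe-style vectors via cosine similarity. The three-dimensional vectors and vocabulary here are toy values for illustration only; real embeddings are learned from large corpora and have hundreds of dimensions.

```python
import math

# Toy 3-dimensional "embeddings" (hypothetical values for illustration only).
embeddings = {
    "king":  [0.8, 0.6, 0.1],
    "queen": [0.7, 0.7, 0.1],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: dot(u, v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def word_similarity(w1, w2):
    """Similarity of two words as the cosine of their embedding vectors."""
    return cosine_similarity(embeddings[w1], embeddings[w2])
```

With well-trained embeddings, semantically related words (e.g. "king" and "queen") score higher than unrelated ones, which is the property concept matching builds on.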
This paper proposes an algorithm for computing the semantic similarity between natural language concepts based on a well-designed domain ontology.
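To make the idea of ontology-based similarity concrete, the following is a minimal sketch of one classical measure from this family, the Wu-Palmer depth-based measure, over a toy taxonomy. The concept names and hierarchy are hypothetical, and this is standard background, not the algorithm proposed in this paper.

```python
# Toy ontology as a child -> parent map (hypothetical, for illustration only).
parent = {
    "dog": "mammal",
    "cat": "mammal",
    "mammal": "animal",
    "fish": "animal",
    "animal": "entity",
}

def ancestors(concept):
    """Path from a concept up to the root, starting with the concept itself."""
    path = [concept]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path

def depth(concept):
    """Depth in the taxonomy; the root has depth 1."""
    return len(ancestors(concept))

def wu_palmer(c1, c2):
    """Wu-Palmer similarity: 2 * depth(LCS) / (depth(c1) + depth(c2)),
    where LCS is the least common subsumer of the two concepts."""
    anc1 = ancestors(c1)
    anc2 = set(ancestors(c2))
    lcs = next(a for a in anc1 if a in anc2)
    return 2.0 * depth(lcs) / (depth(c1) + depth(c2))
```

The measure rewards concept pairs whose common ancestor sits deep in the hierarchy, so "dog" and "cat" (sharing "mammal") score higher than "dog" and "fish" (sharing only "animal").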