Abstract:
Natural Language Inference (NLI) is a branch of Natural Language Processing (NLP) whose main task is to determine the relationship between two sentences. Such tasks typically rely on pre-trained models to achieve high accuracy, with applications in human-computer dialogue and question-answering systems. However, when models must run offline on small devices, lightweight implementations are often required to save computational resources. To address this problem and provide more efficient inference, this paper improves on the existing Transformer and proposes a Multi-Feature Fusion Transformer Network for NLI. The model integrates sequence features and introduces prior information to enhance local features. It also achieves a comprehensive fusion of sequence information, local features, and non-local features, compensating for several potential shortcomings of the Transformer while remaining lightweight and easy to apply to downstream tasks. Experiments verify these claims: our model achieves better inference performance than previous models on the same tasks, reaching 89.0% accuracy on the SNLI dataset and 79.3% and 78.7% on the matched and mismatched versions of MultiNLI, respectively.
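The abstract does not give implementation details, so the following is only a rough, hypothetical sketch of the kind of fusion it describes: a non-local branch (self-attention), a local branch (1-D convolution as a prior over neighbouring tokens), and the raw sequence representations are combined and projected back to the model dimension. All names and hyperparameters (MultiFeatureFusionBlock, d_model, kernel_size, etc.) are assumptions for illustration, not taken from the paper.

```python
# Hypothetical multi-feature fusion block (not the paper's exact architecture):
# concatenates sequence, local, and non-local feature views and projects them back.
import torch
import torch.nn as nn

class MultiFeatureFusionBlock(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4, kernel_size: int = 3):
        super().__init__()
        # Non-local branch: multi-head self-attention over the whole sequence.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Local branch: 1-D convolution acts as a locality prior on neighbouring tokens.
        self.conv = nn.Conv1d(d_model, d_model, kernel_size, padding=kernel_size // 2)
        # Fusion: concatenate the three feature views and project back to d_model.
        self.fuse = nn.Linear(3 * d_model, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) token representations (sequence features).
        non_local, _ = self.attn(x, x, x)                      # (B, L, D)
        local = self.conv(x.transpose(1, 2)).transpose(1, 2)   # (B, L, D)
        fused = self.fuse(torch.cat([x, local, non_local], dim=-1))
        return self.norm(x + fused)                            # residual connection

# Example: a batch of 2 sentences, 20 tokens each, with d_model = 256.
block = MultiFeatureFusionBlock()
tokens = torch.randn(2, 20, 256)
out = block(tokens)
print(out.shape)  # torch.Size([2, 20, 256])
```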
Published in: 2023 5th International Conference on Robotics, Intelligent Control and Artificial Intelligence (RICAI)
Date of Conference: 01-03 December 2023
Date Added to IEEE Xplore: 11 April 2024