ReasonChainQA: Text-based Complex Question Answering with Explainable Evidence Chains


Abstract:

The ability to reason over evidence has received increasing attention in question answering (QA). Recently, the natural language database (NLDB) setting has conducted complex QA over a knowledge base of textual evidence rather than structured representations; this task has attracted considerable attention because of the flexibility and richness of textual evidence. However, existing text-based complex question answering datasets fail to provide an explicit reasoning process, which is important for retrieval effectiveness and reasoning interpretability. We therefore present ReasonChainQA, a benchmark with explanatory and explicit evidence chains. ReasonChainQA consists of two subtasks, answer generation and evidence chain extraction, and offers high diversity: multi-hop questions of varying depths, 12 reasoning types, and 78 relations. Additional experiments on supervised and unsupervised retrieval, aimed at obtaining high-quality textual evidence for answering complex questions, fully demonstrate the significance of ReasonChainQA. The dataset and code will be made publicly available upon acceptance.
Date of Conference: 25-27 November 2022
Date Added to IEEE Xplore: 13 March 2023
Conference Location: Xiamen, China

I. Introduction

Developing systems that can reason over explicit knowledge has attracted substantial attention in current AI research [1]. Complex Question Answering (Complex QA) tasks provide a comprehensive and quantitative way to measure these abilities, with evidence provided by structured knowledge bases (e.g., Wikidata) or natural language texts (e.g., Wikipedia). Considering the high cost of constructing structured knowledge bases, this paper focuses on complex QA over textual evidence.

References
1. P. Clark, O. Tafjord and K. Richardson, "Transformers as soft reasoners over language", Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20), pp. 3882-3890, Jul. 2020.
2. Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. Cohen, R. Salakhutdinov, et al., "HotpotQA: A dataset for diverse explainable multi-hop question answering", Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2369-2380, Oct.-Nov. 2018, [online] Available: https://aclanthology.org/D18-1259.
3. J. Thorne, M. Yazdani, M. Saeidi, F. Silvestri, S. Riedel and A. Halevy, "Database reasoning over text", Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 3091-3104, 2021.
4. P. Qi, H. Lee, T. Sido and C. Manning, "Answering open-domain questions of varying reasoning steps from text", Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3599-3614, Nov. 2021, [online] Available: https://aclanthology.org/2021.emnlp-main.292.
5. W. Xiong, X. L. Li, S. Iyer, J. Du, P. Lewis, W. Y. Wang, et al., "Answering complex open-domain questions with multi-hop dense retrieval", International Conference on Learning Representations, 2021.
6. E. M. Voorhees, "The TREC-8 question answering track report", 2000.
7. J. Weston, A. Bordes, S. Chopra, A. M. Rush, B. van Merriënboer, A. Joulin, et al., "Towards AI-complete question answering: A set of prerequisite toy tasks", 2015.
8. V. Karpukhin, B. Oguz, S. Min, P. Lewis, L. Wu, S. Edunov, et al., "Dense passage retrieval for open-domain question answering", Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 6769-6781, Nov. 2020.
9. A. Asai, K. Hashimoto, H. Hajishirzi, R. Socher and C. Xiong, "Learning to retrieve reasoning paths over wikipedia graph for question answering", International Conference on Learning Representations, 2020.
10. S. Saha, S. Ghosh, S. Srivastava and M. Bansal, "PRover: Proof generation for interpretable reasoning over rules", Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 122-136, Nov. 2020, [online] Available: https://aclanthology.org/2020.emnlp-main.9.
11. S. Saha, P. Yadav and M. Bansal, "multiPRover: Generating multiple proofs for improved interpretability in rule reasoning", Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 3662-3677, Jun. 2021.
12. Y. Feldman and R. El-Yaniv, "Multi-hop paragraph retrieval for open-domain question answering", Annual Meeting of the Association for Computational Linguistics, 2019.
13. J. Shi, S. Cao, L. Pan, Y. Xiang, L. Hou, J. Li, et al., "KQA Pro: A dataset with explicit compositional programs for complex question answering over knowledge base", 2022.
14. C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, et al., "Exploring the limits of transfer learning with a unified text-to-text transformer", Journal of Machine Learning Research, vol. 21, no. 140, pp. 1-67, 2020, [online] Available: http://jmlr.org/papers/v21/20-074.html.
