Alleviating the Knowledge-Language Inconsistency: A Study for Deep Commonsense Knowledge
Yi Zhang, Lei Li, Yunfang Wu, Qi Su, Xu Sun
Published 2021 in IEEE/ACM Transactions on Audio, Speech, and Language Processing

ABSTRACT
Knowledge facts are typically represented as relational triples, yet we observe that some commonsense facts are represented by triples whose forms are inconsistent with the corresponding language expressions. For commonsense mining tasks, this inconsistency poses a challenge to the prevailing methods built on pre-trained language models, which learn from language expressions. However, few studies have focused on this inconsistency issue. To fill this gap, in this paper we term commonsense knowledge whose triple form is heavily inconsistent with its language expression "deep commonsense knowledge" and first conduct extensive exploratory experiments to study it. We show that deep commonsense knowledge accounts for a significant portion of commonsense knowledge, while conventional methods based on pre-trained language models fail to capture it effectively. We further propose a novel method to mine deep commonsense knowledge directly from raw text, which is exactly the language expression, alleviating the reliance of conventional methods on the triple representation form. Experiments demonstrate that our proposed method substantially improves performance in mining deep commonsense knowledge.
PUBLICATION RECORD
- Publication year
2021
- Venue
IEEE/ACM Transactions on Audio, Speech, and Language Processing
- Publication date
2021-05-28
- Fields of study
Philosophy, Computer Science
- Source metadata
Semantic Scholar