CACRE: A Weakly Supervised Method for Cross-Attention Contrastive Relation Extraction With Large Language Models
Zhikui Hu, Kangli Zi, Tianyu Luo, Yuwei Huang, Shi Wang
Published 2026 in IEEE Access
ABSTRACT
In the relation extraction (RE) task, large language models (LLMs) have shown remarkable capabilities in predicting unknown relations, offering significant gains in efficiency and flexibility over traditional methods. However, the probabilistic nature of LLM generation can produce hallucinations, yielding inaccurate relation triples. To mitigate this problem, this paper proposes a novel weakly supervised method, Cross-Attention Contrastive Relation Extraction (CACRE), which detects erroneous relation triples generated by LLMs and effectively distinguishes valid ones. CACRE combines contrastive learning with a cross-attention mechanism. Specifically, contrastive learning is applied to distinguish positive from negative relation triples, enhancing the model's feature extraction capability by learning discriminative features. A cross-attention mechanism then captures the semantic associations between texts and triples, improving the model's ability to understand and extract information from the input. Experimental results on the DuIE2.0 and TACRED datasets demonstrate that CACRE significantly outperforms baseline LLMs, with average precision improvements of 12% and 8%, respectively.
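To make the two components in the abstract concrete, here is a minimal PyTorch sketch of what a cross-attention triple scorer with a margin-based contrastive objective could look like. This is an illustrative assumption, not the paper's actual architecture: the embedding dimension, head count, margin value, the use of `nn.MultiheadAttention`, and the random tensors standing in for text/triple encoder outputs are all hypothetical choices for the sketch.

```python
# Sketch of the two components the abstract describes: a cross-attention layer
# letting candidate-triple representations attend to the source text, and a
# contrastive loss pushing valid (positive) triples above hallucinated
# (negative) ones. All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossAttentionScorer(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        # Triple embeddings act as queries; text token embeddings as keys/values.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Linear(dim, 1)  # one validity score per triple

    def forward(self, triple_emb, text_emb):
        # triple_emb: (batch, n_triples, dim); text_emb: (batch, seq_len, dim)
        fused, _ = self.attn(triple_emb, text_emb, text_emb)
        return self.score(fused).squeeze(-1)  # (batch, n_triples)

def contrastive_loss(scores, labels, margin: float = 1.0):
    # Margin-based contrastive objective: every positive triple's score should
    # exceed every negative triple's score by at least `margin`.
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    return F.relu(margin - pos.unsqueeze(1) + neg.unsqueeze(0)).mean()

# Toy usage; random embeddings stand in for real text/triple encoder outputs.
model = CrossAttentionScorer()
text = torch.randn(2, 32, 256)     # encoded sentences
triples = torch.randn(2, 5, 256)   # encoded candidate triples
scores = model(triples, text)
labels = torch.tensor([[1, 0, 1, 0, 0], [1, 1, 0, 0, 0]])
loss = contrastive_loss(scores.flatten(), labels.flatten())
loss.backward()
```

In this reading, cross-attention fuses each triple with the text evidence before scoring, and the contrastive loss supplies the weak supervision signal that separates valid triples from LLM hallucinations; the paper may of course realize both components differently.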
PUBLICATION RECORD
- Venue: IEEE Access
- Publication year: 2026
- Fields of study: Computer Science
- Source metadata: Semantic Scholar