Document Re-Ranking With Evidential Neural Networks
Published 2025 in IEEE Access
ABSTRACT
In multi-step Retrieval-Augmented Generation (RAG) systems for Question Answering (QA), the re-ranking phase that follows initial fast document retrieval is critical to final answer quality, since it supplies the most relevant context passages. This step typically ranks candidate documents using deep neural network classification models that assign relevance scores based on their prediction probabilities. However, conventional Deep Neural Network (DNN) classifiers often produce poorly calibrated probability estimates when trained purely to minimize prediction loss. This miscalibration leads to inaccurate document rankings, undermining the reliability of the re-ranking stage. To address this issue, we propose the Evidential Document Re-Ranking (EDRR) model, which leverages Evidential Deep Learning (EDL) to better align predicted probabilities with true accuracies and thus improve ranking performance. EDRR achieves better calibration by modeling the full posterior distribution over class probabilities, which alleviates the overconfidence problem observed in DNN models. In addition, this approach facilitates active learning by identifying uncertain predictions, allowing for more effective sampling of diverse or underrepresented instances. Experimental results on the Wikipedia-NQ, MS MARCO, and HotpotQA datasets show that EDRR consistently outperforms Cross-Encoder, ColBERT, and Dense Passage Retrieval, achieving the highest mAP@10 scores across all three benchmarks. This demonstrates that incorporating well-calibrated uncertainty estimates can meaningfully enhance retrieval performance in QA systems.
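The abstract describes re-ranking candidates by evidential probabilities derived from a Dirichlet posterior over class probabilities. The sketch below illustrates the standard EDL recipe (evidence from non-negative activations, Dirichlet parameters alpha = evidence + 1, probability alpha/S, uncertainty K/S); it is a minimal illustration of that general technique, not the paper's EDRR implementation, and all function names and the toy logits are assumptions for the example.

```python
def edl_relevance(logits):
    """Map 2-class relevance logits to an evidential probability and
    an uncertainty, using the common EDL formulation:
    evidence e_k = ReLU(logit_k), alpha_k = e_k + 1, S = sum(alpha),
    expected probability p_k = alpha_k / S, uncertainty u = K / S."""
    evidence = [max(0.0, z) for z in logits]
    alpha = [e + 1.0 for e in evidence]
    S = sum(alpha)
    probs = [a / S for a in alpha]
    uncertainty = len(alpha) / S  # K / S: high when total evidence is low
    return probs, uncertainty

def rerank(candidates):
    """candidates: list of (doc_id, [non_relevant_logit, relevant_logit]).
    Returns (doc_id, p_relevant, uncertainty) sorted by p_relevant,
    highest first. The uncertainty could drive active-learning sampling."""
    scored = []
    for doc_id, logits in candidates:
        probs, u = edl_relevance(logits)
        scored.append((doc_id, probs[1], u))
    scored.sort(key=lambda t: t[1], reverse=True)
    return scored
```

A document with no evidence for either class gets p = 0.5 with maximal uncertainty u = 1.0, so low-evidence candidates are flagged rather than scored overconfidently, which is the calibration behavior the abstract attributes to modeling the full posterior.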
PUBLICATION RECORD
- Publication year: 2025
- Venue: IEEE Access
- Fields of study: Computer Science
- Source metadata: Semantic Scholar