Interpreting embedding models of knowledge bases.

A. C. Gusmão

Published 2018 in Unknown venue

ABSTRACT

Knowledge bases are employed in a variety of applications, from natural language processing to semantic web search; alas, in practice, their usefulness is limited by their incompleteness. To address this issue, several techniques aim at performing knowledge base completion; among them, embedding models are efficient, attain state-of-the-art accuracy, and eliminate the need for feature engineering. However, the predictions of embedding models are notoriously hard to interpret. In this work, we propose model-agnostic methods that allow one to interpret embedding models by extracting weighted Horn rules from them. More specifically, we show how the so-called “pedagogical techniques” from the literature on neural networks can be adapted to take into account the large-scale relational aspects of knowledge bases, and we show their strengths and weaknesses experimentally.
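The "pedagogical" approach the abstract refers to treats the trained model as a black box: the embedding model labels candidate triples, and interpretable rules are then mined from those labels. The sketch below illustrates the general idea for rules of the shape r1(x, y) ⇒ r2(x, y), with the rule weight taken as confidence over the predicted facts; all function names, the scoring threshold, and the toy scorer are illustrative assumptions, not the paper's actual implementation.

```python
from itertools import product

def predict(score_fn, entities, relations, threshold=0.5):
    """Label every candidate triple using the black-box embedding model.

    `score_fn(h, r, t)` stands in for the embedding model's scoring
    function (a hypothetical interface, assumed here for illustration).
    """
    return {(h, r, t)
            for h, r, t in product(entities, relations, entities)
            if h != t and score_fn(h, r, t) >= threshold}

def mine_rules(facts, relations, min_conf=0.3):
    """Mine weighted Horn rules of the form r1(x, y) => r2(x, y)."""
    rules = []
    for r1, r2 in product(relations, repeat=2):
        if r1 == r2:
            continue
        body = {(h, t) for h, r, t in facts if r == r1}
        head = {(h, t) for h, r, t in facts if r == r2}
        if body:
            # Weight = confidence: fraction of body groundings whose
            # head is also predicted by the model.
            conf = len(body & head) / len(body)
            if conf >= min_conf:
                rules.append((r1, r2, conf))
    return rules

# Toy "model": a fake scorer in which bornIn partially implies citizenOf.
def toy_score(h, r, t):
    known = {("alice", "bornIn", "utopia"),
             ("alice", "citizenOf", "utopia"),
             ("bob", "bornIn", "utopia")}
    return 1.0 if (h, r, t) in known else 0.0

preds = predict(toy_score, ["alice", "bob", "utopia"], ["bornIn", "citizenOf"])
rules = mine_rules(preds, ["bornIn", "citizenOf"])
```

On the toy data, the miner recovers bornIn(x, y) ⇒ citizenOf(x, y) with weight 0.5 (one of two bornIn facts has a matching citizenOf fact). A real extraction would enumerate far richer rule shapes and cope with the scale of the knowledge base, which is the adaptation the paper addresses.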
