While neural networks are good at learning unspecified functions from training samples, they cannot be directly implemented in hardware and are often neither interpretable nor formally verifiable. Logic circuits, on the other hand, are implementable, verifiable, and interpretable, but cannot learn from training data in a generalizable way. We propose a novel logic learning pipeline that combines the advantages of neural networks and logic circuits. Our pipeline first trains a neural network on a classification task, then translates it first to random forests and then to AND-Inverter logic. We show that our pipeline maintains greater accuracy than naive translations to logic, and that it minimizes the logic so that it is more interpretable and has lower hardware cost. We demonstrate the utility of our pipeline on a network trained on biomedical data. This approach could be applied in patient care to provide risk stratification and guide clinical decision-making.
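To make the last translation step concrete, here is a minimal sketch of how a single distilled decision tree (one member of the random forest) can be flattened into two-level AND/OR logic, which maps directly onto an AND-Inverter graph via De Morgan's law (OR(a, b) = NOT(AND(NOT a, NOT b))). The tree structure, feature names ("hb", "age"), and thresholds below are hypothetical illustrations, not taken from the paper; the paper's actual pipeline distills a full random forest and minimizes the resulting logic.

```python
# Hypothetical sketch: flattening a distilled decision tree into
# sum-of-products logic. A node is either a leaf {"label": 0 or 1}
# or a split {"feature": f, "threshold": t, "low": subtree, "high": subtree},
# where "low" is taken when feature < threshold.

def extract_rules(node, path=()):
    """Enumerate root-to-leaf paths as conjunctions of threshold literals."""
    if "label" in node:
        return [(path, node["label"])]
    f, t = node["feature"], node["threshold"]
    return (extract_rules(node["low"], path + ((f, "<", t),))
            + extract_rules(node["high"], path + ((f, ">=", t),)))

def to_sum_of_products(tree):
    """OR together the positive-class paths: two-level AND/OR logic.

    Each conjunction becomes an AND gate over threshold comparators; the
    outer OR becomes an inverted AND of inverted inputs in an AND-Inverter
    graph (De Morgan).
    """
    return [conj for conj, label in extract_rules(tree) if label == 1]

def predict(sop, x):
    """Evaluate the extracted logic on a feature dict x."""
    ops = {"<": lambda a, b: a < b, ">=": lambda a, b: a >= b}
    return any(all(ops[op](x[f], t) for f, op, t in conj) for conj in sop)

# Toy risk-stratification tree (illustrative values only):
# high risk if hemoglobin < 10, or if hemoglobin >= 10 and age >= 65.
tree = {
    "feature": "hb", "threshold": 10.0,
    "low": {"label": 1},
    "high": {
        "feature": "age", "threshold": 65,
        "low": {"label": 0},
        "high": {"label": 1},
    },
}

sop = to_sum_of_products(tree)
# sop -> [(("hb", "<", 10.0),), (("hb", ">=", 10.0), ("age", ">=", 65))]
```

In hardware, each threshold literal becomes a comparator output bit, so the whole classifier reduces to comparators feeding a small AND-Inverter network, which is what makes the circuit both verifiable and cheap to implement.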
Making Logic Learnable With Neural Networks
Tobias Brudermueller, Dennis L. Shung, L. Laine, A. Stanley, S. Laursen, H. Dalton, Jeffrey Ngu, M. Schultz, J. Stegmaier, Smita Krishnaswamy
Published 2020 in arXiv.org
PUBLICATION RECORD
- Publication date: 2020-02-10
- Venue: arXiv.org
- Fields of study: Mathematics, Computer Science
- Source metadata: Semantic Scholar
REFERENCES
48 references
CITED BY
3 citing papers