Cross-modal autoencoder framework learns holistic representations of cardiovascular state
Adityanarayanan Radhakrishnan, S. Friedman, S. Khurshid, Kenney Ng, P. Batra, S. Lubitz, A. Philippakis, Caroline Uhler
Published 2022 in bioRxiv
ABSTRACT
A fundamental challenge in diagnostics is integrating multiple modalities to develop a joint characterization of physiological state. Using the heart as a model system, we develop a cross-modal autoencoder framework for integrating distinct data modalities and constructing a holistic representation of cardiovascular state. In particular, we use our framework to construct such cross-modal representations from cardiac magnetic resonance images (MRIs), containing structural information, and electrocardiograms (ECGs), containing myoelectric information. We leverage the learned cross-modal representation to (1) improve phenotype prediction from a single, accessible phenotype such as ECGs; (2) enable imputation of hard-to-acquire cardiac MRIs from easy-to-acquire ECGs; and (3) develop a framework for performing genome-wide association studies in an unsupervised manner. Our results systematically integrate distinct diagnostic modalities into a common representation that better characterizes physiologic state.

A challenge in diagnostics is integrating different data modalities to characterize physiological state. Here, the authors show, using the heart as a model system, that cross-modal autoencoders can integrate and translate modalities to improve diagnostics and identify associated genetic variants.
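The abstract describes modality-specific encoders that map ECGs and cardiac MRIs into a shared latent representation, with decoders that reconstruct each modality and translate between them (e.g., imputing an MRI from an ECG). A minimal NumPy sketch of these forward paths is shown below; the dimensions, variable names, and linear maps standing in for the paper's neural networks are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper)
ECG_DIM, MRI_DIM, LATENT_DIM = 600, 4096, 32

# Random linear maps standing in for the modality-specific
# encoder/decoder networks of the cross-modal autoencoder
enc_ecg = rng.standard_normal((LATENT_DIM, ECG_DIM)) * 0.01
enc_mri = rng.standard_normal((LATENT_DIM, MRI_DIM)) * 0.01
dec_ecg = rng.standard_normal((ECG_DIM, LATENT_DIM)) * 0.01
dec_mri = rng.standard_normal((MRI_DIM, LATENT_DIM)) * 0.01

def encode_ecg(x):  # ECG -> shared latent space
    return enc_ecg @ x

def encode_mri(x):  # MRI -> shared latent space
    return enc_mri @ x

def decode_ecg(z):  # shared latent -> ECG
    return dec_ecg @ z

def decode_mri(z):  # shared latent -> MRI
    return dec_mri @ z

# One synthetic ECG sample
ecg = rng.standard_normal(ECG_DIM)

z = encode_ecg(ecg)          # shared cross-modal representation
ecg_recon = decode_ecg(z)    # within-modality reconstruction
mri_imputed = decode_mri(z)  # cross-modal imputation: ECG -> MRI

print(z.shape, ecg_recon.shape, mri_imputed.shape)
```

In training, both a within-modality reconstruction loss and a cross-modal translation loss would be minimized so that encodings of paired ECG/MRI samples land near each other in the shared latent space; the sketch above shows only the inference-time data flow.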
PUBLICATION RECORD
- Publication year: 2022
- Venue: bioRxiv
- Publication date: 2022-05-28
- Fields of study: Biology, Medicine, Computer Science
- Source metadata: Semantic Scholar, PubMed
REFERENCES: 65 references
CITED BY: 84 citing papers