Prognostic prediction of sepsis patient using transformer with skip connected token for tabular data
Jee-Woo Choi, Minuk Yang, Jae-Woo Kim, Yoon Mi Shin, Y. Shin, Seung Park
Published 2024 in Artif. Intell. Medicine
ABSTRACT
Sepsis is a common syndrome in intensive care units (ICUs), and severe sepsis and septic shock are among the leading causes of death worldwide. The purpose of this study is to develop a deep learning model that supports clinicians in efficiently managing sepsis patients in the ICU by predicting mortality, prolonged ICU length of stay (>14 days), and prolonged hospital length of stay (>30 days). The proposed model was developed using 591 retrospective patient records, each with 16 tabular features related to the sequential organ failure assessment (SOFA) score. To analyze the tabular data, we designed a modified transformer architecture; the transformer has achieved extraordinary success in language and computer vision tasks in recent years. The main idea of the proposed model is to use a skip-connected token, which combines both local (feature-wise token) and global (classification token) information, as the output of the transformer encoder. The proposed model was compared with three machine learning models (ElasticNet, Extreme Gradient Boosting [XGBoost], and Random Forest) and three deep learning models (Multi-Layer Perceptron [MLP], transformer, and Feature-Tokenizer transformer [FT-Transformer]) and achieved the best performance (mortality, area under the receiver operating characteristic curve [AUROC] 0.8047; ICU length of stay, AUROC 0.8314; hospital length of stay, AUROC 0.7342). We anticipate that the proposed model architecture will provide a promising approach to predicting various clinical endpoints from tabular data such as electronic health and medical records.
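The skip-connected token described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the encoder outputs are random stand-ins, and the pooling and concatenation choices (mean-pooling the feature-wise tokens, then concatenating with the classification token) are assumptions, since the abstract does not specify how the local and global information are combined.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 16  # SOFA-related tabular features, per the abstract
d_model = 32     # embedding width (illustrative choice, not from the paper)

# Stand-ins for a transformer encoder's output tokens: one
# classification (CLS) token plus one token per tabular feature.
cls_token = rng.normal(size=d_model)                      # global information
feature_tokens = rng.normal(size=(n_features, d_model))   # local information

# Hypothetical skip-connected token: pool the feature-wise tokens into a
# local summary and concatenate it with the global CLS token.
local_summary = feature_tokens.mean(axis=0)
skip_connected_token = np.concatenate([cls_token, local_summary])

# A linear head on the combined token yields one logit per endpoint
# (mortality, ICU stay >14 days, hospital stay >30 days).
W = rng.normal(size=(3, skip_connected_token.size))
logits = W @ skip_connected_token
probs = 1.0 / (1.0 + np.exp(-logits))  # per-endpoint sigmoid

print(skip_connected_token.shape)  # (64,)
print(probs.shape)                 # (3,)
```

In this sketch the combined token doubles the head's input width (2 × d_model); the actual paper may combine the tokens differently (e.g. by addition), which would keep the width at d_model.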
PUBLICATION RECORD
- Publication year
2024
- Venue
Artif. Intell. Medicine
- Publication date
2024-03-01
- Fields of study
Medicine, Computer Science
- Source metadata
Semantic Scholar, PubMed
REFERENCES
37 references
CITED BY
6 citing papers