Anomaly detection has become a staple of modern intelligent systems, with applications in industrial automation, environmental monitoring, healthcare, and cybersecurity. Conventional anomaly detectors often behave as black-box models, producing opaque, non-interpretable alerts that are difficult to trust. This constrains their use in real-world systems: beyond timely notification, human operators need to know the causal factors behind an identified anomaly and the detector's confidence in it. In this paper, we present TrustGuardAI, a novel anomaly detection framework that integrates explainability and uncertainty quantification into real-time stream analysis. Unlike prior methods that focus solely on accuracy, TrustGuardAI is built around a human-centered philosophy. It combines a lightweight LSTM autoencoder for streaming detection, an attention mechanism for feature attribution, and Bayesian dropout for confidence estimation. We explicitly estimate two types of uncertainty: aleatoric uncertainty, which captures inherent data noise and is modeled by the predictive variance of the likelihood, and epistemic uncertainty, which reflects model (parameter) uncertainty and is estimated with Monte Carlo (MC) dropout at inference time. The two uncertainties are combined into a calibrated confidence score for each anomaly alert. Empirical studies on benchmark datasets show that TrustGuardAI delivers competitive performance (up to 0.92 AUC) with feature-level explanations and under 6 ms latency per window on a single CPU, making it suitable for deployment on resource-constrained network edges.
The broader significance of TrustGuardAI is that it helps bridge the gap between complex AI algorithms and practical decision-making, a step toward anomaly detectors that are reliable, transparent, and effective.
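The uncertainty decomposition described above can be sketched in a few lines. The following is a minimal, illustrative Python example (not the authors' implementation): `mc_dropout_uncertainty` and `toy_forward` are hypothetical names, and the toy forward pass merely simulates an LSTM autoencoder's reconstruction head with dropout active at inference time. Epistemic uncertainty is the variance of the predicted means across stochastic passes, aleatoric uncertainty is the mean of the predicted likelihood variances, and a simple monotone transform stands in for the calibrated confidence score.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_uncertainty(forward, x, n_samples=50):
    """Estimate epistemic and aleatoric uncertainty via MC dropout.

    `forward` is a stochastic model call that returns (mean, variance)
    with dropout kept active at inference time; names are illustrative.
    """
    means, variances = zip(*(forward(x) for _ in range(n_samples)))
    means = np.array(means)
    variances = np.array(variances)
    epistemic = means.var(axis=0)       # spread across stochastic passes
    aleatoric = variances.mean(axis=0)  # average predicted data noise
    total = epistemic + aleatoric
    confidence = 1.0 / (1.0 + total)    # simple monotone confidence proxy
    return epistemic, aleatoric, confidence

# Toy stand-in for an autoencoder's reconstruction head with dropout:
def toy_forward(x):
    drop = rng.random(x.shape) > 0.1    # Bernoulli dropout mask (p = 0.1)
    recon = x * drop / 0.9              # inverted-dropout rescaling
    mean_err = float(np.mean((x - recon) ** 2))
    pred_var = 0.01                     # fixed likelihood variance for the toy
    return mean_err, pred_var

x = rng.normal(size=32)
ep, al, conf = mc_dropout_uncertainty(toy_forward, x)
```

In a real deployment the forward pass would be the trained LSTM autoencoder with its dropout layers left in training mode, and the confidence transform would be replaced by the framework's calibration step.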
TrustGuardAI: A Human-Centered Explainable Real-Time Anomaly Detection Framework for Time-Series Sensor Data
Vidhya Lavanya Ramachandran, Kondru Charwick Hamesh, S. P
Published 2025 in 2025 Second International Conference on Pioneering Developments in Computer Science & Digital Technologies (IC2SDT)
PUBLICATION RECORD
- Publication date: 2025-12-04
- Source metadata: Semantic Scholar